\section{Introduction} One of the unresolved problems in the theory of nuclear structure is the description of ground-state properties and excitation phenomena of heavier nuclei, based on realistic nucleon-nucleon (NN) interactions which reproduce the NN scattering data~\cite{Sto.93,Mac.89,Wir.95}. The use of these realistic interactions for solving the nuclear many-body problem is a challenging task. Presently, only light nuclei can be treated within ab initio schemes like Green's function Monte Carlo~\cite{Pie.04}, the no-core shell model~\cite{Nav.03}, and the coupled-cluster method~\cite{Wlo.05}. The Unitary Correlation Operator Method (UCOM), which describes the dominant short-range and tensor correlations explicitly by means of a unitary transformation~\cite{Fel.98,Nef.03,Rot.04,Rot.05}, allows for the use of realistic NN interactions in traditional nuclear structure methods. In contrast to other methods using unitary transformations, e.g. the unitary model operator approach~\cite{Sha.67,Fuj.04}, the correlation operators are given explicitly, allowing for the derivation of a system-independent effective interaction operator $V_{\ensuremath{\textrm{UCOM}}}$. Although different in its construction, the correlated NN interaction $V_{\ensuremath{\textrm{UCOM}}}$ is similar to the $V_{low-k}$ low-momentum interaction~\cite{Bog.03}. Both approaches lead to a separation of momentum scales, providing a phase-shift equivalent NN interaction in the low-momentum regime. Very recently, the correlated realistic NN interaction constructed within the UCOM framework has been employed in Hartree-Fock (HF) calculations across the nuclide chart~\cite{RPPH.05}. Based on the UCOM-HF ground state, we have constructed the random-phase approximation (RPA) for the description of low-amplitude collective excitations in atomic nuclei using correlated realistic NN interactions~\cite{PPRH.05}. 
Various phenomenological RPA and quasiparticle RPA models have been very successful in the past, not only in studies of giant resonances and low-lying states (e.g. Refs.~\cite{Row.70,Dum.83,Daw.90,Ham.97,Col.00}), but also in the description of collective excitations in exotic nuclei away from the valley of $\beta$-stability~\cite{Ham.97,Mat.05,Ter.04,Paa.03,Sar.04,Paa_pp.05,Pap.04,Cao.05}. The present study, however, provides a first insight into collective excitation phenomena in closed-shell nuclear systems based on correlated realistic NN interactions. \section{Unitary Correlation Operator Method (UCOM)} The essential ingredient of the UCOM approach is the explicit treatment of the interaction-induced correlations, i.e. short-range central and tensor correlations~\cite{Fel.98,Nef.03,Rot.04}. The relevant correlations are imprinted into an uncorrelated many-body state $\ket{\Psi}$ through a state-independent unitary transformation defined by the unitary operator $C$, resulting in a correlated state $ \ket{\corr{\ensuremath{\op{\Psi}}}} = C \ket{\Psi} \;$. An equivalent, technically more advantageous approach is based on using correlated operators $\widetilde{O}=\adj{C}OC$ with uncorrelated many-body states. The short-range central correlations are described by a distance-dependent shift which pushes two nucleons apart if they are closer than the core distance. The application of the correlation operator in two-body space corresponds to a norm-conserving coordinate transformation with respect to the relative coordinate. This transformation is parameterized in terms of correlation functions for each $(S,T)$ channel, which are determined by an energy minimization in the two-body system. For purely repulsive channels an additional constraint on the range of the central correlator is used ($I_{R_{+}}^{(S=0,T=0)}$=0.1 fm$^4$, cf. Ref.~\cite{Rot.05}). 
The details of the determination and parameterization of the standard correlator are given in Ref.~\cite{Rot.05}. The tensor correlations between two nucleons are generated by a tangential shift depending on the spin orientation~\cite{Nef.03}. The size and radial dependence are given by a tensor correlation function $\vartheta(r)$ for each of the two $S=1$ channels, whose parameters are also determined from an energy minimization in the two-body system~\cite{Rot.05}. The range of the tensor correlation function is restricted through a constraint on the range measure $I_{\vartheta}=\int dr\,r^2 \vartheta(r)$. If one were to use for the description of finite nuclei the long-range tensor correlator that is optimal for the deuteron, an effective screening due to other nucleons would emerge through higher-order contributions of the cluster expansion~\cite{Nef.03}. In practical calculations based on the two-body approximation, this problem is resolved by restricting the range of the tensor correlation function, which provides an effective inclusion of the screening effect without explicit evaluation of higher terms in the cluster expansion~\cite{Rot.04}. Recent studies within the exact no-core shell model~\cite{Rot.05} show that $I_{\vartheta}^{(S=1,T=0)}$ = 0.09 fm$^3$ leads to an optimal tensor correlator for the description of the binding energies of $^{3}$H and $^{4}$He. In the present work we vary the range of the tensor correlator, $I_{\vartheta}^{(S=1,T=0)}$ = 0.07, 0.08, and 0.09 fm$^3$, in order to probe its impact on the description of the global properties of collective excitation phenomena in atomic nuclei. The contributions of the tensor correlator in the $(S,T)=(1,1)$ channel are one order of magnitude smaller~\cite{Rot.04} and are therefore neglected in the present study. 
\section{Random-phase approximation in the UCOM framework} Starting from the uncorrelated Hamiltonian for the $A$-body system, consisting of the kinetic energy operator $T$ and a two-body potential $V$, the formalism of the unitary correlation operator method is employed to generate the correlated Hamiltonian. By combining the central and tensor correlation operators, the correlated many-body Hamiltonian in the two-body approximation is given by \begin{equation} \ensuremath{\op{H}}_{\ensuremath{\textrm{UCOM}}} = {\corr{\ensuremath{\op{T}}}}^{[1]} + {\corr{\ensuremath{\op{T}}}}^{[2]} + {\corr{\ensuremath{\op{V}}}}^{[2]} = \ensuremath{\op{T}} + \ensuremath{\op{V}}_{\ensuremath{\textrm{UCOM}}}, \end{equation} where the one-body contribution comes only from the uncorrelated kinetic energy, ${\corr{\ensuremath{\op{T}}}}^{[1]} = \ensuremath{\op{T}}$. Two-body contributions arise from the correlated kinetic energy ${\widetilde{T}}^{[2]}$ and the correlated potential ${\widetilde{V}}^{[2]}$, which together constitute the correlated interaction $V_{\ensuremath{\textrm{UCOM}}}$; more details about the evaluation of the two-body matrix elements of $V_{\ensuremath{\textrm{UCOM}}}$ are available in Ref.~\cite{Rot.05}. Assuming spherical symmetry, the correlated realistic NN interaction is employed to solve the HF equations, i.e. to evaluate the single-particle wave functions and energies~\cite{RPPH.05}. The UCOM-HF single-particle spectra are used as a basis for the construction of the $p-h$ configuration space for the RPA. 
The RPA equations are derived from the equation of motion method using the quasiboson approximation \cite{Row.70}, \begin{equation} \label{rpaeq} \left( \begin{array}{cc} A^J & B^J \\ B^{\ast J} & A^{\ast J} \end{array} \right) \left( \begin{array}{c} X^{\nu ,JM} \\ Y^{\nu,JM} \end{array} \right) =\omega_{\nu}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) \left( \begin{array}{c} X^{\nu,JM} \\ Y^{\nu,JM} \end{array} \right)\; , \end{equation} where the eigenvalues $\omega_{\nu}$ correspond to the RPA excitation energies. The residual particle-hole interaction in the $A$ and $B$ matrices includes the correlated realistic NN interaction $V_{\ensuremath{\textrm{UCOM}}}$ in a way that is fully consistent with the Hartree-Fock equations. In addition, the multipole transition operators are transformed by the same unitary transformation as the Hamiltonian. However, it turns out that the effect of the UCOM transformation of the transition operators for the monopole and quadrupole modes is negligible~\cite{PPRH.05}. It is interesting to note that the UCOM-RPA results are in agreement with the study of effective operators in the no-core shell model within the $2\hbar\Omega$ model space, where the B(E2) values are very similar for the bare operator and the effective operator which includes the two-body contributions~\cite{Ste.05}. An essential property of the present UCOM-RPA scheme is that it is fully self-consistent: the same correlated realistic NN interaction is used in the HF equations that determine the single-particle basis and in the RPA residual interaction, and the multipole transition operators are transformed consistently with $\ensuremath{\op{V}}_{\ensuremath{\textrm{UCOM}}}$. This means that the same unitary transformation of the realistic NN interaction, i.e. central and tensor correlation functions with the same set of parameters, is systematically employed in the HF and RPA calculations. 
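The amplitudes $X^{\nu,JM}$ and $Y^{\nu,JM}$ obtained from Eq.~(\ref{rpaeq}) obey the standard quasiboson normalization condition (a textbook RPA relation, stated here for completeness rather than taken from the cited works):

```latex
\begin{equation*}
  \sum_{ph} \left[ \left| X^{\nu,JM}_{ph} \right|^{2}
                 - \left| Y^{\nu,JM}_{ph} \right|^{2} \right] = 1 ,
\end{equation*}
```

so the size of the $Y$ amplitudes directly measures the amount of ground-state correlation contained in a given RPA solution.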
The effective NN interaction which determines the ground-state properties also determines the small-amplitude motion around the nuclear ground state. This property of the present model ensures that RPA amplitudes do not contain spurious components associated with the center-of-mass translational motion. Models that are not fully self-consistent necessitate the inclusion of an additional free parameter in the residual interaction to ensure a proper separation of the spurious state. One of the interesting questions is to what extent the UCOM-RPA transition spectra are sensitive to the range of the tensor correlator employed in the unitary transformation. We have verified that the multipole strength distributions do not depend on variations of the central correlator range around the standard short-range correlator~\cite{PPRH.05}. In Fig.~\ref{figmono2}, we display the UCOM-RPA spectra of isoscalar giant monopole resonances (ISGMR) for several closed-shell nuclei, using the correlated Argonne V18 interaction with different constraints on the range of the tensor correlator, $I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$. For comparison, we also denote excitation energies from a selection of experimental~\cite{You.99,Shl.93,Sha.88} and theoretical~\cite{Paa.03,Dro.90,Ma.01} studies. The agreement with experiment and other theoretical results is rather good for the standard correlator set with $I_{\vartheta}^{(S=1,T=0)}$=0.09 fm$^3$. In heavy nuclei, the ISGMR energies are overestimated by $\approx1-3$ MeV. By varying the range of the tensor correlator around its standard value, the excitation spectra can be fine-tuned to improve the agreement with the experimental data. 
\begin{figure}[th] \vspace*{20pt} \centerline{\psfig{file=figmono2.eps,width=10cm}} \vspace*{8pt} \caption{The calculated UCOM-RPA strength distribution of the ISGMR for the correlated Argonne V18 interaction, using different restrictions on the range of the tensor correlator ($I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$). The experimental data \protect\cite{You.99,Shl.93,Sha.88} and results from the nonrelativistic (Dro{\. z}d{\. z} et al.) \protect\cite{Dro.90} and relativistic RPA (NL3) \protect\cite{Paa.03,Ma.01} are denoted by arrows.} \label{figmono2} \end{figure} Next we employ the UCOM-RPA to describe the isovector giant dipole resonances (IVGDR) in $^{90}$Zr, $^{132}$Sn, and $^{208}$Pb (Fig.~\ref{figdip2}). The correlated Argonne V18 interaction is used, with different constraints on the range of the tensor part of the correlator, $I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$. The calculated dipole response is compared with the experimental data~\cite{Adr.05,Ber.75,Poe.89,Rit.93} and with the theoretical excitation energies from the relativistic RPA~\cite{NVR.02} based on the DD-ME2 interaction~\cite{LNVR.05}. In all nuclei under consideration, the resulting IVGDR strength distributions display rather wide resonance-like structures. Decreasing the range of the tensor correlator, i.e. lowering its constraint from $I_{\vartheta}^{(S=1,T=0)}$=0.09 fm$^3$ towards 0.07 fm$^3$, lowers the IVGDR peak energies by $\approx$2-3 MeV. However, the UCOM-RPA overestimates the IVGDR centroid energies by $\approx$3-7 MeV. This difference can serve as a direct measure of the missing correlations and three-body contributions in the UCOM-RPA scheme. Inclusion of the three-body interaction and of long-range correlations beyond the simple RPA method would probably resolve, to a large extent, the present discrepancies with experiment and with the other studies. 
\begin{figure}[th] \vspace*{22pt} \centerline{\psfig{file=figdip2.eps,width=10cm}} \vspace*{8pt} \caption{The UCOM-RPA strength distributions for the IVGDR in $^{90}$Zr, $^{132}$Sn, and $^{208}$Pb. The calculations are based on the correlated Argonne V18 interaction, using different constraints on the tensor correlator range ($I_{\vartheta}^{(S=1,T=0)}$= 0.07, 0.08, and 0.09 fm$^3$). The experimental data~\protect\cite{Adr.05,Ber.75,Poe.89,Rit.93} and the relativistic RPA (DD-ME2) energies~\protect\cite{NVR.02,LNVR.05} are shown by arrows.} \label{figdip2} \end{figure} In Fig.~\ref{figquad2} we show the UCOM-RPA isoscalar quadrupole transition strength distributions for $^{40}$Ca, $^{90}$Zr, and $^{208}$Pb (Argonne V18, $I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$), in comparison with experimental data~\cite{Ber.79}. The residual interaction constructed from the correlated realistic NN interaction is attractive in the isoscalar channel, generating strongly collective peaks corresponding to the isoscalar giant quadrupole resonance (ISGQR). In addition, in the case of $^{90}$Zr and $^{208}$Pb, the UCOM-RPA model also yields pronounced low-lying $2^+$ states. However, RPA based on the correlated realistic NN interaction, without long-range correlations and three-body contributions, is not sufficient for a quantitative description of the ISGQR excitation energy. Even for the short-range tensor correlator ($I_{\vartheta}^{(S=1,T=0)}$=0.07 fm$^3$) the model overestimates the experimental values by $\approx$ 8 MeV. The quadrupole response is rather sensitive to the range of the tensor correlator: decreasing this range systematically shifts the response towards lower energies. For $^{40}$Ca and tensor correlator ranges determined by $I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$, the ISGQR centroid energies read 25.1, 26.2, and 27.1 MeV, respectively. 
In heavier nuclei these differences are smaller; e.g. for $^{208}$Pb the centroid energy decreases by 1.2 MeV when going from the correlator with $I_{\vartheta}^{(S=1,T=0)}$=0.09 fm$^3$ to $I_{\vartheta}^{(S=1,T=0)}$=0.07 fm$^3$. \begin{figure}[th] \vspace*{20pt} \centerline{\psfig{file=figquad2.eps,width=10cm}} \vspace*{8pt} \caption{The ISGQR strength distributions for $^{40}$Ca, $^{90}$Zr, and $^{208}$Pb. The UCOM-RPA model is based on the correlated Argonne V18 interaction with different values of $I_{\vartheta}^{(S=1,T=0)}$=0.07, 0.08, and 0.09 fm$^3$ to constrain the range of the tensor correlator. The experimental ISGQR excitation energies are denoted by arrows \protect\cite{Ber.79}.} \label{figquad2} \end{figure} The agreement achieved between the calculated and experimental properties of the ISGMR indicates that the correlated NN interaction corresponds to realistic values of the nuclear matter (NM) incompressibility. It has been demonstrated in the past that, within relativistic and non-relativistic RPA, the energies of the dipole and quadrupole resonances, on one hand, and the value of the effective mass corresponding to the effective interaction used, on the other, are correlated~\cite{Hui.89,Rei.99}. In particular, the relativistic RPA without density-dependent interaction terms, based on a ground state with a small effective mass and a relatively high compression modulus, resulted in systematically overestimated energies of giant resonances~\cite{Hui.89}. The discrepancies between the UCOM-RPA calculations and experimental data for multipole giant resonances, as well as the low density of single-nucleon UCOM-HF states, suggest that the corresponding effective mass is too small. Tensor correlations with shorter range increase the single-particle level density and result in a systematic shift of the giant resonances towards lower energies, improving the agreement with experimental data. 
However, variations of the ranges of the correlation functions can serve only as a tool for fine tuning of the excitation spectra; they cannot substitute for the effects of the missing long-range correlations and three-body contributions. \section{Conclusions} In the present study, a fully self-consistent RPA is formulated in the single-nucleon Hartree-Fock basis using correlated realistic NN interactions. The short-range central and tensor correlations induced by the NN potential are explicitly treated within the UCOM framework. It is shown that the correlated NN interactions are successful in generating collective excitation modes, but for an accurate description of experimental data on excitation energies and transition strengths one has to account for the contributions missing in the present treatment. These are (i) long-range correlations beyond simple RPA, which can be included within an RPA scheme built on the correlated ground state or by including complex configurations within the Second RPA, and (ii) induced and genuine three-body interactions, which one could try to model by a simple effective three-body force. \section*{Acknowledgments} This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) under contract SFB 634. We thank the Institute for Nuclear Theory at the University of Washington for its hospitality and the Department of Energy for partial support during the completion of this work.
\section{Introduction} Top quark physics at hadron colliders plays an important role in testing the Standard Model of particle physics and its possible extensions. In the Standard Model the top quark has a very short lifetime, $\tau_{\nicefrac{1}{2}}\approx\unit[5\times10^{\textrm{-}25}]{s}$; therefore the definite spin state in which the top anti-top pair is produced is not spoilt by hadronisation effects. As a result, the direction of the spin of the top quark is reflected in the angular distributions of its decay products. In contrast, the spins of light quarks flip before the quarks decay, making the spin state in which they are produced unobservable. Furthermore, the theoretical calculations necessary to predict the angular distributions can be performed for top pairs, resulting in precise theoretical predictions which can be tested by experiment. New physics in either the production or the decay mechanism would modify these angular distributions, making spin correlations sensitive to new physics. Until recently, only one measurement of spin correlations had been performed. Using $\unit[125]{pb^{\textrm{-}1}}$ of data taken during Run~I of the Tevatron collider at Fermilab, the D0 collaboration measured a correlation coefficient in agreement with the Standard Model~\cite{Abbott:2000dt}. However, since the sample contained only six events, the sensitivity was too low to rule out the hypothesis of no spin correlations. Recently the CDF and D0 collaborations performed measurements using up to $\unit[4.3]{fb^{\textrm{-}1}}$ of data taken with the CDF and D0~\cite{Abazov:2005pn} detectors, the results of which are discussed below. \section{Observables} In strong interactions the top and anti-top quark are produced unpolarised at hadron colliders; however, the $t\bar{t}$ system is in a definite spin state. At the Tevatron about $\unit[85]{\%}$ of top quark pairs, at next-to-leading order, are produced via quark anti-quark annihilation. 
At threshold these $t\bar{t}$ systems will be in a $^{3}S_{1}$ state, whereas the $\unit[15]{\%}$ of top quark pairs produced via gluon fusion will be in a $^{1}S_{0}$ state. In the first case the top and anti-top quark tend to have their spins parallel; in the second case the spins tend to be anti-parallel. One therefore expects to observe a correlation between the directions of the spins. The strength of the correlation due to the production mechanism can be expressed as the asymmetry $A$,\begin{equation} A=\frac{N_{\uparrow\uparrow}+N_{\downarrow\downarrow}-N_{\downarrow\uparrow}-N_{\uparrow\downarrow}}{N_{\uparrow\uparrow}+N_{\downarrow\downarrow}+N_{\downarrow\uparrow}+N_{\uparrow\downarrow}}\label{eq:production-asymmetry}\end{equation} between the number of events with spins parallel, $N_{\uparrow\uparrow}$ and $N_{\downarrow\downarrow}$, and the number of events with spins anti-parallel, $N_{\uparrow\downarrow}$ and $N_{\downarrow\uparrow}$. In order to measure the direction of the spin vector, a quantisation axis needs to be defined. At the Tevatron three sets of quantisation axes, referred to as {}``spin bases'', are commonly used. They are shown in Figure~\ref{fig:Three-spin-basis}. The simplest is the so-called {}``beamline basis'', in which the direction of one of the incoming hadrons is used as the quantisation axis. This basis is easy to construct and is optimal for $t\bar{t}$ systems produced at threshold. The production asymmetry has been calculated at next-to-leading order (NLO) in QCD as $A=0.777$~\cite{Bernreuther:2004jv}. The second basis is the {}``helicity basis'', in which the momentum of the (anti)top quark in the top-anti-top quark zero momentum frame is used to quantise the (anti)top quark spin. At the Tevatron the strength of the correlation is smaller than in the {}``beamline basis'', in NLO QCD $A=\textrm{-}0.352$. The opposite sign arises because the spins tend to be anti-parallel in this basis. 
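As a minimal numerical illustration of Eq.~(\ref{eq:production-asymmetry}), the following sketch evaluates $A$ from spin-pair counts; the counts are invented to mimic the NLO {}``beamline basis'' value and are not taken from any measurement:

```python
def production_asymmetry(n_uu, n_dd, n_du, n_ud):
    """Production spin asymmetry A = (N_parallel - N_antiparallel) / N_total."""
    parallel = n_uu + n_dd
    antiparallel = n_du + n_ud
    return (parallel - antiparallel) / (parallel + antiparallel)

# Hypothetical counts: 889 parallel vs 111 anti-parallel spin pairs.
A = production_asymmetry(450, 439, 56, 55)   # -> 0.778
```

The same function applied to counts with equal parallel and anti-parallel populations returns $A=0$, the no-correlation case discussed later.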
Finally, the third basis is the {}``off-diagonal basis''. The directions of the quantisation axes are defined by the angle $\omega$ with respect to the (anti)top quark momentum. The angle $\omega$ is given by $\tan\omega=\sqrt{1-\beta^{2}}\tan\theta$, where $\beta$ is the speed of the top quark and $\theta$ is its scattering angle. This basis interpolates between the {}``beamline basis'' close to threshold (low $\beta$) and the {}``helicity basis'' above threshold (large $\beta$). The production asymmetry is $A=0.782$. While this is slightly larger than in the {}``beamline basis'', this basis is more complex to reconstruct. \begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{figs/spin-bases} \par\end{centering} \caption{\label{fig:Three-spin-basis}The three choices of quantisation axis used at the Tevatron. The {}``beamline basis'' (left) is optimal for top pairs produced at threshold, the {}``helicity basis'' (centre) is used for top pairs above threshold and the {}``off-diagonal basis'' (right) interpolates between the two.} \end{figure} The angular distribution of decay product $i$ in the top quark rest frame is given by:\begin{equation} \frac{1}{\sigma}\frac{\textrm{d}\sigma}{\textrm{d}\cos\theta_{i}}=\frac{1}{2}\left(1-\alpha_{i}\cdot\cos\theta_{i}\right)\label{eq:top-decay}\end{equation} where $\theta_{i}$ is the angle between the direction of flight of decay product $i$ and the direction of the spin vector, and $\alpha_{i}$ is the so-called spin analysing power. From Equation~\ref{eq:top-decay} it is clear that the angular distribution of a decay product with $\alpha_{i}=0$ contains no information about the direction of the top quark spin, while the angular distribution of a decay product with $\alpha_{i}=\pm1$ contains the most information. The spin analysing powers of the various top quark decay products are listed in Table~\ref{tab:Spin-analysing-power}. 
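Equation~\ref{eq:top-decay} can be checked numerically. The following sketch (illustrative only, not analysis code) draws $\cos\theta$ from $\frac{1}{2}(1-\alpha\cos\theta)$ by rejection sampling and verifies the mean $\langle\cos\theta\rangle=-\alpha/3$ implied by the distribution:

```python
import random

def sample_cos_theta(alpha, rng):
    """Draw cos(theta) from f(c) = 0.5 * (1 - alpha * c) on [-1, 1]
    by simple rejection sampling."""
    fmax = 0.5 * (1.0 + abs(alpha))      # maximum of f on [-1, 1]
    while True:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) < 0.5 * (1.0 - alpha * c):
            return c

rng = random.Random(1)
alpha = 1.0          # spin analysing power of the charged lepton
n = 200_000
mean = sum(sample_cos_theta(alpha, rng) for _ in range(n)) / n
# f(c) implies <cos(theta)> = -alpha/3, so mean should be close to -0.333
```

With $\alpha=0$ the same sampler returns a flat distribution with zero mean, matching the statement that such a decay product carries no spin information.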
The particles with the highest spin analysing power are the lepton and the down type quark from the W boson decay. \begin{table} \caption{\label{tab:Spin-analysing-power}Spin analysing power of the top quark decay products. The up type quark, down type quark, neutrino and lepton are the decay products of the W boson. For the antiparticles the sign is reversed.} \begin{tabular}{ccccc} \toprule & lepton, down type quark & neutrino & up type quark & b quark\tabularnewline \midrule \midrule analysing power $\alpha$ & +1 & +0.31 & +0.31 & +0.41\tabularnewline \bottomrule \end{tabular} \end{table} In order to observe a correlation between the direction of the spin of the top and anti-top quark one must consider the angle $\theta$ of a decay product of the top quark and the angle of a decay product of the anti-top quark simultaneously. The double differential distribution for a top quark decay product $i$ and anti-top quark decay product $j$ is given by:\begin{equation} \frac{1}{\sigma}\frac{\mathrm{d^{2}}\sigma}{\mathrm{d}\cos\theta_{i}\mathrm{d}\cos\theta_{j}}=\frac{1}{4}\left(1-A\alpha_{i}\alpha_{j}\cos\theta_{i}\cos\theta_{j}\right)\label{eq:coscos}\end{equation} where $\sigma$ is the total cross section, $A$ is the production asymmetry, and $\alpha_{i,\, j}$ is the spin analysing power of the $i,\, j$-th decay product. In all analyses presented here the spin correlation parameter $C=A\alpha_{i}\alpha_{j}$ is measured. A measurement of the distribution given in Equation~\ref{eq:coscos} should be performed as follows: \begin{enumerate} \item Reconstruct the top and anti-top quark momenta in the laboratory frame, \item Perform a boost from the laboratory frame to the rest frame of the $t\bar{t}$ system. Define the vectors $\hat{b}_{i}$ and $\hat{b}_{j}$ along which to quantise the top and anti-top quark spins respectively. 
\item Boost the top (anti-top) quark decay product to the top (anti-top) quark rest frame and calculate $\cos\theta_{i,\, j}=\hat{b}_{i,\, j}\cdot\hat{q}_{i,\, j}$. \end{enumerate} The difference between the case of no spin correlations, $A=0$, and SM spin correlations as measured in the {}``beamline basis'', $A=0.777$, using leptons as spin analysers is shown in Figure~\ref{fig:parton-coscos}. \begin{figure} \begin{centering} \includegraphics[width=0.55\columnwidth]{figs/parton-level-coscos} \par\end{centering} \caption{\label{fig:parton-coscos}The distribution $\cos\theta_{1}\cos\theta_{2}$ for a sample of top anti-top quark events using generated partons, with spin correlations (red dashed) and without (black solid). Here both top quarks decayed to leptons, which subsequently were used as spin analysers~\cite{d0dilep}.} \end{figure} \section{Measurements} While in theory the down type quark is as powerful a spin analyser as the lepton, it is more difficult to identify in practice. This leads to two different approaches. In the first, one selects a pure sample of top pairs in which both the top and anti-top quark decay to leptons. In the second, a sample with higher statistics is selected by requiring only one top quark to decay to a lepton. In the following, the advantages, challenges and results are discussed for both approaches. \subsection{Dilepton final states} The advantages of the dilepton final state are the simple identification of the final-state particles of interest and the high purity of the sample. The disadvantage is that one suffers from a low branching ratio and needs to deal with two neutrinos when reconstructing the kinematics of the event. Both CDF and D0 select events with two high-$p_{T}$ leptons of opposite charge and at least two jets. The detailed event selections are described in References~\cite{cdfdilep,d0dilep} for CDF and D0, respectively. 
In final states with same-flavour leptons ($e^{+}e^{\textrm{-}}$ and $\mu^{+}\mu^{\textrm{-}}$) the main background arises from Drell-Yan, $Z\textrm{/}\gamma^{*}\rightarrow\ell^{\textrm{-}}\ell^{\textrm{+}}$, production. In the $e\mu$ final state the main background is instrumental, arising mainly from W+jets events in which a jet is misidentified as a lepton. The second largest background arises from $Z\textrm{/}\gamma^{*}\rightarrow\tau^{\textrm{-}}\tau^{\textrm{+}}$ with subsequent leptonic $\tau$ decays. Further sources of background in all three final states are the diboson processes WW, WZ and ZZ. Both signal and background are modelled using Monte Carlo simulation, except for the instrumental background, which is estimated from data. In order to reconstruct the momenta of the top and anti-top quark, one needs to deal with the two neutrinos in the final state. To fully characterise the kinematics of the final state one needs 18 quantities, assuming the masses of the final state particles are known. While the leptons and jets are observable in the detector, the two neutrinos escape detection. It is possible to infer the sum of the momenta of the neutrinos in the transverse plane from the missing transverse energy, ${\displaystyle {\not}E_{T}^{x}}$ and ${\displaystyle {\not}E_{T}^{y}}$. Using this information and making an assumption about the mass of the W boson and the top quark, it is possible to write down a set of quartic equations which fully describe the final state. Solving them yields up to four solutions per event. Additionally, one needs to try both lepton-jet pairings, which increases the number of possible solutions to eight. 
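Once the top and anti-top momenta are reconstructed, the boost-and-projection steps enumerated above reduce to a Lorentz boost followed by a dot product. A self-contained sketch (with a hand-rolled boost; the masses and momenta are hypothetical, not analysis values):

```python
import math

def boost(p4, beta):
    """Lorentz-boost four-vector p4 = (E, px, py, pz) into the frame
    moving with velocity vector beta (in units of c) w.r.t. the lab."""
    b2 = sum(b * b for b in beta)
    if b2 == 0.0:
        return list(p4)
    gamma = 1.0 / math.sqrt(1.0 - b2)
    e, p = p4[0], p4[1:]
    bp = sum(b * q for b, q in zip(beta, p))          # beta . p
    e_new = gamma * (e - bp)
    coef = (gamma - 1.0) * bp / b2 - gamma * e
    p_new = [q + coef * b for q, b in zip(p, beta)]
    return [e_new] + p_new

def cos_theta(axis, p3):
    """Cosine of the angle between a quantisation axis and a 3-momentum."""
    na = math.sqrt(sum(a * a for a in axis))
    nq = math.sqrt(sum(q * q for q in p3))
    return sum(a * q for a, q in zip(axis, p3)) / (na * nq)

# Sanity check: boosting a (hypothetical) top quark into its own rest
# frame must give (m, 0, 0, 0).
m, pz = 172.5, 100.0                       # GeV; illustrative values
e = math.sqrt(m * m + pz * pz)
top = [e, 0.0, 0.0, pz]
rest = boost(top, [0.0, 0.0, pz / e])      # approx [172.5, 0, 0, 0]
```

In an actual analysis the same `boost` would first take the lepton from the laboratory to the $t\bar{t}$ rest frame and then to the (anti)top rest frame, before `cos_theta` is evaluated against the chosen spin axis $\hat{b}_{i,\, j}$.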
\newpage In the CDF measurement a likelihood function is constructed from several observables and maximised with respect to the unknown neutrino momenta ($\vec{p}_{\nu}$, $\vec{p}_{\bar{\nu}}$) and the energies of the bottom quark jets ($E_{b}^{\textrm{guess}}$, $E_{\bar{b}}^{\textrm{guess}}$): \[ \begin{array}{l} L\left(\vec{p}_{\nu},\,\vec{p}_{\bar{\nu}},\, E_{b}^{\textrm{guess}},\, E_{\bar{b}}^{\textrm{guess}}\right)=P\left(p_{z}^{t\bar{t}}\right)P\left(p_{T}^{t\bar{t}}\right)P\left(M_{t\bar{t}}\right)\times\\ \frac{1}{\sigma_{b}}\exp\left(-\frac{1}{2}\left(\frac{E_{b}^{\textrm{meas}}-E_{b}^{\textrm{guess}}}{\sigma_{b}}\right)^{2}\right)\times\frac{1}{\sigma_{\bar{b}}}\exp\left(-\frac{1}{2}\left(\frac{E_{\bar{b}}^{\textrm{meas}}-E_{\bar{b}}^{\textrm{guess}}}{\sigma_{\bar{b}}}\right)^{2}\right)\times\\ \frac{1}{\sigma_{x}^{\textrm{MET}}}\exp\left(-\frac{1}{2}\left(\frac{{\displaystyle \not}E_{x}^{\textrm{meas}}-{\displaystyle \not}E_{x}^{\textrm{guess}}}{\sigma_{x}^{\textrm{MET}}}\right)^{2}\right)\times\frac{1}{\sigma_{y}^{\textrm{MET}}}\exp\left(-\frac{1}{2}\left(\frac{{\displaystyle \not}E_{y}^{\textrm{meas}}-{\displaystyle \not}E_{y}^{\textrm{guess}}}{\sigma_{y}^{\textrm{MET}}}\right)^{2}\right)\end{array}\] where $P\left(p_{z}^{t\bar{t}}\right)$, $P\left(p_{T}^{t\bar{t}}\right)$ and $P\left(M_{t\bar{t}}\right)$ are probability density functions obtained from \noun{Pythia} $t\bar{t}$ Monte Carlo events, $E_{b,\,\bar{b}}^{\textrm{meas}}$ the measured energies of the bottom/anti-bottom quark jets, ${\displaystyle \not}E_{x,\, y}^{\textrm{meas}}$ the measured components of ${\displaystyle \not}E_{T}$, and $\sigma_{i}$ the respective resolutions. The maximisation is performed for both lepton-jet pairings and the combination with the larger $L$ is kept. 
As the \noun{Pythia} Monte Carlo simulation does not contain spin correlations, templates for values of $C=-1,\,-0.8,\dots,\,1$ are obtained by reweighting the signal Monte Carlo at the generator level using a weight $w\sim1-C\cdot\cos\theta_{1}\cos\theta_{2}$. For each value of $C$, a two-dimensional template in the decay angles of the lepton and anti-lepton, $\cos\theta_{\ell^{+}}$, $\cos\theta_{\ell^{\textrm{-}}}$, and a template in the decay angles of the bottom quark jets, $\cos\theta_{b}$, $\cos\theta_{\bar{b}}$, are created. The two templates are fit with an analytical function $f^{\ell,\, b}\left(x,\, y;\, C\right)$. The measurement is then performed on the $N$ candidate events by maximising the likelihood function: \[ L\left(C\right)=\prod_{i=1}^{N}f^{\ell}\left(x,\, y;\, C\right)f^{b}\left(x,\, y;\, C\right).\] In order to extract limits from the measurement, a confidence belt according to the Feldman-Cousins prescription \cite{Feldman:1997qc} is created. This naturally includes both statistical and systematic uncertainties and allows one to decide, before looking at the data, whether to quote a one- or two-sided limit. Using $\unit[2.8]{fb^{\textrm{-}1}}$ of data the best fit value is $C=0.32_{-0.78}^{+0.55}\textrm{(stat + syst)}$ and the corresponding confidence belts are shown in Figure~\ref{fig:FC-belts}. The measurement was performed in the {}``helicity basis''. The result is consistent with the expected value of $C=0.782$. The largest contributions to the systematic uncertainty come from the PDF uncertainties and the finite number of Monte Carlo events used to form the templates. \begin{figure} \begin{centering} \includegraphics[width=0.8\columnwidth]{figs/cdf-dilep-fc} \par\end{centering} \caption{\label{fig:FC-belts}The 68\% (stat only), 68\% and 95\% Confidence Level intervals constructed according to the Feldman-Cousins prescription including statistical and all systematic uncertainties for the CDF measurement. 
The best fit value is $C=0.32_{-0.78}^{+0.55}$~\cite{cdfdilep}.} \end{figure} At D0 the neutrino weighting technique is used to solve for the event kinematics. By making an assumption about the rapidities, $\eta$, of the neutrino and anti-neutrino, the event kinematics can be solved without using ${\displaystyle {\not}E_{T}^{x}}$ and ${\displaystyle {\not}E_{T}^{y}}$ in the process; instead, each solution is assigned a weight, $w$, given by:\[ w=\exp\left(-\frac{\left({\displaystyle {\not}E_{T}^{x}-\nu_{x}-\bar{\nu}_{x}}\right)^{2}}{\sigma^{2}}\right)\times\exp\left(-\frac{\left({\displaystyle {\not}E_{T}^{y}-\nu_{y}-\bar{\nu}_{y}}\right)^{2}}{\sigma^{2}}\right)\] where $\nu_{x,\, y}$ and $\bar{\nu}_{x,\, y}$ are the $x$ and $y$ components of the neutrino and anti-neutrino momenta for a given solution and $\sigma$ is the ${\not}E_{T}^{x}$ resolution. Many solutions are obtained by sampling the neutrino and anti-neutrino rapidities based on Monte Carlo simulation. No dependence of the neutrino rapidity on the presence of spin correlations is observed. The weighted mean of all solutions for an event is used as an estimator for the true value of $\cos\theta_{\ell^{+}}\cos\theta_{\ell^{-}}$. As for the CDF measurement, the \noun{Pythia} Monte Carlo simulation is used to model the signal sample. A one dimensional template in the variable $\cos\theta_{\ell^{+}}\cos\theta_{\ell^{-}}$ is created for $C=0$ and $C=0.777$ by reweighting the distribution at the generator level. In order to extract a value of $C$ a linear combination of the two templates is fit to the data. Pseudo-experiments are created for each value of $C$ and fit with signal and background templates. Each source of systematic uncertainty is considered as a nuisance parameter during the fit. Feldman-Cousins confidence belts are constructed from the pseudo-experiments. Using up to $\unit[4.2]{fb^{\textrm{-}1}}$ of data the best fit value is $C=-0.17_{-0.53}^{+0.64}\textrm{(stat + syst)}$.
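The generator-level reweighting employed by both experiments can be sketched as follows. In this toy example, events are generated flat in the two decay-angle cosines (no spin correlations), and each is weighted by $w\sim1-C\cos\theta_{1}\cos\theta_{2}$; under the reweighted distribution $(1-C\,xy)/4$ the mean of $\cos\theta_{1}\cos\theta_{2}$ is $-C/9$ analytically, which provides a closure check:

```python
import numpy as np

# Sketch of the generator-level spin-correlation reweighting.  A sample
# without correlations is flat in cos(theta_1) and cos(theta_2); weighting
# each event by w ~ 1 - C*cos(theta_1)*cos(theta_2) reproduces the target
# correlated density (1 - C*x*y)/4, whose mean of x*y equals -C/9.
rng = np.random.default_rng(1)
n = 200_000
c1 = rng.uniform(-1.0, 1.0, n)   # cos(theta) of the lepton
c2 = rng.uniform(-1.0, 1.0, n)   # cos(theta) of the anti-lepton

def reweighted_mean(C):
    w = 1.0 - C * c1 * c2
    return np.sum(w * c1 * c2) / np.sum(w)

mean_uncorr = reweighted_mean(0.0)   # no correlations: mean ~ 0
mean_sm = reweighted_mean(0.777)     # Standard-Model-like value at D0
```

In the analyses the weighted events are histogrammed into the one- or two-dimensional templates described above rather than summarised by a single mean.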
In this measurement the {}``beamline basis'' was used and the measured value is consistent with the Standard Model expectation of $C=0.777$ within two standard deviations. The two main sources of systematic uncertainty are the variation of the assumed top mass during the event reconstruction from $\unit[175]{GeV}$ to $\unit[170]{GeV}$ and the test of the reweighting method. For the latter, the two \noun{Pythia} signal templates were replaced by \noun{Alpgen}, which contains spin correlations, and by MC@NLO with spin correlations turned off. \begin{figure} \begin{centering} \includegraphics[width=0.4\columnwidth]{figs/d0-dilep-fc}\includegraphics[width=0.5\columnwidth]{figs/d0-data} \par\end{centering} \caption{Left: the 68\%, 95\% and 99\% Feldman-Cousins confidence belts are shown. The best fit value can be read off at the intersection of the dashed black line and the thin blue line. Right: the sum of all dilepton channels is shown. The open black histogram shows the expected distribution for the case of no spin correlations, $C=0$, and the filled red histogram the expected distribution for Standard Model spin correlations, $C=0.777$~\cite{d0dilep}.} \end{figure} \subsection{Semileptonic final states} Selecting semileptonic events results in a higher yield, but the challenge is to identify the down-type quark. This is done probabilistically by choosing the jet closest to the bottom quark jet in the W boson rest frame \cite{Mahlon:1995zn}, which results in picking the correct jet about 60\% of the time. Events are selected by requiring at least one high-$p_{T}$ central lepton, large missing transverse energy and four or more jets, one of which must be identified as a b-jet. The backgrounds are estimated both from simulation and from data. For details of the selection see Reference~\cite{cdfljets}. Using $\unit[4.3]{fb^{\textrm{-}1}}$ of data a total of 1001 events are selected of which 786 are expected to be top pair events.
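The probabilistic down-type-jet assignment can be illustrated with a small Lorentz-boost helper. All four-vectors below are hypothetical, and ``closest'' is taken here to mean the smallest opening angle with respect to the b-jet direction in the W rest frame:

```python
import numpy as np

# Sketch of the down-type-jet assignment: boost both light-quark jet
# candidates into the W boson rest frame and pick the one closest in angle
# to the b-jet direction.  All four-vectors (E, px, py, pz) are hypothetical.
def boost_to_rest(p, ref):
    """Boost four-vector p into the rest frame of four-vector ref."""
    beta = ref[1:] / ref[0]
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    e_new = gamma * (p[0] - bp)
    p_new = p[1:] + ((gamma - 1.0) * bp / b2 - gamma * p[0]) * beta
    return np.concatenate(([e_new], p_new))

def cos_angle(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

j1 = np.array([60.0, 50.0, 20.0, 25.0])    # light-quark jet candidates
j2 = np.array([45.0, -10.0, 40.0, 15.0])
b = np.array([80.0, 30.0, -60.0, 40.0])    # b-jet from the same top decay
W = j1 + j2                                # hadronic W candidate

j1_r, j2_r, b_r = (boost_to_rest(v, W) for v in (j1, j2, b))
# candidate down-type jet = jet closest to the b direction in the W frame
down_jet = 1 if cos_angle(j1_r[1:], b_r[1:]) > cos_angle(j2_r[1:], b_r[1:]) else 2
```

In the W rest frame the two jets emerge back to back, so the choice reduces to the sign of a single angle relative to the b-jet.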
When produced in pairs, the top and anti-top quarks either have the same helicity or opposite helicity. The fraction of top pairs with opposite helicity is given by:\[ f_{O}=\frac{\sigma\left(\bar{t}_{R}t_{L}\right)+\sigma\left(\bar{t}_{L}t_{R}\right)}{\sigma\left(\bar{t}_{R}t_{R}\right)+\sigma\left(\bar{t}_{L}t_{L}\right)+\sigma\left(\bar{t}_{R}t_{L}\right)+\sigma\left(\bar{t}_{L}t_{R}\right)},\] where $\sigma\left(\bar{t}_{L,\, R}t_{L,\, R}\right)$ denotes the cross section for each possible helicity configuration. Using Equation~\ref{eq:production-asymmetry} one can show that a measurement of $f_{O}$ is equivalent to a measurement of $A$ in the helicity basis. One template for top pairs with the same helicity and one template for top pairs with opposite helicity are created using a modified version of the \noun{Herwig} event generator. The opposite helicity fraction is extracted with a binned maximum likelihood fit of the two templates to the data, with contributions from backgrounds taken into account. The best fit value is $f_{O}=0.80\pm0.26\textrm{(stat + syst)}$ or equivalently $A=2f_{O}-1=0.60\pm0.52\textrm{(stat + syst)}$. This is consistent with the Standard Model expectation of $A=0.4$. The two main systematic uncertainties are Monte Carlo statistics and the jet energy scale. \begin{figure} \begin{centering} \includegraphics[width=0.45\columnwidth]{figs/cdf-ljets-lepbot}\includegraphics[width=0.45\columnwidth]{figs/cdf-ljets-lepdown} \par\end{centering} \caption{The best fit of same helicity, opposite helicity and background templates for the CDF semileptonic decay channel. Left: the distribution of the product of the decay angles of the lepton and the bottom quark. Right: the distribution of the product of the decay angles of the lepton and the down-type quark.
The best fit value from a simultaneous fit to both distributions is $f_{O}=0.8\pm0.26\textrm{(stat + syst)}$ or $A=0.6\pm0.52\textrm{(stat + syst)}$~\cite{cdfljets}.} \end{figure} \section{Conclusions} The spin correlation parameters $C$ and $A$ have been measured in dilepton and semileptonic decays of top and anti-top quark pairs using up to $\unit[4.3]{fb^{-1}}$ of data collected with the CDF and D0 detectors. Measurements were performed in the {}``beamline'', {}``helicity'' and {}``off-diagonal'' bases. The measurements are found to be in agreement with the Standard Model predictions. All three measurements are still statistically limited. Considering that the Tevatron collider has delivered nearly twice as much integrated luminosity since these analyses were performed, updates of all measurements can be expected soon. \bibliographystyle{varenna}
\section{Introduction} The first report of field emission from carbon NTs appeared a decade ago\cite{Science0}. It was followed by a demonstration \cite{sourse} that arrays of NTs can be patterned into emitting and non-emitting regions. Since then, the field emission properties of carbon NTs have commanded steady interest from researchers worldwide. The uniqueness of these properties originates from the geometry of a NT. Namely, due to a small NT radius, $r$, the electric field applied between the substrate (cathode), on which the NTs are grown, and the anode is enhanced by a large factor $\beta \gg 1$ near the nanotube tip. Such an enhancement translates into a high probability of electron tunnelling toward the anode, leading to the desirably low turn-on voltage for field emission. This property, combined with high emission current density, made possible the successful fabrication of the row-column matrix-addressable flat panel display based on carbon NTs \cite{flat0,flat1,flat2,flat3,flat4,flat5,flat6,flat7,flat8}. Currently, flat panel displays constitute one of the most prominent applications of NTs \cite{baughman02}. Geometrical characteristics of individual NTs utilized in the first display \cite{flat0} were highly dispersed. Further advances in fabrication \cite{Science1} made it possible to achieve excellent vertical alignment and high homogeneity in the lengths and radii of NTs \cite{latest1,latest3,latest2,latest4}. This suggests that NTs on the cathode of a display can, in the first approximation, be viewed as constituting a {\em regular array} of identical NTs. Such an array is schematically illustrated in Fig.~1. \begin{figure}[t] \centerline{\includegraphics[width=90mm,angle=0,clip]{tubes1.eps}} \caption{Schematic illustration of the forest of NTs of height, $h$, grown on the substrate, $z=0$. The average distance between neighboring NTs is $d$. Shaded regions at the NT tips illustrate the charges induced in the NTs by the external field, $F$.
For a dense forest, the typical penetration depth, $a$, exceeds $d$. Vertical arrows illustrate the electric field, $F_{ind}$, created by the induced charges. Regions of lower NT density correspond to a higher field-enhancement factor, $\beta_A=F_{ind}/F$. In these regions, the external field penetrates deeper into the forest. Fluctuations in the induced charge density, due to the randomness in the NT positions, exceed the average density at depth $(h-z)>a^3/d^2$.} \label{fig:1} \end{figure} On the theoretical side, the focus of the previous studies \cite{theory1,theory2,theory3,theory4,theory5,theory6,theory7,theory8,theory9,theory10} of field emission from nanotubes was the effect of the band structure and tip geometry of an {\em individual} NT on the emission probability. These studies left out the fact that {\em all} NTs are coupled to each other {\em electrostatically}. The main point of the present paper is that, for a regular NT array, the mutual electrostatic coupling of the NTs has a dramatic effect on the field emission, especially in dense arrays. By dense we mean arrays in which the separation, $d$, between neighboring NTs is much smaller than the NT height, $h$. The situation $d\ll h$ is quite common in realistic field-emission setups, see, {\em e.g.,} Refs.~\onlinecite{latest3,latest2,latest4}. To substantiate this point, consider first two parallel NTs separated by $d\ll h$ in the external electric field, $F$, directed along their axes, see Fig.~1. It might seem that, if $d$ exceeds the tunnelling length for field emission from each of the NTs, then both NTs emit independently. This, however, is not the case. The reason is that the enhancement of the electric field near the tip of each NT is governed by the charge density induced by the external field.
For two parallel NTs, the induced charge density {\em per} NT is approximately {\em two times smaller} than for an isolated NT.\cite{remark} As a result, the field enhancement near the tip of each NT becomes smaller due to the presence of the neighbor. Compared to a pair of neighboring NTs, the suppression of the field enhancement becomes much more pronounced {\em in the NT array}. On the qualitative level, this conclusion follows from the fact that each NT in the array interacts with $\sim (h/d)^2 \gg 1$ neighbors. As we will demonstrate below, the external field simply {\em does not penetrate} into a sufficiently dense array. On the intuitive level, this trend has been previously understood.\cite{Nilsson, latest2, Manohara} In particular, in Ref.~\onlinecite{Nilsson} numerical simulations illustrating the suppression of the field enhancement for three parallel NTs with decreasing distance between them were reported. However, full understanding of this suppression {\em in the array} with arbitrary ratio $d/h$ requires an analytical theory. Such a theory is developed in the present paper. In particular, we demonstrate that {\em i)} the penetration of the external field into the array is described by a simple function, $\sinh(z/a)$, where the penetration depth, $a$, is much smaller than $h$ for a dense array, but still much bigger than $d$; {\em ii)} the distributions of induced charge density in regular and completely random NT arrays are approximately {\em the same}; {\em iii)} with regard to the field emission, the enhancement of the external field for the array, as compared to an individual NT, is suppressed by the factor $\approx (h/a)$. The reason why the electrostatic problem, in which the variables cannot be separated, allows an asymptotic analytical solution is the presence of the large parameters $(h/r) \gg 1$ and $(d/r) \gg 1$. As was demonstrated in Ref.
\onlinecite{we}, for a single NT, the relation $h\gg r$ allows one to find analytically the distribution of induced charges in an external field. Here the approach of Ref. \onlinecite{we} is generalized to the NT array. The paper is organized as follows. In Sect.~II we review the Thomas-Fermi description \cite{we} of the polarization of a single NT in an external field. In Sect.~III we generalize the Thomas-Fermi equation to the NT array. In Sections IV-V we analyze this equation for a regular array, and find its asymptotic [in the parameter $(d/r)\gg 1$] solution. Robustness of this solution with respect to randomness in the NT positions is demonstrated in Sect.~VI. In Sect.~VII we apply the obtained solution for the distribution of the induced charge density to calculate the field emission current from the array. Relation of our theory to experiment is addressed in Sect.~VIII. \section{Single NT} Denote by $\rho(z)$ the {\em linear} charge density on the NT surface at a distance $z<h$ from the substrate. The Thomas-Fermi equation for $\rho(z)$ reads\cite{we} \begin{eqnarray} \label{basic} eFz=\frac{1}{g}~\rho (z)+\frac{1}{e}\int _0^h\!dz^{\prime}\;{\cal S}_0(z,z^{\prime})\rho (z^{\prime}), \end{eqnarray} where the kernel, ${\cal S}_0(z,z^{\prime})$, is defined as \begin{eqnarray} \label{S} {\cal S}_0(z,z^{\prime})=\Phi (z-z^{\prime}) -\Phi(z+z^{\prime}). \end{eqnarray} Here the function $\Phi (z)$ is the interaction potential between two points on the NT surface, separated vertically by $z$. It represents an azimuthal average of the Coulomb potential \begin{equation} \label{interaction} \Phi(x)= \frac{e}{\pi} \int_0^{\pi}\!\frac{d\alpha} {\left[x^2+4r^2\sin^2(\alpha/2)\right]^{1/2}}~~, \end{equation} where $r$ is the NT radius. The second term in Eq.~(\ref{S}) accounts for the image charges. The meaning of Eq.~(\ref{basic}) is the following. The lhs is the bare potential.
The potential acting on a given electron at the NT surface is the sum of this bare potential and the potential created by the induced charges. The resulting potential defines the {\em local} value of the Fermi energy, which, in turn, fixes the local value of the Fermi momentum. This Fermi momentum, on the other hand, is linearly proportional to the charge density in one dimension. This is the standard reasoning behind the Thomas-Fermi description. Within this description Eq.~(\ref{basic}) is nothing but the condition that the electrochemical potential remains constant along the NT. The prime simplification that allows an analytical solution of Eq.~(\ref{basic}) is that, with logarithmic accuracy, $\rho (z^{\prime})$ in the integrand in the rhs of Eq.~(\ref{basic}) can be replaced by $\rho (z)$. Upon this replacement, we have \begin{eqnarray} \label{approximate} \frac{1}{e}\int_0^h \!dz^{\prime}\;{\cal S}_0(z,z^{\prime})\approx \ln{\frac{h^2}{r^2}} - \ln{\frac{h^2}{z^2}} +\ln{\Biggl(\frac{1-z/h}{1+z/h}\Biggr)}, \end{eqnarray} where we assumed that $(h-z) \gg r$. With the same logarithmic accuracy, for $z\gg r$, the rhs of Eq.~(\ref{approximate}) can be replaced by $2{\cal L}_h$, where ${\cal L}_h~=~\ln(h/r)$. Then we obtain the following analytical solution for the induced charge density\cite{we} \begin{equation} \label{single} \rho(z)\approx \frac{geFz}{1+2g{\cal L}_h}, \end{equation} where we have introduced the dimensionless interaction parameter $g=2Ne^2/\pi\hbar v_0$. The above result for $\rho(z)$ is approximate in the sense that the numerical factor in the argument of the large logarithm, ${\cal L}_h \gg 1$, is not specified. Equation (\ref{single}) represents the solution of Eq.~(\ref{basic}) which satisfies the obvious condition $\rho (0)=0$. An improved analytical description yielding a result coinciding with Eq.~(\ref{single}) in the limit of large ${\cal L}_h$ was recently reported in Ref. \onlinecite{chinese}.
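The logarithmic estimate above lends itself to a quick numerical check. In the sketch below (units $e=1$; the values of $h$ and $r$ are assumed samples with $h/r\gg1$), the $z'$ integration of the azimuthally averaged potential is carried out in closed form, leaving a one-dimensional quadrature over the azimuth:

```python
import numpy as np

# Check of the logarithmic estimate: the kernel integral
# (1/e) * int_0^h dz' S_0(z, z') should equal 2*ln(h/r) up to an O(1)
# shift inside the logarithm.  The z' integral of the azimuthally averaged
# potential is done in closed form (arcsinh), leaving a 1D quadrature over
# the azimuth.  Units e = 1; h and r are assumed sample values.
def kernel_integral(z, h, r, n=20_000):
    alpha = (np.arange(n) + 0.5) * np.pi / n     # midpoint rule on [0, pi]
    u = 2.0 * r * np.sin(alpha / 2.0)
    integrand = (2.0 * np.arcsinh(z / u) + np.arcsinh((h - z) / u)
                 - np.arcsinh((h + z) / u))
    return integrand.mean()                      # = (1/pi) * integral

h, r = 1000.0, 1.0
K_mid = kernel_integral(h / 2.0, h, r)
two_L_h = 2.0 * np.log(h / r)
# refined asymptotic form ln(4*z^2*(h-z) / (r^2*(h+z))) at z = h/2
K_asym = np.log(4.0 * (h / 2.0) ** 2 * (h / 2.0) / (r ** 2 * 1.5 * h))
```

The quadrature agrees with the refined asymptotic form to high accuracy, while its ratio to $2\ln(h/r)$ differs from unity only by the unspecified numerical factor inside the logarithm, as stated in the text.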
The remarkable property of the solution (\ref{single}) is that a NT with poor ``metallicity'', $g<1$, eventually becomes metallic as the length, $h$, of the NT is increased. This is not the case if the NT is located parallel to a conducting gate at a distance $D\ll h$. In the latter case,\cite{rotkin,blanter} one has to replace ${\cal L}_h$ by $\ln (D/r)$. \section{Thomas-Fermi equation for an array} Consider an array of parallel NTs located at points, ${\bf R}_i$, on the substrate, see Fig.~1. To set up the Thomas-Fermi equation for a given NT, $i$, in the array, one has to take into account that the external potential, leading to the charge separation, contains, in addition to $eFz$, the potentials created by the charges induced on all other NTs. Then the generalized Eq.~(\ref{basic}) reads \begin{eqnarray} \label{generalized} eFz=\frac{1}{g}~\rho_i(z)+\int_0^{h} dz^{\prime}\sum_{j}\rho_j(z^{\prime}) {\cal S}(z,z^{\prime};{\bf R}_i-{\bf R}_j), \end{eqnarray} where the kernel, ${\cal S}$, is given by \begin{eqnarray} {\cal S}(z,z^{\prime};{\bf R})&=&\frac{1}{\sqrt{(z-z^{\prime})^2+\vert{\bf R}\vert^2}}\nonumber\\ &-& \frac{1}{\sqrt{(z+z^{\prime})^2+\vert{\bf R}\vert^2}}.\quad \end{eqnarray} It is convenient to rewrite Eq.~(\ref{generalized}) in the ``continuous'' form by introducing the position-dependent density, $\rho (z,{\bf R})$, and the local concentration of NTs \begin{eqnarray} \label{nn} {\cal N}({\bf R})=\sum_i\delta\left({\bf R}-{\bf R}_i\right). \end{eqnarray} In the new notations Eq.~(\ref{generalized}) takes the form \begin{eqnarray} \label{EQN1} eFz=\frac{1}{g}~\rho (z,{\bf R}) \qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\ +\int d{\bf R}^{\prime} {\cal N}({\bf R^{\prime}})\!\!\int _0^h\!\! dz^{\prime}\;{\cal S}(z,z^{\prime};{\bf R}-{\bf R}^{\prime})\rho (z^{\prime},{\bf R}^{\prime}).\quad \end{eqnarray} The Thomas-Fermi equation in the form (\ref{EQN1}) is convenient for further analysis.
This is because, as we will see below, the distributions, $\rho_i(z)$, are almost the same for {\em all} $i$ even for a completely random array. \section{Regular array} Consider a regular array in the form of a square lattice with lattice constant, $d$. Then the coordinates of ${\bf R}_i$ in Eq.~(\ref{nn}) are $\{nd,md\}$ with integer $m$ and $n$. Obviously, for the regular array the induced charge density, $\rho(z)$, is the same for all NTs, so that Eq.~(\ref{EQN1}) acquires the form \begin{eqnarray} \label{EQN11} eFz=\left(\frac{1+ 2g{\cal L}_{d}}{g} \right)~\rho (z)+ \int _0^h\!\! dz^{\prime}\;{\cal S}_{ext}(z,z^{\prime})\rho (z^{\prime}),\quad \end{eqnarray} where \begin{eqnarray} \label{array} {\cal S}_{ext}(z,z^{\prime})=\sum_{\{m,n\}\neq\{0;0\}}\left[ \frac{1}{\sqrt{\left(z-z^{\prime}\right)^2+\left(m^2+n^2\right)d^2}} \right. \nonumber \\ \left. -\frac{1}{\sqrt{\left(z+z^{\prime}\right)^2+\left(m^2+n^2\right)d^2}}\right ].\qquad \end{eqnarray} Here we have isolated the self-action, $m=n=0$, of a NT. For a single NT this self-action is described by the large logarithm ${\cal L}_h$. In the array, however, the interaction is screened at distances $\vert z-z^{\prime}\vert \gtrsim d$ by the neighboring NTs. To account for this screening, the logarithm, ${\cal L}_h$, in Eq.~(\ref{EQN11}) is replaced by ${\cal L}_d=\ln(d/r)$. It is apparent that both terms in the sum (\ref{array}) diverge due to the contributions from large $m$, $n$. However, the divergent parts of the two terms cancel each other. Physically, this reflects the screening by the image charges (see Fig. 1).
Replacing the sums over $m$ and $n$ in Eq.~(\ref{array}) by integrals, we obtain the following expression for the kernel ${\cal S}_{ext}(z,z^{\prime})$ \begin{eqnarray} \label{S-result} {\cal S}_{ext}(z,z^{\prime})= 2\pi {\cal N}_0\Bigl[z+z^{\prime}-\vert z-z^{\prime}\vert\Bigr]=\nonumber\\ 4\pi {\cal N}_0\Bigl[z^{\prime}\Theta(z-z^{\prime})+z\Theta(z^{\prime}-z)\Bigr], \end{eqnarray} where $\Theta(x)$ is the step-function and ${\cal N}_0=d^{-2}$ is the areal concentration of NTs. Note that both steps, replacing ${\cal L}_h$ in Eq.~(\ref{EQN11}) by ${\cal L}_d$ and replacing the sums in Eq.~(\ref{array}) by integrals, are by no means obvious and require justification. This justification is provided in the next Section. In the present Section we demonstrate that the integral equation (\ref{EQN11}) with the kernel (\ref{S-result}) can be solved analytically. Upon taking the derivative of both sides of Eq.~(\ref{EQN11}), we obtain \begin{eqnarray} \label{equation} eF=\left(\frac{1+ 2g{\cal L}_{d}}{g} \right)\frac{d\rho}{dz}+{4\pi}{\cal N}_0 \int_z^{h}\!\!dz^{\prime}\rho(z^{\prime}). \end{eqnarray} It is now easy to see that, within a factor, the first term in Eq.~(\ref{equation}) is the second derivative of the second term. Thus, Eq.~(\ref{equation}) can be viewed as a second-order differential equation with respect to $\int_z^h\!\! dz^{\prime}\rho(z^{\prime})$. The solution of this equation, satisfying the condition $\rho(0)=0$, has the form \begin{eqnarray} \label{final1} \rho(z)= \rho_0\sinh \left(z/a\right), \end{eqnarray} where $\rho_0$ is defined as \begin{eqnarray} \label{final2} \rho_0=\frac{eFga}{\left(1+2g{\cal L}_d\right)\cosh(h/a)}, \end{eqnarray} and the length, $a$, is given by \begin{eqnarray} \label{depth} a=\frac{1}{2}\;\sqrt{\frac{1+2g{\cal L}_d}{\pi g {\cal N}_0}}. \end{eqnarray} The above expression for the induced charge density constitutes the main result of the present paper. It is seen from Eq.~(\ref{final1}) that $a$ plays the role of the penetration depth of the external field into the array.
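The closed-form solution can be verified directly. The sketch below (units $e=F=1$; the values of $g$, ${\cal L}_d$, ${\cal N}_0$ and $h$ are assumed samples) checks that the residual of the differentiated equation vanishes and that the induced charge is confined to within $\sim a$ of the tips:

```python
import numpy as np

# Direct check that rho(z) = rho_0*sinh(z/a) solves the differentiated
# equation  e*F = (1 + 2*g*L_d)/g * rho'(z) + 4*pi*N_0 * int_z^h rho dz'.
# Units e = F = 1; g, L_d, N_0 and h are assumed sample values.
g, L_d, N0, h = 0.5, 3.0, 1.0, 10.0
C = (1.0 + 2.0 * g * L_d) / g
a = 0.5 * np.sqrt((1.0 + 2.0 * g * L_d) / (np.pi * g * N0))
rho0 = g * a / ((1.0 + 2.0 * g * L_d) * np.cosh(h / a))

z = np.linspace(0.0, h, 2001)
drho = (rho0 / a) * np.cosh(z / a)
# int_z^h rho dz' in closed form: rho_0 * a * (cosh(h/a) - cosh(z/a))
tail = rho0 * a * (np.cosh(h / a) - np.cosh(z / a))
residual = np.max(np.abs(C * drho + 4.0 * np.pi * N0 * tail - 1.0))

# The induced charge is concentrated within ~a of the tips: the density a
# distance 2a below the tip is already suppressed by ~exp(-2).
rho = rho0 * np.sinh(z / a)
suppression = rho[z <= h - 2 * a][-1] / rho[-1]
```

The residual vanishes to machine precision for every grid point, confirming that the ansatz is an exact solution of the continuum equation rather than an approximation to it.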
In the limit of very low density, ${\cal N}_0~=~d^{-2} \ll 1/h^2$, Eqs.~(\ref{final1})--(\ref{depth}) reproduce the result Eq.~(\ref{single}) for a single NT, as could be expected, since the mutual influence of NTs separated by a distance $\gtrsim h$ is negligible. It also follows from Eqs.~(\ref{final1})--(\ref{depth}) that in the limit of large ${\cal N}_0$, such that $a \ll h$, the induced charge density is concentrated near the NT's tips and falls off towards the substrate exponentially, as $\exp\{-(h-z)/a\}$. This weak penetration of the external field into the array is a consequence of the collective screening. Indeed, in terms of screening properties, a high-density array can be viewed as a homogeneous metallic plate. Our crucial observation, however, is that the penetration depth exceeds {\em parametrically} the lattice constant, $d$, both for large and small values of the interaction parameter, $g$. By virtue of the relation $a/d >1$, there are {\em many} NTs within the penetration depth. This, in turn, suggests that Eqs.~(\ref{final1})--(\ref{depth}) apply to the random array with average areal concentration of NTs ${\cal N}_0=d^{-2}$. \section{Derivation} In the previous section the derivation was based on two intuitive assumptions. Namely, we have assumed that the self-action of a NT in the array is described by ${\cal L}_d$ instead of the ${\cal L}_h$ of an isolated NT, and that the sum over $m$ and $n$ in Eq.~(\ref{array}) can be replaced by an integral. To justify these assumptions, below we calculate the sum Eq.~(\ref{array}) more accurately. In order to do so, we employ the following (obvious) identity. Consider a two-dimensional vector, ${\bf b}$, with projections $b_x,b_y$.
Then, for arbitrary $x$, we have \begin{eqnarray} \label{2} \frac{1}{\sqrt{x^2+\vert{\bf b}\vert^2}}&=& \nonumber \\ \int \frac{dq_xdq_y}{2\pi}&\Biggl[&\frac{\exp\left(-\sqrt{q_x^2+q_y^2}\;\vert x\vert\right)}{\sqrt{q_x^2+q_y^2}} \Biggr] \exp(iq_xb_x+iq_yb_y).\nonumber\\ \end{eqnarray} To use this identity in Eq.~(\ref{array}), we set $b_x=nd$, $b_y=md$. Then the summation over $n$ and $m$ can be readily performed, yielding a sum of $\delta$-functions, i.e. \begin{equation} \label{3} \left(\frac{2\pi}{d}\right)^2\sum_{p,l}\delta\Biggl(q_x-\frac{2\pi p}{d}\Biggr) \delta\Biggl(q_y-\frac{2\pi l}{d}\Biggr), \end{equation} where $p$ and $l$ assume all integer values. After that the rhs in Eq.~(\ref{EQN1}) acquires the form \begin{equation} \label{4} \int_0^{h}dz^{\prime}\rho(z^{\prime}){\cal S}(z,z^{\prime}), \end{equation} where ${\cal S}(z,z^{\prime})$ is given by \begin{eqnarray} \label{5} {\cal S}(z,z^{\prime})=\frac{2\pi}{d^2}\Bigl(z+z^{\prime} -\vert z-z^{\prime}\vert \Bigr)+ \qquad \qquad \qquad \quad\qquad \nonumber \\ \sum_{p,l\neq 0,0}\frac{1}{d\sqrt{p^2+l^2}}\left \{ \exp\Biggl[\;-\left(\frac{2\pi}{d}\right)\;\sqrt{p^2+l^2}\;\sqrt{(z-z^{\prime})^2}\;\Biggr]\right. \nonumber \\ \left.-\exp\Biggl[\;-\left(\frac{2\pi}{d}\right)\;\sqrt{p^2+l^2}\;\sqrt{(z+z^{\prime})^2}\;\Biggr]\right \}.\qquad \end{eqnarray} The first term in (\ref{5}) describes the continuous limit and comes from $p=l=0$ in Eq.~(\ref{3}). It coincides with the kernel, ${\cal S}_{ext}(z,z^{\prime})$, defined by Eq.~(\ref{S-result}). The remaining sum over $p$ and $l$ recovers the kernel ${\cal S}_0$ in Eq.~(\ref{basic}). The easiest way to see this is to replace the sums over $p$ and $l$ by corresponding integrals, which would immediately yield ${\cal S}_0(z,z^{\prime})$. However, such a replacement is justified only when a large number of terms contributes to the sum. This is the case only when the condition $\vert z-z^{\prime}\vert \lesssim d$ is met.
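The exponential screening of the discrete part of the kernel is easy to confirm by brute-force summation of the series (a sketch; $u$ stands for $\vert z-z^{\prime}\vert/d$ and the cutoff of the double sum is chosen large enough for convergence):

```python
import math

# Direct summation of the reciprocal-lattice series
#   sum_{(p,l) != (0,0)} exp(-2*pi*sqrt(p^2+l^2)*u) / sqrt(p^2+l^2),
# with u = |z - z'|/d: it is O(1) for u << 1 but decays as exp(-2*pi*u)
# once |z - z'| exceeds the lattice constant d.
def lattice_sum(u, pmax=40):
    total = 0.0
    for p in range(-pmax, pmax + 1):
        for l in range(-pmax, pmax + 1):
            if p == 0 and l == 0:
                continue
            rho = math.hypot(p, l)
            total += math.exp(-2.0 * math.pi * rho * u) / rho
    return total

s_close = lattice_sum(0.1)   # |z - z'| = d/10: order-one contribution
s_one = lattice_sum(1.0)     # |z - z'| = d:    already strongly suppressed
s_two = lattice_sum(2.0)     # each further d costs another factor exp(-2*pi)
```

Each additional lattice constant of separation suppresses the sum by roughly $e^{-2\pi}\approx2\times10^{-3}$, which is the quantitative content of the screening argument above.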
For $\vert z-z^{\prime}\vert \gtrsim d$ the sum over $p$ and $l$ in Eq.~(\ref{5}) is dominated by the terms $p=0$,~$l=\pm 1$ and $l=0$,~ $p=\pm 1$, and is an {\em exponentially} decaying function of $\vert z-z^{\prime}\vert$. This suggests that ${\cal S}_0(z,z^{\prime})$ should be substituted into Eq.~(\ref{4}), in which the integration over $z^{\prime}$ should be restricted to the interval $\vert z-z^{\prime}\vert \lesssim d$. Within this interval, $\rho(z^{\prime})$ in the integrand of Eq.~(\ref{4}) can be replaced by $\rho(z)$. The remaining integral yields $2{\cal L}_d=2\ln(d/r)$, similarly to Eq.~(\ref{approximate}) with $h$ replaced by $d$. The product $2{\cal L}_d\rho(z)$ is precisely the self-action contribution to the first term in the rhs of Eq.~(\ref{EQN11}). The restriction of the integration interval in Eq.~(\ref{4}) to $\vert z-z^{\prime}\vert \lesssim d$ for the part of ${\cal S}(z,z^{\prime})$ coming from the second term in Eq.~(\ref{5}) is, in fact, a delicate step. Although this part decays as $\exp\Bigl\{-2\pi\vert z-z^{\prime}\vert/d\Bigr\}$ outside this interval, the behavior of $\rho(z^{\prime})$ outside this interval is also exponential, namely, it increases as $\exp(z^{\prime}/a)$. Therefore, the restriction of the integration interval in (\ref{4}) is allowed only when the exponent in $\rho(z^{\prime})$ is slower, i.e. when $a \gtrsim d$. However, we know from Eq.~(\ref{depth}) that this is indeed the case. \section{Fluctuations of induced charge density in a random array} The conclusion drawn in Sect.~IV that the charge distribution Eq.~(\ref{final1}) applies not only to a regular but also to a random NT array was based on the relation $a>d$ between the penetration depth and the lattice constant. Thus, this conclusion pertains only to the ``body'' $(h-z) \lesssim a$ of the distribution. Since $\rho(z)$ falls off exponentially away from the NT tip, it might be expected that in the tail $(h-z) \gg a$ the randomness in NT positions would terminate the applicability of Eq.~(\ref{final1}).
To verify this, one can incorporate the positional disorder into Eq.~(\ref{EQN1}) {\em perturbatively}, i.e., find the correction to the average $\rho(z)$ linear in the fluctuation of the NT density. Then the region of applicability of Eq.~(\ref{final1}) to the random array can be established from the condition that the {\em typical} disorder-induced correction is smaller than the average. This program is carried out below. In a random array, fluctuations in the areal concentration of NTs, $\delta {\cal N}({\bf R})= {\cal N}({\bf R})-\langle {\cal N}({\bf R})\rangle$, lead to fluctuations in the distribution of the induced charge density $\delta \rho(z,{\bf R})$. We linearize Eq.~(\ref{EQN1}) and, to first order in the fluctuations, obtain \begin{equation} \label{h} \hat{\huge {\cal H}}\!~ \bigl\{\delta\rho(z,{\bf R})\bigr\} ={\cal F}(z,{\bf R}), \end{equation} where the integral operator, $\hat{\huge {\cal H}}$, is defined as \begin{eqnarray} \label{definition} \hat{\huge {\cal H}}\!~ \bigl\{f(z,{\bf R})\bigr\}=4\pi{\cal N}_0\;a^2\Bigl[f(z,{\bf R})\qquad \qquad \nonumber \\ +\frac{1}{4\pi a^2}\int \!d{\bf R^{\prime}}\int _0^h\!dz^{\prime}{\cal S}_{ext}(z,z^{\prime};{\bf R}-{\bf R^{\prime}})f(z^{\prime},{\bf R^{\prime}})\Bigr].\qquad \end{eqnarray} The rhs in Eq.~(\ref{h}) is the potential created by the fluctuation, $\delta{\cal N}({\bf R})$, of the density of NTs with unperturbed charge distribution, Eq.~(\ref{final1}). As follows from Eq.~(\ref{EQN1}), this potential is given by \begin{eqnarray} \label{calF} {\cal F}(z,{\bf R})=-\int \!d{\bf R^{\prime}}~\delta {\cal N}({\bf R^{\prime}})\int _0^h\!dz^{\prime}{\cal S}(z,z^{\prime};{\bf R}-{\bf R^{\prime}})\rho(z^{\prime}).
\nonumber\\ \end{eqnarray} The fact that the kernel of the integral operator, $\hat{\huge {\cal H}}$, depends on the difference $\bigl({\bf R}-{\bf R}^{\prime}\bigr)$ suggests a transformation from $\delta{\cal N}({\bf R})$ and $\delta \rho({\bf R},z)$ to the Fourier harmonics $\delta\tilde{\cal N}({\bf q})$ and $\delta \rho(z,{\bf q})$, where ${\bf q}$ is the in-plane wave vector. Upon the Fourier transform, Eq.~(\ref{h}) assumes the form \begin{eqnarray} \label{q-linear} 4\pi{\cal N}_0a^2\; \delta \rho(z,{\bf q})+\frac{2\pi {\cal N}_0}{q}\int _0^h\!dz^{\prime}\delta \rho(z^{\prime},{\bf q}) \qquad\quad\nonumber \\ \times\Bigl\{\exp \bigl[-\vert z-z^{\prime}\vert q\; \bigr]-\exp\bigl[ -(z+z^{\prime})q\;\bigr]\Bigr\}={\cal F}(z,{\bf q}),\qquad \ \end{eqnarray} where the rhs is proportional to $\delta \tilde{\cal N}({\bf q})$ \begin{eqnarray} \label{calF1} {\cal F}(z,{\bf q}) = \frac{4 \pi a\rho _0}{q(a^2q^2-1)} \Biggl\{\sinh(q z)\;\exp\{-q h\} \qquad \quad\\ \times \Bigl[\cosh({h}/{a}) + q a \sinh({h}/{a})\Bigr] -a q \sinh(z/a)\Biggr\}\;\delta \tilde {\cal N}({\bf q}).\nonumber \end{eqnarray} The structure of the kernel in Eq.~(\ref{q-linear}) is similar to that of the unperturbed equation (\ref{EQN11}). It appears that, due to this similarity, Eq.~(\ref{q-linear}) can be solved {\em analytically} in the same way as Eq.~(\ref{EQN11}). Namely, upon taking the second derivative of both sides, Eq.~(\ref{q-linear}) reduces to the following second-order differential equation with $z$-independent coefficients \begin{eqnarray} \label{ODE} \delta \rho ^{\prime \prime}(z,{\bf q})- \gamma_q^2\delta \rho (z,{\bf q})= \frac{1}{4\pi{\cal N}_0 a^2}\Bigl[{\cal F}^{\prime \prime}(z,{\bf q})-q^2{\cal F}(z,{\bf q})\Bigr],\nonumber\\ \end{eqnarray} where $\gamma_q$ is defined as \begin{equation} \label{gama} \gamma_q^2= q^2+\frac{1}{a^2}.
\end{equation} Note that the rhs of Eq.~(\ref{ODE}) can be cast in the following simple form \begin{eqnarray} \label{FF} {\cal F}^{\prime \prime}(z,{\bf q})-q^2{\cal F}(z,{\bf q}) =4\pi\delta {\tilde {\cal N}}({\bf q})\rho_0 \sinh(z/a). \end{eqnarray} It can now be seen from Eq.~(\ref{FF}) that $\lambda_q\delta\tilde{\cal N}({\bf q})\sinh(z/a)$ is a particular solution of the differential equation (\ref{ODE}). However, to find the solution of the original integral equation (\ref{q-linear}), one has to complement the particular solution with the solution of the homogeneous equation, i.e. to write \begin{eqnarray} \label{sol} \delta \rho (z,{\bf q})&=& \Bigl[\chi _q \sinh (\gamma_q z) + \lambda _q \sinh (z/a)\Bigr] \delta\tilde{\cal N}({\bf q})\nonumber\\&=&P(z,{\bf q}) \delta\tilde{\cal N}({\bf q}), \end{eqnarray} and find the constants $\chi_q$ and $\lambda_q$ by substituting Eq.~(\ref{sol}) into Eq.~(\ref{q-linear}). This yields \begin{eqnarray} \label{set} \chi_q =\frac{2\rho_0}{{\cal N}_0a^3q^2}\;\Biggl[\frac{\cosh({h}/{a}) +q a\;\sinh({h}/{a})}{\gamma_q\cosh(\gamma_q h)+q \sinh(\gamma_q h)}\Biggr], \end{eqnarray} \begin{equation} \label{lambda} \lambda _q=-\frac{\rho_0}{{\cal N}_0a^2 q^2}. \end{equation} Note that $\lambda_q$ diverges at small $q$. However, the full solution Eq.~(\ref{sol}) remains finite in the limit $q \rightarrow 0$. It also satisfies the obvious condition $\delta \rho (0,{\bf q}) =0$. Equations (\ref{sol})-(\ref{lambda}) allow one to quantify the effect of disorder in the NT positions on the distribution of induced charge. The most interesting case is $h \gg a$, when this distribution is determined by collective screening involving many NTs. In this limit Eq.~(\ref{sol}) can be simplified by replacing $\sinh(h/a)$ and $\cosh(h/a)$ by $\exp(h/a)$ and introducing $z_1=(h-z)\ll h$.
Then $h$ drops out of the $z_1$-dependent part of $P(z_1,{\bf q})$ in Eq.~(\ref{sol}), and we obtain \begin{eqnarray} \label{avrg-rho1} P(z_1,{\bf q})=\frac{\rho_0 e^{h/a}}{{\cal N}_0a^2}\qquad\qquad\qquad \qquad\qquad\qquad\\ \times\Bigg[ \frac{\exp (-\gamma_q z_1)-\exp (-z_1/a)}{q^2} -\frac{\exp (-\gamma_q z_1)}{(q+\gamma_q)(\gamma_q+1/a)}\nonumber \Bigg]. \end{eqnarray} The form (\ref{avrg-rho1}) is very convenient for studying the effect of disorder in the ``tail'', i.e. at large $z_1$. Indeed, assuming Gaussian fluctuations in $\delta {\cal N}({\bf R})$, so that \begin{eqnarray} \label{noise} \langle\delta \tilde {\cal N}({\bf q_1})\delta \tilde {\cal N}({\bf q_2})\rangle=2 \pi {\cal N}_0\delta({\bf q_1}-{\bf q_2}), \end{eqnarray} the variance of random fluctuations in the induced charge density can be expressed as follows \begin{eqnarray} \label{average} \langle \delta\rho(z_1)^2 \rangle &=& \frac{1}{A}\int d{\bf R}\; \langle\delta\rho(z_1,{\bf R})^2\rangle \nonumber\\ &=& \frac{{\cal N}_0}{2 \pi} \int d{\bf q}\;P(z_1,{\bf q})^2. \end{eqnarray} Here $A$ is the normalization area. It is now seen that the $q$-dependence of $P(z_1,q)$ is dominated by the first term in Eq.~(\ref{avrg-rho1}). The reason for this is the following. As was explained at the beginning of this Section, the applicability of Eq.~(\ref{final1}), obtained for the regular array, is expected to terminate in the random array at ``depths'' $z_1 \geq a$. At these depths the average field is strongly suppressed. On the other hand, for $z_1 > a$ one can use the expansion $\gamma_q=\sqrt{q^2+1/a^2}\simeq \frac{1}{a}+\frac{a q^2}{2}$. This, in turn, suggests that characteristic values of the wave vector, $q$, are $q \lesssim 1/(az_1)^{1/2}\le 1/a$. Then the typical ratio of the second and the first terms in (\ref{avrg-rho1}) is $q^2a^2 \ll 1$. It also follows from the expansion of $\gamma_q$ that the main exponents in $\delta\rho(z_1)$ and in the average $\rho(z_1)$ are the same.
Upon neglecting the second term in Eq.~(\ref{avrg-rho1}), the $q$-dependence of $P(z_1,q)$ acquires the form $P(z_1,q)~\propto \exp(-aq^2z_1/2)/q^2$. Then the $q$-integration in Eq.~(\ref{average}) can be easily performed. We will present the final result as the ratio of the variance, $\langle[\delta\rho(h-z_1)]^2\rangle$, and the square of the average charge density \begin{eqnarray} \label{avrg-rho2} \frac{ \langle[\delta \rho(h-z_1)]^2\rangle}{[\rho(h-z_1)]^2}=\frac{\ln 2}{2}\; \Biggl(\frac{{z_1 }}{{\cal N}_0\;a^3}\Biggr). \end{eqnarray} The above result offers a quantitative answer to the question about fluctuations of the induced charge density due to the randomness in the NT positions. In particular, it can be concluded from Eq.~(\ref{avrg-rho2}) that the disorder-induced fluctuations in the charge density are negligible if $z_1 \lesssim {\cal N}_0a^3$. Since this value is much bigger than $a$, Eq.~(\ref{avrg-rho2}) confirms our earlier claim that Eq.~(\ref{final1}) applies not only to the regular, but also to the random array. However, this applicability is limited to distances $z_1 \lesssim {\cal N}_0a^3$. For larger $z_1$ the variance exceeds the average, suggesting that the charge density strongly fluctuates within the plane $z_1=const$. Note, however, that these fluctuations are smooth, with characteristic scale $(z_1a)^{1/2}$, which is much smaller than $z_1$, but much bigger than the penetration depth, $a$. As a final remark of this Section, we point out that the {\em lower} the density of the random array, the {\em greater} the depth, $z_1$, down to which Eq.~(\ref{final1}) applies, as follows from Eq.~(\ref{avrg-rho2}). However, the {\em magnitude} of the decay of the charge density, $\rho(h-z_1)/\rho(h)$, is governed by the ratio $z_1/a$. For $z_1={\cal N}_0a^3$, this ratio depends on the density of the array only weakly (logarithmically).
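The coefficient $\ln 2$ in Eq.~(\ref{avrg-rho2}) can be traced to the dimensionless integral $\int_0^{\infty}du\,(1-e^{-u})^2/u^2=2\ln 2$, which appears once the substitution $u=aq^2z_1/2$ is made in Eq.~(\ref{average}) (this intermediate step is our reconstruction, not spelled out in the text). A quick quadrature, with the analytic $1/U$ tail added beyond a numerical cutoff $U$, confirms the value:

```python
import math

def integrand(u):
    """(1 - e^{-u})^2 / u^2; the u -> 0 limit is 1."""
    if u < 1e-8:
        return 1.0
    return (1.0 - math.exp(-u)) ** 2 / u**2

def simpson(g, lo, hi, n):
    """Composite Simpson rule (n must be even)."""
    step = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * step)
    return s * step / 3.0

U = 50.0
# beyond u = U the integrand equals 1/u^2 up to e^{-U} corrections,
# so the missing tail contributes 1/U
value = simpson(integrand, 0.0, U, 100000) + 1.0 / U
```

The quadrature reproduces $2\ln 2\approx 1.3863$ to high accuracy.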
\section{Implications for field emission} \subsection{Single NT} It is commonly accepted that the field emission current, $J$, from the NT tip is described by the Fowler-Nordheim law\cite{fowler} \begin{eqnarray} \label{fowler} \mbox{\Large$|$}\ln(J(F)/J_0)\mbox{\Large$|$}=\frac{4\sqrt{2mW^3}}{3e\hbar \;\beta F}, \end{eqnarray} where $J_0$ is the prefactor, $m$ is the electron mass, and $W$ is the work function, which, in principle, depends on the tip geometry\cite{tip1,tip2,tip3}. The parameter $\beta$ is the field enhancement factor. Various applications of the field emission from NTs are based on the fact that $\beta$ is large as a result of the NT geometry, more specifically, due to the large ratio $h/r$. The expression for the enhancement factor routinely used in fitting the experimental $I$-$V$ curves\cite{latest} is $\beta =Ch/r$, where $C\sim 1$ depends on the specific geometry of the tip. Within the Thomas-Fermi description of the induced charge distribution, outlined in Sect. II, the expression for the field at a distance, $z_1$, from the NT tip is given by the derivative of the potential, $\phi(z_1)$, created by the induced charges \begin{eqnarray} \label{enhancement} F_{ind}(z_1)= \frac {d\phi(z_1)}{dz_1}=\frac{d}{dz_1}\int_0^h dz\; \rho(z) S_0(z,z_1), \end{eqnarray} where $\rho(z)$ is given by Eq.~(\ref{single}). Then the evaluation of the integral (\ref{enhancement}) yields \begin{eqnarray} \label{enhancement1} \frac{F_{ind}(z_1)}{F}=\Biggl(\frac{h}{2{\cal L}_h r}\Biggr)\min \bigl\{1, r/z_1\bigr\}, \end{eqnarray} where we have assumed $F_{ind}\gg F$. It is seen from Eq.~(\ref{enhancement1}) that the enhancement factor indeed has the conventional form, $\beta =Ch/r$, with $C\approx (2{\cal L}_h)^{-1}$ for $z_1 \lesssim r$, but it falls off with increasing $z_1$. This suggests that for low enough applied fields, when the electron tunnelling length $\sim W/F_{ind}$ exceeds $r$, the $I$-$V$ characteristics deviates from the Fowler-Nordheim law.
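To put numbers to Eq.~(\ref{enhancement1}): for a hypothetical aspect ratio $h/r=10^4$ (the value used for Fig.~2) and ${\cal L}_h=\ln(h/r)$, the on-tip enhancement factor evaluates to a few hundred:

```python
import math

# Hypothetical aspect ratio (the value used for Fig. 2); L_h = ln(h/r)
h_over_r = 1.0e4
L_h = math.log(h_over_r)

# Eq. (enhancement1) at z1 <~ r: F_ind/F = h/(2 L_h r), i.e. beta = C*h/r
beta = h_over_r / (2.0 * L_h)
C = 1.0 / (2.0 * L_h)
```

Thus $C\approx 0.05$ and $\beta\approx 5\times 10^2$: the large enhancement comes almost entirely from the bare ratio $h/r$, only weakly reduced by the logarithm.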
In order to estimate this deviation, we substitute \begin{eqnarray} \label{phi} \phi(z_1)=\frac{Fh}{2{\cal L}_h}\Biggl\{\frac{z_1}{r}\Theta(r-z_1)+\Bigl[1-\ln(r/z_1)\Bigr] \Theta(z_1-r)\Biggr\}\nonumber \\ \end{eqnarray} into the tunnelling action \begin{eqnarray} \label{current} \mbox{\Large$|$} \ln(J(F)/J_0) \mbox{\Large$|$} =\frac{2 \sqrt{2m}}{\hbar} \int_0^{z_t}\!dz_1\sqrt{W-e\phi (z_1)},\nonumber\\ \end{eqnarray} where $z_t$ is the turning point at which the expression under the square root is zero. In Eq.~(\ref{current}) we have neglected the bare potential $eFz_1$. It is now convenient to measure the electric field in terms of $F_0$, defined as $F_0=W/e\beta r= 2W{\cal L}_h/eh$. The integral in Eq.~(\ref{current}) can be reduced to the error function, $\mbox{erf}\;(x)$, after which Eq.~(\ref{current}) acquires the form \begin{eqnarray} \label{emission} \mbox{\Large$|$}\ln(J(F)/J_0)\mbox{\Large$|$}=\frac{4\sqrt{2mW^3}}{3e\hbar \;\beta F_0}G(F_0/F), \end{eqnarray} where the dimensionless function $G(\tau)$ is defined as \begin{eqnarray} \label{G_function} G(\tau)=\tau,\qquad \tau <1; \qquad\qquad\qquad\qquad\nonumber\\ G(\tau)=\tau-\Bigl(\tau+\frac{1}{2}\Bigr)\left(1-\frac{1}{\tau}\right)^{1/2}+\qquad\qquad \nonumber\\ \frac{3}{4}\sqrt{\frac{\pi}{\tau}}\exp(\tau-1)\;\mbox{erf}\;(\sqrt{\tau-1}),\quad \tau>1. \end{eqnarray} The plot of the function $G(\tau)$ is shown in Fig. 2. Strictly speaking, the Fowler-Nordheim region corresponds to $\tau < 1$, where the slope of $G(\tau)$ is identically unity. However, $G(\tau)$ can also be linearized around any $\tau >1$, where the slope is larger. For example, at $\tau =2$ the slope is $\approx 2$. This can be interpreted as a twofold reduction of the enhancement factor, $\beta$, in Eq.~(\ref{emission}). A significant reduction of the enhancement factor ({\em e.g.}, 30 times) occurs around $\tau\approx 5$.
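The slope statements above can be checked directly from Eq.~(\ref{G_function}); the short sketch below evaluates $G(\tau)$ and its numerical derivative (no parameters beyond $\tau$ itself enter):

```python
import math

def G(tau):
    """Dimensionless action G(tau) of Eq. (G_function)."""
    if tau <= 1.0:
        return tau
    root = math.sqrt(1.0 - 1.0 / tau)
    return (tau - (tau + 0.5) * root
            + 0.75 * math.sqrt(math.pi / tau)
            * math.exp(tau - 1.0) * math.erf(math.sqrt(tau - 1.0)))

def slope(tau, d=1e-5):
    """Two-sided numerical derivative dG/dtau."""
    return (G(tau + d) - G(tau - d)) / (2.0 * d)

gap = abs(G(1.0 + 1e-9) - G(1.0 - 1e-9))  # continuity at tau = 1
s2 = slope(2.0)                            # slope quoted as ~2 in the text
s5 = slope(5.0)                            # slope quoted as ~30 in the text
```

$G$ is continuous at $\tau=1$ with unit slope below it, and the local slope grows to $\approx 2$ at $\tau=2$ and $\approx 29$ near $\tau=5$, in line with the twofold and thirtyfold reductions of the effective enhancement factor quoted above.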
It should be noted that, since Eqs.~(\ref{emission}) and (\ref{G_function}) were derived neglecting the bare potential, their applicability is limited by $\tau<\tau_{max}$, where $\tau_{max}$ corresponds to the applied field $F=F_0/\tau_{max}$, for which the turning point, $z_t$, in the tunnelling action Eq.~(\ref{current}) reaches the value $W/F$. The latter condition can be rewritten in the form $(\tau_{max}-1)=\ln(h\tau_{max}/r)$, yielding $\tau_{max}\approx \ln(h/r)={\cal L}_h$. For $\tau >\tau_{max}$, i.e. for applied fields $F< F_0/\tau_{max}$, the $I$-$V$ characteristics is given by the Fowler-Nordheim law (\ref{fowler}) with $\beta=1$. Overall, Fig. 2 indicates that, for low enough applied fields, there are significant deviations from the Fowler-Nordheim law in the $I$-$V$ characteristics of an individual NT. For such fields the linearity of the Fowler-Nordheim plots shows up only within a very narrow interval of $F$. On the experimental side, there are reports, {\em e.g.}, Ref. \onlinecite{longFowler}, where the applicability of the Fowler-Nordheim law was demonstrated within a rather wide (exceeding $3$ times) interval of change of $F$. In other reported measurements, see, {\em e.g.}, Ref. \onlinecite{few0}, the linearity of $\ln J$ vs. $1/F$ holds only within a limited (less than $2$ times) interval of applied fields. Whether or not Eq.~(\ref{emission}), derived for a single NT, is suitable for fitting experimental results depends crucially on the {\em geometry} of the array, as we discuss below. \subsection{Array of NTs} As mentioned in the Introduction, increasing the density of NTs in the array leads to a dramatic suppression of the enhancement factor. To illustrate this point, consider first an array of low density, when the tunnelling length is much smaller than the distance between the neighboring NTs.
Then the field created by the induced charges near the tip of a given NT can be calculated from Eq.~(\ref{enhancement}), with $\rho(z)$ given by Eqs.~(\ref{final1}) and (\ref{final2}). This calculation yields the generalization of the field enhancement factor Eq.~(\ref{enhancement1}) to the case of an array of low density \begin{eqnarray} \label{enchancement2} \frac{F_{ind}(z_1)}{F}=\Biggl[\frac{a}{2\;r{\cal L}_d }\tanh(h/a)\Biggr]\min \bigl\{1, r/z_1\bigr\}. \end{eqnarray} The above expression recovers the enhancement factor for a single NT in the limit $a\rightarrow \infty$, or equivalently, ${\cal N}_0\rightarrow 0$. For $a< h$, we conclude that the enhancement factor for the array, compared to that of a single NT, is suppressed as \begin{eqnarray} \label{suppressed} \frac{\beta({\cal N}_0)}{\beta(0)}=\frac{a{\cal L}_h}{h{\cal L}_d}= \Biggl[\frac{{\cal L}_h^2} {2\pi{\cal L}_d{\cal N}_0h^2}\Biggr]^{1/2} \ll 1. \end{eqnarray} We now turn to the high-density array. In such an array, the tunnelling length of an emitted electron can exceed the distance, ${\cal N}_0^{-1/2}$, between the neighboring NTs. Then the form of the tunnelling barrier is no longer given by the potential created by a single NT with a charge distribution modified by the neighboring NTs. Instead, one has to use the general expression \begin{eqnarray} \label{pot} \phi_{{\tiny A}}(z_1)=\int_0^h\!dz\;\rho_0 \sinh[(h-z)/a]{\cal S}(z,z_1), \end{eqnarray} where ${\cal S}$ is the kernel defined by Eq.~(\ref{5}). The first term in the kernel (\ref{5}) corresponds to the ``continuous'' limit, and yields a contribution $Fz_1$ to $\phi_{{\tiny A}}(z_1)$. The second term in Eq.~(\ref{5}) exhibits different behavior for large $\left(\vert z-z_1\vert \gg {\cal N}_0^{-1/2}\right)$ and small $\left(\vert z-z_1\vert \ll {\cal N}_0^{-1/2}\right)$ distances.
The expression for $\phi_{{\tiny A}}(z_1)$ that captures both the long- and short-distance behavior has the form \begin{eqnarray} \label{pot_AR} \phi_{ {\tiny A}}(z_1)=Fz_1\qquad\qquad\qquad \qquad \qquad \qquad\qquad\qquad\qquad \nonumber\\ +\frac{Fd}{2\sqrt{2{\cal L}_d}}\Biggl\{ \frac{a\;\Theta (z_1-d)}{2\pi a+d}\Bigl[\exp(-2\pi z_1/d)-1\Bigr]\quad \nonumber\\ + \Theta(d-z_1)\ln\left(\frac{d }{z_1}\right) \exp(-z_1/a) \Biggr\}.\quad\quad \end{eqnarray} The long-distance behavior is described by the first term in the square brackets. It represents the correction to the ``continuous'' first term, $Fz_1$, in (\ref{pot_AR}) due to the discreteness of the array. Clearly, the field enhancement due to this term is negligible. It is the second term in the square brackets that is responsible for the field enhancement. The physics captured by this term is that at distances $z_1 < {\cal N}_0^{-1/2}$ the tunnelling electron ``sees'' not only the NT from which it was emitted, but also the neighboring NTs. This term, however, does not contain the NT radius, $r$, which was set to zero in the derivation of Eq.~(\ref{pot_AR}). The dependence on $r$ can be reinstated in the same way as for a single NT in Eq.~(\ref{phi}), namely \begin{eqnarray} \label{ret-phi} \phi_{{\tiny A}}(z_1)\approx \frac{Fd}{2\sqrt{2{\cal L}_d}}\Biggl\{ \Theta(r-z_1)\frac{z_1}{r}+\qquad\qquad\qquad\nonumber\\ \Theta(z_1-r)\Theta(d-z_1) \left[1-\ln\left(\frac{r}{z_1}\right)\right] \exp\bigl(-[z_1-r]/a\bigr) \Biggr\},\quad \end{eqnarray} where we have retained only the part responsible for the field enhancement. The subsequent calculation of the $I$-$V$ characteristics using Eq.~(\ref{ret-phi}) is completely identical to the case of an isolated NT.
The result can be presented in a form similar to Eqs.~(\ref{emission}), (\ref{G_function}) \begin{eqnarray} \label{modified} \mbox{\Large$|$}\ln(J(F)/J_0)\mbox{\Large$|$}=\frac{4\sqrt{2mW^3}}{3e\hbar \;\beta({\cal N}_0) F_{{\tiny A}}}G_{{\tiny A}}(F_{{\tiny A}}/F), \end{eqnarray} \begin{eqnarray} \label{GA} G_{{\tiny A}}(\tau)=\tau \Biggl[1-\Bigl(1-{1 \over \tau}\Bigr)^{3/2}\Biggr]+\qquad\qquad\nonumber\\ {1\over \sqrt \tau}\int_1^{u_\tau}\!\!du\Biggl[(1+\ln u_\tau) \exp\Bigl\{-{r\over a}(u_\tau-1)\Bigr\}- \nonumber \\ (1 + \ln u) \exp\Bigl\{-{r\over a}(u-1)\Bigr\}\Biggr], \end{eqnarray} where $u_\tau$, which is related to the turning point, $z_\tau$, as $u_\tau = z_\tau/r$, satisfies the following equation \begin{eqnarray} \label{u_tau} \tau=(1+\ln u_\tau)\exp\Bigl\{-{r\over a}(u_\tau-1)\Bigr\}. \end{eqnarray} The boundary value of the external field, $F_A$, corresponds to the turning point $z_\tau=(F_A/F)r=\tau _A r\geq r$. The field $F_A$ is related to the single-NT boundary field, $F_0$, by \begin{eqnarray} \label{F_A} F_A=\frac{{\cal L}_d h}{{\cal L}_h a}F_0. \end{eqnarray} Therefore, the variable, $\tau_A$, defined above, is related to the corresponding single-NT variable, $\tau$, as \begin{eqnarray} \label{tau_A} \tau _A= \Biggl[\frac {2\pi{\cal L}_d{\cal N}_0h^2}{{\cal L}_h^2}\Biggr]^{1/2}\tau . \end{eqnarray} The function $G_{{\tiny A}}(\tau_{{\tiny A}})$ is plotted in Fig. 2 together with the function $G(\tau)$. We see that, while the parameter ${\cal N}_0h^2$ changes within a wide interval, the function $G_{{\tiny A}}(\tau_A)$ remains close to $G(\tau)$. This means that the {\em form} of the $I$-$V$ characteristics for the array is similar to that for a single NT. The difference essentially amounts to a rescaling of the characteristic field, $F_0$, by the factor ${\cal L}_dh/{\cal L}_ha$. In other words, increasing the density of the array results in a suppression of the field emission without changing the shape of the $I$-$V$ characteristics.
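The turning-point equation (\ref{u_tau}) is easily solved numerically. The sketch below uses an illustrative ratio $r/a=10^{-3}$ and $\tau=2$ (assumed values, not taken from the text); for $r/a\rightarrow 0$ the root approaches the single-NT result $u_\tau=e^{\tau-1}$:

```python
import math

r_over_a = 1.0e-3   # illustrative value, not from the text
tau = 2.0

def g(u):
    """Residual of Eq. (u_tau): (1 + ln u) * exp(-(r/a)(u - 1)) - tau."""
    return (1.0 + math.log(u)) * math.exp(-r_over_a * (u - 1.0)) - tau

# bisection on [1, 10]: g(1) = 1 - tau < 0 and g(10) > 0 for these values,
# and g is monotonic on this interval since r/a << 1
lo, hi = 1.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
u_tau = 0.5 * (lo + hi)
```

For small $r/a$ the turning point sits close to $u_\tau=e^{\tau-1}$; the exponential factor only matters once $u_\tau$ becomes comparable to $a/r$.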
It should be pointed out that this conclusion pertains to an array of randomly positioned, but completely {\em identical} NTs. As we will see below, the situation changes dramatically when the heights of the NTs are random. \begin{figure}[ht] \centerline{\includegraphics[width=90mm,angle=0,clip]{G-final.eps}} \caption{Thin solid line is the Fowler-Nordheim law $\vert\ln (J/J_0)\vert \propto \tau$, where $\tau=F_0/F$. Thick solid line is the dimensionless current-voltage characteristics $\vert\ln (J/J_0)\vert $ vs $\tau=F_0/F$ of an individual NT plotted from Eq.~(\ref{G_function}). Dashed curves (1), (2), and (3) are the current-voltage characteristics, $\vert\ln J\vert$ vs $\tau=F_{{\tiny A}}/F$ plotted from Eq.~(\ref{GA}) for $(h/r)=10^4$ and dimensionless array densities ${\cal N}_0h^2= 100, 10, 1 $, respectively.} \label{fig:2} \end{figure} \section{Discussion and Concluding remarks} The main result of the present paper is Eq.~(\ref{final1}), which describes the crossover of the induced charge density distribution from a single NT to the dense regular array of NTs. We have also demonstrated that Eq.~(\ref{final1}) applies to the random array. Disorder in the NT positions terminates the applicability of Eq.~(\ref{final1}) only at large distances from the tips, where the charge density has already dropped significantly. Concerning the field emission, our calculations quantify a strong suppression of the emission current in a dense array. The field enhancement factor falls off with the NT density, ${\cal N}_0$, as ${\cal N}_0^{-1/2}$ [see Eq.~(\ref{suppressed})]. This conclusion might seem to contradict the majority of experiments, where high enhancement factors for dense arrays were reported. More precisely, in the majority of experiments the dependence of the emission current on the NT density is simply not addressed, and the $I$-$V$ characteristics are interpreted based on the properties (such as the work function) of individual NTs.
In fact, in those few papers where this issue is addressed, the suppression of field emission with increasing NT density is pointed out at a {\em qualitative} level. The resolution of this contradiction, in our opinion, lies in the fact that in realistic situations the heights of the NTs in the array are {\em widely dispersed}. To gain insight into how this dispersion in heights affects the field emission, consider a regular array in which one NT is taller than the others by $h_1$, which is much larger than the average NT separation, ${\cal N}_0^{-1/2}$, but much smaller than $h$. Within the interval $0<z<h$ the distribution of charge in this ``sticking out'' NT is ``enforced'' by the neighbors, and is given by Eqs.~(\ref{final1})--(\ref{depth}). However, within the interval $h<z<(h+h_1)$ this NT ``sees'' the rest of the array as an equipotential plane. From this observation we immediately conclude that, within the interval $h<z<(h+h_1)$, the charge density in the sticking out NT is given by Eq.~(\ref{single}) with $z$ replaced by $(z-h)$. This, in turn, suggests that the enhancement factor of the external field in the sticking out NT is high and is equal to $h_1/(2{\cal L}_{h_1}r)$, as follows from Eq.~(\ref{enhancement1}). The above reasoning suggests that the conjecture that the field emission current is dominated by sparse sticking out NTs makes it possible to account for the high values of the enhancement factor observed in experiment. We will now demonstrate that this conjecture also makes it possible to explain why the dependence $\ln(J(F)/J_0)$ follows the Fowler-Nordheim law (\ref{fowler}) within a wide interval of $F$, while Eqs.~(\ref{emission}) and (\ref{G_function}) predict strong deviations from Eq.~(\ref{fowler}) as $F$ is decreased. Obviously, the probability, $P(h_1)$, of finding within the array an NT that sticks out by $h_1$ {\em decreases} with $h_1$.
The contribution of NTs with a given $h_1$ to the emission current is determined by the product \begin{eqnarray} \label{product} J_{h_1}\propto \exp\Bigl\{\bigl[-4(2mWr^2)^{1/2}/3\hbar\bigr] \;G(F_0/F)\Bigr\}P(h_1),\quad \end{eqnarray} where the exponent of the first factor is the tunnelling action, which depends on $h_1$ through $F_0=2W{\cal L}_{h_1}/eh_1$; the function $G$ is defined by Eq.~(\ref{G_function}). Since the first factor {\em increases} rapidly with $h_1$ (the tunnelling action falls off), while $P(h_1)$ decreases, the product (\ref{product}) has a sharp maximum at a certain optimal $h_1$. Therefore, $\ln(J(F)/J_0)$ is determined by the logarithm of the rhs of Eq.~(\ref{product}) taken at the optimal $h_1$. The natural choice for $P(h_1)$ is the Poisson distribution, $\exp(-h_1/H)$. We can also use the fact that within the interval $2<\tau<10$ the function $G(\tau)$ can be approximated with high accuracy by the power law \begin{equation} \label{approximated} G(\tau)\approx 0.23\tau^{9/2}. \end{equation} Using this approximation, the optimal $h_1$ can be easily found analytically. It is convenient to cast the final result for $\ln(J(F)/J_0)$ in the following form \begin{eqnarray} \label{cast} \mbox{\Large$|$}\ln(J(F)/J_0)\mbox{\Large$|$}= \Biggl(\frac{4\sqrt{2mW^3}}{3e\hbar \;\beta_{H} F}\Biggr)^{9/11}, \end{eqnarray} where the ``effective'' enhancement factor is defined as \begin{eqnarray} \label{effective} \beta_H=4.8\frac{H}{r{\cal L}_H} \Biggl(\frac{2mr^2W}{\hbar^2}\Biggr)^{7/18}. \end{eqnarray} Firstly, we see from Eq.~(\ref{cast}) that the $I$-$V$ characteristics is very close to the Fowler-Nordheim law, since the exponent $9/11$ is close to $1$. This should be contrasted with the $I$-$V$ characteristics of a single NT, for which $\mbox{\Large$|$}\ln(J(F)/J_0)\mbox{\Large$|$} \propto F^{-9/2}$, as follows from Eq. (\ref{approximated}). Secondly, the effective enhancement factor (\ref{effective}) is large and depends rather weakly on the work function $W$.
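The origin of the exponent $9/11$ in Eq.~(\ref{cast}) can be illustrated with a minimal optimization sketch in arbitrary units (the logarithmic factors are dropped, so this is a reconstruction of the scaling argument rather than the full calculation). With the power law (\ref{approximated}), the tunnelling action behaves as $c\,h_1^{-9/2}$ with $c\propto F^{-9/2}$, the Poisson weight contributes $h_1/H$, and minimizing their sum gives $|\ln J|\propto c^{2/11}\propto F^{-9/11}$:

```python
# Minimal sketch in arbitrary units: tunnelling action ~ c * h1**(-9/2)
# (c grows as F**(-9/2) when the field F is lowered), Poisson weight ~ h1/H.
H = 1.0

def ln_J(c):
    """|ln J| = min over h1 of c*h1**(-9/2) + h1/H (analytic stationary point)."""
    h1_opt = (4.5 * c * H) ** (2.0 / 11.0)
    return c * h1_opt ** (-4.5) + h1_opt / H

# halving F multiplies c by 2**(9/2); |ln J| should then grow by 2**(9/11)
c0 = 3.0
ratio = ln_J(c0 * 2.0 ** 4.5) / ln_J(c0)
```

Because $|\ln J|$ scales as a sublinear power of $1/F$ with exponent $9/11\approx 0.82$, the resulting plots of $\ln J$ vs $1/F$ are nearly straight, mimicking the Fowler-Nordheim law.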
Summarizing, a dense array of NTs can exhibit the Fowler-Nordheim field emission provided there is a sufficient spread in the NT heights. In fact, this conclusion is in accord with reported experimental findings. In particular, direct imaging of the emission intensity by means of scanning \cite{few0,few2} and electron emission \cite{few1,few3} microscopy reveals that only a tiny portion of NTs ($10^{-4}$ or even smaller) contributes to the net current. \acknowledgements This work was supported by NSF under Grant No. DMR-0503172 and by the Petroleum Research Fund under Grant No. 43966-AC10.
\section{Linear-Response Time-Dependent Density Functional Theory and its Ion-Orbital Variant}\label{sec:OOTDDFTSI} The standard time-dependent density functional theory (TDDFT) orbital Hessians are used for the ``ion-orbital'' TDDFT approach, albeit from a nonstationary {\em n}-electron reference state that is constructed from the \textit{n}--1-electron molecular orbitals (MOs) of the core-ionized system. The usual TDDFT $\mathbf{A}$ and $\mathbf{B}$ matrices take the form, \begin{equation}\label{eq:OOTDDFT} \begin{split} A_{ia,ib} &= E^{(n)}\delta_{ab}+F_{ab}^{(n)}-\varepsilon_i^{(n)}\delta_{ab} + (ia|ib) - C_{\text{HF}}(ii|ab) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}|ib)\\ B_{ia,ib} &= (ia|ib) - C_{\text{HF}}(ib|ai) + (1-C_{\text{HF}})(ib|f_{\text{xc}}^{(n)}|ai) \end{split} \end{equation} where $f_{\text{xc}}^{(n)}$ is the exchange-correlation kernel, defined as, \begin{equation} f_{\text{xc}}^{(n)} = \frac{\partial V_{\text{xc}}[\rho^{(n)}]}{\partial\rho^{(n)}}\;, \end{equation} and where all quantities denoted with superscript $(n)$ are computed using the {\em n}-electron density. 
In the case of IO\protect\nobreakdash-TDDFT, these {\em n}-electron quantities are constructed from the {\em n}-electron density built from the unrelaxed \textit{n}--1-electron MOs of the core-ionized system: \begin{equation} P_{\mu\nu}^{(n)} = \sum\limits_{i}^{N} C_{\mu i}^{(n-1)} (C_{\nu i}^{(n-1)})^\ast \end{equation} \section{Derivation of the \textit{n}--1-electron Response Kernel}\label{sec:Partials} In order to correct for the particle-hole interaction error encountered in the intermediate {\em n}-electron state obtained after electron addition from the continuum MO, we take the response of the {\em n}-electron state to the applied field, which yields the Casida equations for the restricted case, \begin{equation} \begin{split} A^{(n)}_{ia,ib} &= E^{(n)}\delta_{ab}+F_{ab}^{(n)}-F_{ii}^{(n)}\delta_{ab} + 2(ia|ib) - C_{\text{HF}}(ii|ab) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}|ib)\\ B^{(n)}_{ia,ib} &= 2(ia|ib) - C_{\text{HF}}(ib|ai) + (1-C_{\text{HF}})(ib|f_{\text{xc}}^{(n)}|ai) \end{split} \end{equation} and subtract the response of the core orbital, with the associated Fock matrix elements, \begin{equation}\label{eq:NSFock_SI} F_{pq}^{\text{CO}} = F_{pq}^{(n)} - F_{pq}^{(n-1)} = (ii|pq)-C_{\text{HF}}(ip|iq)+(1-C_{\text{HF}})(p|V_{\text{xc}}^{(n)}-V_{\text{xc}}^{(n-1)}|q) \;, \end{equation} where $F_{pq}^{\text{CO}}$ is the core electron's contribution to the Fock matrix of the {\em n}-electron system. This form of the Fock matrix accounts for all couplings between the core-electron components and the remainder of the {\em n}-electron density. The associated density matrix is idempotent and contains one electron in the core orbital, naturally constraining the excitations {\em via} the idempotency condition such that they can emerge only from the core MO $i$.
The response for the corresponding density matrix takes the form, \begin{equation}\label{eq:NSResponseDerivs_SI} \begin{split} A^{\text{CO}}_{ia,ib} &= E^{\text{CO}}\delta_{ab} + F_{ab}^{\text{CO}}-F_{ii}^{\text{CO}}\delta_{ab} + \frac{\partial\mathbf{F}_{ia}^{\text{CO}}}{\partial\mathbf{P}_{ib}}\\ B^{\text{CO}}_{ia,ib} &= \frac{\partial\mathbf{F}_{ai}^{\text{CO}}}{\partial\mathbf{P}_{ib}} \end{split}\; , \end{equation} where $E^{\text{CO}} = \tilde{E}(n) - E_0(n-1)$ (the nonstationary {\em n}-electron energy minus the stationary \textit{n}--1-electron energy of the core ion) and the partial derivatives yield the final expression for the core-orbital response, \begin{equation}\label{eq:NSResponse_SI} \begin{split} A^{\text{CO}}_{ia,ib} &= E^{\text{CO}}\delta_{ab}+F_{ab}^{\text{CO}}-F_{ii}^{\text{CO}}\delta_{ab} + (ia|ib) - C_{\text{HF}}(ii|ab) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}-f_{\text{xc}}^{(n-1)}|ib)\\ B^{\text{CO}}_{ia,ib} &= (ia|ib) - C_{\text{HF}}(ib|ai) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}-f_{\text{xc}}^{(n-1)}|ib) \end{split}\; . \end{equation} Finally, subtracting the core-orbital part of the response from the full {\em n}-electron response leads to, \begin{equation}\label{eq:NSCorrectedResponse_SI} \begin{split} A^{(n)}_{ia,ib} - A^{\text{CO}}_{ia,ib} &= E_0(n-1)\delta_{ab} + F_{ab}^{(n-1)}-F_{ii}^{(n-1)}\delta_{ab} + (ia|ib) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n-1)}|ib)\\ B^{(n)}_{ia,ib} - B^{\text{CO}}_{ia,ib} &= (ia|ib) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n-1)}|ib) \end{split}\; . \end{equation} We note here that the energy $E^{\text{CO}}$ is equal to the energy of orbital $i$ only for the exact functional or Hartree-Fock theory, so the explicit form of this energy is never assumed.
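The cancellation leading to the corrected response above is purely algebraic and can be verified with scalar stand-ins for the matrix elements. The sketch below assigns random numbers to each symbol (so it checks the algebra, not any particular system) and confirms that $A^{(n)}-A^{\text{CO}}$ reduces to the \textit{n}--1-electron expression:

```python
import random

random.seed(0)
residuals = []
for delta_ab in (0.0, 1.0):  # off-diagonal and diagonal cases
    Et, E0 = random.random(), random.random()         # E~(n), E_0(n-1)
    Fab_n, Fab_m = random.random(), random.random()   # F_ab^(n), F_ab^(n-1)
    Fii_n, Fii_m = random.random(), random.random()   # F_ii^(n), F_ii^(n-1)
    iaib, iiab = random.random(), random.random()     # (ia|ib), (ii|ab)
    f_n, f_m = random.random(), random.random()       # xc-kernel integrals
    C = random.random()                               # C_HF

    # full n-electron response and core-orbital response,
    # with F^CO = F^(n) - F^(n-1) and E^CO = E~(n) - E_0(n-1)
    A_n = Et * delta_ab + Fab_n - Fii_n * delta_ab + 2 * iaib \
        - C * iiab + (1 - C) * f_n
    A_co = (Et - E0) * delta_ab + (Fab_n - Fab_m) \
        - (Fii_n - Fii_m) * delta_ab + iaib - C * iiab + (1 - C) * (f_n - f_m)

    # target: the corrected (n-1)-electron expression
    A_corr = E0 * delta_ab + Fab_m - Fii_m * delta_ab + iaib + (1 - C) * f_m
    residuals.append(abs((A_n - A_co) - A_corr))
```

Each residual vanishes to machine precision, for any values of the stand-in integrals and any fraction of exact exchange $C_{\text{HF}}$.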
\begin{comment} \section{Understanding the ``Nonstationary'' Response: An Ensemble Perspective} We subtract the nonstationary component of the second linear response in order to correct for long-range self-interaction ({\em i.e.}, particle-hole interaction) error, but it might not be immediately obvious what the relationship is between these objects and why the HF functional incurs zero error from this ``nonstationarity''. Beginning from the expression for the perturbed {\em n}-electron density and corresponding Fock matrices, \begin{equation}\label{eq:DoubleResponse_SI} \begin{split} \mathbf{P}(t') &= \mathbf{P}^{(n)}_0 + \delta\mathbf{P}_{\text{NS}}(t') + \delta\mathbf{P}_{\text{EF}}(t')\\ \mathbf{F}(t') &= \mathbf{F}^{(n)}_0 + \delta\mathbf{F}_{\text{NS}}(t') + \delta\mathbf{F}_{\text{EF}}(t') \end{split} \end{equation} and noting that the {\em n}-electron density has been constructed from the \textit{n}--1-electron MOs, we may write the density exactly in ensemble form, \begin{equation}\label{eq:EnsembleDensity} \mathbf{P}^{(n)}= \mathbf{P}_0^{(n-1)} + \mathbf{\tilde{P}} \; , \end{equation} where $\mathbf{\tilde{P}}$ is the density of the (previously unoccupied) core orbital. This decomposes $\mathbf{P}^{(n)}$ into its static and time-varying components, which implies that we may substitute Eq.~\ref{eq:EnsembleDensity} into Eq.~\ref{eq:DoubleResponse_SI}, \begin{equation} \begin{split} \mathbf{P}(t') &= \mathbf{P}^{(n-1)}_0 + \mathbf{\tilde{P}} + \delta\mathbf{P}_{\text{EF}}(t')\\ \mathbf{F}(t') &= \mathbf{F}^{(n-1)}_0 + \mathbf{\tilde{F}} + \delta\mathbf{F}_{\text{EF}}(t') \end{split} \end{equation} where $\mathbf{\tilde{F}} = \mathbf{F}^{(n)} - \mathbf{F}^{(n-1)}$ comes from the densities comprising the ensemble in Eq.~\ref{eq:EnsembleDensity}.
By subtracting the nonstationary part of the response given by $\mathbf{\tilde{P}}$, we find, \begin{equation} \begin{split} \mathbf{P}(t') &= \mathbf{P}^{(n-1)}_0 + \delta\mathbf{P}_{\text{EF}}(t')\\ \mathbf{F}(t') &= \mathbf{F}^{(n-1)}_0 + \delta\mathbf{F}_{\text{EF}}(t') \end{split} \end{equation} which is just the response of the core-ionized density matrix to an electric field. Substitution of these equations into the Liouville--von Neumann equation gives, \begin{equation} i\frac{\partial\delta\mathbf{P}_{\text{EF}}(t')}{\partial t'} = [\mathbf{F}_0^{(n-1)},\delta\mathbf{P}_{\text{EF}}(t')] + [\mathbf{P}_0^{(n-1)},\delta\mathbf{F}_{\text{EF}}(t')] \; , \end{equation} into which we may substitute the ensemble density relationship from Eq.~\ref{eq:EnsembleDensity}, yielding \begin{equation} i\frac{\partial\delta\mathbf{P}_{\text{EF}}(t')}{\partial t'} = [\mathbf{F}^{(n)}-\mathbf{\tilde{F}},\delta\mathbf{P}_{\text{EF}}(t')] + [\mathbf{P}^{(n)}-\mathbf{\tilde{P}},\delta\mathbf{F}_{\text{EF}}(t')] \; . \end{equation} After some rearrangement, the response of the \textit{n}--1-electron density can be written in terms of two uncoupled linear responses, \begin{equation} i\frac{\partial\delta\mathbf{P}_{\text{EF}}(t')}{\partial t'} = [\mathbf{F}^{(n)},\delta\mathbf{P}_{\text{EF}}(t')] + [\mathbf{P}^{(n)},\delta\mathbf{F}_{\text{EF}}(t')] - \big\{[\mathbf{\tilde{F}},\delta\mathbf{P}_{\text{EF}}(t')] + [\mathbf{\tilde{P}},\delta\mathbf{F}_{\text{EF}}(t')]\big\} \end{equation} The first response is just that of the (nonstationary) {\em n}-electron density, while the second is the ``nonstationarity'' correction discussed in the main text. The ensemble DFT perspective yields unique insights, as both of these responses satisfy the idempotency condition, allowing excitations from the core orbital into the virtual space. It also draws a direct correlation between the nonstationarity-corrected response and particle-hole interaction error in TDDFT.
Specifically, the nonstationarity-corrected response is really the response of the {\em n}-electron system without the electron in the core orbital $i$, thereby nullifying any particle-hole interaction errors in their entirety. This is why we obtain the seemingly paradoxical result that nonstationarity errors do not afflict TDHF. The paradox is resolved by the connection offered by ensemble densities, where the ``nonstationarity'' correction manifests more directly as a ``particle-hole interaction'' correction by means of subtracting the component of the ensemble that corresponds to having a particle in the core orbital. \end{comment} \section{Long-Range Self-Interaction Metric}\label{sec:SIEmetric} Within an ion-orbital {\em ansatz} such as IO\nbd-TDA, Eq.~\ref{eq:NSResponse_SI} is suggestive of a metric that can be used to quantify the degree of long-range self-interaction error ({\em i.e.}, the degree of inexact particle-hole interaction) in approximate density functionals. Considering only the change in the excitation energy offered by the core-orbital correction, the total particle-hole interaction error for {\em ion-orbital} TDA approximations is, \begin{equation}\label{eq:metric} A^{\text{CO}}_{ia,ib} = F_{ab}^{\text{CO}}-F_{ii}^{\text{CO}}\delta_{ab} + (ia|ib) - C_{\text{HF}}(ii|ab) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}-f_{\text{xc}}^{(n-1)}|ib) \; . \end{equation} In Hartree-Fock theory, Eq.~\ref{eq:NSFock_SI} implies that $F_{ii}^{\text{CO}}=0$ and that $F_{ab}^{\text{CO}}+(ia|ib)-(ii|ab)=0$, resulting in a long-range self-interaction error of exactly zero. It also implies that IO\nbd-TDA\ with the HF functional should give equivalent results to STEX if the nonorthogonality with the {\em n}-electron ground state is not projected out of the STEX Hamiltonian. This is indeed the case, as IO\nbd-TDA\ and EA-TDA produce exactly the same results if the HF functional is used.
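The statement that the metric vanishes identically for Hartree-Fock can be demonstrated with a toy evaluation: Eq.~\ref{eq:NSFock_SI} with $C_{\text{HF}}=1$ and vanishing kernel/potential differences gives $F_{ab}^{\text{CO}}=(ii|ab)-(ia|ib)$, so the terms in Eq.~\ref{eq:metric} cancel for any values of the integrals. The sketch below uses random stand-ins for the two-electron integrals (off-diagonal case $a\neq b$, so the $\delta_{ab}$ term drops out; the hybrid parameters are hypothetical):

```python
import random

random.seed(1)

def metric_element(C_hf, dvxc, dfxc):
    """Toy off-diagonal element of Eq. (metric) with random stand-ins for
    (ii|ab) and (ia|ib); dvxc mimics (a|V_xc^(n)-V_xc^(n-1)|b) entering
    F_ab^CO, and dfxc mimics (ia|f_xc^(n)-f_xc^(n-1)|ib)."""
    iiab, iaib = random.random(), random.random()
    F_ab_co = iiab - C_hf * iaib + (1.0 - C_hf) * dvxc  # Eq. (NSFock), p=a, q=b
    return F_ab_co + iaib - C_hf * iiab + (1.0 - C_hf) * dfxc

# Hartree-Fock: C_HF = 1 and the xc potential/kernel differences vanish
A_hf = metric_element(1.0, 0.0, 0.0)

# a hypothetical hybrid with C_HF = 0.25 and nonzero differences
A_hyb = metric_element(0.25, 0.05, 0.1)
```

For any $C_{\text{HF}}<1$ the residual depends on the (generally nonzero) kernel and potential differences, which is precisely the long-range self-interaction content the metric is meant to expose.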
If this metric produces a nonzero value, then the density functional approximation being used incurs some degree of inexact particle-hole interaction, and the larger the value of the metric, the larger the long-range self-interaction error of the functional. \begin{comment} In order to evaluate the partial derivatives from the response equation (Eq.~\ref{eq:EATDDFT_1} in the main text), \begin{equation}\label{eq:EATDDFT_1_SI} \begin{split} A_{ia,ib} &= F_{ab}^{(n-1)}-F_{ii}^{(n-1)}\delta_{ab} + \frac{\partial\mathbf{F}_{ia}^{(n-1)}}{\partial\mathbf{P}_{ib}}\\ B_{ia,ib} &= \frac{\partial\mathbf{F}_{ai}^{(n-1)}}{\partial\mathbf{P}_{ib}} \end{split}\; , \end{equation} we consider the relationship between the \textit{n}--1-electron and {\em n}-electron Fock matrices, \begin{equation} F_{pq}^{(n)} = F_{pq}^{(n-1)}+(ii|pq)-C_{\text{HF}}(ip|iq)+(1-C_{\text{HF}})\Delta_{\text{xc}} \;, \end{equation} where $\Delta_{\text{xc}} = (p|V_{\text{xc}}^{(n)}-V_{\text{xc}}^{(n-1)}|q)$ is included to respect the nonlinearity of the difference between the {\em n}- and \textit{n}--1-electron exchange-correlation potentials.
Therefore, the derivative in Eq.~\ref{eq:EATDDFT_1_SI} can be evaluated as, \begin{equation}\label{eq:Derivative} \frac{\partial\mathbf{F}_{ia}^{(n-1)}}{\partial\mathbf{P}_{ib}} = \frac{\partial}{\partial\mathbf{P}_{ib}} \Big[F_{ia}^{(n)} - (ii|ab) + C_{\text{HF}}(ia|ib) - (1-C_{\text{HF}})\Delta_{\text{xc}}\Big] \end{equation} The derivative of the {\em n}-electron Fock matrix yields the usual response kernel from TDDFT, \begin{equation}\label{eq:FNderiv} \frac{\partial\mathbf{F}_{ia}^{(n)}}{\partial\mathbf{P}_{ib}} = 2(ia|ib) - C_{\text{HF}}(ii|ab) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}|ib) \;, \end{equation} for closed-shell systems, and the derivative of the final three terms on the right-hand side of Eq.~\ref{eq:Derivative} yields, \begin{equation}\label{eq:OtherTermsDeriv} \frac{\partial}{\partial\mathbf{P}_{ib}}\Big[C_{\text{HF}}(ia|ib) - (ii|ab) - (1-C_{\text{HF}})\Delta_{\text{xc}}\Big] = C_{\text{HF}}(ii|ab) - (ia|ib) - (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n)}|ib) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n-1)}|ib) \end{equation} where the Coulomb-like and exchange-like terms emerge from the exchange and Coulomb components of the ground state potential, respectively. Adding Eq.~\ref{eq:FNderiv} to Eq.~\ref{eq:OtherTermsDeriv} and substituting the result back into Eq.~\ref{eq:EATDDFT_1_SI}, we find the full expression for the second linear response (Eq.~\ref{eq:EATDDFT_2} in the main text), \begin{equation}\label{eq:EATDDFT_2_SI} \begin{split} A_{ia,ib} &= F_{ab}^{(n-1)} - F_{ii}^{(n-1)}\delta_{ab} + (ia|ib) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n-1)}|ib)\\ B_{ia,ib} &= (ia|ib) + (1-C_{\text{HF}})(ia|f_{\text{xc}}^{(n-1)}|ib) \end{split} \end{equation} \end{comment}

\section{Overlap-Free Transition Dipole Moments}\label{sec:TDMs} The EA\nbd-TDA\ spectrum is composed of states, \{$\Psi_i^a$\}, that are not orthogonal to the ground state reference, $\Phi_0$, which must be considered when computing transition properties.
Despite our double-linear-response formalism, we are only interested in the usual transition dipole moments that are observed in one-dimensional x-ray spectroscopy. Nonorthogonality between excited state determinants and the ground state can have severely detrimental effects on transition moments,\cite{WorFeiMan21} but a simple fix is to subtract the overlap-weighted ground-state dipole moment from the transition dipole, \begin{equation} \vec{\mu} = \sum\limits_{a} X_i^a \big(\langle\Phi_0|\hat{\mu}|\Psi_i^a\rangle - \langle\Phi_0|\hat{\mu}|\Phi_0\rangle \langle\Phi_0|\Psi_i^a\rangle\big) \;, \end{equation} where the amplitudes $X_i^a$ are elements of the eigenvectors of the Tamm-Dancoff-approximated Hermitian eigenvalue equation, \begin{equation} \mathbf{A}\mathbf{X} = \omega\mathbf{X}\; . \end{equation} This is equivalent to translating the center of charge of the molecule to the origin prior to calculating the transition moments.

\section{Additional Data} \begin{figure}[h!!] \centering \fig{1.0}{JacobsLadder_SignedError.pdf} \caption{ EA\nbd-TDA\ signed error statistics for 65 experimental K-edge transitions (lowest energy transition only). The aug-pcseg-1 basis was used for H and Br, aug-pcX-2 for all other atoms. A negative sign indicates an underestimation in the excitation energy. Upper and lower delimiters indicate maximum and minimum errors, respectively. Upper and lower bounds of each box are the upper and lower quartiles, respectively. Median absolute errors are indicated by horizontal lines and overlapping notches identify statistical similarities between distributions to the 95\% confidence level. Outliers are indicated by asterisks. } \label{fig:MSE} \end{figure} \begin{figure} \centering \fig{1.0}{OOTDA_MAE_resize.pdf} \caption{ IO\nbd-TDA\ absolute error statistics for 65 experimental K-edge transitions (lowest energy transition only). The aug-pcseg-1 basis was used for H and Br, aug-pcX-2 for all other atoms.
Upper and lower delimiters indicate maximum and minimum errors, respectively. Upper and lower bounds of each box are the upper and lower quartiles, respectively. Median absolute errors are indicated by horizontal lines and overlapping notches identify statistical similarities between distributions to the 95\% confidence level. Outliers are indicated by asterisks. } \label{fig:OOTDAerrors} \end{figure} \begin{figure} \centering \fig{1.0}{TDA_MAE_resize.pdf} \caption{ Standard TDA absolute error statistics for 65 experimental K-edge transitions (lowest energy transition only). The aug-pcseg-1 basis was used for H and Br, aug-pcX-2 for all other atoms. Upper and lower delimiters indicate maximum and minimum errors, respectively. Upper and lower bounds of each box are the upper and lower quartiles, respectively. Median absolute errors are indicated by horizontal lines and overlapping notches identify statistical similarities between distributions to the 95\% confidence level. Outliers are indicated by asterisks. } \label{fig:LRTDAerrors} \end{figure} \begin{figure} \centering \fig{1.0}{NH3_Spectra_FullPage.pdf} \caption{Ammonia K-edge X-ray absorption spectra for (a) EA\nbd-TDA\ and (b) OO\nbd-DFT\ juxtaposed against experimental data from Ref.~\citen{SchTroRan93}. 
Calculations used the aug-pcX-2 and aug-pcseg-1 basis sets for N and H, respectively.} \label{fig:ammonia} \end{figure} \begin{table} \caption{Difference between EA\nbd-TDA(HF) and STEX on 132 K-edge transitions: Be--N}\label{table:EATDAvSTEX_1} \begin{center} \scalebox{0.8}{ \begin{tabular}{lcl ......} \hline\hline \multirow{2}{*}{Species} & \multirow{2}{*}{Atom} & \multirow{2}{*}{Transition} & \mc{2}{c}{STEX$^a$} & \mc{2}{c}{EA\nbd-TDA$^a$} & \multirow{2}{*}{$\Delta$Energy} & \multirow{2}{*}{$\Delta$Strength}\\ \cline{4-5} \cline{6-7} & & & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & \\ \hline Be$^b$ & Be & 1s$\rightarrow$2p & 115.814 & 9.08$\rm E$-02 & 115.814 & 9.08$\rm E$-02 & 0.000 & 0.00\\ CH$_4$$^c$ & C & 1s$\rightarrow$3s & 287.303 & 0.00 & 287.323 & 0.00 & 0.019 & 0.00\\ CH$_4$$^c$ & C & 1s$\rightarrow$3p & 288.441 & 6.30$\rm E$-03 & 288.513 & 6.30$\rm E$-03 & 0.072 & 1.00$\rm E$-08\\ C$_2$H$_2$$^d$ & C & 1s$\rightarrow\pi^\ast$ & 287.219 & 3.81$\rm E$-02 & 287.225 & 3.81$\rm E$-02 & 0.006 & 0.00\\ C$_2$H$_2$$^d$ & C & 1s$\rightarrow$3s & 288.444 & 4.55$\rm E$-04 & 288.461 & 4.79$\rm E$-04 & 0.016 & 2.45$\rm E$-05\\ C$_2$H$_2$$^d$ & C & 1s$\rightarrow$3p & 289.492 & 9.82$\rm E$-04 & 289.531 & 9.82$\rm E$-04 & 0.039 & 0.00\\ C$_2$H$_4$$^d$ & C & 1s$\rightarrow\pi^\ast$ & 286.419 & 4.27$\rm E$-02 & 286.428 & 4.28$\rm E$-02 & 0.010 & 1.03$\rm E$-04\\ C$_2$H$_4$$^d$ & C & 1s$\rightarrow$3s & 287.669 & 1.89$\rm E$-03 & 287.695 & 1.99$\rm E$-03 & 0.026 & 9.65$\rm E$-05\\ C$_2$H$_4$$^d$ & C & 1s$\rightarrow$3p & 288.263 & 3.48$\rm E$-03 & 288.304 & 3.65$\rm E$-03 & 0.041 & 1.70$\rm E$-04\\ C$_2$H$_6$$^d$ & C & 1s$\rightarrow$3s & 287.465 & 2.45$\rm E$-03 & 287.487 & 2.52$\rm E$-03 & 0.022 & 7.09$\rm E$-05\\ C$_2$H$_6$$^d$ & C & 1s$\rightarrow$3p & 288.334 & 4.73$\rm E$-03 & 288.388 & 5.19$\rm E$-03 & 0.054 & 4.60$\rm E$-04\\ C$_6$H$_6$$^d$ & C & 1s$\rightarrow\pi^\ast$ & 286.837 & 4.02$\rm E$-02 & 286.835 & 4.02$\rm E$-02 
& -0.003 & -4.17$\rm E$-05\\ C$_6$H$_6$$^d$ & C & 1s$\rightarrow$3s & 287.812 & 2.16$\rm E$-03 & 287.784 & 2.05$\rm E$-03 & -0.028 & -1.06$\rm E$-04\\ C$_6$H$_6$$^d$ & C & 1s$\rightarrow$3p & 288.354 & 9.21$\rm E$-04 & 288.320 & 9.31$\rm E$-04 & -0.034 & 1.06$\rm E$-05\\ H$_2$CO$^e$ & C & 1s$\rightarrow\pi^\ast$ & 288.041 & 5.94$\rm E$-02 & 288.048 & 5.95$\rm E$-02 & 0.007 & 6.33$\rm E$-05\\ H$_2$CO$^e$ & C & 1s$\rightarrow$3s & 291.305 & 4.18$\rm E$-03 & 291.309 & 4.27$\rm E$-03 & 0.004 & 8.80$\rm E$-05\\ H$_2$CO$^e$ & C & 1s$\rightarrow$3p (b$_2$) & 292.189 & 9.88$\rm E$-03 & 292.219 & 1.01$\rm E$-02 & 0.030 & 2.35$\rm E$-04\\ H$_2$CO$^e$ & C & 1s$\rightarrow$3p (b$_1$) & 292.429 & 2.56$\rm E$-05 & 292.450 & 2.85$\rm E$-05 & 0.022 & 2.87$\rm E$-06\\ HFCO$^f$ & C & 1s$\rightarrow\pi^\ast$ & 290.804 & 7.08$\rm E$-02 & 290.808 & 7.09$\rm E$-02 & 0.004 & 6.52$\rm E$-05\\ HFCO$^f$ & C & 1s$\rightarrow$3s & 294.246 & 8.72$\rm E$-03 & 294.250 & 8.74$\rm E$-03 & 0.004 & 2.74$\rm E$-05\\ HFCO$^f$ & C & 1s$\rightarrow$3p & 295.192 & 1.49$\rm E$-03 & 295.197 & 1.52$\rm E$-03 & 0.005 & 3.06$\rm E$-05\\ HCOOH$^g$ & C & 1s$\rightarrow\psi^\ast$ & 290.529 & 7.06$\rm E$-02 & 290.533 & 7.07$\rm E$-02 & 0.004 & 8.80$\rm E$-05\\ HCOOH$^g$ & C & 1s$\rightarrow$3s & 293.292 & 5.71$\rm E$-03 & 293.302 & 5.91$\rm E$-03 & 0.010 & 2.00$\rm E$-04\\ HCOOH$^g$ & C & 1s$\rightarrow$3p & 293.592 & 2.76$\rm E$-03 & 293.602 & 2.68$\rm E$-03 & 0.009 & -7.42$\rm E$-05\\ HCN$^h$ & C & 1s$\rightarrow\pi^\ast$ & 288.098 & 4.63$\rm E$-02 & 288.103 & 4.64$\rm E$-02 & 0.005 & 3.99$\rm E$-05\\ C$_2$N$_2$$^h$ & C & 1s$\rightarrow\pi_{\rm u}^\ast$ & 288.103 & 3.44$\rm E$-02 & 288.103 & 3.44$\rm E$-02 & 0.000 & -1.00$\rm E$-08\\ C$_2$N$_2$$^h$ & C & 1s$\rightarrow$3s & 292.167 & 2.71$\rm E$-04 & 292.166 & 2.82$\rm E$-04 & -0.001 & 1.13$\rm E$-05\\ C$_2$N$_2$$^h$ & C & 1s$\rightarrow\pi_{\rm g}^\ast$\slash 3p & 293.187 & 4.72$\rm E$-03 & 293.187 & 4.72$\rm E$-03 & 0.000 & 0.00\\ CO$^i$ & C & 
1s$\rightarrow\pi^\ast$ & 289.125 & 7.77$\rm E$-02 & 289.125 & 7.77$\rm E$-02 & 0.000 & 0.00\\ CO$^j$ & C & 1s$\rightarrow$3s\slash$\sigma$ & 294.131 & 3.67$\rm E$-03 & 294.127 & 3.68$\rm E$-03 & -0.004 & 1.04$\rm E$-05\\ CO$^j$ & C & 1s$\rightarrow$3p\slash$\pi$ & 295.025 & 4.08$\rm E$-03 & 295.025 & 4.08$\rm E$-03 & 0.000 & 0.00\\ CO$_2$$^k$ & C & 1s$\rightarrow\pi_{\rm u}^\ast$ & 292.941 & 8.25$\rm E$-02 & 292.941 & 8.25$\rm E$-02 & 0.000 & 1.00$\rm E$-08\\ CO$_2$$^l$ & C & 1s$\rightarrow$3s & 295.276 & 0.00 & 295.269 & 0.00 & -0.007 & 0.00\\ CO$_2$$^l$ & C & 1s$\rightarrow$3p & 297.228 & 1.36$\rm E$-03 & 297.228 & 1.36$\rm E$-03 & 0.000 & 0.00\\ MeOH$^m$ & C & 1s$\rightarrow$3s & 289.026 & 3.77$\rm E$-03 & 289.044 & 3.77$\rm E$-03 & 0.018 & 3.69$\rm E$-06\\ butadiene$^n$ & C(t) & 1s$\rightarrow\pi^\ast$ & 286.051 & 3.68$\rm E$-02 & 286.056 & 3.68$\rm E$-02 & 0.005 & 3.18$\rm E$-05\\ butadiene$^n$ & C(c) & 1s$\rightarrow\pi^\ast$ & 286.715 & 3.68$\rm E$-02 & 286.717 & 3.68$\rm E$-02 & 0.003 & -7.55$\rm E$-06\\ furan$^o$ & C (3 or 4) & 1s$\rightarrow\pi^\ast$ & 287.490 & 3.16$\rm E$-02 & 287.495 & 3.16$\rm E$-02 & 0.004 & 3.98$\rm E$-05\\ furan$^o$ & C (2 or 5) & 1s$\rightarrow\pi^\ast$ & 288.160 & 4.24$\rm E$-02 & 288.164 & 4.24$\rm E$-02 & 0.003 & 2.97$\rm E$-05\\ glycine$^p$ & C(CO) & 1s$\rightarrow\pi^\ast$ & 290.885 & 7.32$\rm E$-02 & 290.885 & 7.32$\rm E$-02 & 0.000 & 2.95$\rm E$-05\\ glycine$^p$ & C(sp3) & 1s$\rightarrow\sigma^\ast$ & 289.222 & 4.10$\rm E$-03 & 289.233 & 4.18$\rm E$-03 & 0.011 & 8.60$\rm E$-05\\ HCN$^h$ & N & 1s$\rightarrow\pi^\ast$ & 400.857 & 4.29$\rm E$-02 & 400.862 & 4.30$\rm E$-02 & 0.005 & 5.14$\rm E$-05\\ NH$_3$$^c$ & N & 1s$\rightarrow$3s & 401.222 & 3.36$\rm E$-03 & 401.226 & 3.37$\rm E$-03 & 0.003 & 1.36$\rm E$-05\\ NH$_3$$^c$ & N & 1s$\rightarrow$3p & 402.702 & 8.56$\rm E$-03 & 402.737 & 8.56$\rm E$-03 & 0.035 & 0.00\\ NH$_3$$^c$ & N & 1s$\rightarrow$3p & 403.387 & 5.99$\rm E$-03 & 403.523 & 5.99$\rm E$-03 & 0.136 & -4.74$\rm 
E$-06\\ N$_2$$^i$ & N & 1s$\rightarrow\pi^\ast$ & 402.252 & 5.53$\rm E$-02 & 402.252 & 5.53$\rm E$-02 & 0.000 & 0.00\\ N$_2$O$^l$ & N(t) & 1s$\rightarrow\pi^\ast$ & 402.300 & 4.61$\rm E$-02 & 402.300 & 4.61$\rm E$-02 & 0.000 & 0.00\\ N$_2$O$^l$ & N(t) & 1s$\rightarrow$3s\slash$\sigma$ & 405.630 & 1.68$\rm E$-03 & 405.616 & 1.69$\rm E$-03 & -0.014 & 1.10$\rm E$-05\\ N$_2$O$^l$ & N(t) & 1s$\rightarrow$3p\slash$\pi$ & 407.377 & 1.85$\rm E$-03 & 407.377 & 1.85$\rm E$-03 & 0.000 & 0.00\\ N$_2$O$^l$ & N(c) & 1s$\rightarrow\pi^\ast$ & 406.062 & 5.99$\rm E$-02 & 406.062 & 5.99$\rm E$-02 & 0.000 & -1.00$\rm E$-08\\ N$_2$O$^l$ & N(c) & 1s$\rightarrow$3s\slash$\sigma$ & 410.517 & 2.11$\rm E$-04 & 410.516 & 2.15$\rm E$-04 & -0.002 & 3.71$\rm E$-06\\ N$_2$O$^l$ & N(c) & 1s$\rightarrow$3p\slash$\sigma$ & 412.007 & 1.34$\rm E$-04 & 412.007 & 1.34$\rm E$-04 & 0.000 & 0.00\\ C$_2$N$_2$$^h$ & N & 1s$\rightarrow\pi_{\rm u}$ & 400.150 & 3.62$\rm E$-02 & 400.150 & 3.62$\rm E$-02 & 0.000 & 0.00\\ C$_2$N$_2$$^h$ & N & 1s$\rightarrow$3s & 404.379 & 5.90$\rm E$-05 & 404.376 & 5.92$\rm E$-05 & -0.004 & 1.80$\rm E$-07\\ C$_2$N$_2$$^h$ & N & 1s$\rightarrow\pi_{\rm g}$\slash 3p & 405.526 & 3.47$\rm E$-04 & 405.526 & 3.47$\rm E$-04 & 0.000 & 0.00\\ Imidazole$^q$ & N (CH=N-CH) & 1s$\rightarrow\pi^\ast$ & 401.220 & 3.40$\rm E$-02 & 401.222 & 3.40$\rm E$-02 & 0.002 & 3.26$\rm E$-05\\ Imidazole$^q$ & N (CH-NH-CH) & 1s$\rightarrow\pi^\ast$ & 403.650 & 2.43$\rm E$-02 & 403.654 & 2.43$\rm E$-02 & 0.004 & 5.08$\rm E$-05\\ pyrrole$^r$ & N & 1s$\rightarrow\pi^\ast$ & 403.397 & 2.38$\rm E$-02 & 403.402 & 2.39$\rm E$-02 & 0.005 & 1.20$\rm E$-04\\ glycine$^p$ & N (NH) & 1s$\rightarrow\sigma^\ast$ & 401.922 & 2.73$\rm E$-03 & 401.927 & 2.77$\rm E$-03 & 0.004 & 4.35$\rm E$-05\\ glycine$^p$ & N (NC) & 1s$\rightarrow\pi^\ast$ & 402.761 & 4.67$\rm E$-03 & 402.788 & 4.89$\rm E$-03 & 0.028 & 2.24$\rm E$-04\\ \hline\hline \mc{9}{l}{ $^a$aug-pcX-2 for non-H and non-Br atoms, aug-pcseg-1 otherwise. 
Data from: $^b$Ref.~\citen{KraMar97}, $^c$Ref.~\citen{SchTroRan93}, $^d$Ref.~\citen{HitBri77}, $^e$Ref.~\citen{RemDomPus92}, $^f$Ref.~\citen{RobIshMcL88}, $^g$Ref.~\citen{PriRicSim03}, $^h$Ref.~\citen{HitBri79}, }\\ \mc{9}{l}{ $^i$Ref.~\citen{SodBri84}, $^j$Ref.~\citen{HitBri80a}, $^k$Ref.~\citen{TroKinRea79}, $^l$Ref.~\citen{PriAvaCor99}, $^m$Ref.~\citen{HemPiaHei99}, $^n$Ref.~\citen{SodBri85}, $^o$Ref.~\citen{DufFlaGiu03}, $^p$Ref.~\citen{PleFeyRic07}, $^q$Ref.~\citen{ApeHitGla93}, $^r$Ref.~\citen{PavHalHen95} }\\ \end{tabular}} \end{center} \end{table} \begin{table} \caption{Difference between EA\nbd-TDA(HF) and STEX on 132 K-edge transitions: O--Ne}\label{table:EATDAvSTEX_2} \begin{center} \scalebox{0.8}{ \begin{tabular}{lcl ......} \hline\hline \multirow{2}{*}{Species} & \multirow{2}{*}{Atom} & \multirow{2}{*}{Transition} & \mc{2}{c}{STEX$^a$} & \mc{2}{c}{EA\nbd-TDA$^a$} & \multirow{2}{*}{$\Delta$Energy} & \multirow{2}{*}{$\Delta$Strength}\\ \cline{4-5} \cline{6-7} & & & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & \\ \hline CO$^b$ & O & 1s$\rightarrow\pi^\ast$ & 534.584 & 3.11$\rm E$-02 & 534.584 & 3.11$\rm E$-02 & 0.000 & 0.00\\ CO$^c$ & O & 1s$\rightarrow$3s\slash$\sigma^\ast$ & 538.608 & 8.71$\rm E$-04 & 538.604 & 8.69$\rm E$-04 & -0.005 & -2.35$\rm E$-06\\ CO$^c$ & O & 1s$\rightarrow$3p\slash$\pi^\ast$ & 539.574 & 2.26$\rm E$-05 & 539.570 & 8.40$\rm E$-07 & -0.004 & -2.18$\rm E$-05\\ CO$_2$$^d$ & O & 1s$\rightarrow\pi^\ast$ & 536.345 & 2.57$\rm E$-02 & 536.345 & 2.57$\rm E$-02 & 0.000 & -1.00$\rm E$-08\\ CO$_2$$^d$ & O & 1s$\rightarrow$3s & 536.619 & 2.57$\rm E$-03 & 536.606 & 2.58$\rm E$-03 & -0.013 & 1.07$\rm E$-05\\ CO$_2$$^e$ & O & 1s$\rightarrow$3p\slash$\pi_{\rm u}^\ast$ & 538.751 & 3.44$\rm E$-05 & 538.751 & 3.44$\rm E$-05 & 0.000 & 0.00\\ CO$_2$$^e$ & O & 1s$\rightarrow$3p\slash$\sigma^\ast$ & 539.051 & 1.53$\rm E$-03 & 539.050 & 1.52$\rm E$-03 & -0.001 & -6.25$\rm E$-06\\ MeOH$^f$ & O & 
1s$\rightarrow\sigma^\ast$ & 534.543 & 6.25$\rm E$-03 & 534.547 & 6.27$\rm E$-03 & 0.004 & 1.62$\rm E$-05\\ H$_2$CO$^g$ & O & 1s$\rightarrow\pi^\ast$ & 531.745 & 3.69$\rm E$-02 & 531.747 & 3.69$\rm E$-02 & 0.002 & 4.57$\rm E$-05\\ H$_2$CO$^g$ & O & 1s$\rightarrow$3s & 535.045 & 5.46$\rm E$-04 & 535.062 & 5.48$\rm E$-04 & 0.017 & 2.13$\rm E$-06\\ H$_2$CO$^g$ & O & 1s$\rightarrow$3p & 535.978 & 1.16$\rm E$-05 & 535.992 & 4.55$\rm E$-05 & 0.014 & 3.40$\rm E$-05\\ HCFO$^h$ & O & 1s$\rightarrow\pi^\ast$ & 533.115 & 3.49$\rm E$-02 & 533.118 & 3.49$\rm E$-02 & 0.002 & 3.51$\rm E$-05\\ HCFO$^h$ & O & 1s$\rightarrow$3s & 536.817 & 6.47$\rm E$-04 & 536.822 & 6.22$\rm E$-04 & 0.005 & -2.47$\rm E$-05\\ HCFO$^h$ & O & 1s$\rightarrow$3p & 537.142 & 1.93$\rm E$-03 & 537.151 & 1.98$\rm E$-03 & 0.009 & 5.62$\rm E$-05\\ HCOOH$^f$ & O (CO) & 1s$\rightarrow\pi^\ast$ & 533.189 & 3.07$\rm E$-02 & 533.192 & 3.08$\rm E$-02 & 0.003 & 6.54$\rm E$-05\\ HCOOH$^f$ & O (OH) & 1s$\rightarrow\pi^\ast$\slash 3s & 536.361 & 7.86$\rm E$-03 & 536.360 & 7.87$\rm E$-03 & -0.001 & 4.09$\rm E$-06\\ H$_2$O$^i$ & O & 1s$\rightarrow$3s & 534.399 & 7.46$\rm E$-03 & 534.398 & 7.37$\rm E$-03 & -0.001 & -9.52$\rm E$-05\\ H$_2$O$^i$ & O & 1s$\rightarrow$3p & 536.086 & 1.35$\rm E$-02 & 536.110 & 1.33$\rm E$-02 & 0.024 & -1.95$\rm E$-04\\ N$_2$O$^d$ & O & 1s$\rightarrow\pi^\ast$ & 535.211 & 1.88$\rm E$-02 & 535.211 & 1.88$\rm E$-02 & 0.000 & 1.00$\rm E$-08\\ N$_2$O$^d$ & O & 1s$\rightarrow$3s\slash$\sigma^\ast$ & 537.223 & 3.31$\rm E$-03 & 537.209 & 3.35$\rm E$-03 & -0.014 & 4.24$\rm E$-05\\ N$_2$O$^d$ & O & 1s$\rightarrow$3p\slash$\pi^\ast$ & 538.930 & 1.57$\rm E$-03 & 538.930 & 1.57$\rm E$-03 & 0.000 & 0.00\\ glycine$^j$ & O (CO) & 1s$\rightarrow\pi^\ast$ & 533.775 & 3.02$\rm E$-02 & 533.774 & 3.02$\rm E$-02 & -0.001 & 3.70$\rm E$-05\\ glycine$^j$ & O (OH) & 1s$\rightarrow\sigma^\ast$ & 536.307 & 7.00$\rm E$-03 & 536.306 & 7.04$\rm E$-03 & -0.001 & 3.58$\rm E$-05\\ HCFO$^h$ & F & 1s$\rightarrow\pi^\ast$ & 
688.940 & 8.69$\rm E$-03 & 688.943 & 8.70$\rm E$-03 & 0.003 & 1.08$\rm E$-05\\ HF$^k$ & F & 1s$\rightarrow\sigma^\ast$ & 687.533 & 1.34$\rm E$-02 & 687.539 & 1.34$\rm E$-02 & 0.006 & -3.42$\rm E$-05\\ HF$^k$ & F & 1s$\rightarrow$3p\slash$\sigma^\ast$ & 690.955 & 6.39$\rm E$-03 & 691.037 & 6.94$\rm E$-03 & 0.082 & 5.48$\rm E$-04\\ F$_2$$^k$ & F & 1s$\rightarrow\sigma_{\rm u}$ & 684.087 & 5.18$\rm E$-02 & 684.076 & 5.18$\rm E$-02 & -0.010 & -5.61$\rm E$-05\\ F$_2$$^k$ & F & 1s$\rightarrow$3s & 693.333 & 9.40$\rm E$-04 & 693.333 & 9.14$\rm E$-04 & -0.001 & -2.64$\rm E$-05\\ F$_2$$^k$ & F & 1s$\rightarrow$3p & 693.557 & 2.15$\rm E$-03 & 693.557 & 2.15$\rm E$-03 & 0.000 & 0.00\\ Ne$^{\dagger,l}$ & Ne & 1s$\rightarrow$3s & 864.931 & 0.00 & 864.923 & 0.00 & -0.008 & 0.00\\ Ne$^{\dagger,l}$ & Ne & 1s$\rightarrow$3p & 866.679 & 3.15$\rm E$-03 & 866.679 & 2.48$\rm E$-03 & 0.000 & -6.71$\rm E$-04\\ \hline\hline \mc{9}{l}{ $^a$aug-pcX-2 for non-H and non-Br atoms, aug-pcseg-1 otherwise. }\\ \mc{9}{l}{ $^\dagger$Doubly-augmented d-aug-pcX-3 basis due to large basis set incompleteness errors }\\ \mc{9}{l}{ Data from: $^b$Ref.~\citen{SodBri84}, $^c$Ref.~\citen{HitBri80a}, $^d$Ref.~\citen{PriAvaCor99}, $^e$Ref.~\citen{OkaYosSen02}, $^f$Ref.~\citen{PriRicSim03}, $^g$Ref.~\citen{RemDomPus92}, $^h$Ref.~\citen{RobIshMcL88}, $^i$Ref.~\citen{SchTroRan93}, $^j$Ref.~\citen{PleFeyRic07}, $^k$Ref.~\citen{HitBri81}, $^l$Ref.~\citen{HitBri80b} }\\ \end{tabular}} \end{center} \end{table} \begin{table} \caption{Difference between EA\nbd-TDA(HF) and STEX on 132 K-edge transitions: Si--Cl}\label{table:EATDAvSTEX_3} \begin{center} \scalebox{0.8}{ \begin{tabular}{lcl ......} \hline\hline \multirow{2}{*}{Species} & \multirow{2}{*}{Atom} & \multirow{2}{*}{Transition} & \mc{2}{c}{STEX$^a$} & \mc{2}{c}{EA\nbd-TDA$^a$} & \multirow{2}{*}{$\Delta$Energy} & \multirow{2}{*}{$\Delta$Strength}\\ \cline{4-5} \cline{6-7} & & & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & \mc{1}{c}{Energy} & \mc{1}{c}{Strength} & 
\\ \hline SiH$_4$$^b$ & Si & 1s$\rightarrow$t2 & 1845.109 & 1.59$\rm E$-03 & 1845.132 & 1.56$\rm E$-03 & 0.023 & -3.40$\rm E$-05\\ SiH$_4$$^b$ & Si & 1s$\rightarrow$4p & 1845.860 & 1.02$\rm E$-03 & 1845.882 & 1.18$\rm E$-03 & 0.021 & 1.61$\rm E$-04\\ SiF$_4$$^b$ & Si & 1s$\rightarrow$a1 & 1849.793 & 0.00 & 1849.785 & 0.00 & -0.008 & 0.00\\ SiF$_4$$^b$ & Si & 1s$\rightarrow$t2 & 1851.330 & 8.73$\rm E$-04 & 1851.330 & 8.73$\rm E$-04 & 0.000 & 0.00\\ SiF$_4$$^b$ & Si & 1s$\rightarrow$4p & 1852.743 & 4.17$\rm E$-03 & 1852.743 & 4.17$\rm E$-03 & 0.000 & 0.00\\ SiCl$_4$$^b$ & Si & 1s$\rightarrow$a1 & 1848.188 & 0.00 & 1848.171 & 0.00 & -0.018 & 0.00\\ SiCl$_4$$^b$ & Si & 1s$\rightarrow$t2 & 1849.331 & 5.53$\rm E$-03 & 1849.331 & 5.53$\rm E$-03 & 0.000 & 0.00\\ SiBr$_4$$^b$ & Si & 1s$\rightarrow$a1 & 1846.973 & 0.00 & 1846.993 & 0.00 & 0.020 & 0.00\\ SiBr$_4$$^b$ & Si & 1s$\rightarrow$t2 & 1848.593 & 5.57$\rm E$-03 & 1848.593 & 5.57$\rm E$-03 & 0.000 & 0.00\\ PH$_3$$^c$ & P & 1s$\rightarrow\sigma^\ast$ & 2148.203 & 5.84$\rm E$-04 & 2148.208 & 5.89$\rm E$-04 & 0.005 & 4.81$\rm E$-06\\ PF$_3$$^c$ & P & 1s$\rightarrow\sigma^\ast$ & 2152.812 & 6.60$\rm E$-03 & 2152.812 & 6.60$\rm E$-03 & 0.000 & 0.00\\ PF$_5$$^c$ & P & 1s$\rightarrow\sigma^\ast$ & 2159.064 & 4.39$\rm E$-03 & 2159.064 & 4.39$\rm E$-03 & 0.000 & 0.00\\ POF$_3$$^c$ & P & 1s$\rightarrow\sigma^\ast$ & 2157.270 & 4.95$\rm E$-03 & 2157.270 & 4.95$\rm E$-03 & 0.000 & 1.00$\rm E$-08\\ H$_2$S$^d$ & S & 1s$\rightarrow\sigma^\ast$ & 2475.231 & 1.28$\rm E$-03 & 2475.236 & 1.28$\rm E$-03 & 0.005 & -3.03$\rm E$-06\\ H$_2$S$^d$ & S & 1s$\rightarrow$Ry & 2477.071 & 8.87$\rm E$-05 & 2477.183 & 6.19$\rm E$-05 & 0.112 & -2.68$\rm E$-05\\ CS$_2$$^e$ & S & 1s$\rightarrow$2$\pi_{\rm u}$ & 2473.461 & 2.55$\rm E$-03 & 2473.461 & 2.55$\rm E$-03 & 0.000 & 0.00\\ CS$_2$$^e$ & S & 1s$\rightarrow$3$\sigma_{\rm g}$\slash 3$\sigma_{\rm u}$ & 2476.323 & 1.22$\rm E$-05 & 2476.321 & 1.24$\rm E$-05 & -0.002 & 1.10$\rm E$-07\\ SF$_4$$^f$ & S & 
1s$\rightarrow$b$_2^\ast$ & 2481.921 & 8.83$\rm E$-03 & 2481.921 & 8.83$\rm E$-03 & 0.000 & 0.00\\ SF$_4$$^f$ & S & 1s$\rightarrow$a$_1^\ast$ & 2484.905 & 4.82$\rm E$-03 & 2484.886 & 4.82$\rm E$-03 & -0.018 & -6.42$\rm E$-06\\ SF$_4$$^f$ & S & 1s$\rightarrow$b$_1^\ast$ & 2485.866 & 1.01$\rm E$-02 & 2485.866 & 1.01$\rm E$-02 & 0.000 & 0.00\\ SF$_6$$^d$ & S & 1s$\rightarrow\sigma^\ast$ (a$_1$) & 2487.304 & 0.00 & 2487.267 & 0.00 & -0.037 & 0.00\\ SF$_6$$^d$ & S & 1s$\rightarrow\sigma^\ast$ (t) & 2490.756 & 1.56$\rm E$-03 & 2490.756 & 1.56$\rm E$-03 & 0.000 & 1.00$\rm E$-08\\ SO$_2$$^d$ & S & 1s$\rightarrow\sigma^\ast$ (b1) & 2476.306 & 8.39$\rm E$-03 & 2476.286 & 8.37$\rm E$-03 & -0.020 & -1.90$\rm E$-05\\ SO$_2$$^d$ & S & 1s$\rightarrow\sigma^\ast$ (a1) & 2481.978 & 2.58$\rm E$-03 & 2481.977 & 2.58$\rm E$-03 & -0.002 & 2.80$\rm E$-07\\ SO$_2$$^d$ & S & 1s$\rightarrow\sigma^\ast$ (b2) & 2482.856 & 2.31$\rm E$-03 & 2482.856 & 2.31$\rm E$-03 & 0.000 & 0.00\\ SCO$^e$ & S & 1s$\rightarrow$3$\pi$ & 2474.910 & 2.38$\rm E$-03 & 2474.945 & 2.38$\rm E$-03 & 0.034 & 8.91$\rm E$-06\\ SCO$^e$ & S & 1s$\rightarrow$5$\sigma$ & 2476.277 & 8.34$\rm E$-04 & 2476.272 & 8.36$\rm E$-04 & -0.005 & 1.89$\rm E$-06\\ SCO$^e$ & S & 1s$\rightarrow$6$\sigma$ & 2477.339 & 1.17$\rm E$-03 & 2477.339 & 1.17$\rm E$-03 & -0.001 & -3.82$\rm E$-06\\ SF$_5$Cl$^g$ & S & 1s$\rightarrow\sigma^\ast$ & 2484.815 & 9.44$\rm E$-04 & 2484.793 & 9.23$\rm E$-04 & -0.021 & -2.05$\rm E$-05\\ SF$_5$Cl$^g$ & S & 1s$\rightarrow\sigma^\ast$ & 2488.638 & 6.67$\rm E$-03 & 2488.631 & 6.71$\rm E$-03 & -0.007 & 4.04$\rm E$-05\\ SF$_5$Cl$^g$ & S & 1s$\rightarrow\sigma^\ast$ & 2489.517 & 1.12$\rm E$-04 & 2489.516 & 1.09$\rm E$-04 & -0.001 & -2.71$\rm E$-06\\ HCl$^h$ & Cl & 1s$\rightarrow$3p $\sigma^\ast$ & 2826.188 & 2.95$\rm E$-03 & 2826.184 & 2.94$\rm E$-03 & -0.004 & -2.03$\rm E$-06\\ HCl$^h$ & Cl & 1s$\rightarrow$4s $\sigma$ & 2828.593 & 1.06$\rm E$-03 & 2828.593 & 1.06$\rm E$-03 & 0.000 & 9.70$\rm E$-07\\ HCl$^h$ & Cl & 
1s$\rightarrow$4p $\pi$\slash 4p $\sigma$ & 2829.288 & 1.62$\rm E$-04 & 2829.287 & 1.62$\rm E$-04 & -0.001 & 5.60$\rm E$-07\\ Cl$_2$$^h$ & Cl & 1s$\rightarrow$3p\slash$\sigma_{\rm u}^\ast$ & 2823.962 & 5.58$\rm E$-03 & 2823.959 & 5.58$\rm E$-03 & -0.003 & -4.35$\rm E$-06\\ Cl$_2$$^h$ & Cl & 1s$\rightarrow$4p\slash 3d & 2829.889 & 6.51$\rm E$-04 & 2829.888 & 6.52$\rm E$-04 & -0.002 & 1.14$\rm E$-06\\ CH$_3$Cl$^i$ & Cl & 1s$\rightarrow$a$_1$ & 2826.649 & 2.29$\rm E$-03 & 2826.652 & 2.31$\rm E$-03 & 0.002 & 1.82$\rm E$-05\\ CH$_3$Cl$^i$ & Cl & 1s$\rightarrow$Ry & 2827.995 & 1.57$\rm E$-04 & 2828.063 & 1.70$\rm E$-04 & 0.068 & 1.22$\rm E$-05\\ SF$_5$Cl$^g$ & Cl & 1s$\rightarrow\sigma^\ast$ & 2825.235 & 4.49$\rm E$-03 & 2825.233 & 4.49$\rm E$-03 & -0.002 & -3.67$\rm E$-06\\ SF$_5$Cl$^g$ & Cl & 1s$\rightarrow$4p & 2829.231 & 4.91$\rm E$-04 & 2829.230 & 4.93$\rm E$-04 & -0.001 & 1.88$\rm E$-06\\ CCl$_3$F$^i$ & Cl & 1s$\rightarrow$e & 2826.865 & 3.38$\rm E$-03 & 2826.863 & 3.37$\rm E$-03 & -0.002 & -3.19$\rm E$-06\\ \hline\hline \mc{9}{l}{ $^a$aug-pcX-2 for non-H and non-Br atoms, aug-pcseg-1 otherwise. }\\ \mc{9}{l}{ Data from: $^b$Ref.~\citen{BodMilNen90}, $^c$Ref.~\citen{CavJur99}, $^d$Ref.~\citen{ReyGavBis96}, $^e$Ref.~\citen{PerLaV84}, $^f$Ref.~\citen{BodHit87}, $^g$Ref.~\citen{ReyBodMar92}, $^h$Ref.~\citen{BodMarRey90}, $^i$Ref.~\citen{LinCowJac91} }\\ \end{tabular}} \end{center} \end{table} \clearpage \pagebreak \providecommand{\refin}[1]{\\ \textbf{Referenced in:} #1}
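As a numerical illustration of the overlap correction of Sec.~\ref{sec:TDMs}, the toy sketch below (randomly generated, purely hypothetical matrix elements, not data from any calculation) verifies that subtracting the overlap-weighted ground-state dipole restores origin independence: a rigid shift of the origin changes each $\langle\Phi_0|\hat{\mu}|\Psi_i^a\rangle$ by an amount proportional to the overlap $\langle\Phi_0|\Psi_i^a\rangle$, and the correction removes exactly that contribution.

```python
import numpy as np

rng = np.random.default_rng(7)
nvirt = 5                                 # number of virtual orbitals (toy size)
X = rng.normal(size=nvirt)                # TDA amplitudes X_i^a (hypothetical)
mu_0a = rng.normal(size=(nvirt, 3))       # <Phi_0|mu|Psi_i^a>, one row per a
s_0a = 0.1 * rng.normal(size=nvirt)       # nonzero overlaps <Phi_0|Psi_i^a>
mu_00 = rng.normal(size=3)                # ground-state dipole <Phi_0|mu|Phi_0>

def corrected_tdm(mu_0a, mu_00, s_0a, X):
    # mu = sum_a X_i^a ( <0|mu|Psi_a> - <0|mu|0> <0|Psi_a> )
    return (X[:, None] * (mu_0a - np.outer(s_0a, mu_00))).sum(axis=0)

tdm = corrected_tdm(mu_0a, mu_00, s_0a, X)

# Rigid origin shift by d: every dipole matrix element picks up d times the
# overlap of its bra and ket (with <Phi_0|Phi_0> = 1 for the ground state).
d = np.array([1.0, -2.0, 0.5])
tdm_shifted = corrected_tdm(mu_0a + np.outer(s_0a, d), mu_00 + d, s_0a, X)

assert np.allclose(tdm, tdm_shifted)      # correction restores origin independence
```

Without the subtraction, the shifted and unshifted moments differ by $\vec{d}\sum_a X_i^a\langle\Phi_0|\Psi_i^a\rangle$, which is precisely the overlap contamination removed above.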
\section{Introduction} It has now become an accepted fact that the expansion of the universe is accelerating due to an exotic energy source with large negative pressure called dark energy (DE). The mystery is that little is known about dark energy: it violates the strong energy condition and can cluster at large scales. Moreover, DE does not interact with baryonic matter, making it difficult to detect. It dominates the present universe and was less effective at early times. Recent Planck results estimate a lion's share of $68\%$ for DE in the cosmic mass-energy budget \cite{Ade14}. The natural choice for DE is the cosmological constant, but it suffers from puzzles like the fine-tuning and coincidence problems. Therefore, many dynamically varying DE candidates have been proposed in the literature (see Refs.\cite{Copeland, Wang16} for reviews). Observations have confirmed that the cosmic speed-up is a late-time phenomenon that occurred at a redshift of the order $z_t \sim 1$. This indicates that the universe has undergone a transition from a decelerated phase of expansion to an accelerating state in the recent past. This cosmic transit phenomenon suggests an evolving deceleration parameter with a signature flip. The rate at which the transition occurs determines the transit redshift $z_t$. The standard cosmological model is based upon the assumption of large scale isotropy and homogeneity of space. However, one can expect small scale anisotropies in the universe in view of the observations of temperature anisotropy in the Cosmic Microwave Background (CMB) radiation data from the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck. These data show some nontrivial topology of the large scale cosmic geometry with asymmetric expansion \citep{Watan09, Buiny06, tripa2}. Planck data also show a slight redshift of the power spectrum from exact scale invariance.
Though the standard $\Lambda$CDM model is well supported by different measurements, it does not fit the temperature power spectrum data well, at least at low multipoles \cite{Ade14}. Dark energy is also believed to be associated with the breakdown of global isotropy and can display anisotropic features \cite{Picon04, Jaffe05, Koivisto08a, Cooray10}. Employing the Union compilation of Type Ia Supernovae, Cook and Lynden-Bell found a weak anisotropy in dark energy, most noticeable for the higher-redshift group with $z>0.56$, directed roughly towards the cosmic microwave background dipole \cite{Cook10}. While Friedmann (FRW) models in the framework of General Relativity (GR) provide satisfactory explanations of various issues in cosmology, Bianchi type models have been constructed in recent times to handle the smallness in the angular power spectrum of temperature anisotropy \cite{campa1, campa3, gruppo, Koivisto06, Koivisto08, Bat09, akarsu1, akarsu2}. Moreover, Bianchi type models are generalisations of the FRW models in which the spatial isotropy is relaxed. In the present work, we consider a general form of non-interacting dark energy with anisotropic pressures along the different spatial directions and investigate the effect of the anisotropic components on the dark energy density parameter and the equation of state. In addition to the anisotropic DE fluid, cosmic strings aligned along the x-direction are also considered to incorporate some anisotropic effect. In some of our earlier studies, we have investigated the dynamical behaviour of pressure anisotropies either in the framework of GR or in alternative gravity theories \cite{Mishra15, mishrampla, skt15}. The paper is organized as follows. In Sect. 2, we discuss the basic formalism of an anisotropic dark energy. The effect of anisotropy on the dark energy density and the DE equation of state (EoS) parameter is discussed in Sect. 3, and we summarize our results in Sect. 4.
\section{Basic Formalism} We consider a spatially homogeneous and anisotropic Bianchi V (BV) space time in the form \begin{equation} \label{eq:1} ds^{2}= dt^{2}-a_1^2 dx^2-e^{2\alpha x}\left(a_2^2 dy^2+a_3^2 dz^2\right) \end{equation} where the directional scale factors $a_i (i=1,2,3)$ are functions of cosmic time only, $a_i=a_i(t)$. In general, the $a_i$ are considered to be different and thereby provide a description for anisotropic expansion along the three orthogonal spatial directions. The model reduces to the FRW model when the directional scale factors become equal. Here, $\alpha$ is a nonzero arbitrary positive constant. We choose the unit system such that $8\pi G=c=1$, where $G$ is the Newtonian gravitational constant and $c$ is the speed of light. The energy momentum tensor for a given environment of two non-interacting fluids is given by \begin{equation} T_{\mu\nu}=T_{\mu\nu}^{s}+T_{\mu\nu}^{D} \label{eq:2} \end{equation} where $T_{\mu\nu}^{s}$ and $T_{\mu\nu}^{D}$ respectively denote the contributions to the energy momentum tensor from one dimensional cosmic strings and DE. For a cosmic fluid containing one dimensional strings \cite{letelier, stachel}, the energy momentum tensor is \begin{equation}\label{eq:3} T^{s}_{\mu\nu} = (\rho + p) u_{\mu} u_{\nu}- pg_{\mu\nu}+ \lambda x_{\mu}x_{\nu} \end{equation} where $u^{\mu}u_{\mu}=-x^{\mu}x_{\mu}=1$ and $u^{\mu}x_{\mu}=0$. In a comoving coordinate system, $u^{\mu}$ is the four velocity vector and $p$ is the isotropic pressure of the fluid. $x^{\mu}$ represents the direction of the cosmic strings (here the x-direction). $\rho$ is the proper energy density and is composed of the energy density due to massive particles and the string tension density $\lambda$. In the absence of any string phase, the total contribution to the baryonic energy density comes from particles only.
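As a quick consistency sketch (not part of the original derivation; sympy is assumed to be available), one can verify that the natural choices $u^{\mu}=(1,0,0,0)$ and $x^{\mu}=(0,1/a_1,0,0)$ satisfy the stated normalizations in the metric \eqref{eq:1}, and that Eq.~\eqref{eq:3} then has the mixed components $\mathrm{diag}(\rho,\,-p-\lambda,\,-p,\,-p)$:

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
a1, a2, a3 = sp.symbols('a1 a2 a3', positive=True)   # scale factors at a fixed instant
rho, p, lam = sp.symbols('rho p lambda', positive=True)

# Bianchi V metric of Eq. (1), signature (+,-,-,-)
g = sp.diag(1, -a1**2, -sp.exp(2*alpha*x)*a2**2, -sp.exp(2*alpha*x)*a3**2)
ginv = g.inv()

u_up = sp.Matrix([1, 0, 0, 0])      # comoving four-velocity u^mu
s_up = sp.Matrix([0, 1/a1, 0, 0])   # unit spacelike vector x^mu along the strings

u_dn, s_dn = g * u_up, g * s_up

assert (u_up.T * u_dn)[0] == 1      # u^mu u_mu = 1
assert (s_up.T * s_dn)[0] == -1     # x^mu x_mu = -1
assert (u_up.T * s_dn)[0] == 0      # u^mu x_mu = 0

# String-cloud energy-momentum tensor of Eq. (3)
T_dn = (rho + p) * (u_dn * u_dn.T) - p * g + lam * (s_dn * s_dn.T)
T_mixed = (ginv * T_dn).applyfunc(sp.simplify)   # T^mu_nu

# Mixed components come out as diag(rho, -(p + lambda), -p, -p)
assert (T_mixed - sp.diag(rho, -p - lam, -p, -p)).applyfunc(sp.simplify) == sp.zeros(4, 4)
```

With $u^{\mu}$ and $x^{\mu}$ fixed this way, the $\lambda$ term contributes only to the pressure along the string direction, which is what lets the strings source a directional anisotropy.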
In contrast to the isotropic pressure of the usual cosmic fluid, we wish to incorporate some degree of anisotropy in the DE pressure and consider the energy momentum tensor for DE as \begin{eqnarray} \label{eq:4} T^{D}_{\mu\nu} & = & diag[-\rho_{D}, p_{D_x},p_{D_y},p_{D_z}] \nonumber\\ & = & diag[-1, \omega_{D}+\delta,\omega_{D}+\gamma,\omega_{D}+\eta]\rho_{D} , \end{eqnarray} where $\omega_{D}$ is the DE equation of state parameter and $\rho_{D}$ is the DE density. The skewness parameters $\delta$, $\gamma$ and $\eta$ are the deviations from $\omega_{D}$ along the $x$, $y$ and $z$ axes, respectively. The DE pressure becomes isotropic when $\delta$, $\gamma$ and $\eta$ vanish identically. The field equations, $G_{\mu\nu}=T_{\mu\nu}$, for a two-fluid system consisting of cosmic strings and DE in a BV metric are obtained as \begin{equation}\label{eq:5} \frac{\ddot{a_2}}{a_2}+\frac{\ddot{a_3}}{a_3}+\frac{\dot{a_2}\dot{a_3}}{a_2a_3}-\frac{\alpha^{2}}{a_1^{2}}=-p-\lambda-(\omega_{D}+\delta)\rho_{D} \end{equation} \begin{equation}\label{eq:6} \frac{\ddot{a_1}}{a_1}+\frac{\ddot{a_3}}{a_3}+\frac{\dot{a_1}\dot{a_3}}{a_1a_3}-\frac{\alpha^{2}}{a_1^{2}}=-p-(\omega_{D}+\gamma)\rho_{D} \end{equation} \begin{equation}\label{eq:7} \frac{\ddot{a_1}}{a_1}+\frac{\ddot{a_2}}{a_2}+\frac{\dot{a_1}\dot{a_2}}{a_1a_2}-\frac{\alpha^{2}}{a_1^{2}}=-p-(\omega_{D}+\eta)\rho_{D} \end{equation} \begin{equation}\label{eq:8} \frac{\dot{a_1}\dot{a_2}}{a_1a_2}+\frac{\dot{a_2}\dot{a_3}}{a_2a_3}+\frac{\dot{a_3}\dot{a_1}}{a_3a_1}-\frac{3\alpha^{2}}{a_1^{2}}=\rho+\rho_{D} \end{equation} \begin{equation} \label{eq:9} 2\frac{\dot {a_1}}{a_1}-\frac{\dot{a_2}}{a_2}-\frac{\dot{a_3}}{a_3}=0, \end{equation} where an overdot represents differentiation with respect to time $t$. One can note that anisotropy is incorporated in two different ways in the field equations: one due to the inherent anisotropic DE fluid and the other through the presence of aligned cosmic strings along the x-axis.
These anisotropic components lead to anisotropy in the cosmic geometry described through the BV spacetime. The cosmic spatial volume and the scalar expansion $\theta$ can be obtained respectively as $V=R^3=a_1a_2a_3$ and $\theta=3H$, where $H=\frac{\dot{R}}{R}=\frac{1}{3}\sum_{i=1}^{3} H_i$ is the mean Hubble parameter. $H_i=\frac{\dot{a_i}}{a_i}$ are the directional Hubble parameters along the different spatial directions. The shear scalar is given by $\sigma^2=\frac{1}{2}(\sum_{i=1}^{3} H_i^2-\frac{\theta^2}{3})$ and the deceleration parameter can be calculated from the relation $ q=\frac{d}{dt} \left(H^{-1}\right)-1$. It is straightforward to obtain $a_1^2=a_2a_3$ from \eqref{eq:9}. In order to get an anisotropic relation between the directional scale factors, we assume $a_2=a_3^k$, where $k$ is an arbitrary positive constant other than 1. Consequently, the scale factors along the different directions become $a_1=R,~ a_2=R^{\frac{2k}{k+1}}$ and $a_3=R^{\frac{2}{k+1}}$. The directional Hubble parameters can be obtained as $H_1=H$, $H_2=\big(\frac{2k}{k+1}\big)H$ and $H_3=\big(\frac{2}{k+1}\big)H$. The anisotropy parameter $\mathcal{A}$ can be obtained as ${\mathcal A}=\frac{1}{3}\sum \left(1-\frac{H_i}{H}\right)^2=\frac{2}{3}\left(\frac{k-1}{k+1}\right)^2$. The energy conservation equation for the anisotropic fluid, $T^{\mu\nu}_{;\nu}=0$, yields \begin{equation} \label{eq:10} \dot{\rho}+3(p+\rho)H+\lambda H_1+\dot{\rho_{D}}+3\rho_{D}(\omega_{D}+1)H+\rho_{D}(\delta H_1+\gamma H_2+\eta H_3)=0. \end{equation} We consider the cosmic strings and DE to be non-interacting and obtain two separate equations from \eqref{eq:10}, \begin{equation} \label{eq:11} \dot{\rho}+3H\left(\rho+p+\frac{\lambda}{3}\right)=0 \end{equation} and \begin{equation} \label{eq:12} \dot{\rho_{D}}+3H\rho_{D}(\omega_{D}+1)+\rho_{D}(\delta H_1+\gamma H_2+\eta H_3)=0.
\end{equation} The equations of state for the strings and the isotropic fluid can be considered as $\lambda=3\xi \rho$ and $p=\omega \rho $ respectively, where $\xi$ and $\omega$ are assumed to be non-evolving state parameters. From \eqref{eq:11}, we get \begin{equation} \label{eq:13} \rho=\rho_0R^{-3(1+\omega+\xi)} \end{equation} where $\rho_0$ is the rest energy density due to the matter field at the present epoch. Subsequently, the pressure and the string tension density can be obtained as \begin{eqnarray} p &=& \omega\rho_0R^{-3(1+\omega+\xi)},\label{eq:14}\\ \lambda &=& 3\xi\rho_0R^{-3(1+\omega+\xi)}. \label{eq:15} \end{eqnarray} The DE density is obtained from \eqref{eq:8} and \eqref{eq:13} as \begin{equation}\label{eq:16} \rho_{D}= 3\left(\Omega_{\sigma}-\Omega_k\right)\left(\frac{\dot{R}}{R}\right)^2-\rho, \end{equation} where $\Omega_{\sigma}= 1-\frac{\mathcal{A}}{2}$ and $\Omega_k=\frac{\alpha^{2}}{\dot{R}^{2}}$. The density parameters can be expressed as \begin{eqnarray} \Omega_{D} &=& \Omega_{\sigma}-\Omega_k-\Omega_m, \label{eq:17}\\ \Omega_m &=& \frac{\rho}{3H^2}.\label{eq:18} \end{eqnarray} The total density parameter becomes \begin{equation} \Omega=\Omega_m+\Omega_{D}= \Omega_{\sigma}-\Omega_k. \label{eq:19} \end{equation} Obviously, for a flat isotropic universe with $\alpha=0$ and $k=1$, the total density parameter reduces to 1. However, in an anisotropic background the dynamics of the dark energy density parameter is substantially affected by the behaviour of $\Omega_{\sigma}$. Eq. \eqref{eq:12} can be split into two parts, corresponding to the deviation free part and the part involving the deviations from the EoS parameter: \begin{eqnarray} \dot{\rho_{D}}+3\rho_{D}(\omega_{D}+1)\frac{\dot{R}}{R} &=& 0,\label{eq:20}\\ \left[\delta +\gamma \left(\frac{2k}{k+1}\right)+\eta \left(\frac{2}{k+1}\right)\right]\rho_{D} \frac{\dot{R}}{R} &=& 0.
\label{eq:21} \end{eqnarray} Integration of \eqref{eq:20} yields the DE density as \begin{equation} \label{eq:22} \rho_{D}=\rho_{D_0} R^{-3(\omega_{D}+1)}. \end{equation} The behaviour of the DE density $(\rho_{D})$ is thus governed by the deviation free part of the DE EoS. Here, $\rho_{D_0}$ is the DE density at the present epoch. We obtain the skewness parameters from the field equations \eqref{eq:5}, \eqref{eq:6} and \eqref{eq:7} as \begin{eqnarray} \delta &=& -\frac{2}{3\rho_{D}}\left[\zeta^2(k) F(R)-\lambda\right] , \label{eq:23}\\ \gamma &=& \frac{1}{3\rho_{D}}\biggl[\frac{(k+5)}{(k+1)} \zeta(k) F(R)+\lambda\biggr], \label{eq:24}\\ \eta &=& -\frac{1}{3\rho_{D}}\biggl[\frac{(5k+1)}{(k+1)} \zeta(k) F(R)-\lambda\biggr], \label{eq:25} \end{eqnarray} where $\zeta (k)= \dfrac{k-1}{k+1}$ represents the amount of deviation from the isotropic behaviour of the model and $F(R)=\biggl( \dfrac{\ddot{R}}{R}+ \dfrac{2 \dot{R}^{2}}{R^{2}} \biggr)$. Besides the control of the parameter $k$, the pressure anisotropies are decided by the behaviour of the functional $F(R)$ and the string tension density. In some of the models in GR, the functional $F(R)$ may vanish, leading to an isotropic DE fluid in the absence of any aligned cosmic strings. For the isotropic model with $k=1$, $\zeta(k)$ vanishes and the pressure anisotropies develop only due to the presence of one dimensional cosmic strings. The DE EoS parameter $\omega_{D}$ is now obtained as \begin{align} \label{eq:26} - \omega_{D} \rho_{D} & = \left[\dfrac{2\ddot{R}}{ R} + \left(\dfrac{\dot{R}}{R}\right)^2 \right] \Omega_{\sigma}-\dfrac{\alpha ^{2}}{R^2}+ \rho \left(\omega+ \xi \right) . \end{align} It is interesting to note that, in a cosmic fluid that blends DE with one dimensional cosmic strings in a non-interacting manner, the dynamics of the DE EoS parameter is also governed by the presence of the cosmic string fluid.
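Two of the closed-form statements above can be checked symbolically. The following sketch (ours, not part of the original analysis; it assumes the sympy package, and the symbol names are illustrative) verifies that \eqref{eq:13} solves the conservation equation \eqref{eq:11} with $p=\omega\rho$ and $\lambda=3\xi\rho$, and that the skewness parameters \eqref{eq:23}--\eqref{eq:25} satisfy the constraint \eqref{eq:21} in the string-free limit $\lambda=0$:

```python
import sympy as sp

# --- check 1: rho = rho0 * R^{-3(1+omega+xi)} solves eq. (11) ---
t = sp.symbols('t', positive=True)
rho0, w, xi = sp.symbols('rho_0 omega xi', positive=True)
R = sp.Function('R')(t)

rho = rho0 * R ** (-3 * (1 + w + xi))   # candidate solution, eq. (13)
H = sp.diff(R, t) / R                   # mean Hubble parameter

# eq. (11): rho' + 3H (rho + p + lambda/3) = 0 with p = w*rho, lambda = 3*xi*rho
lhs = sp.diff(rho, t) + 3 * H * (rho + w * rho + xi * rho)
assert sp.simplify(lhs) == 0

# --- check 2: skewness constraint (21) in the string-free limit lambda = 0 ---
k, F, rhoD = sp.symbols('k F rho_D', positive=True)
zeta = (k - 1) / (k + 1)

delta = -sp.Rational(2, 3) * zeta**2 * F / rhoD
gamma = sp.Rational(1, 3) * (k + 5) / (k + 1) * zeta * F / rhoD
eta = -sp.Rational(1, 3) * (5 * k + 1) / (k + 1) * zeta * F / rhoD

# delta + (2k/(k+1)) gamma + (2/(k+1)) eta must vanish identically
constraint = delta + 2 * k / (k + 1) * gamma + 2 / (k + 1) * eta
assert sp.simplify(constraint) == 0
```

Both assertions reduce to exact cancellations, independent of $k$, $F(R)$ and $\rho_D$.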
In the absence of any matter field, the skewness in the pressure anisotropies and the DE EoS reduce to \begin{eqnarray} \delta &=& -\frac{2}{3\rho_{D}}\biggl[\zeta^2(k) F(R)\biggr] , \label{eq:27}\\ \gamma &=& \frac{1}{3\rho_{D}}\biggl[\frac{(k+5)}{(k+1)} \zeta(k) F(R)\biggr], \label{eq:28}\\ \eta &=& -\frac{1}{3\rho_{D}}\biggl[\frac{(5k+1)}{(k+1)} \zeta(k) F(R)\biggr], \label{eq:29}\\ - \omega_{D} \rho_{D} & =& \left[\dfrac{2\ddot{R}}{ R} + \left(\dfrac{\dot{R}}{R}\right)^2 \right] \Omega_{\sigma}-\dfrac{\alpha ^{2}}{R^2}. \label{eq:30} \end{eqnarray} The above expressions \eqref{eq:27}-\eqref{eq:30} are the same as those obtained in our earlier work \cite{mishrampla}. One should note that the evolutionary behaviour of the pressure anisotropies and the DE EoS depends on the assumed dynamics of the present day universe. In particular, if we have a presumed dynamics of the universe in the form of a scale factor pertaining to the late time cosmic speed up, the background cosmology can be well studied and, consequently, the evolutionary behaviour of these properties can be well assessed. In view of this, in the present work we consider a hybrid scale factor (as in Ref. \cite{mishrampla}) that can simulate a transit of the universe from a decelerated phase to an accelerated phase. \section{Hybrid scale factor and anisotropy effects} It has been a usual practice to consider the scale factor to behave either as a de Sitter type expansion or as a power law expansion. Both the power law and the exponential law for the scale factor lead to a constant deceleration parameter. However, the belief that the universe has undergone a transition from a decelerated phase of expansion to an accelerated one requires the deceleration parameter to evolve from positive values in the remote past to negative values at the late phase of cosmic evolution. Such a behaviour can be generated by a hybrid scale factor \cite{mishrampla, akarsu3}.
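The hybrid scale factor used in what follows is $R=e^{at}t^{b}$; its kinematics can be verified symbolically. The sketch below (ours, assuming sympy) derives the Hubble and deceleration parameters from $R$ and checks the de Sitter limit $q\rightarrow -1$ at late times:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a, b = sp.symbols('a b', positive=True)

# hybrid scale factor: power law dominated at early times, de Sitter at late times
R = sp.exp(a * t) * t**b
H = sp.simplify(sp.diff(R, t) / R)
q = sp.simplify(sp.diff(1 / H, t) - 1)

# closed forms quoted in the text: H = a + b/t and q = -1 + b/(a t + b)^2
assert sp.simplify(H - (a + b / t)) == 0
assert sp.simplify(q - (-1 + b / (a * t + b)**2)) == 0

# the model tends to a de Sitter phase (q -> -1) at late times
assert sp.limit(q, t, sp.oo) == -1
```

At $t\rightarrow 0$ the same expression gives $q\rightarrow 1/b-1$, positive for $0<b<1$, consistent with an early decelerated phase.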
The hybrid scale factor behaves as a power law in the early cosmic phase and as a de Sitter expansion at the late phase of cosmic acceleration. It is expressed through two adjustable parameters $a$ and $b$ as $R=e^{at}t^{b}$. For this model, the Hubble parameter is expressed as $H = a+\dfrac{b}{t}$. Moreover, Tripathy \cite{tripa2} has considered a general form $H = a+ \dfrac{b}{t^{n}}$ of the Hubble parameter, where $n$ is a positive constant, to address the role of skewness in anisotropic models. The present case can be considered as the special case $n=1$ of that in Ref. \cite{tripa2}. The deceleration parameter for this model is obtained as $q = -1+ \dfrac{b}{(at+b)^{2}}$. The parameter $b$ can be constrained in the range $0<b<1$ from simple arguments, whereas $a$ can be constrained from a detailed analysis of $H(z)$ data \cite{mishrampla}. In the present work, we consider the same range for $b$ and leave $a$ as an open parameter. The deceleration parameter generated by a hybrid scale factor is shown in Fig.1. The deceleration parameters for power law cosmology and de Sitter expansion are also shown in the figure for comparison. The deceleration parameter for the hybrid scale factor evolves from positive values at past epochs to negative values at late times. The transition from deceleration to acceleration occurs at some transition redshift $z_t$ which is decided by the parameter $a$. In the present work, we have considered two representative values of $a$, i.e. $a=0.1$ and $a=0.075$, corresponding to $z_t=0.8$ and $0.4$ respectively. It is clear from the figure that at early times the behaviour is dominated by the power law factor whereas the exponential factor dominates at late times. The rate of transition depends on the parameter $a$; the transition is faster for a higher value of $a$. \begin{figure}[h!]
\includegraphics[scale=0.45]{fig1} \caption{Deceleration parameter $q$ as a function of redshift.} \end{figure} \subsection{Dark energy density and skewness parameters} The DE density parameter has contributions from the anisotropic aspects of the model and from the matter field (including the string tension density). For an isotropic model with $k=1$, the anisotropic part $\Omega_{\sigma}$ becomes 1 (refer to eq. \eqref{eq:17}). It is obvious that in the remote past the matter field has a dominant contribution, whereas at the late phase of cosmic evolution the dark energy dominates. With an increase in the geometrical anisotropy through an increase in the parameter $k$, $\Omega_{\sigma}$ decreases from its isotropic value of 1, but the contributions from the matter field and the curvature part remain the same. This indicates that, with an increase in the value of $k$, the value of $\Omega_D$ at a given redshift decreases. This feature of the DE density parameter is clearly visible in Fig.2, where we have shown $\Omega_D$ as a function of redshift for some representative values of $k$. In order to investigate the effect of the string phase on the DE density parameter, we have considered three different values of $\xi$, namely $\xi=0, \frac{1}{6}$ and $\frac{1}{3}$, corresponding to no string phase, Nambu strings and the geometric string phase respectively. The presence of a string phase incorporates some additional anisotropy. It is evident from Fig.3 that the presence of a string phase affects the DE density parameter in the early phase of cosmic evolution. With an increase in the value of $\xi$, $\Omega_D$ decreases to lower values in the past epoch. However, at late times, aligned cosmic strings seem to have little effect on $\Omega_D$. \begin{figure}[t] \includegraphics[scale=0.45]{fig2} \caption{DE density parameter as a function of redshift for three representative values of $k$.
$\xi$ is taken to be 1/6.} \end{figure} \begin{figure}[h] \includegraphics[scale=0.45]{fig3} \caption{DE density parameter as a function of redshift for three representative values of $\xi$ for $k=1.0001633$.} \end{figure} One can note from eqs. \eqref{eq:23}-\eqref{eq:25} that the skewness parameters of the anisotropic DE pressure depend on the evolutionary behaviour of the functionals $\frac{F(R)}{\rho_D}$ and $\frac{\lambda}{\rho_D}$. In the absence of a string phase, the pressure anisotropies depend only on the behaviour of $\frac{F(R)}{\rho_D}$. In Fig.4, the evolutionary behaviour of the skewness parameters $\delta, \gamma$ and $\eta$ is shown both in the presence and in the absence of cosmic strings. One can note that, in the absence of cosmic strings (dotted lines in the figure), the skewness parameters show an almost non evolving behaviour through most of the past epochs and evolve at the late phase of cosmic time. $\delta$ and $\eta$ assume negative values whereas $\gamma$ assumes positive values during the evolutionary process. The anisotropy in the DE pressure along the x-direction is almost unaffected by the cosmic expansion. However, the DE pressure along the y-direction shows an increasing trend and the DE pressure along the z-direction shows a decreasing trend at late times. In the presence of cosmic strings, the behaviour of the pressure anisotropies remains almost unaltered at the late phase of evolution, whereas at early epochs the cosmic strings show their presence in a dominant manner. As a result, $\delta$ increases as we move back to the past and changes sign from negative to positive values at some redshift. $\gamma$ also shows a similar increasing behaviour with increasing redshift. However, $\delta$ is more affected by the presence of cosmic strings than $\gamma$ and $\eta$. The DE pressure along the z-direction is least affected.
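The decrease of $\Omega_{\sigma}=1-\mathcal{A}/2$ with increasing $k$, invoked above to explain the behaviour of $\Omega_D$ in Fig.2, follows directly from the closed form of the anisotropy parameter. A short numerical sketch (ours, with arbitrary sample values; function names are illustrative):

```python
def directional_hubble(H, k):
    """(H1, H2, H3) for mean Hubble parameter H and anisotropy parameter k."""
    return H, 2.0 * k / (k + 1.0) * H, 2.0 / (k + 1.0) * H

def anisotropy_parameter(k):
    """A = (1/3) sum_i (1 - H_i/H)^2 = (2/3) ((k-1)/(k+1))^2."""
    H1, H2, H3 = directional_hubble(1.0, k)
    return sum((1.0 - Hi) ** 2 for Hi in (H1, H2, H3)) / 3.0

def omega_sigma(k):
    """Anisotropic contribution to the density parameter, eq. (16)."""
    return 1.0 - anisotropy_parameter(k) / 2.0

# the mean of the directional Hubble parameters recovers H
H1, H2, H3 = directional_hubble(0.07, 1.6)
assert abs((H1 + H2 + H3) / 3.0 - 0.07) < 1e-12

# closed form of A is reproduced
k = 1.6
assert abs(anisotropy_parameter(k) - (2.0 / 3.0) * ((k - 1) / (k + 1)) ** 2) < 1e-12

# Omega_sigma equals 1 for the isotropic case and decreases monotonically with k
assert abs(omega_sigma(1.0) - 1.0) < 1e-12
assert omega_sigma(2.0) < omega_sigma(1.5) < omega_sigma(1.0)
```

Since $\Omega_{\sigma}$ enters $\Omega_D$ additively through eq. \eqref{eq:17}, its decrease with $k$ directly lowers $\Omega_D$ at fixed $\Omega_k$ and $\Omega_m$.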
The sensitivity of $\delta$ to the presence of cosmic strings may be due to the fact that we have considered the strings to be aligned along the x-direction. We have also investigated the effect of the anisotropy parameter $k$ on the pressure anisotropies. It is found that a variation of $k$, in the given range, does not have any significant effect on the overall behaviour of the skewness parameters. \begin{figure}[h] \includegraphics[scale=0.45]{fig4} \caption{Evolution of skewness parameters in the presence of cosmic strings. The dotted lines represent the behaviour of the skewness parameters in the absence of cosmic strings.} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.5]{fig5} \caption{Evolution of the DE EoS parameter for three representative strengths of cosmic strings.} \end{figure} \subsection{DE equation of state} The dynamics of the universe is studied through the evolution of the DE EoS parameter $\omega_D$ in Fig.5. $\omega_D$ is not sensitive to the choice of the parameter $k$, at least at cosmic times spanning from the recent past to the late phase. However, at early times the choice of $k$ may weakly affect the dynamics. In view of this, in the figure we have considered a representative value $k=1.6$ and examined the dynamics in the presence of cosmic strings. It is obvious that, at the late phase of cosmic expansion, the dark energy strongly dominates even in the presence of cosmic strings and therefore the DE EoS is not affected by string anisotropy. However, at an early phase the cosmic strings have a substantial contribution to the density parameter and therefore the dynamics of the DE EoS is greatly affected. In the absence of any cosmic strings, the model behaves like a quintessence field and the DE EoS smoothly decreases in the quintessence region. With an increase in the string tension density, the model gathers some amount of energy in the early phase and behaves differently. In such cases, the DE EoS first increases with cosmic time and then decreases.
\section{Conclusion} In the present work, we have investigated the role of anisotropic components in the dynamical aspects of dark energy. We considered the dark energy to have anisotropic pressures along the different spatial directions. Some anisotropic effects have also been incorporated through the inclusion of cosmic strings aligned along the x-direction. A hybrid scale factor having two adjustable parameters has been considered to mimic a cosmic transition from an early decelerating phase to a late time acceleration. The parameters of the hybrid scale factor have been fixed on physical grounds. An increase in the anisotropy parameter decreases the DE density parameter and therefore shares some of the burden of providing an acceleration. The presence of a string phase substantially modifies the DE density at early epochs. The anisotropy in the DE pressure continues along with the cosmic expansion and grows somewhat more rapidly at a later period. In general, the pressure anisotropies along the different directions increase in the early phase dominated by cosmic strings. Along the x-direction, the anisotropic effect in the DE pressure is larger due to the string contribution. Along the z-direction, the presence of the string phase has little effect on the pressure anisotropy. The DE EoS is also affected by the anisotropic effect of cosmic strings at the early phase. However, the dominance of DE at late times prevents the DE EoS from being substantially affected by cosmic strings. \section{Acknowledgement} BM and SKT acknowledge the hospitality of IUCAA, Pune during an academic visit where a part of this work was done.
\section{Introduction} \label{sec:Sec1} The existence of different anomalous astrophysical and cosmological phenomena, like the cosmic acceleration, the dynamics of galaxies and gas in clusters of galaxies, the galactic rotation curves, etc., has recently boosted the growth of several long-range modifications of the usual laws of gravitation. These phenomena have not found satisfactory explanations in terms of standard Newton-Einstein gravitational physics, unless exotic and still undetected forms of matter-energy are postulated: dark matter and dark energy. A recent approach is to try to explain these phenomena without using new material ingredients like dark matter and dark energy, but using well-motivated generalizations and extensions of General Relativity (GR). Several alternative gravity theories have been proposed (see e.g. \cite{fisc99,clif12,capo11a,capo11b,cope04,soti10,noji11} for reviews), such as: MOND \cite{milg83}, scalar-tensor \cite{bran61,moff05,moff06,bisw12}, conformal \cite{behn02,barb06}, Yukawa-like corrected gravity theories \cite{fisc92,card11,stab, stab1}, and theories of ``massive gravity'' \cite{ruba08,babi10,pitt07,babi09,rham11,yun13,babi13}. Alternative approaches to Newtonian gravity in the framework of the weak field limit \cite{clif05} of fourth-order gravity theory have been proposed and constraints on these theories have been discussed \cite{capo06,capo07,zakh06,zakh07,frig07,nuci07,bork12,capo09a, iori10,bork13,capo14,zakh14}. The philosophy is to search for alternative forms of gravity, i.e. modifications of the Einstein-Hilbert theory, such that these modifications could naturally explain some astrophysical and cosmological phenomena without invoking the presence of new material ingredients that, at the present state of the art, seem hard to detect.
Besides, this extended approach can be connected to effective theories that emerge both from quantization on curved spacetimes and from several unification schemes \cite{clif12,capo11a,capo11b}. The simplest extension starts from the Einstein-Hilbert action, where the gravitational action is assumed to be linear in the Ricci curvature scalar $R$. Modifying the geometric part of this action by considering a generic function $f(R)$ yields the so-called $f(R)$ gravity, first proposed in 1970 by Buchdahl \cite{buch70}. Generally, the most serious problem of $f(R)$ theories is that they cannot easily pass the standard Solar System tests \cite{chib03,olmo05}. However, there exist some classes of them that can solve this problem \cite{noji03}. It can be shown that $f(R)$ theories, in principle, could explain the evolution of the Universe, from a matter dominated early epoch up to the present, late-time self-accelerating phase. Several debates are open in this perspective \cite{amen07a,gohe09,capo06b,amen07b}, but the crucial point is that suitable self-consistent models can be achieved. $f(R)$ theories have also been studied in the Palatini approach, where the metric and the connection are regarded as independent fields \cite{olmo}. The metric and Palatini approaches are certainly equivalent in the context of GR, i.e., in the case of the linear Einstein-Hilbert action. This is not so for extended gravities. The Palatini variational approach leads to second order differential field equations, while the resulting field equations in the metric approach are fourth order coupled differential equations. These differences also extend to the observational aspects.
A novel approach, which consists of adding to the metric Einstein-Hilbert Lagrangian an $f(R)$ term constructed within the framework of the Palatini formalism, was recently proposed \cite{hark12,capo13,capo13a}. The aim of this formulation is twofold: on one side, one wants to describe the extra gravitational budget in the metric-affine formalism; on the other side, one wants to cure the shortcomings emerging in $f(R)$ gravity in both the metric and Palatini formulations. In particular, hybrid gravity allows one to disentangle the metric and the geodesic structures, pointing out that the further degrees of freedom coming from $f(R)$ can be recast as an auxiliary scalar field. In such a case, problems related to the Brans-Dicke-like representation of $f(R)$ gravity in terms of a scalar-tensor theory (the so-called O'Hanlon transformation) are immediately avoided (see \cite{capo13a} for details and the discussion in Sec. 2). Due to this feature, the scalar-tensor representation of hybrid gravity is preferable to other scalar-tensor representations of the gravitational interaction. As byproducts, the appearance of ghosts is avoided and the correct weak field limit of $f(R)$ gravity with respect to GR is recovered. Furthermore, several issues related to galactic dynamics, the formulation of the virial theorem in alternative gravity, and the dark energy behavior seem to be better addressed than in $f(R)$ gravity considered in either the metric or the Palatini formulation. In summary, the hybrid metric-Palatini theory opens up new possibilities to approach, in the same theoretical framework, the problems of both dark energy and dark matter, disentangling the extra degrees of freedom of the gravitational field with respect to GR. For a brief review of the hybrid metric-Palatini theory, we refer the reader to \cite{capo13b}. In this perspective, stellar dynamics around the Galactic Centre could be a useful test bed to probe the effective gravitational potentials coming from the theory.
In particular, the S-stars are the young bright stars which move around the centre of our Galaxy \cite{ghez00,scho02,gill09a,gill09b,ghez08,genz10}, where the compact radio source Sgr A$^\ast$ is located. For more details about the S2 star see references \cite{gill12,genz10}. There are some observational indications that the orbits of some of them, like S2, could deviate from the Keplerian case \cite{gill09a,meye12}, but the current astrometric limit is not sufficient to unambiguously confirm such a claim \cite{bork13,frit10}. Here we study a possible application of hybrid modified gravity within the Galactic Central Parsec, in order to explain the observed precession of the orbits of S-stars. This paper is a continuation of previous studies where we considered different extended gravities, such as power law $f(R)$ gravity \cite{bork12,zakh14} and $f(R,\phi)$ gravity implying Yukawa and Sanders-like gravitational potentials in the weak field limit \cite{bork13, capo14}. The results obtained using hybrid gravity point out that, very likely, such a theory is the best candidate among those considered to explain, within the same picture, different gravitational phenomena at different astronomical scales. More details about hybrid gravity can be found in \cite{olmo,hark12,capo13a,capo13b}. It is shown in \cite{capo13a,capo13b} that this type of modified gravity coherently addresses the Solar System issues, and the motivations for addressing them are discussed in detail in \cite{capo13b}. A modified theory of gravity needs to be constrained at different scales: at laboratory distances, in the Solar System, in galaxies and galactic clusters, and at cosmological scales. Obtaining constraints at any of these scales is a fundamental issue in order to select or rule out models. In particular, it is important to investigate gravity in the vicinity of very massive compact objects, because the environment around these objects is drastically different from that in the Solar System framework.
The S2 star orbit is a unique opportunity to test gravity at the sub-parsec scale of a few thousand AU. For example, gravity is relatively well constrained at short ranges (especially at the sub-mm scale) by experimental tests; however, further tests are still needed at long ranges (see Figures 9 and 10 of \cite{adel09} for different ranges). It is worth stressing that a phenomenological approach can be useful in this context. In particular, the motion of the S2 star is a suitable tool to test alternative gravity. For the reasons that we will discuss in detail below, hybrid gravity is a reliable paradigm to describe the gravitational interaction without considering dark energy and dark matter. Specifically, the massive compact object inside the Galactic Center is surrounded by a matter distribution, and deviations of the S2 star motion from the Keplerian orbit are observed in detail. These deviations can be triggered both by the masses of the surrounding bodies and by the strong field regime at the Galactic Center. This peculiar situation constitutes a formidable opportunity to test theories of gravity. However, it is important to stress that the numerical results reported here, obtained by comparing models with astronomical observations, represent only upper bounds for the precession angle on the deviation from GR. More accurate studies will be necessary in future work to better constrain the dynamics around the Galactic Center. The present paper is organized as follows: in Sec. 2 we sketch the theory of hybrid gravity. In Sec. 3 we describe our simulations of stellar orbits in the gravitational potential derived in the weak field limit of hybrid gravity, and we describe the fitting procedure used to compare simulated orbits with astrometric observations of the S2 star. Results are presented in Sec. 4. Conclusions are drawn in Sec. 5.
\section{Hybrid metric-Palatini gravity} \label{sec:Sec2} In this Section, we present the basic formalism of the hybrid metric-Palatini gravitational theory within the equivalent scalar-tensor representation (we refer the reader to \cite{capo13a,capo13b,koiv10,capo12} for more details). The $f(R)$ theories are special limits of the one-parameter class of theories where the scalar field depends solely on the stress-energy trace $T$ (Palatini version) or solely on the Ricci curvature $R$ (metric version). Here, we consider a one-parameter class of scalar-tensor theories where the scalar field is given as an algebraic function of the trace of the matter fields and the scalar curvature \cite{koiv10}: \begin{equation} \label{st_action} S = \int d^D x \sqrt{-g}\left[\frac{1}{2}\phi R - \frac{D-1}{2(D-2)\left(\Omega_A-\phi\right)}(\partial\phi)^2 - V(\phi)\right]. \end{equation} These theories are parameterized by the constant $\Omega_A$. The limiting values $\Omega_A=0$ and $\Omega_A \rightarrow \infty$ correspond to scalar-tensor theories with the Brans-Dicke parameter $\omega=-(D-1)/(D-2)$ and $\omega=0$. These limits reduce to $f(R)$ gravity in the Palatini and the metric formalism, respectively. For any finite value of $\Omega_A$, the scalar field depends both on matter and curvature. In the limit $\Omega_A \rightarrow \infty$ the propagating mode is given solely by the curvature, $\phi(R,T) \rightarrow \phi(R)$, and in the limit $\Omega_A\rightarrow 0$ solely by the matter fields, $\phi(R,T) \rightarrow \phi(T)$. In the general case, the field equations are fourth order both in the matter and in the metric derivatives, as we will show below. More specifically, the intermediate theory with $\Omega_A=1$ and $D=4$ corresponds to the hybrid metric-Palatini gravity theory proposed in \cite{hark12, capo13a}, where the action is given by \begin{equation} \label{action} S= \int d^4 x \sqrt{-g} \left[ R + f(\mathcal{R}) + 2\kappa^2 \mathcal{L}_m \right]\,,
\end{equation} where $\kappa^2\equiv 8\pi G$, $R$ is the Einstein-Hilbert term, and $\mathcal{R} \equiv g^{\mu\nu}\mathcal{R}_{\mu\nu} $ is the Palatini curvature, built with the independent connection $\hat{\Gamma}^\alpha_{\mu\nu}$ as \begin{equation} \mathcal{R} \equiv g^{\mu\nu} \mathcal{R}_{\mu\nu} \equiv g^{\mu\nu}\left( \hat{\Gamma}^\alpha_{\mu\nu , \alpha} - \hat{\Gamma}^\alpha_{\mu\alpha , \nu} + \hat{\Gamma}^\alpha_{\alpha\lambda}\hat{\Gamma}^\lambda_{\mu\nu} - \hat{\Gamma}^\alpha_{\mu\lambda}\hat{\Gamma}^\lambda_{\alpha\nu}\right)\, .\label{r_def} \end{equation} The Palatini-Ricci tensor $\mathcal{R}_{\mu\nu}$ is \begin{equation} \mathcal{R}_{\mu\nu} \equiv \hat{\Gamma}^\alpha_{\mu\nu ,\alpha} - \hat{\Gamma}^\alpha_{\mu\alpha , \nu} + \hat{\Gamma}^\alpha_{\alpha\lambda}\hat{\Gamma}^\lambda_{\mu\nu} -\hat{\Gamma}^\alpha_{\mu\lambda}\hat{\Gamma}^\lambda_{\alpha\nu}\,. \end{equation} Varying the action (\ref{action}) with respect to the metric, one obtains the field equations \begin{equation} \label{efe} G_{\mu\nu} + F(\mathcal{R})\mathcal{R}_{\mu\nu}-\frac{1}{2}f(\mathcal{R})g_{\mu\nu} = \kappa^2 T_{\mu\nu}\,, \end{equation} where the matter stress-energy tensor is \begin{equation} \label{memt} T_{\mu\nu} \equiv -\frac{2}{\sqrt{-g}} \frac{\delta (\sqrt{-g}\mathcal{L}_m)}{\delta(g^{\mu\nu})}. \end{equation} The independent connection is compatible with the metric $F(\mathcal{R})g_{\mu\nu}$, conformal to $g_{\mu\nu}$, with the conformal factor given by $F(\mathcal{R}) \equiv df(\mathcal{R})/d\mathcal{R}$. This fact gives \begin{equation} \begin{array}{l} \label{ricci} \mathcal{R}_{\mu\nu} = R_{\mu\nu} + \frac{3}{2}\frac{1}{F^2(\mathcal{R})}F(\mathcal{R})_{,\mu}F(\mathcal{R})_{,\nu} \\ ~~~~~ - \frac{1}{F(\mathcal{R})}\nabla_\mu F(\mathcal{R})_{,\nu} - \frac{1}{2}\frac{1}{F(\mathcal{R})}g_{\mu\nu}\nabla_\alpha \nabla^\alpha F(\mathcal{R})\,.
\end{array} \end{equation} The Palatini curvature $\mathcal{R}$ is obtained from the trace of the field equations (\ref{efe}), which is \begin{equation} \label{trace} F(\mathcal{R})\mathcal{R} -2f(\mathcal{R})= \kappa^2 T + R \equiv X\,. \end{equation} $\mathcal{R}$ can be algebraically expressed in terms of $X$ if $f(\mathcal{R})$ is analytic. In other words, the variable $X$ measures how the theory deviates from the GR trace equation $R=-\kappa^2 T$. We can express the field equations (\ref{efe}) in terms of the metric and $X$ as \begin{eqnarray} \label{efex} G_{\mu\nu} & = & \frac{1}{2}f(X)g_{\mu\nu}- F(X)R_{\mu\nu} + F'(X) \nabla_{\mu}X_{,\nu} \nonumber \\ & + & \frac{1}{2}\left[ F'(X)\nabla_\alpha \nabla^\alpha X + F''(X)\left( \partial X\right)^2 \right] g_{\mu\nu} \nonumber \\ & + & \left[ F''(X)-\frac{3}{2}\frac{\left( F'(X)\right)^2}{F(X)}\right] X_{,\mu}X_{,\nu} + \kappa^2 T_{\mu\nu}\,, \end{eqnarray} where $(\partial X)^2=X_{,\mu}X^{,\mu}$. The trace of the field equations is now \begin{eqnarray} \label{trace2} F'(X)\nabla_\alpha \nabla^\alpha X + \left[ F''(X)-\frac{1}{2}\frac{\left( F'(X)\right)^2}{F(X)}\right] \left( \partial X\right)^2 \nonumber \\ + \frac{1}{3}\left[ X + 2f(X)-F(X)R\right]= 0 \,, \end{eqnarray} while the relation between the metric scalar curvature $R$ and the Palatini scalar curvature $\mathcal{R}$ is \begin{equation} \label{ricciscalar} \mathcal{R}(X) = R+\frac{3}{2}\left[ \left(\frac{F'(X)}{F(X)}\right)^2-2\frac{\nabla_\alpha \nabla^\alpha F(X)}{F(X)}\right]\,, \end{equation} which can be obtained by contracting Eq.~(\ref{ricci}). As for the pure metric and Palatini cases \cite{capo11b}, the action (\ref{action}) for the hybrid metric-Palatini theory can be recast into a scalar-tensor theory by introducing an auxiliary field $A$ such that \begin{equation} \label{eq:S_scalar0} S= \frac{1}{2\kappa^2}\int d^4 x \sqrt{-g} \left[R + f(A)+f_A(\mathcal{R}-A)\right] +S_m \ , \end{equation} where $f_A\equiv df/dA$ and $S_m$ is the matter action.
Rearranging terms and defining $\phi\equiv f_A$, $V(\phi)=A f_A-f(A)$, Eq. (\ref{eq:S_scalar0}) becomes \begin{equation} \label{eq:S_scalar1} S= \frac{1}{2\kappa^2}\int d^4 x \sqrt{-g} \left[R + \phi\mathcal{R}-V(\phi)\right] +S_m \ . \end{equation} The variation of this action with respect to the metric, the scalar $\phi$ and the connection leads to the field equations \begin{eqnarray} R_{\mu\nu}+\phi \mathcal{R}_{\mu\nu}-\frac{1}{2}\left(R+\phi\mathcal{R}-V\right)g_{\mu\nu}&=&\kappa^2 T_{\mu\nu}\,, \label{eq:var-gab}\\ \mathcal{R}-V_\phi&=&0 \label{eq:var-phi} \,, \\ \hat{\nabla}_\alpha\left(\sqrt{-g}\phi g^{\mu\nu}\right)&=&0 \,, \label{eq:connection}\ \end{eqnarray} respectively. The solution of Eq.~(\ref{eq:connection}) implies that the independent connection is the Levi-Civita connection of a metric $h_{\mu\nu}=\phi g_{\mu\nu}$; that is, we are dealing with a bi-metric theory, and $\mathcal{R}_{\mu\nu}$ and $R_{\mu\nu}$ are related by \begin{equation} \label{eq:conformal_Rmn} \mathcal{R}_{\mu\nu}=R_{\mu\nu}+\frac{3}{2\phi^2}\partial_\mu \phi \partial_\nu \phi-\frac{1}{\phi}\left(\nabla_\mu \nabla_\nu \phi+\frac{1}{2}g_{\mu\nu}\nabla_\alpha \nabla^\alpha \phi\right) \ , \end{equation} which can be used in the action (\ref{eq:S_scalar1}) to obtain the following scalar-tensor representation \begin{equation} \label{eq:S_scalar2} S= \frac{1}{2\kappa^2}\int d^4 x \sqrt{-g} \left[ (1+\phi)R +\frac{3}{2\phi}\partial_\mu \phi \partial^\mu \phi -V(\phi)\right] +S_m \ . \end{equation} We stress that, by the substitution $\phi \rightarrow -(\kappa\phi)^2/6$, the action (\ref{eq:S_scalar2}) reduces to the case of a conformally coupled scalar field with a self-interaction potential. This redefinition makes the kinetic term in the action (\ref{eq:S_scalar2}) the standard one, and the action itself becomes that of a massive scalar field conformally coupled to Einstein gravity. Of course, this is not Brans-Dicke gravity, where the scalar field is massless.
As discussed above, in the limit $\Omega_A\rightarrow 0$, the theory (\ref{eq:S_scalar2}) becomes the Palatini-$f(\mathcal{R})$ gravity, and in the limit $\Omega_A\rightarrow \infty$ it becomes the metric $f(R)$ gravity. Apart from these singular cases, any theory with a finite $\Omega_A$ is in the ``hybrid'' regime, which from this point of view provides a unique interpolation between the two a priori completely distinct classes of gravity theories. Using Eq.~(\ref{eq:conformal_Rmn}) and Eq.~(\ref{eq:var-phi}) in Eq.~(\ref{eq:var-gab}), the metric field equations are \begin{eqnarray} (1+\phi) R_{\mu\nu} & = & \kappa^2\left(T_{\mu\nu}-\frac{1}{2}g_{\mu\nu} T\right) \nonumber \\ && + \frac{1}{2}g_{\mu\nu}\left(V+\nabla_\alpha \nabla^\alpha \phi\right) \nonumber \\ && +\nabla_\mu\nabla_\nu\phi-\frac{3}{2\phi} \partial_\mu \phi \partial_\nu \phi \ \label{eq:evol-gab}\,, \end{eqnarray} so that the spacetime curvature is sourced by both the matter and the scalar field. The scalar field equation can be manipulated in two different ways that illustrate how this theory is related to the $w=0$ and $w=-3/2$ cases, which correspond to the metric and Palatini scalar-tensor representations of $f(R)$ gravity \cite{capo11b}, respectively. Considering the trace of Eq.~(\ref{eq:var-gab}) with $g^{\mu\nu}$, we find $-R-\phi\mathcal{R}+2V=\kappa^2T$, and, using Eq.~(\ref{eq:var-phi}), this becomes \begin{equation}\label{eq:phi(X)} 2V-\phi V_\phi=\kappa^2T+ R \ . \end{equation} As in the Palatini case ($w=-3/2$), this equation shows that the field $\phi$ can be expressed as an algebraic function of the scalar $X\equiv \kappa^2T+ R$, i.e., $\phi=\phi(X)$. In the pure Palatini case, however, $\phi$ is just a function of $T$. Therefore the right-hand side of Eq.~(\ref{eq:evol-gab}) contains matter terms associated with the trace $T$, its derivatives, and also the curvature $R$ and its derivatives.
In other words, this theory can be seen as a higher-derivative theory in both the matter and metric fields. However, such an interpretation can be avoided if $R$ is replaced in Eq. (\ref{eq:phi(X)}) with the relation \begin{equation} R=\mathcal{R}+\frac{3}{\phi}\nabla_\mu \nabla^\mu \phi-\frac{3}{2\phi^2}\partial_\mu \phi \partial^\mu \phi \end{equation} together with $\mathcal{R}=V_\phi$. One then finds that the scalar field dynamics is given by a second-order equation that becomes, for $\Omega_A=1$, \begin{equation}\label{eq:evol-phi} -\nabla_\mu \nabla^\mu \phi+\frac{1}{2\phi}\partial_\mu \phi \partial^\mu \phi+\frac{\phi[2V-(1+\phi)V_\phi]} {3}=\frac{\phi\kappa^2}{3}T\,, \end{equation} which is a Klein-Gordon equation. This result shows that, unlike in the Palatini case ($w=-3/2$), the scalar field is dynamical. In this sense, the theory is not affected by the microscopic instabilities that arise in Palatini models (see \cite{olmo} for details). \section{The weak field limit and the fitting procedure} In the weak field limit and far from the sources, the scalar field behaves as $\phi(r) \approx \phi_0 + ( 2G\phi_0 M /3r) e^{-m_\phi r}$; the effective mass is defined as \begin{equation} \label{mass} m_\phi^2 \equiv \left. (2V-V_{\phi}-\phi(1+\phi)V_{\phi\phi})/3\right| _{\phi=\phi_0}\,, \end{equation} where $\phi_0$ is the amplitude of the background value of $\phi$, and $V$, $V_\phi$ and $V_{\phi\phi}$ are the potential and its first and second derivatives with respect to $\phi$, respectively. The metric perturbations yield \begin{eqnarray} \label{pippo} h_{00}^{(2)}(r) &=& \frac{2G_{\rm eff} M}{r} +\frac{V_0}{1+\phi_0} \frac{r^2}{6}\,, \nonumber \\ h_{ij}^{(2)}(r) &=& \left(\frac{2\gamma G_{\rm eff} M}{r}-\frac{V_0}{1+\phi_0}\frac{r^2} {6}\right)\delta_{ij} \label{cor3}\ , \end{eqnarray} where $V_0$ is the minimum of the potential $V$.
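Since the form of $V(\phi)$ is left free, the definition (\ref{mass}) can be evaluated for any assumed analytic potential. As a minimal numerical sketch (our illustration; the quadratic potential and its parameter values are hypothetical and not taken from the text), one can check the definition against a closed form:

```python
# Sketch: for an assumed analytic potential V(phi) = 2*Lam + (mu2/2)*phi^2
# (hypothetical; the text leaves V free), Eq. (mass) gives
# m_phi^2 = [2V - V_phi - phi(1+phi) V_phiphi]/3 evaluated at phi = phi0.
def m_phi_squared(V, Vp, Vpp, phi0):
    return (2.0 * V(phi0) - Vp(phi0) - phi0 * (1.0 + phi0) * Vpp(phi0)) / 3.0

Lam, mu2 = 0.1, 0.5                    # hypothetical potential parameters
V = lambda p: 2.0 * Lam + 0.5 * mu2 * p * p
Vp = lambda p: mu2 * p                 # first derivative
Vpp = lambda p: mu2                    # second derivative

phi0 = -3.3e-4
num = m_phi_squared(V, Vp, Vpp, phi0)
# closed form for this particular V: m_phi^2 = (4*Lam - 2*mu2*phi0)/3
closed = (4.0 * Lam - 2.0 * mu2 * phi0) / 3.0
print(num, closed)
```

For this choice the mass is positive, so the Yukawa factor in the weak-field solution decays, but the sketch applies to any analytic $V$ by replacing the three lambdas.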
The effective Newton constant $G_{\rm eff}$ and the post-Newtonian parameter $\gamma$ are defined as \begin{eqnarray} \label{pippo1} G_{\rm eff} &\equiv& \frac{G}{1+\phi_0}\left[1-\left(\phi_0/3\right)e^{-m_\phi r}\right]\,, \nonumber \\ \gamma &\equiv& \frac{1+\left(\phi_0/3\right)e^{-m_\phi r}}{1-\left(\phi_0/3\right)e^{- m_\phi r}} \,. \end{eqnarray} The coupling of the scalar field to the local system depends on $\phi_0$. If $\phi_0 \ll 1$, then $G_{\rm eff}\approx G$ and $\gamma\approx 1$ regardless of the value of $m_\phi^2$. This is in contrast with the result obtained in the metric version of $f(R)$ theories. For sufficiently small $\phi_0$, this modified theory allows the Solar System tests to be passed even if the scalar field is very light \cite{capo13b}. According to these considerations, the leading parameters are $m_\phi$ and $\phi_0$. Their values quantify both the deviation from GR and the relevance of the affine contribution (i.e. the Palatini term) with respect to metric $f(R)$ gravity. Constraining both of them by observations immediately gives information on hybrid gravity. Starting from the above results, the modified gravitational potential can be written in the form: \begin{eqnarray} \label{pot} \Phi \left( r \right)= -\frac{G}{1+\phi_0}\left[1-\left(\phi_0/3\right)e^{-m_\phi r}\right] M/r. \end{eqnarray} An important remark is necessary at this point. We have not chosen the form of $V(\phi)$, since the only requirement is that the scalar field potential is an analytic function of $\phi$. In such a case, the effective mass (\ref{mass}) can always be defined. Clearly, the aim is to derive specific forms of the potential starting from the observations. This means a sort of ``inverse scattering procedure'' by which the potential $V(\phi)$ can be reconstructed from the observed values of the parameters $M$, $\phi_0$, $m_\phi$ and $\gamma$. To this end, let us use Eq.
(\ref{pot}) to simulate orbits of the S2 star in the hybrid modified gravity potential and then compare the obtained results with the set of S2 star observations obtained by the New Technology Telescope/Very Large Telescope (NTT/VLT). The simulated orbits of the S2 star are obtained by numerical integration of the equations of motion in which the hybrid gravitational potential is adopted, i.e. \begin{equation} \mathbf{\dot{r}}=\mathbf{v},\hspace*{0.5cm} \mu\mathbf{\ddot{r}}=-\triangledown\Phi\left( \mathbf{r}\right), \label{2body} \end{equation} \noindent where $\mu$ is the reduced mass in the two-body problem. In that way we obtained the simulated orbit of the S2 star around the Galactic Centre in the weak field approximation of hybrid gravity, where Eqs. (\ref{pippo}) and (\ref{pippo1}) hold. Taking into account that $\gamma=\gamma(\phi_0, m_{\phi})$, the considered weak field solution depends on the following three parameters: $M$, $\phi_0$, and $m_{\phi}$. The mass $M$ of the central object can be obtained independently using different observational techniques, such as e.g. virial analysis of the ionized gas in the central parsec \cite{lacy82} (yielding $M = 3 \times 10^6 M_{sun}$), the $M-\sigma$ (mass - bulge velocity dispersion) relationship for the Milky Way \cite{trem02} (yielding $M = 9.4 \times 10^6 M_{sun}$) or Keplerian orbits of S-stars \cite{gill09b} (yielding $M = 4.3 \times 10^6 M_{sun}$). Since our goal was not to make a new estimate of the mass $M$ using hybrid gravity, but instead to study the possible deviations from the Keplerian orbit of the S2 star which could indicate signatures of hybrid gravity on these scales, we adopted the last of the three estimates mentioned above for the mass of the central object ($M = 4.3 \times 10^6 M_{sun}$), as well as the distance to the S2 star given by \cite{gill09a} ($d_\star$ = 8.3 kpc), and constrained only the remaining two free parameters ($\phi_0$, $m_{\phi}$).
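The integration of Eq. (\ref{2body}) with the potential (\ref{pot}) can be sketched as follows (our illustration: the fourth-order Runge-Kutta scheme, the choice of units, and the S2-like orbital elements $a\sim 10^3$ AU, $e\sim 0.88$ are assumptions for this sketch rather than the exact values used in the fit):

```python
import numpy as np

# Units: AU, yr, solar masses, so that G = 4*pi^2 AU^3 M_sun^-1 yr^-2.
G, M = 4.0 * np.pi**2, 4.3e6         # adopted central mass, in solar masses
PHI0, M_PHI = -0.00033, -0.0028      # illustrative hybrid-gravity parameters

def potential(r):
    """Hybrid potential Phi(r) of Eq. (pot); Newtonian for PHI0 -> 0."""
    return -G * M / ((1.0 + PHI0) * r) * (1.0 - (PHI0 / 3.0) * np.exp(-M_PHI * r))

def acceleration(pos):
    """a = -grad Phi(r) for the hybrid potential (radial derivative by hand)."""
    r = np.hypot(*pos)
    pref = G * M / (1.0 + PHI0)
    dphi_dr = (pref * (1.0 - (PHI0 / 3.0) * np.exp(-M_PHI * r)) / r**2
               - pref * (PHI0 / 3.0) * M_PHI * np.exp(-M_PHI * r) / r)
    return -dphi_dr * pos / r

def rk4_step(state, dt):
    """One Runge-Kutta 4 step for state = (x, y, vx, vy)."""
    def deriv(s):
        return np.concatenate([s[2:], acceleration(s[:2])])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Start at the apocentre of an S2-like orbit (a ~ 1000 AU, e ~ 0.88).
a, e = 1000.0, 0.88
r0 = a * (1.0 + e)
v0 = np.sqrt(G * M * (1.0 - e) / (a * (1.0 + e)))   # Keplerian apocentre speed
state = np.array([r0, 0.0, 0.0, v0])
energy0 = 0.5 * (state[2]**2 + state[3]**2) + potential(np.hypot(*state[:2]))
for _ in range(2000):
    state = rk4_step(state, 0.001)                  # dt = 0.001 yr
energy1 = 0.5 * (state[2]**2 + state[3]**2) + potential(np.hypot(*state[:2]))
print(abs(energy1 / energy0 - 1.0))                 # energy drift of the integrator
```

The conserved specific energy provides a basic sanity check of the integrator; in the actual fitting, the simulated positions and velocities at the observational epochs are what get compared to the data.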
The parameter $\phi_0$ is dimensionless, while $m_{\phi}$ is given in AU$^{-1}$ (AU being the astronomical unit), so that $m_{\phi}^{-1}$ represents a length scale of the gravitational interaction. Non-zero values of these two parameters, if obtained, would indicate a potential deviation from GR. In order to obtain the constraints on $\phi_0$ and $m_{\phi}$, these two parameters were varied. For each combination of them, the simulated coordinates $x$ and $y$ and velocity components $v_x$ and $v_y$ of the S2 star were calculated. Calculations were performed for each observational epoch and then compared with the corresponding observed positions and velocities. The $\chi^2$ between the observed and calculated coordinates of the S2 star is minimized using the LMDIF1 routine from the MINPACK-1 Fortran 77 library, which solves nonlinear least squares problems by a modification of the Levenberg-Marquardt algorithm \cite{more80} (for more details on the fitting procedure see \cite{bork13}). \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{fig1a.eps} \hspace*{0.8cm} \includegraphics[width=0.45\textwidth]{fig1b.eps} \caption{(Color online) Comparison between the orbits of S2 star in Newtonian gravity (red dashed line) and in hybrid gravity (blue solid line) during 5 orbital periods, for (left panel) $\phi_0$ = -0.00033 and $m_\phi$ = -0.0028, and (right panel) $\phi_0$ = -0.000033 and $m_\phi$ = -0.00028.} \label{fig01} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{fig2a.eps} \hspace*{0.8cm} \includegraphics[width=0.45\textwidth]{fig2b.eps} \caption{(Color online) The precession per orbital period for $\phi_0$ in the range $[-0.0009, -0.0002]$ and $m_\phi$ in $[-0.0034, -0.0025]$ (left panel), and $\phi_0$ in the range $[-0.0004, -0.0002]$ and $m_\phi$ in $[-0.0029, -0.0027]$ (right panel) in the case of the hybrid modified gravity potential.
Darker colors correspond to smaller values of the precession angle.} \label{fig02} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{fig3a.eps} \hspace*{1cm} \includegraphics[width=0.45\textwidth]{fig3b.eps} \caption{(Color online) The maps of the reduced $\chi^{2}$ over the $\phi_0 - m_\phi$ parameter space for all simulated orbits of S2 star which give fits at least as good as the Keplerian orbits. With decreasing value of $\chi^{2}$ (better fit), the grey-scale colors are darker. A few contours are presented for specific values of the reduced $\chi^{2}$ given in the figure's legend.} \label{fig03} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{fig4a.eps} \hspace*{1cm} \includegraphics[width=0.45\textwidth]{fig4b.eps} \caption{(Color online) The same as in Fig. \ref{fig03}, but for a zoomed range of parameters.} \label{fig04} \end{figure*} \section{Results: simulations vs observations} \label{sec:Sec3} Let us now discuss the numerical simulations that we want to compare with observations in order to select the range of the potential parameters (3.4). As we will see, the hybrid gravity analysis fits the observational data better than the standard Keplerian analysis. \subsection{Numerical calculation of S2 star orbit and orbital precession} The simulated orbits of the S2 star around the central object in hybrid gravity (blue solid line) and in Newtonian gravity (red dashed line) for $\phi_0$ = -0.00033 and $m_\phi$ = -0.0028 (left panel), as well as for $\phi_0$ = -0.000033 and $m_\phi$ = -0.00028 (right panel), during 5 orbital periods, are presented in Fig. \ref{fig01}. As can be seen from this figure, hybrid gravity causes orbital precession in the same direction as GR, but the precession angle is much larger. When both $\phi_0$ and $m_\phi$ are decreased by an order of magnitude, the precession is much smaller (see the right panel of Fig. 1).
This analysis also shows that the Keplerian orbit is recovered as $\phi_0$ and $m_\phi$ tend to 0. We calculated the orbital precession in the hybrid modified gravity potential; the results are reported in Fig. \ref{fig02} as a function of $\phi_0$ and $m_\phi$. Assuming that the hybrid potential does not differ significantly from the Newtonian potential, we derive the perturbing potential as \begin{equation} V(r) = \Phi \left( r \right) - {\Phi_N}\left( r \right), \qquad {\Phi_N}\left( r \right) = - \dfrac{{GM}}{r}. \label{equ04} \end{equation} \noindent The obtained perturbing potential is of the form \begin{eqnarray} V(r)= \frac{G}{1+\phi_0}\left[1+\left(1/3\right)e^{-m_\phi r}\right] M \phi_0/r, \end{eqnarray} \noindent and it can be used for calculating the precession angle according to Eq. (30) in Ref. \cite{adki07}: \begin{equation} \Delta \theta = \dfrac{-2L}{GM e^2}\int\limits_{-1}^1 {\dfrac{z \cdot dz}{\sqrt{1 - z^2}}\dfrac{dV\left( z \right)}{dz}}, \label{equ06} \end{equation} \noindent where $r$ is related to $z$ via $r = \dfrac{L}{1 + ez}$. By differentiating the perturbing potential $V(z)$, substituting its derivative and the semilatus rectum $L = a\left( {1 - {e^2}} \right)$ into Eq. (\ref{equ06}), and taking the same values for the orbital elements of the S2 star as in Ref. \cite{bork12}, we obtain numerically, for $\phi_0$ = -0.00033 and $m_\phi$ = -0.0028, a precession per orbital period of $3^\circ.26$. A graphical representation of the precession per orbital period for $\phi_0$ in the range $[-0.0009, -0.0002]$ and $m_\phi$ in $[-0.0034, -0.0025]$ is given in the left panel of Fig. \ref{fig02}. As one can see, a pericenter advance (as in GR) is obtained. The precession per orbital period for $\phi_0$ in the range $[-0.0004, -0.0002]$ and $m_\phi$ in $[-0.0029, -0.0027]$ is given in the right panel of Fig. \ref{fig02}.
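Eq. (\ref{equ06}) is straightforward to evaluate numerically. The following sketch (our illustration; the orbital elements $a=1000$ AU, $e=0.88$ are stand-ins, since the text defers to Ref. \cite{bork12} for the actual values) uses the perturbing potential above with $r=L/(1+ez)$ and checks that the resulting shift is prograde:

```python
import numpy as np

PHI0, M_PHI = -0.00033, -0.0028     # best-fit values quoted in the text
a, e = 1000.0, 0.88                 # illustrative S2-like orbital elements (AU)
L = a * (1.0 - e**2)                # semilatus rectum, L = a(1 - e^2)

def dV_dz(z):
    """d/dz of the perturbing potential V(z), with r = L/(1+ez); G M is set
    to 1, since it cancels against the prefactor -2L/(G M e^2) in Eq. (equ06)."""
    w = 1.0 + e * z
    ex = np.exp(-M_PHI * L / w)
    return (PHI0 / (1.0 + PHI0)) * (e / L) * (1.0 + (ex / 3.0) * (1.0 + M_PHI * L / w))

z = np.linspace(-1.0, 1.0, 400001)[1:-1]        # avoid the integrable endpoints
f = z / np.sqrt(1.0 - z**2) * dV_dz(z)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoid rule
delta_theta = -2.0 * L / e**2 * integral
print(np.degrees(delta_theta))                  # precession per orbit, degrees
```

For these illustrative elements the shift comes out at the level of a few degrees per orbit, i.e. of the same order as the $3^\circ.26$ quoted above; the constant part of $dV/dz$ integrates to zero by symmetry, so only the Yukawa-like term contributes.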
\subsection{Comparison of theoretical results and observations} Let us now give some constraints on the parameters $\phi_0$ and $m_{\phi}$ of the hybrid gravity potential according to the currently available observations of the S2 star orbit. We should note, however, that the present astrometric limit is still not sufficient to definitively confirm that the S2 orbit deviates from a Keplerian one, although this is quite probable, since the astrometric accuracy has been constantly improving, from around 10 mas during the first part of the observational period to less than 0.3 mas at present (see reference \cite{frit10}). There are also some recent studies that provide more and more evidence that the orbit of the S2 star is not closing (see e.g. Fig. 2 in paper \cite{meye12}). In this paper we fitted the NTT/VLT astrometric observations of the S2 star, which contain a possible indication for orbital precession around the massive compact object at the Galactic Centre, in order to constrain the parameters of the hybrid gravity potential, since this kind of potential has not been tested at these scales yet. We have to stress that in reference \cite{gill09b}, on page 1092, Fig. 13, the authors presented the Keplerian orbit, but in order to obtain it they had to shift the position of the central point mass. In that way they implicitly assumed orbital precession. In our orbit calculation we do not need to move the central point mass in order to get a satisfactory fit. In fact, our comparison with astronomical observations yields upper bounds on the deviation of the precession angle from GR. The most probable value of the precession lies between these upper bounds and the GR result. In the future, using more precise astronomical observations, we could obtain more accurate results. Figs. \ref{fig03} and \ref{fig04} present the maps of the reduced $\chi^{2}$ over the $\phi_0 - m_\phi$ parameter space for all simulated orbits of the S2 star which give fits at least as good as the Keplerian orbits.
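The reduced-$\chi^2$ scan behind such maps can be sketched schematically as follows (a toy stand-in, our illustration: the one-dimensional observable, the noise level and the grid are invented for the sketch; the actual procedure compares simulated and observed positions and velocities of the S2 star and uses Levenberg-Marquardt minimization):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 15.0, 30)        # toy observational epochs (years)

def model(t, phi0, m_phi):
    # invented observable with a slow secular drift controlled by (phi0, m_phi);
    # NOT the real orbit model, which requires the full numerical integration
    return np.sin(2.0 * np.pi * t / 15.0) + phi0 * np.exp(-m_phi * t) * t

true_params = (-3.3e-4, -2.8e-3)
sigma = 1e-4
y_obs = model(t, *true_params) + sigma * rng.normal(size=t.size)

# brute-force scan over the same parameter ranges as in Figs. 3 and 4
phi0_grid = np.linspace(-9e-4, -2e-4, 41)
mphi_grid = np.linspace(-3.4e-3, -2.5e-3, 41)
chi2 = np.empty((phi0_grid.size, mphi_grid.size))
for i, p in enumerate(phi0_grid):
    for j, m in enumerate(mphi_grid):
        res = (y_obs - model(t, p, m)) / sigma
        chi2[i, j] = np.sum(res**2) / (t.size - 2)    # reduced chi^2
i0, j0 = np.unravel_index(np.argmin(chi2), chi2.shape)
print(phi0_grid[i0], mphi_grid[j0], chi2[i0, j0])
```

The darkest regions of the maps correspond to the cells of such a grid with the smallest reduced $\chi^2$; a gradient-based minimizer such as LMDIF1 locates the same minimum without scanning every cell.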
These maps are obtained by the same fitting procedure as before. As can be seen from Figs. \ref{fig03} and \ref{fig04}, the most probable value for the parameter $\phi_0$ in the case of the NTT/VLT observations of the S2 star is between -0.0009 and -0.0002, and for the parameter $m_{\phi}$ it is between -0.0034 and -0.0025 (see the darkest regions in Fig. \ref{fig04}). In other words, we obtain reliable constraints on the parameters $\phi_0$ and $m_{\phi}$ of hybrid modified gravity. The absolute minimum of the reduced $\chi^{2}$ ($\chi^{2}=1.503$) is obtained for $\phi_0$ = -0.00033 and $m_\phi$ = -0.0028. We simulated orbits of the S2 star around the central object considering both the hybrid gravitational potential and the Newtonian potential. Our analysis shows that the hybrid modified gravity potential induces a precession of the S2 star orbit in the same direction as GR. We used these simulated orbits to fit the observed orbits of the S2 star. The best fit (according to the NTT/VLT data) is obtained for $\phi_0$ between -0.0009 and -0.0002, and for $m_{\phi}$ between -0.0034 and -0.0025. This range corresponds to the scale parameter $m_{\phi}^{-1}$ from (1/0.0034) AU to (1/0.0025) AU ($\approx 300-400$ AU, i.e.\ $1.4-1.9$ mpc), which is comparable to the size of the S2 star orbit. We believe that the comparison with astronomical observations is important, and the data we used are the best currently published and available. GR predicts that the pericenter of the S2 star should advance by $0^\circ.18$ per orbital revolution \cite{gill09b}. Using our fitting procedure, we get a much larger precession of $3^\circ.26$. Figure 2 of this paper gives the theoretically calculated precession per orbital period for hybrid gravity over the $\phi_0 - m_\phi$ parameter space. In the future, with much more precise data, observations may find a smaller value of the precession, and using Figure 2 (i.e.
the same procedure), we will again be able to obtain the hybrid gravity parameters $m_\phi$ and $\phi_0$, hopefully with observations giving smaller values. We calculated the map of parameters theoretically for a broad range of precession angles. More precise observations will probably change the best-fit parameters, but the procedure for the theoretical calculation will remain the same. \section{Conclusions} \label{sec:Sec4} In this paper, the orbit of the S2 star around the Galactic Centre has been investigated in the framework of hybrid modified gravity. Using the observed positions of the S2 star, we constrained the parameters of hybrid modified gravity. Our simulation results are: \begin{enumerate} \item the range of values of the $\phi_0$ parameter, obtained from the S2 star orbit, is between -0.0009 and -0.0002; \item the range of $m_{\phi}$ is between -0.0034 and -0.0025; \item the precession of the S2 star orbit in the hybrid modified gravity potential has the same direction as in GR, but its upper limit in magnitude is much larger than in GR. \end{enumerate} The above results allow us to compare the orbital motion of the S2 star in the framework of hybrid gravity with analogous results in other theories. In particular, hybrid gravity can be compared with the metric $f(R)$ models discussed in \cite{bork12,bork13} and with $f(R, \phi)$, discussed in \cite{capo14}. In those papers too, the motion of the S2 star was studied using the effective gravitational potentials obtained in the weak field limit. As discussed above, the main reason to introduce hybrid gravity lies in the fact that models like $f(R)$ gravity (both in the metric and Palatini formalism) and $f(R, \phi)$ gravity have problems in passing the standard Solar System tests \cite{chib03,olmo05}. On the other hand, as reported in \cite{capo13b}, hybrid gravity allows one to bypass the shortcomings arising from local tests and to connect the models to galactic dynamics and late time cosmic acceleration.
Using S2 star orbits, it is possible to achieve additional constraints at sub-parsec scales and promote this model with respect to other extended gravity approaches. In particular, $\phi_0$ and $m_\phi$ are the specific parameters of hybrid gravity and differ from those of $f(R)$ gravity models in both the metric and Palatini formalism. In the case of $f(R, \phi)$ gravity, it is possible to achieve a Sanders-like potential; the parameter $m_\phi$ is also present and could have the same value, but the parameter $\phi_0$ of hybrid gravity and $\alpha$ of $f(R,\phi)$ differ \cite{capo14}. The two effective gravitational potentials, in the weak field limit, have similar, but not identical, forms at sub-parsec scales. In conclusion, the comparison of the observed orbits of the S2 star with the theoretical calculations performed in the hybrid modified gravity model can provide a powerful method for the observational test of the theory, and for observationally discriminating among the different modified gravity models. It seems that the hybrid gravity potential is sufficient to address the problem of dark matter at galactic scales \cite{capo13b}, and it indicates that alternative theories of gravity could be viable in describing galactic dynamics. Furthermore, orbital solutions derived from such a potential are in good agreement with the reduced $\chi^2$ deduced for Keplerian orbits. This fact allows us to fix the range of variation of $\phi_0$ and $m_{\phi}$. The precession of the S2 star orbit, obtained for the best fit parameter values ($\phi_0$ from -0.0009 to -0.0002 and $m_\phi$ from -0.0034 to -0.0025), has the positive direction, as in GR, but for these values of the parameters we obtain a much larger orbital precession of the S2 star in hybrid gravity than predicted by GR. We can conclude that the hybrid gravity effective potential is probably the best candidate among the considered gravity models, such as e.g.
$R^n$ \cite{bork12,zakh14}, Yukawa-like \cite{bork13} and Sanders-like \cite{capo14}, to explain gravitational phenomena at different astronomical scales. It is important to stress that our comparison with astronomical observations yields only upper bounds on the deviation of the precession angle from GR. Although observational data seem to indicate that the S2 star orbit is not Keplerian, the present astrometric limits are not sufficient to unambiguously confirm such a claim. We hope that forthcoming observational data will allow more accurate measurements of stellar positions. A final remark is in order. From an astrophysical point of view, the main motivation to introduce hybrid gravity is to address the problems of dark matter and dark energy \cite{capo13b}. First of all, we have to say that, according to the observations, dark matter has negligible effects around the Galactic Centre \cite{genz10}. Despite this fact, here we adopted hybrid gravity dynamics only to fit the orbit of the S2 star around the Galactic Centre. The interest of the reported results, if confirmed, lies in the fact that hybrid dynamics is independent of the dark components but can be connected to a fine analysis of the geodesic structure. In other words, the further gravitational degrees of freedom coming from hybrid gravity contribute to the dynamics as soon as the orbital analysis based on GR is not sufficient to describe in detail peculiar situations such as those around the Galactic Centre. However, in order to better confirm this statement, one needs more precise astronomical data describing stellar dynamics around the Galactic Centre. \paragraph{Acknowledgments} D.B., P.J. and V.B.J. wish to acknowledge the support by the Ministry of Education, Science and Technological Development of the Republic of Serbia through the project 176003. S.C. acknowledges the support of INFN ({\it iniziative specifiche} QGSKY and TEONGRAV).
All authors acknowledge support by the bilateral cooperation between Serbia and Italy ``Testing Extended Theories of Gravity at different astrophysical scales'' and by ``NewCompStar'', COST Action MP1304. D.B. would like to thank Dr. A.F. Zakharov for many useful discussions.
\section{Introduction \label{INTROD}}\leavevmode\hbox to\parindent{\hfill} In this paper we will consider the $\SL2$ current algebra at level $k=-4$. The BRST operator is then nilpotent quantum-mechanically without the introduction of an auxiliary set of currents, and the cohomology problem can be posed for the algebra itself (with the appropriate ghosts) rather than for a coset space. The cosets of $\SL2$ for arbitrary $k$ have been considered in a number of papers, e.g.\ in~\cite{[AGSY],[HY],[BMP],[HR]}. Choosing $k=-4$ allows us to use several specific tools, such as a spectral sequence that converges to the $\SL2$ BRST cohomology and leads to uncovering an underlying (level-1) $\N4$ superconformal algebra. The $sl(2)$ singular vectors then take the form of singular vectors in an $\N4$ module, and these can be used to produce Lian-Zuckerman states~\cite{[LZ]} in the corresponding $c=-2$-matter + gravity theory. The correspondence between the $\SL2$ WZW model and a matter+gravity theory, which follows from the cohomological analysis, can also be arrived at in rather explicit terms, by using a representation for $\SL2$ currents obtained by inverting the Hamiltonian reduction~\cite{[S-sing]}. This representation is built by tensoring a {\it Virasoro\/} Verma module with free-field modules of the Liouville and ghost fields in such a way that the Hamiltonian reduction maps it back into the Virasoro module, while the Liouville and ghost fields correspond to those of a non-critical bosonic string. The matter theory is chosen according to the $\SL2$ level $k$ and in our case of $k\!=\!-4$, the $\SL2$ representation in question leads to an explicit relation between the $\SL2_{k=-4}$ algebra and the bosonic string with $c=-2$ matter~\cite{[Distler]} (cf.\ the previous analysis~\cite{[MV]} of the $k=-3$ WZW model which has led to $c=1$ matter).
The mappings between $\SL2$ and matter+gravity models can be viewed in a more general setting of Universal string theory \cite{[BV],[Fof],[IK]}, in which equivalences between theories with different underlying symmetry algebras are established. Two models of conformal field theories are usually considered equivalent if they lead to identical physical results, which can be taken to hold in (at least) three ways: as equivalence of their BRST cohomologies, as isomorphism of the correlators, or as `reducibility' of singular vectors of one of the underlying algebras to those of the other algebra. These three approaches to establishing isomorphisms between different theories are heuristically equivalent. Indeed, correlators (to be precise, conformal blocks) in conformal field theories are solutions to differential equations derived from vanishing conditions of singular vectors. Singular vectors are in turn related to BRST cohomology in the following way. One observes that singular vectors in Verma modules are BRST-exact, $\nket{{\rm sing}}={\cal Q}_0\nket\psi$, where $\nket\psi$ cannot be BRST-exact; now in the (usually irreducible) module obtained by factoring with respect to a submodule generated by $\nket{{\rm sing}}$, $\nket\psi$ becomes BRST-closed and therefore represents a state in the cohomology. Within the BRST-cohomology approach, a useful tool for establishing isomorphism between cohomology spaces is given by homotopy transformations. A homotopy transformation is required to provide a splitting of the BRST-operator of a given theory into a sum of two BRST-operators, one of which has trivial cohomology and the other coincides with the BRST operator of another theory, whose equivalence to the first theory is to be established. 
However, such an explicit mapping between the respective BRST cohomology spaces is not always easy to find; fortunately, the homotopy transformation approach is a special case of a more powerful technique of spectral sequences: a homotopy transformation can be viewed as such a lucky choice of grading that results in a spectral sequence with only two terms. In general, spectral sequences contain more terms and therefore have a broader applicability. On the other hand, they are less convenient, as compared with homotopy transformations, for establishing {\it explicit\/} mappings between states in the cohomology. In this paper, we will trace, both at the cohomological level and in terms of explicit operator constructions, the relation between $\SL2_{-4}$ theory and the $c\!=\!-2$ matter dressed with ghosts and Liouville fields. We will introduce a spectral sequence on the $\SL2_{-4}$ BRST complex and use it in combination with an explicit operator realization of the $\SL2$ currents. The $\SL2$ representation that we use is not a free-field `bosonization', as it involves as a building-block an arbitrary Virasoro Verma module which need not be bosonized through free fields. As a result, it does not lead to an `accidental' vanishing of singular vectors and therefore can be applied to the description of the cohomology in irreducible modules `generated' by singular vectors as explained above. We will thus systematically work with singular vectors rather than with the corresponding cohomology elements. It is a remarkable fact that a generating construction~\cite{[MFF]} (to be referred to as MFF) is known for the $\SL2$ (in fact, $\SL{n}$) singular vectors. 
Our representation for $\SL2$ currents can be made `compatible' with the spectral sequence converging to the BRST cohomology, which will allow us to map the MFF singular vectors into a class of singular vectors in a representation of the $(\N4)_{k=1}$ algebra (the corresponding reformulation of the whole MFF {\it generating formula\/} would then be a variation on a similar construction for the $\N2$ algebra~\cite{[ST2]}). These $\N4$ singular vectors give rise to the $c=-2$ Lian--Zuckerman states, thereby providing a `Lie-algebraic' origin of the latter. Their ghost numbers, in particular, can be derived from the embedding diagram of the $\SL2_{k=-4}$ Verma modules. We will also consider relations of the $\N4$ algebra emerging in the spectral sequence with other algebras known to be relevant in non-critical bosonic strings~\cite{[GS3],[BLNW]}. \medskip In section~2, we introduce a spectral sequence for the $\SL2_{-4}$ BRST operator and observe an $\N4$ algebra in its first term. In section~3, we map the $\SL2_{-4}$ singular vectors into singular vectors in an $\N4$ module; then the $\SL2_{-4}$ cohomology can be considered entirely in $\N4$ terms. We also give here the embedding diagram of the $\SL2_{k=-4}$ Verma modules, which will then `project' onto the $c=28$ Virasoro Verma module embedding diagram, related to the Lian--Zuckerman states. In section~4, the use of the $\SL2$ representation from ref.~\cite{[S-sing]} leads us to identifying a $c=-2$ bosonic string `inside' the $\SL2_{-4}$ theory. The formula for $\N4$ singular vectors then projects, on the one hand, into singular vectors in the $c=28$ Verma module, and on the other hand, provides us with a construction for a class of Lian--Zuckerman states in the $c=-2$ bosonic string. In section~5, we finally discuss the relation of the observed $\N4$ symmetry with the known symmetries~\cite{[GS3],[BLNW]} of matter dressed with gravity. 
The appearance of the $\N4$ algebra will be interpreted as a result of fitting together two twisted $\N2$ algebras known to exist when dressing a matter theory with ghosts and the Liouville~\cite{[GS3]}. \section{$s\ell(2)$ BRST operator, spectral sequence, and $N\!=\!4$ \label{BRSTOPER}} \subsection{$s\ell(2)$ currents, ghosts, and the BRST operator}\leavevmode\hbox to\parindent{\hfill} In this subsection, we fix our notations and introduce the BRST complex, Weyl group action, and the representations of $\SL2$ we are going to consider in this paper. To begin with, the $s\ell(2)$ current algebra operator products are taken in the form \BE\oldfalse\BA{rcl} J^0(z)J^{\pm}(w)&=&\pm{J^{\pm}\over{z-w}}\\ J^{+}(z)J^{-}(w)&=&{-k/2\over{(z-w)^2}}-{J^0\over{z-w}}\\ J^0(z)J^0(w)&=&{k/2\over{(z-w)^2}} \label{SL2}\end{array}\end{equation} We also introduce three ghost systems ${\cal B}^{+}$, ${\cal C}_+$; ${\cal B}^0$, ${\cal C}_0$, and ${\cal B}^{-}$, ${\cal C}_-$ associated with the $s\ell(2)$ generators $J^+$, $J^0$ and $J^-$ respectively. Together, these make up an algebra \BE {\cal A}=s\ell(2)\oplus [{\cal B}^+,{\cal C}_+]\oplus [{\cal B}^-,{\cal C}_-]\oplus [{\cal B}^0,{\cal C}_0] \label{A} \end{equation} where $[{\cal B}^+,{\cal C}_+]$, \ $[{\cal B}^-,{\cal C}_-]$ and $[{\cal B}^0,{\cal C}_0]$ are superalgebras spanned by the corresponding ghost systems. The full energy-momentum tensor, which will be denoted by ${\cal T}_{\cal A}$, is equal to the sum of Sugawara and ghost energy-momentum tensors and for $k=-4$ reads \BE {\cal T}_{\cal A}={\textstyle{1\over2}}(-J^0J^0 + J^+J^- + J^-J^+ ) -{\cal B}^+\partial{\cal C}_+ - {\cal B}^-\partial{\cal C}_- - {\cal B}^0\partial{\cal C}_0 \label{EMT} \end{equation} All the ${\cal B}$ ghosts thus have dimension 1. In the formula \req{EMT} and other similar equations below, the ghost monomials are normal-ordered with respect to the $sl_2$-invariant ghosts vacua. 
For $k=-4$, the algebra ${\cal A}$ is made into a BRST complex by introducing the BRST current according to the standard recipe \cite{[KSch]} \BE {\cal J}_{\cal A}={\cal C}_+J^{+}+{\cal C}_0J^0+{\cal C}_-J^{-}-{\cal C}_-{\cal C}_0{\cal B}^{-}- {\cal C}_0{\cal C}_+{\cal B}^{+}-{\cal C}_-{\cal C}_+{\cal B}^0\,. \label{BRST} \end{equation} The corresponding BRST charge \BE {\cal Q}_{\cal A}=\oint{{\cal J}_{\cal A}} \label{QA} \end{equation} is indeed nilpotent, ${\cal Q}_{\cal A}^2=0$, when $k$ equals minus twice the Coxeter number, $k=-4$ \cite{[AGSY]} \footnote{ To avoid misunderstanding, let us remind the reader that, in our normalization (which differs from that adopted in refs.~\cite{[AGSY],[ISRA]}), `the other' critical value, at which the universal enveloping algebra acquires an infinite-dimensional center~\cite{[Frenkel]}, is $k=-2$.}. In this case the BRST {\it current\/} is also OPE-isotropic, ${\cal J}_{\cal A}(z){\cal J}_{\cal A}(w)=0$. In the following, $k$ will be set equal to $-4$. The energy-momentum tensor\ \req{EMT} turns out to be BRST-{\it exact\/}: \BE {\cal T}_{\cal A}(z)=[{\cal Q}_{\cal A},{\cal G}_{\cal A}(z)]\,,\qquad {\cal G}_{\cal A}={\cal B}^+J^--{\cal B}^0J^0+{\cal B}^-J^+\,. \label{Texact}\end{equation} It follows then that all the states in the cohomology of ${\cal Q}_{\cal A}$ must have vanishing dimension. Another condition on the cohomology follows by considering the currents \BE {\widehat J}^{\pm,0}= [{\cal Q}_{\cal A},\,{\cal B}^{\pm,0}]\,,\qquad\left\{ \oldfalse\BA{rcl} {\widehat J}^+&=&J^+ - {\cal B}^+{\cal C}_0 + {\cal C}_-{\cal B}^0\,,\\ {\widehat J}^0&=&J^0 + {\cal B}^+{\cal C}_+ - {\cal B}^-{\cal C}_-\,,\\ {\widehat J}^-&=&J^- + {\cal B}^-{\cal C}_0 - {\cal C}_+{\cal B}^0\,\\ \end{array}\right. \label{hatJ}\end{equation} that satisfy an $s\ell(2)$ algebra at level $k+4=0$ (hence, in particular, ${\widehat J}^0$ is OPE-isotropic).
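The level of the currents \req{hatJ} can be checked directly in the ${\widehat J}^0{\widehat J}^0$ operator product: the $J^0J^0$ contraction \req{SL2} contributes $k/2$ to the second-order pole, while each of the two ghost bilinears ${\cal B}^+{\cal C}_+$ and ${\cal B}^-{\cal C}_-$ contributes $1$, so that \BE {\widehat J}^0(z){\widehat J}^0(w)={(k+4)/2\over(z-w)^2}\,, \end{equation} which indeed vanishes at $k=-4$.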
Since ${\widehat J}^0$ is BRST-trivial, states in the cohomology must have ${\widehat J}^0$-spin equal to zero. The current ${\cal J}_{\cal A}$ itself is also BRST-exact, \BE {\cal J}_{\cal A}=[{\cal Q}_{\cal A},\,{\cal H}_{\cal A}]\,,\qquad {\cal H}_{\cal A}\equiv {\cal B}^+{\cal C}_+ + {\cal B}^-{\cal C}_- + {\cal B}^0{\cal C}_0\,. \label{hatJexact} \end{equation} Note that the cohomology is naturally graded by the zero mode of ${\cal H}_{\cal A}$. As is well known \cite{[AGSY]}, the cohomology of ${\cal Q}_{\cal A}$ is given by $H_{\rm rel}^*\oplus({\cal C}_0)_0 H_{\rm rel}^*$ where $H_{\rm rel}^*$ denotes the {\it relative\/} cohomology, which will be the only one we are going to consider. \smallskip It will be useful to extend the action of the affine Weyl group $\tilde W$ on the ${\cal B}^{\pm}{\cal C}_{\pm}$ ghosts by demanding that $\tilde W$ act on the $\widehat J$ currents in the same way as it acts on $J^{\pm,0}$, namely, for $\tilde W\ni(s,\ell)$ where $s\in W$ is $+$ or $-$ and $\ell$ is an element of the weight lattice, \BE\oldfalse\BA{rcl} (s,\ell)\,.\,{\widehat J}^\alpha_n&=&{\widehat J}^{s\alpha}_{n+\alpha\ell}\,,\\ (s,\ell)\,.\,{\widehat J}^0_n&=& s\,{\widehat J}^0_n\,. \end{array}\label{W1}\end{equation} It follows then that \BE\oldfalse\BA{rcl} (s,\ell)\,.\,{\cal B}^\alpha_n&=&s\,{\cal B}^{s\alpha}_{n+\alpha\ell}\,,\\ (s,\ell)\,.\,({\cal C}_\alpha)_n&=&s\,({\cal C}_{s\alpha})_{n-\alpha\ell}\,. \end{array}\label{W2}\end{equation} \smallskip Now we specify a representation of the algebra \req{A}.
Consider a highest-weight representation of $s\ell(2)$ with an integral spin $j$: \BE\oldfalse\BA{rcl} J^0_0\ket{j}&=&j\ket{j}\,,\qquad J^0_n\ket{j}~=~0,\quad n\geq1\\ J^+_n\ket{j}&=&0\,,\quad n\geq0\\ J^-_n\ket{j}&=&0\,,\quad n\geq1\,\label{sl2highest} \end{array}\end{equation} and tensor it with the corresponding ghost vacua into a `highest-weight' of ${\cal A}$: \BE \ket{j}_{\cal A}= \left\{\oldfalse\BA{ll} \ket{j}\otimes\ket{j+1}_+ \otimes\ket0_0 \otimes \ket1_- &j\geq0\,,\\ \ket{j}\otimes\ket{0}_+ \otimes\ket0_0 \otimes \ket{-j}_- &j<0 \end{array}\right. \label{jdressed} \end{equation} The ghost vacua $\ket{0}_{\pm,0}$ are defined as follows. For a $bc$ system of dimension $\lambda$, we define $\ket q$ by~\cite{[FMS]} \BE b_{\geq1-\lambda+q}\ket{q}=0\,,\qquad c_{\geq\lambda-q}\ket{q}=0 \label{ghostconditions} \end{equation} (this is $sl_2$-invariant for $q=0$). This state has dimension \BE \Delta_\lambda(q)={\textstyle{1\over2}} q(q+1-2\lambda)\,, \end{equation} the formula to be used when dressing $\SL2$ states with ghosts so as to get dimension-0 states (as will be necessary for states in the cohomology). All monomials in $b,c$ will always be assumed normal-ordered with respect to the $sl_2$-invariant vacuum $\ket0$. Then, in particular \BE (bc)_0\ket{q}=-q\ket{q}\,, \end{equation} which explains the choice of ghost states in \req{jdressed}, yielding the vanishing ${\widehat J}^0$-spin. Now, to return to \req{ghostconditions}, the ghost vacua in the formula \req{jdressed} for $j>0$, for instance, are such that \BE\oldfalse\BA{rclcrcl} ({\cal C}_+)_n\ket{j+1}_+&=&0,\quad n\geq-j\,, &{}&{{\cal B}^+}_n\ket{j+1}_+&=&0,\quad n\geq1+j\,,\\ ({\cal C}_0)_n\ket{0}_0&=&0,\quad n\geq1\,,&{}& {{\cal B}^0}_n\ket{0}_0&=&0,\quad n\geq0\,,\\ ({\cal C}_-)_n\ket{1}_-&=&0,\quad n\geq0\,, &{}&{{\cal B}^-}_n\ket{1}_-&=&0,\quad n\geq1 \label{sl2ghost} \end{array}\end{equation} (recall that our ghost systems have dimensions $\lambda=1$). 
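To see how the formula works, take $j\geq0$ in \req{jdressed}: the Sugawara part of \req{EMT} gives $\ket{j}$ the dimension $-{\textstyle{1\over2}} j(j+1)$, while the ghost vacua contribute \BE \Delta_1(j+1)+\Delta_1(0)+\Delta_1(1)={\textstyle{1\over2}} j(j+1)+0+0\,, \end{equation} so that the total dimension of $\ket{j}_{\cal A}$ vanishes, as required for a representative of the cohomology.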
The representation of the ${\cal A}$ algebra is built over the vacuum~\req{jdressed} by acting on it with the creation operators. The ${\widehat J}^0$-spin-zero vacuum state \req{jdressed} can be considered as a representative of the only state in the cohomology of the {\it Verma\/} module for integral $j$. Simple considerations based on the `compensation' of the $\SL2$ spin $j$ by the ghost contributions show that there is not even a vacuum in the cohomology of the Verma module for half-integral $j$, and so we restrict ourselves to the case of integral $j$. In what follows, we will look for the cohomology of the irreducible modules obtained as factors of the Verma module. \subsection{Spectral sequence and $N\!=\!4$ algebra}\leavevmode\hbox to\parindent{\hfill} Since the energy-momentum tensor\ and the BRST current ${\cal J}_{\cal A}$ are BRST-exact, one might expect an underlying topological algebra in the system. This is indeed the case, and the topological algebra turns out to be a certain extension \cite{[ISRA],[Kazama]} of the twisted $N\!=\!2$ algebra~\cite{[Ey],[W-top]}. It differs from the twisted $N\!=\!2$ by new terms in the operator product ${\cal G}_{\cal A}\cdot{\cal G}_{\cal A}$ (where ${\cal G}_{\cal A}$ is the superpartner to the energy-momentum tensor\ from eq.~\req{Texact}): \BE {\cal G}_{\cal A}(z){\cal G}_{\cal A}(w)={{\cal W}\over z-w}\quad\hbox{where}\quad {\cal W}(z)=[{\cal Q}_{\cal A},{\cal V}(z)]\quad\hbox{and}\quad{\cal V}={\cal B}^-{\cal B}^+{\cal B}^0\,. \label{VW}\end{equation} ${\cal V}$ and ${\cal W}$ generate a commutative ideal in the extended topological algebra, and the twisted $N\!=\!2$ algebra is a factor with respect to this ideal. 
This extended topological algebra is not very convenient to work with (at least as compared to the true $\N2$); however, as we are going to show, this is not a problem, since the first step in evaluating the cohomology of ${\cal Q}_{\cal A}$ will effectively lead to factoring over the ideal generated by ${\cal V}$ and ${\cal W}$. \medskip Observe that a filtration $F^i{\cal A}$ exists on the BRST complex $({\cal A}, {\cal Q}_{\cal A})$ (i.e., the filtration is compatible with the action of the BRST operator in the sense that ${\cal Q}_{\cal A}(F^i{\cal A})\subset F^i{\cal A}$). The filtration can be described by first assigning the following {\it gradings\/} to our fields: \BE\oldfalse\BA{l} \mathop{\rm deg}\nolimits{\cal C}_0=\mathop{\rm deg}\nolimits{\cal B}^0=\mathop{\rm deg}\nolimits J^0=\mathop{\rm deg}\nolimits {\cal B}^-=\mathop{\rm deg}\nolimits {\cal C}_-=0\,,\\ \mathop{\rm deg}\nolimits J^-=-\mathop{\rm deg}\nolimits J^+=1\,,\\ -\mathop{\rm deg}\nolimits {\cal B}^+=\mathop{\rm deg}\nolimits{\cal C}_+=3\,. \end{array}\label{degrees}\end{equation} The algebra ${\cal A}$ is then decomposed into a direct sum of subspaces $G_l$ with definite degrees, and the filtration \ $\ldots\subset F^i{\cal A}\subset F^{i+1}{\cal A}\subset F^{i+2}{\cal A}\subset\ldots$ \ is defined by $F^i{\cal A}=\bigoplus_{l\geq i}G_l$. Then we can split the BRST current into a finite sum of terms with non-negative degrees: \BE {\cal J}=\ldots+0+{\cal J}^{(0)}+{\cal J}^{(1)}+{\cal J}^{(2)}+{\cal J}^{(3)}+0+\ldots \label{Qdecomposition}\end{equation} where \BE\oldfalse\BA{l} {\cal J}^{(0)}={\cal C}_0{\widehat J}^0\,,\\ {\cal J}^{(1)}={\cal C}_-J^-,\\ {\cal J}^{(2)}={\cal C}_+J^+,\\ {\cal J}^{(3)}={\cal C}_+{\cal C}_-{\cal B}^0. \end{array}\label{SPSEQU}\end{equation} Since the degrees of all the non-vanishing terms in \req{Qdecomposition} are non-negative, there exists a spectral sequence associated with this filtration which converges to the cohomology of ${\cal Q}_{\cal A}$.
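As a quick check of the assignment \req{degrees} against the terms of the BRST current \req{BRST}, \BE \mathop{\rm deg}\nolimits({\cal C}_0J^0)=\mathop{\rm deg}\nolimits({\cal C}_-{\cal C}_0{\cal B}^-)=\mathop{\rm deg}\nolimits({\cal C}_0{\cal C}_+{\cal B}^+)=0\,,\qquad \mathop{\rm deg}\nolimits({\cal C}_-J^-)=1\,,\qquad \mathop{\rm deg}\nolimits({\cal C}_+J^+)=2\,,\qquad \mathop{\rm deg}\nolimits({\cal C}_-{\cal C}_+{\cal B}^0)=3\,, \end{equation} and the three degree-0 terms combine, up to normal-ordering, precisely into ${\cal J}^{(0)}={\cal C}_0{\widehat J}^0$ by virtue of \req{hatJ}.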
Observe that the differentials ${\cal Q}^{(0)}$, ${\cal Q}^{(1)}$, ${\cal Q}^{(2)}$, ${\cal Q}^{(3)}$ corresponding to ${\cal J}^{(0)}$, ${\cal J}^{(1)}$, ${\cal J}^{(2)}$, ${\cal J}^{(3)}$ respectively are nilpotent separately and ${\cal Q}^{(0)}$ anticommutes with ${\cal Q}^{(1)}$ and ${\cal Q}^{(2)}$. As can be seen from the form of ${\cal Q}^{(0)}=\oint{\cal J}^{(0)}$, it effectively imposes the constraint ${\widehat J}^0\sim0$. Further, ${\cal Q}^{(3)}$ is zero on the cohomology of ${\cal Q}^{(0)}$, hence the spectral sequence degenerates after the third term. The cohomology of the BRST operator \req{BRST} is therefore given by the cohomology of ${\cal Q}^{(2)}$ evaluated on the cohomology of ${\cal Q}^{(1)}$ which is evaluated on the cohomology of ${\cal Q}^{(0)}$. As follows from a careful reading of the last phrase, the first step in the analysis of the spectral sequence consists therefore in restricting to the cohomology of ${\cal Q}^{(0)}$. This will have an immediate effect on the extended topological algebra referred to in the beginning of this subsection. Namely, the fields ${\cal V}$ and ${\cal W}$ vanish on the cohomology of ${\cal Q}^{(0)}$ since they consist of terms which are either ${\cal Q}^{(0)}$-exact or not ${\cal Q}^{(0)}$-closed. As a result, the extended topological algebra reduces to the twisted $N\!=\!2$ algebra~\cite{[Ey],[W-top]}.
A useful choice of representatives of the $N\!=\!2$ algebra generators is given by \BE\oldfalse\BA{rcl} \widehat{\cal J}_{\cal A}&=&{\cal C}_-J^- + {\cal C}_+J^+\,,\\ \widehat{\cal G}_{\cal A}&=&{\cal B}^-J^+ + {\cal B}^+J^-\,,\\ \widehat{\cal T}_{\cal A}&=&J^-J^+ + \partial{\cal B}^+{\cal C}_+ - 2{\cal B}^-\partial{\cal C}_- - \partial{\cal B}^-{\cal C}_- + {\textstyle{1\over2}}(J^0{\cal B}^+{\cal C}_+ - J^0{\cal B}^-{\cal C}_- + \partial J^0) \,,\\ \widehat{\cal H}_{\cal A}&=&{\cal B}^+{\cal C}_+ + {\cal B}^-{\cal C}_-\,.\end{array}\label{N2}\end{equation} These close to an $N\!=\!2$ algebra modulo ${\cal Q}^{(0)}$-exact terms. The structure of the cohomology of ${\cal Q}^{(0)}$ can in fact be refined considerably by noticing that it bears a representation of an $N\!=\!4$ algebra that extends the above $N\!=\!2$ algebra. Representatives of the $N\!=\!4$ generators can be chosen as \BE\oldfalse\BA{rclcrcl} {\cal T}&=&\widehat{\cal T}_{\cal A}\,, &\qquad&{\cal G}^1&=&{\cal C}_+J^+\,, \\ J^+_{N=4} &=&{\cal C}_-{\cal C}_+\,,&{}&{\cal G}^2&=&{\cal B}^-J^+\,,\\ J^0_{N=4} &=&-{\textstyle{1\over2}}\widehat{\cal H}_{\cal A}\,,&{}&\overline{\cal G}_1&=&{\cal B}^+J^-\,,\\ J^-_{N=4} &=&{\cal B}^-{\cal B}^+\,,&{}&\overline{\cal G}_2&=&{\cal C}_-J^-\,. \end{array}\label{GEN4ALGSL}\end{equation} Then it can be checked that the following $N\!=\!4$ OPEs~\cite{[Mats]} are satisfied modulo ${\cal Q}^{(0)}$-exact terms~\footnote{Greek superscripts and subscripts $\alpha=0,\pm$ denote $s\ell(2)$ triplets, while Latin superscripts and subscripts $a,b$ run over 1,2 and label $s\ell(2)$ doublet and antidoublet representations. The sigma matrices $\sigma^0,\sigma^+$ and $\sigma^-$ are defined as: $$\oldfalse\BA{rcl} {\sigma^0=\left(\matrix{-{\textstyle{1\over2}}&0\cr 0&{\textstyle{1\over2}}\cr}\right)}& {\sigma^+=\left(\matrix{0&0\cr 1&0\cr}\right)}& {\sigma^-=\left(\matrix{0&-1\cr 0&0\cr}\right)} \end{array}$$ Superscripts label rows and subscripts, columns. 
The metric tensor $\eta_{\alpha\beta}$ is: ${\textstyle{1\over2}}\eta_{00}=-\eta_{+-}=-\eta_{-+}=1$ and other components are equal to zero.}: \BE\oldfalse\BA{rclcrcl} J_{N=4}^{\alpha}(z){\cal G}^a(w)&=&-{(\sigma^{\alpha })^a_b{\cal G}^b\over z-w},&{}& J_{N=4}^{\alpha}(z)\overline{\cal G}_a(w)& =&{\overline{\cal G}_b(\sigma^{\alpha})_a^b\over z-w}\,,\\ {\cal T}(z){\cal G}^a(w)&=&{a{\cal G}^a(w)\over(z-w)^2}+{\partial{\cal G}^a\over z-w}\,, &{}& {\cal T}(z)\overline{\cal G}_a(w)&=&{(3-a)\overline{\cal G}_a(w)\over(z-w)^2}+ {\partial\overline{\cal G}_a\over z-w}\\ {\cal T}(z)J_{N=4}^{\alpha}(w)&=&\multicolumn{5}{l}{ {-\delta_{\alpha,0}\over(z-w)^3} + {(1-\alpha)J_{N=4}^{\alpha}(w)\over(z-w)^2}+ {\partial J_{N=4}^{\alpha}\over z-w}\,,\quad\alpha=+1,0,-1\,,}\\ {\cal G}^a(z){\cal G}^b(w)&=&0,&{}& \overline{\cal G}_a(z)\overline{\cal G}_b(w)&=&0,\\ \multicolumn{7}{l}{ {\cal G}^a(z)\overline{\cal G}_b(w)={2\,\delta^a_b\over(z-w)^3}- {2(\sigma^{\alpha})^a_b\eta_{\alpha\beta} J_{N=4}^{\beta} (w) \over(z-w)^2}+ {-(\sigma^{\alpha})^a_b\eta_{\alpha\beta}\partial J_{N=4}^{\beta} +\delta^a_b({\cal T}-\partial J_{N=4}^0)\over z-w}} \label{SUPERALGSL}\end{array}\end{equation} where $J_{N=4}^{+}$, $J_{N=4}^0$ and $J_{N=4}^-$ make up an $s\ell(2)$ algebra at level 1: \BE\oldfalse\BA{rclcrcl} J_{N=4}^0 (z) J_{N=4}^{\pm} (w)&=&{ J_{N=4}^{\pm}\over z-w} ,&{}& J_{N=4}^{+} (z) J_{N=4}^{-} (w)&=& -{1\over(z-w)^2}-{2 J_{N=4}^0 \over z-w} ,\\ J_{N=4}^0 (z) J_{N=4}^0 (w)&=&{1/2\over(z-w)^2}\,. \end{array}\end{equation} This is in fact a {\it twisted\/} algebra, in particular $J_{N=4}^{\pm}$ have dimensions $1\mp1$, so that the corresponding commutation relations read: \BE [(J_{N=4}^+)_m,\,(J_{N=4}^-)_n]=-\delta_{m+n,0}(m-1) - 2 (J_{N=4}^0)_{m+n}\,. \end{equation} \medskip The Weyl group action \req{W1}, \req{W2} carries over to the $\N4$ algebra. 
Translations along the weights from the affine Weyl group act on the generators trivially, while the reflection `$-$' acts as \BE\oldfalse\BA{rclcrcl} {\cal T}&\mapsto&{\cal T}\,,&\qquad&{\cal G}^1&\mapsto&-\overline{\cal G}_2\,, \\ J^+_{N=4} &\mapsto&-J^+_{N=4}\,,&{}&{\cal G}^2&\mapsto&-\overline{\cal G}_1\,,\\ J^0_{N=4} &\mapsto&J^0_{N=4}\,,&{}&\overline{\cal G}_1&\mapsto&-{\cal G}^2\,,\\ J^-_{N=4} &\mapsto&-J^-_{N=4}\,,&{}&\overline{\cal G}_2&\mapsto&-{\cal G}^1 \end{array}\label{N4Weyl} \end{equation} where the minus signs in front of the fermionic generators can be omitted without affecting the $\N4$ commutation relations. Evaluating the transformation of ${\cal T}$ when this $\N4$ generator is represented as in \req{GEN4ALGSL}, we find, literally, ${\cal T}\mapsto{\cal T}+2\partial\widehat J^0$, but this does not actually change ${\cal T}$ as an element in the cohomology of ${\cal Q}^{(0)}$. Note also that the $N\!=\!4$ algebra admits, along with~\req{N4Weyl}, another automorphism: \BE\oldfalse\BA{rclcrcl} {\cal T}&\to&{\cal T}-2\partial J^0_{N=4}\,,&{}& {\cal G}^1&\to&{\cal G}^2,\\ J^+_{N=4}&\to&- J^-_{N=4}\,,&{}&{\cal G}^2&\to&{\cal G}^1\\ J^0_{N=4}&\to&- J^0_{N=4}\,,&{}&\overline{\cal G}_1&\to&\overline{\cal G}_2\\ J^-_{N=4}&\to&- J^+_{N=4}\,,&{}&\overline{\cal G}_2&\to&\overline{\cal G}_1 \end{array}\label{automorphism}\end{equation} which is induced in the construction \req{GEN4ALGSL} by interchanging the ghosts as \BE\oldfalse\BA{rclcrcl} {\cal B}^+&\rightarrow&{\cal C}_-\,,&{}&{\cal C}_-&\rightarrow&{\cal B}^+\,,\\ {\cal C}_+&\rightarrow&{\cal B}^-\,,&{}&{\cal B}^-&\rightarrow&{\cal C}_+\,.\\ \end{array}\label{pairs1}\end{equation} \subsection{Representations}\leavevmode\hbox to\parindent{\hfill} Now let us see what representation of the $\N4$ algebra is arrived at starting with an $\SL2$ representation.
We consider the $\SL2$ highest-weight representations built on highest weights $\ket{j}$ with $j={}$ $j_+(r,s)$ or $j_-(r,s)$, labeled by two positive integers $r$ and $s$ via \begin{eqnarray} j_+(r,s)&=&{\textstyle{1\over2}}(r-1)-{\textstyle{1\over2}}(k+2)(s-1)\,,\qquad r,s\geq1\,,\label{jplus}\\ \noalign{\noindent and} j_-(r,s)&=&-{\textstyle{1\over2}}(r+1)+{\textstyle{1\over2}}(k+2)s\,,\qquad r,s\geq1\,.\label{jminus} \end{eqnarray} Such states will be eigenstates of the $\N4$ current $J^0_{N=4}$: we evaluate $2(J^0_{N=4})_0$ on $\ket{j}_{\cal A}$ as \BE 2(J^0_{N=4})_0\ket{j}_{\cal A}= \left\{\oldfalse\BA{ll} (j+2)\ket{j}_{\cal A}\,,&j+2={r-1\over2}+s+1\geq2\,,\\ -j\ket{j}_{\cal A}\,,&-j={r+1\over2}+s\geq2 \end{array}\right. \end{equation} Therefore, fixing a ${\,\ssf j\,}\geq2$ that will be the eigenvalue of $2(J^0_{N=4})^{\phantom{Y}}_0$ -- (twice) the {\it N=4 spin\/} -- we arrive at {\it two\/} $\N4$ states defined in the following way: for each of these states, \BE\oldfalse\BA{rcllcrcll} {{\cal L}}^{\phantom{Y}}_n\ket{{\,\ssf j\,},\pm}_{N=4}&=&0\,, &n\geq0\,, \\ (J^0_{N=4})^{\phantom{Y}}_n\ket{{\,\ssf j\,},\pm}_{N=4}&=&0\,, &n \geq 1\,, &{\quad}& (J^+_{N=4})^{\phantom{Y}}_n\ket{{\,\ssf j\,},\pm}_{N=4}&=&0\,, &n \geq -{\,\ssf j\,}+1\,,\\ 2(J^0_{N=4})^{\phantom{Y}}_0\ket{{\,\ssf j\,},\pm}_{N=4}&=& \multicolumn{2}{l}{{\,\ssf j\,}\ket{{\,\ssf j\,},\pm}_{N=4}\,,} &{}& (J^-_{N=4})^{\phantom{Y}}_n\ket{{\,\ssf j\,},\pm}_{N=4}&=&0\,, &n \geq {\,\ssf j\,}-1\,, \end{array}\label{skewed0}\end{equation} while \BE\oldfalse\BA{rcllcrcll} ({\cal G}^1)_n\ket{{\,\ssf j\,},+}_{N=4}&=&0\,, &n\geq -{\,\ssf j\,}+1\,, &{\quad}& ({\cal G}^2)_n\ket{{\,\ssf j\,},+}_{N=4}&=&0\,, &n\geq 0\,,\\ (\overline{\cal G}_1)_n\ket{{\,\ssf j\,},+}_{N=4}&=&0\,, &n\geq {\,\ssf j\,}-1\,,&{}& (\overline{\cal G}_2)_n\ket{{\,\ssf j\,},+}_{N=4}&=&0\,, &n\geq 0\,, \end{array} \label{skew4plus}\end{equation} whereas \BE\oldfalse\BA{rcllcrcll} ({\cal G}^1)_n\ket{{\,\ssf j\,},-}_{N=4}&=&0\,, &n\geq 0\,, 
&{\qquad}& ({\cal G}^2)_n\ket{{\,\ssf j\,},-}_{N=4}&=&0\,, &n\geq {\,\ssf j\,}-1\,,\\ (\overline{\cal G}_1)_n\ket{{\,\ssf j\,},-}_{N=4}&=&0\,, &n\geq 0\,,&{}& (\overline{\cal G}_2)_n\ket{{\,\ssf j\,},-}_{N=4}&=&0\,, &n\geq -{\,\ssf j\,}+1\,. \end{array} \label{skew4minus}\end{equation} Note that the two types of highest-weight conditions, eqs.~\req{skew4plus} and~\req{skew4minus}, are related by the Weyl reflection \req{N4Weyl} on the $\N4$ generators\footnote{Note similar `skewed' highest-weight conditions in~\cite{[PT]}.}. One can notice that the above `skewed' highest-weight states $\ket{{\,\ssf j\,},\pm}_{N=4}$ are related to $\N4$ highest weights that exist in the series of spin-${\textstyle{1\over2}}$ $(\N4)_{k=1}$ representations labeled by conformal dimensions~\cite{[ET]}. Namely, consider a highest-weight state $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ satisfying the conditions \BE\oldfalse\BA{l} {\cal L}_{\geq1}\ket{{\textstyle{1\over2}},\Delta}_{N=4}= (J^{\pm,0}_{N=4})_{\geq1}\ket{{\textstyle{1\over2}},\Delta}_{N=4}= ({\cal G}^a)_{\geq1}\ket{{\textstyle{1\over2}},\Delta}_{N=4}= (\overline\cG_a)_{\geq1}\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\\ ({\cal G}^2)_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=(\overline\cG_1)_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\\ {\cal L}_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=\Delta\ket{{\textstyle{1\over2}},\Delta}_{N=4}\,,\quad (J^0_{N=4})_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}={\textstyle{1\over2}}\,\ket{{\textstyle{1\over2}},\Delta}_{N=4}\,. \label{EThighest}\end{array}\end{equation} (all the Verma modules built over any $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ except $\ket{{\textstyle{1\over2}},0}_{N=4}$ are non-unitary; the $\ket{{\textstyle{1\over2}},0}_{N=4}$ representation is called massless~\cite{[ET]}).
In ref.~\cite{[ET]}, singular vectors of the $\N4$ algebra were arrived at by restricting to a representation of the $s\ell(2)$ subalgebra of the $\N4$ algebra and noticing that singular vectors of the $s\ell(2)$ subalgebra are singular vectors of the $\N4$ algebra. For unitary representations, other singular vectors are absent~\cite{[ET]}. As we will see, more singular vectors exist for non-unitary representations. In our case we have a realization of the $\N4$ algebra in which the $s\ell(2)$ subalgebra is constructed out of two ghost pairs, and therefore all singular vectors of ref.~\cite{[ET]} vanish. Thus the $\N4$ representations `induced' from the $\SL2_{-4}$ Verma module tensored with ghosts are related to $\N4$ Verma modules after factorization of the latter with respect to the singular vectors from ref.~\cite{[ET]}: namely, the `skewed' highest-weight states characterized by~\req{skewed0}--\req{skew4minus} and the highest-weight states \req{EThighest} are related via \BE \ket{{\,\ssf j\,},+}=({\cal G}^1)_{-{\,\ssf j\,}+1}\ldots({\cal G}^1)_{-1}\ket{{\textstyle{1\over2}},\Delta}_{N=4} \qquad\hbox{when $\Delta=-{\textstyle{1\over2}} {\,\ssf j\,}({\,\ssf j\,}-1)$}\, \label{dressinjpl}\end{equation} and, similarly, \BE \ket{{\textstyle{1\over2}},\Delta}_{N=4}=({\cal G}^2)_{0}\ldots({\cal G}^2)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},-} \qquad\hbox{when $\Delta=-{\textstyle{1\over2}} {\,\ssf j\,}({\,\ssf j\,}-3)-1$}\, \label{dressfromjmin} \end{equation} Heuristically, our states $\ket{{\,\ssf j\,},\pm}$ are each a `half' of the simplest singular (or co-singular) vector in the `proper' highest-weight module.
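The values of $\Delta$ in \req{dressinjpl} follow from simple mode counting: the ${\,\ssf j\,}-1$ modes $({\cal G}^1)_{-{\,\ssf j\,}+1},\ldots,({\cal G}^1)_{-1}$ raise the dimension by $1+2+\ldots+({\,\ssf j\,}-1)$ and the eigenvalue of $2(J^0_{N=4})_0$ by ${\,\ssf j\,}-1$, whence \BE {\cal L}_0\ket{{\,\ssf j\,},+}=\Bigl(-{\textstyle{1\over2}}{\,\ssf j\,}({\,\ssf j\,}-1)+{\textstyle{1\over2}}{\,\ssf j\,}({\,\ssf j\,}-1)\Bigr)\ket{{\,\ssf j\,},+}=0\,,\qquad 2(J^0_{N=4})_0\ket{{\,\ssf j\,},+}=\bigl(1+({\,\ssf j\,}-1)\bigr)\ket{{\,\ssf j\,},+}={\,\ssf j\,}\ket{{\,\ssf j\,},+}\,, \end{equation} in agreement with the conditions \req{skewed0}; an analogous count reproduces the value of $\Delta$ in \req{dressfromjmin}.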
The factorization leads to the fulfillment of several conditions, including \BE\oldfalse\BA{l} \left((J^-_{N=4})_0\right)^2\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\qquad ({\cal G}^2)_0(J^-_{N=4})_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\qquad (\overline\cG_1)_0(J^-_{N=4})_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\\ \left((J^+_{N=4})_0\right)^2\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\qquad ({\cal G}^1)_0(J^+_{N=4})_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,,\qquad (\overline\cG_2)_0(J^+_{N=4})_0\ket{{\textstyle{1\over2}},\Delta}_{N=4}=0\,. \label{svectcond}\end{array}\end{equation} These vanishing conditions will be used in the next section. \medskip The states $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ (except $\ket{{\textstyle{1\over2}},0}_{N=4}=\ket{-1}_{\cal A}$) cannot be represented in intrinsic terms of the algebra ${\cal A}$, i.e.\ in terms of the $\SL2_{-4}$ algebra and ghosts. Constructing the $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ states requires identifying in the $\SL2$ representation a matter sector (the result of Hamiltonian reduction) and the `complementary' ghost sectors, as will be shown in section~4. \medskip To return to the spectral sequence, observe now that the differentials ${\cal Q}^{(1)}$ and ${\cal Q}^{(2)}$ from \req{SPSEQU} are given by zero modes of the $N\!=\!4$ generators $\overline{\cal G}_2$ and ${\cal G}^1$ respectively. Therefore the cohomology of ${\cal Q}_{\cal A}$ \req{QA} will be given in $\N4$ terms as the cohomology of $({\cal G}^1)_0$ evaluated on the cohomology of $(\overline{\cal G}_2)_0$.
Recall further that, while there is only a vacuum in the cohomology of the Verma module, factoring with respect to submodules generated by singular vectors gives rise to non-trivial cohomology; the standard `cohomology-generating' mechanism relies on the fact that the $s\ell(2)$-singular vectors are (upon dressing with ghosts appropriately) BRST-trivial, and therefore their BRST-primitives become cohomological states in the module where singular vectors vanish. In our case, combining these considerations with the existence of the spectral sequence, we will be able to give a more detailed structure of the cohomology. In the next section, we will consider how the cohomology of $\SL2$ is generated from the state $\ket{j}_{\cal A}$ by the $\N4$ algebra operators. It looks plausible that the cohomology of \label{hypothesis} ${\cal Q}^{(0)}=\oint{\cal C}_0{\widehat J}^0$ is generated precisely by the currents of the $N\!=\!4$ algebra. \section{MFF vectors and cohomology}\leavevmode\hbox to\parindent{\hfill} In this section, we begin with the MFF singular states and then discuss how, upon an appropriate dressing with ghosts, they can be rewritten as $\N4$ singular states, so that the further analysis can be carried out in terms of the $\N4$ algebra. \subsection{MFF vectors and Verma modules at $k=-4$}\leavevmode\hbox to\parindent{\hfill} Consider singular states in the $\SL2$ Verma module built on the highest-weight state $\ket{j}$. They are given by the MFF construction \cite{[MFF]} and are labeled by two positive integers $r$ and $s$.
For $j=j_+(r,s)$ (see~\req{jplus}) one has \BE\oldfalse\BA{rcl}\ket{{\rm MFF}\{r,s\},-}&=& (J^-_0)^{r+(s-1)(k+2)}(J^+_{-1})^{r+(s-2)(k+2)}(J^-_0)^{r+(s-3)(k+2)} \ldots\\ {}&{}&{}\times (J^+_{-1})^{r-(s-2)(k+2)} (J^-_0)^{r-(s-1)(k+2)}\ket{j_+(r,s)}\end{array}\label{mff}\end{equation} The MFF states $\ket{{\rm MFF}\{r,s\}}$ are annihilated by the same set of annihilation operators as the highest-weight $\ket{j}$ (see~\req{sl2highest}) but have different spin and dimension (which for $k=-4$ are equal to $j-r$ and $-{\textstyle{1\over2}}(j-r)(j-r+1)$ respectively). Singular states determine the pattern of Verma module embeddings. In our case of $k=-4$, every Verma module contains only a finite number of submodules (corresponding to singular vectors) but can itself be embedded into an infinite number of other modules ({\it co\/}singular vectors). Several lower-$j$ embeddings that correspond to singular vectors \req{mff} are shown here: \begin{equation} { \unitlength=1.00mm \begin{picture}(75.00,70.00)(40.00,05.00) \put(70.00,10.00){\vector(0,1){9.00}} \put(70.00,21.00){\vector(0,1){8.00}} \put(70.00,31.00){\vector(0,1){8.00}} \put(70.00,41.00){\vector(0,1){8.00}} \put(70.00,51.00){\vector(0,1){8.00}} \put(70.00,61.00){\vector(0,1){4.00}} \put(69.50,67.00){$\vdots$} \put(70.00,10.00){\makebox(0,0)[cc]{$\bullet$}} \put(70.00,20.00){\makebox(0,0)[cc]{$\bullet$}} \put(70.00,30.00){\makebox(0,0)[cc]{$\bullet$}} \put(70.00,40.00){\makebox(0,0)[cc]{$\bullet$}} \put(70.00,50.00){\makebox(0,0)[cc]{$\bullet$}} \put(70.05,13.00){\makebox(0,0)[lc]{$-1$}} \put(71.00,22.50){\makebox(0,0)[lc]{$0$}} \put(71.00,32.30){\makebox(0,0)[lc]{$1$}} \put(71.00,42.00){\makebox(0,0)[lc]{$2$}} \put(71.00,52.00){\makebox(0,0)[lc]{$3$}} \put(71.00,62.00){\makebox(0,0)[lc]{$4$}} \put(51.00,11.00){\vector(1,1){18.00}} \put(51.00,11.00){\vector(1,2){19.00}} \put(51.00,21.00){\vector(1,1){18.00}} \put(51.00,21.00){\vector(1,2){19.00}} \put(51.00,31.00){\vector(1,1){18.00}} 
\put(51.00,41.00){\vector(1,1){18.00}} \put(70.00,60.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,20.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,30.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,40.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,50.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,60.00){\makebox(0,0)[cc]{$\bullet$}} \put(50.00,10.00){\makebox(0,0)[cc]{$\bullet$}} \put(48.00,10.00){\makebox(0,0)[rc]{$-2$}} \put(48.00,20.00){\makebox(0,0)[rc]{$-3$}} \put(48.00,30.00){\makebox(0,0)[rc]{$-4$}} \put(48.00,40.00){\makebox(0,0)[rc]{$-5$}} \put(48.00,50.00){\makebox(0,0)[rc]{$-6$}} \put(48.00,60.00){\makebox(0,0)[rc]{$-7$}} \put(49.50,63.00){$\vdots$} \put(71.00,25.00){\oval(10.00,29.00)[r]} \put(73.55,39.03){\vector(-4,1){3}} \put(71.00,35.00){\oval(12.00,29.00)[r]} \put(73.55,49.05){\vector(-4,1){3}} \put(71.00,45.00){\oval(14.00,29.00)[r]} \put(73.55,59.08){\vector(-4,1){3}} \put(71.00,35.00){\oval(18.00,49.80)[r]} \put(73.55,59.80){\vector(-4,0){3}} \put(100.00,10.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,10.00){\makebox(0,0)[lc]{$1$}} \put(99.80,11.00){\line(0,1){7.40}} \put(100.20,11.00){\line(0,1){7.40}} \put(100.00,18.10){\vector(0,1){1}} \put(100.00,20.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,20.00){\makebox(0,0)[lc]{$0$}} \put(99.80,21.00){\line(0,1){7.40}} \put(100.20,21.00){\line(0,1){7.40}} \put(100.00,28.10){\vector(0,1){1}} \put(100.00,30.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,30.00){\makebox(0,0)[lc]{$-2$}} \put(99.80,31.00){\line(0,1){7.40}} \put(100.20,31.00){\line(0,1){7.40}} \put(100.00,38.10){\vector(0,1){1}} \put(100.00,40.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,40.00){\makebox(0,0)[lc]{$-5$}} \put(99.80,41.00){\line(0,1){7.40}} \put(100.20,41.00){\line(0,1){7.40}} \put(100.00,48.10){\vector(0,1){1}} \put(100.00,50.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,50.00){\makebox(0,0)[lc]{$-9$}} \put(99.80,51.00){\line(0,1){7.40}} \put(100.20,51.00){\line(0,1){7.40}} \put(100.00,58.10){\vector(0,1){1}} 
\put(100.00,60.00){\makebox(0,0)[cc]{$\bullet$}} \put(102.00,60.00){\makebox(0,0)[lc]{$-14$}} \put(99.80,61.00){\line(0,1){3.40}} \put(100.20,61.00){\line(0,1){3.40}} \put(100.00,64.10){\vector(0,1){1}} \put(99.50,66.00){$\vdots$} \end{picture} }\label{picture} \end{equation} The numbers give the values of $j$; it should not be forgotten that there is an infinite number of arrows going out of any dot to the `higher' ones. The right column with double arrows represents {\it Virasoro\/} Verma modules obtained via Hamiltonian reduction (the numbers give dimensions). The arrows are drawn in the direction of {\it embeddings\/}. There are precisely $j+1$ arrows entering a dot labelled by spin $j>0$. This pattern is determined by the fact that, for integral $j$ and negative integral $k$, there exist several ways to represent the spin $j$ as $j_+(r,s)$ with positive integral $r$ and $s$. A `dual' version of this embedding diagram exists for $j=j_-(r,s)$ (see~\req{jminus}), based on the MFF vectors given by a construction similar to \req{mff}. The corresponding counterpart of \req{mff} reads \BE\oldfalse\BA{rcl}\ket{{\rm MFF}\{r,s\},+}&=& (J^+_{-1})^{r+(s-1)(k+2)}(J^-_0)^{r+(s-2)(k+2)}(J^+_{-1})^{r+(s-3)(k+2)} \ldots\\ {}&{}&{}\times (J^-_0)^{r-(s-2)(k+2)} (J^+_{-1})^{r-(s-1)(k+2)}\ket{j_-(r,s)}\end{array}\label{mffneg}\end{equation} (with $k=-4$). In what follows, we will mainly give explicit expressions for constructions related to the MFF vectors $\ket{{\rm MFF}\{r,s\},-}$, denoting them simply as $\ket{{\rm MFF}\{r,s\}}$. \subsection{From $\SL2_{-4}$ to $N\!=\!4$ singular vectors}\leavevmode\hbox to\parindent{\hfill} By dressing with ghosts, the $\SL2$ singular vectors can be made into singular vectors in the $\N4$ representation considered above.
Thus the BRST-primitives that represent the cohomology in the corresponding irreducible modules will be given by vectors in the $\N4$ representation; therefore the cohomology of ${\cal Q}_{\cal A}$ is concentrated in the $\N4$ term of the spectral sequence. Now that we have an (almost) explicit formula for the MFF singular vectors, it is interesting to see to what extent it can be carried over to the $\N4$ algebra that arises in the spectral sequence. Taking an MFF state and tensoring it with the ghost vacua from section~2, as \BE \ket{{\rm MFF}\{r,s\}}\otimes\ket{j+1}_+ \otimes\ket0_0 \otimes \ket1_- \end{equation} we observe that this can be dressed with ghost modes so as to produce a state with zero ${\widehat J}^0$-spin: for $j>0$ (with the formula \req{mff} valid for $\SL2$ singular vectors), the dressed states would read \BE \ket{{\rm MFF}\{r,s\}}_{\cal A}= \Biggl\{\!\!\oldfalse\BA{ll} \!\ket{{\rm MFF}\{r,s\}}\otimes {\cal B}^+_{j-r+1}\ldots{\cal B}^+_{j-1}{\cal B}^+_{j}\ket{j+1}_+ \otimes\ket0_0 \otimes\ket1_-,& r\leq j\!+\!1 \\ \!\ket{{\rm MFF}\{r,s\}}\!\otimes\! {\cal B}^+_{0}\ldots{\cal B}^+_{j-1}{\cal B}^+_{j}\ket{j+1}_+ \!\otimes\!\ket0_0\!\otimes\!({\cal C}_-)_{j-r+1}\ldots({\cal C}_-)_{-1}\ket1_-,& r>j\!+\!1 \end{array} \label{mffdressed}\end{equation} The states thus obtained turn out to be ${\cal Q}_{\cal A}$-closed and, moreover, ${\cal Q}_{\cal A}$-exact. Then, any state $\ket*$ such that \BE \ket{{\rm MFF}\{r,s\}}_{\cal A} = {\cal Q}_{\cal A}\ket{{*}} \label{MFFexact} \end{equation} would be a representative in the ${\cal Q}_{\cal A}$-cohomology of the irreducible module obtained by factorization of the Verma module over the null vector $\ket{\rm MFF\{r,s\}}$. These cohomology elements occur in a particular term of the spectral sequence associated with \req{SPSEQU}, namely as states in the representation of the $\N4$ algebra from the previous section.
They can indeed be constructed by acting with modes of the $N\!=\!4$ generators~\req{GEN4ALGSL} on the vacuum~$\ket{{\,\ssf j\,},+}_{N=4}$ (similarly, for $j<0$, the corresponding MFF vectors \req{mffneg} can be dressed with ghosts in such a way that would allow rewriting them as $\N4$ singular vectors built on the vacuum $\ket{{\,\ssf j\,},-}$). For definiteness, we will consider explicitly the $\N4$ singular vectors built on the $\ket{{\,\ssf j\,},+}$ vacua. Consider first the $\{r,1\}$ MFF states. They are built on the $\SL2$ highest-weight state of spin $j={\textstyle{1\over2}}(r-1)$ and can thus be written as $\ket{{\rm MFF}\{2j+1,1\}}$~\footnote{more precisely, $\ket{{\rm MFF}\{2j+1,1\},-}$; the `$+$'-counterpart reads $\ket{{\rm MFF}\{k+1-2j,1\},+}$.}. When dressed with the ghosts, the states $\ket{{\rm MFF}\{2j+1,1\}}_{\cal A}$ are identically rewritten as elements of the $(\N4)_1$ representation, i.e.\ generated from the vacuum by the action of the $\N4$ generators: \BE \ket{{\rm MFF}\{2{\,\ssf j\,}-1,1\}}_{\cal A}= (\overline{\cal G}_2)_{-{\,\ssf j\,}+2}(\overline{\cal G}_2)_{-{\,\ssf j\,}+3}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4} \label{mffn4} \end{equation} Moreover, since these states are $(\overline\cG_2)_0$-exact, the corresponding BRST-primitive state $\ket{*}$ in~\req{MFFexact} can be obtained by pulling out the BRST operator ${\cal Q}^{(1)}=(\overline\cG_2)_0$. We then arrive at the following representation for the primitives in terms of $N\!=\!4$ algebra generators acting on the vacuum: \BE \ket{*}=(J^0_{N=4})_{-{\,\ssf j\,}+2}\,(\overline{\cal G}_2)_{-{\,\ssf j\,}+3} \ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4} \label{primitiveN4} \end{equation} (as compared with \req{mffn4}, the leftmost mode $(\overline{\cal G}_2)_m$ gets replaced by the same mode of $J^0_{N=4}$).
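For orientation, we spell out the lowest nontrivial case, ${\,\ssf j\,}=3$, of \req{mffn4} and \req{primitiveN4} (an illustration obtained by simply evaluating the mode ranges):

```latex
\BE
\ket{{\rm MFF}\{5,1\}}_{\cal A}=
(\overline{\cal G}_2)_{-1}\,(\overline{\cal G}_1)_0(\overline{\cal G}_1)_1
\ket{3,+}_{N=4}\,,\qquad
\ket{*}=(J^0_{N=4})_{-1}\,(\overline{\cal G}_1)_0(\overline{\cal G}_1)_1
\ket{3,+}_{N=4}\,.
\end{equation}
```

The general case is \req{primitiveN4} itself.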
This expression thus gives an element in cohomology upon factorization over the submodule generated by the MFF vector. More generally, consider an arbitrary MFF vector for $k=-4$ with the only condition that all the powers in the MFF formula \req{mff} be non-negative. Among the $\ket{{\rm MFF},-}$-vectors these are $\ket{{\rm MFF}\{j+l,s\}}$ for $1\leq l\leq j+1$. To elucidate their construction in terms of the $\N4$ algebra generators, we write them down together with the corresponding MFF formula: \BE \oldfalse\BA{l} \oldfalse\BA{lcl} \ket{{\rm MFF}\{{2{\,\ssf j\,}-s-1},s\},-}_{\cal A}=&{}&\ket{{\rm MFF}\{{2{\,\ssf j\,}-s-1},s\}}=\\ \quad(\overline{\cal G}_2)_{-{\,\ssf j\,}+s}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-s}&&\quad(J^-_0)^{2{\,\ssf j\,}-2s+1}\\ \qquad({\cal G}^1)_{-{\,\ssf j\,}+s-1}\ldots({\cal G}^1)_{-1}\, ({\cal G}^2)_{0}\ldots({\cal G}^2)_{-{\,\ssf j\,}-s+1}\,&{}&\qquad(J^+_{-1})^{2{\,\ssf j\,}-2s+3}\\ \qquad\qquad\qquad\qquad\qquad\vdots&{}&\qquad\qquad\vdots\\ \qquad\qquad(\overline{\cal G}_2)_{-{\,\ssf j\,}+4}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-4}&{}&\qquad\qquad(J^-_0)^{2{\,\ssf j\,}-7}\\ \qquad\qquad\quad({\cal G}^1)_{-{\,\ssf j\,}+3}\ldots({\cal G}^1)_{-1}\, ({\cal G}^2)_{0}\ldots({\cal G}^2)_{{\,\ssf j\,}-3}\,&{}&\qquad\qquad\quad(J^+_{-1})^{2{\,\ssf j\,}-5}\\ \qquad\qquad\qquad(\overline{\cal G}_2)_{-{\,\ssf j\,}+2}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}&{}& \qquad\qquad\qquad(J^-_0)^{2{\,\ssf j\,}-3}\ket{{\,\ssf j\,}\!-\!2} \end{array} \end{array} \label{generalmffn4} \end{equation} The $\N4$ spin (the eigenvalue of $2(J^0_{N=4})_0$) of this state is equal to $l={\,\ssf j\,}-2s+1$. Every MFF factor $(J^{\pm}_{-1,0})^m$ corresponds to a group of $m$ factors given by modes of the $\N4$ generators. 
The groups corresponding to $(J^-_{0})^m$ consist of ${\textstyle{1\over2}}(m+1)$ generators $\overline{\cal G}_1$, their modes ranging from ${\textstyle{1\over2}}(m-1)$ to $0$ (recall that $m$ is always odd). In addition, the same group contains a product of modes of $\overline{\cal G}_2$, from $(\overline{\cal G}_2)_{-1}$ to $(\overline{\cal G}_2)_{{-m+1\over2}}$. Thus, when passing the zero mode, the $\N4$ generators inside one group get replaced according to the action of the automorphism \req{automorphism} (however, the left subgroup is one element shorter). To obtain the structure of the groups of $\N4$ factors corresponding to $(J^+_{-1})^m$, one should drop the leftmost and the rightmost modes in the right neighbouring group (the one corresponding to $(J^-_{0})^{m+2}$) and then act on the remaining modes with the Weyl reflection \req{N4Weyl}. \medskip To check that the states constructed {\it are\/} singular vectors in the $\N4$ module, consider the state obtained by acting on $\ket{{\,\ssf j\,},+}_{N=4}$ with only the first (counting from the right) group in~\req{generalmffn4} (the case $s=1$), \BE (\overline{\cal G}_2)_{-{\,\ssf j\,}+2}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}\,. \label{discuss}\end{equation} The eigenvalue of $2(J^0_{N=4})^{\phantom{Y}}_0$ on~\req{discuss} is given by ${\,\ssf j\,}-({\rm\#\ of}\ \overline{\cal G}_1) +({\rm\#\ of}\ \overline{\cal G}_2)={\,\ssf j\,}-1$. The state~\req{discuss} is in fact a $\ket{{\,\ssf j\,}-1,-}_{N=4}$. To see this, note first of all that it is annihilated by $(\overline{\cal G}_1)_{\geq0}$, since the modes $(\overline{\cal G}_1)_{\geq{\,\ssf j\,}-1}$ annihilate $\ket{{\,\ssf j\,},+}_{N=4}$, while the remaining modes $(\overline{\cal G}_1)_{0\leq n\leq{\,\ssf j\,}-2}$ square to zero. Similarly,~\req{discuss} is annihilated by $(\overline{\cal G}_2)_{\geq-{\,\ssf j\,}+2}$.
This gives a half of the highest-weight conditions \req{skew4minus} for spin ${\,\ssf j\,}-1$. The other half can be deduced as follows. Let us evaluate the action of ${\cal G}^1_n$ on~\req{discuss} for $n\geq0$. When commuted to the right, ${\cal G}^1_n$ can hit one of the $\overline{\cal G}_2$ or one of the $\overline{\cal G}_1$ modes. Consider first \BE{[}\,{\cal G}^1_n,\,(\overline\cG_2)_{-{\,\ssf j\,}+2}\ldots(\overline\cG_2)_{-1}{]}\, (\overline\cG_1)_0\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}\,. \label{work1}\end{equation} Using the $\N4$ commutation relations that follow from \req{SUPERALGSL} (the brackets $[~,~]$ always mean the {\it super\/}commutator) \BE\oldfalse\BA{rcl} {[}{\cal G}^a_n\,,(\overline{\cal G}_b)_m{]}&=&\delta^a_b(n+a-1)(n+a-2)\delta_{m+n,0} +(m-n+3-a-b)(\sigma^\alpha)^a_b\eta_{\alpha\beta} (J_{N=4}^\beta)_{m+n}^{\phantom{Y}}\\ &{}&{}+\delta^a_b{\cal L}_{m+n}+(m+n+1)\delta^a_b(J_{N=4}^0)_{m+n}^{\phantom{Y}}\,, \end{array}\end{equation} we see that $[{\cal G}^1_n\,,(\overline{\cal G}_2)_m]=(m-n)(J_{N=4}^+)_{m+n}^{\phantom{Y}}$ and the resulting $(J_{N=4}^+)_{m+n}$ can be moved to the right until it meets $(\overline{\cal G}_1)_0$. Commuting $(J_{N=4}^+)_{m+n}$ with the modes $(\overline{\cal G}_1)_r$ will produce $(\overline{\cal G}_2)_{n+m+r}$. Of these, $(\overline{\cal G}_2)_{p}$ with $p\geq0$ will annihilate the state $\ket{{\,\ssf j\,},+}_{N=4}$, while those with $-{\,\ssf j\,}+2\leq p\leq-1$ will square to zero due to the presence of the same mode among the $(\overline{\cal G}_2)_{-{\,\ssf j\,}+2}\ldots(\overline{\cal G}_2)_{-1}$ unless this mode has been `spent' in the commutator $[{\cal G}^1_n\,,(\overline{\cal G}_2)_m]$; however, that would never happen for $n>0$, and thus the result of commuting ${\cal G}^1_n$ through the $(\overline{\cal G}_2)_{-{\,\ssf j\,}+2}\ldots(\overline{\cal G}_2)_{-1}$ is effectively zero for $n>0$.
When $n=0$, however, the mode $(\overline\cG_2)_m$ will be restored when commuting $(J_{N=4}^+)_m$ with $(\overline\cG_1)_0$, which gives \BE\oldfalse\BA{l} {[}\,{\cal G}^1_0,\,(\overline\cG_2)_{-{\,\ssf j\,}+2}\ldots(\overline\cG_2)_{-1}{]}\, (\overline\cG_1)_0\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}\\ {}\qquad{}=\sum_{m=-{\,\ssf j\,}+2}^{-1}m(-1)^{{\,\ssf j\,}+m} (\overline\cG_2)_{-{\,\ssf j\,}+2}\ldots~\Bigl/\!\!\!\!\!\!(\overline\cG_2)_m\ldots(\overline\cG_2)_{-1}\cdot (\overline\cG_2)_m\,(\overline\cG_1)_1\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}\\ {}\qquad{}={\textstyle{1\over2}}(-1)^{{\,\ssf j\,}}({\,\ssf j\,}-1)({\,\ssf j\,}-2) (\overline\cG_2)_{-{\,\ssf j\,}+2}\ldots(\overline\cG_2)_{-1}\cdot (\overline\cG_1)_1\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4} \end{array}\label{group1}\end{equation} It remains to see how ${\cal G}^1_n$ commutes with $(\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}$, namely to evaluate \BE (-1)^{{\,\ssf j\,}}(\overline\cG_2)_{-{\,\ssf j\,}+2}\ldots(\overline\cG_2)_{-1}\, {[}\,{\cal G}^1_n,\,(\overline\cG_1)_0\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}{]}\,\ket{{\,\ssf j\,},+}_{N=4} \label{group2}\end{equation} Here, in accordance with \req{SUPERALGSL} \BE {[}\,{\cal G}^1_n\,,(\overline{\cal G}_1)_m{]}=n(n-1)\delta_{m+n,0} +2n(J_{N=4}^0)_{m+n}^{\phantom{Y}} + {\cal L}_{m+n}\,, \end{equation} and for $n=0$ we find $[{\cal G}^1_0\,,(\overline{\cal G}_1)_0]={\cal L}_0$; plugging ${\cal L}_0$ to the right amounts to adding up the mode numbers of $(\overline\cG_1)_1\ldots(\overline\cG_1)_{{\,\ssf j\,}-2}$ as $$ \sum_{r=1}^{{\,\ssf j\,}-2}(-r)=-{\textstyle{1\over2}}({\,\ssf j\,}-1)({\,\ssf j\,}-2)\,, $$ and the resulting term will precisely cancel~\req{group1}. All other commutators in \req{group2} for $n=0$ give a vanishing contribution. For $n\geq1$, \req{group2} vanishes altogether. 
Analyzing similarly the action of the modes \ $(J^+_{N=4})_{n\geq-{\,\ssf j\,}+2}$, \ $({\cal G}^2)_{n\geq{\,\ssf j\,}}$, and $(J^-_{N=4})_{n\geq{\,\ssf j\,}}$ on~\req{discuss}, we also find that these are annihilators. The necessary highest-weight relations (see~\req{skew4minus}) require in addition two more vanishing conditions, namely those for $(J^-_{N=4})_{{\,\ssf j\,}-1}$ and $({\cal G}^2)_{{\,\ssf j\,}-1}$. These are satisfied by virtue of the relations~\req{svectcond}. We thus conclude that the state $(\overline{\cal G}_2)_{-{\,\ssf j\,}+2}\ldots(\overline{\cal G}_2)_{-1}\, (\overline{\cal G}_1)_0\ldots(\overline{\cal G}_1)_{{\,\ssf j\,}-2}\ket{{\,\ssf j\,},+}_{N=4}$ is indeed proportional to $\ket{{\,\ssf j\,}-1,-}_{N=4}$. Then the second group from the right in~\req{generalmffn4} maps the resulting vector into $\ket{{\,\ssf j\,}-2,+}_{N=4}$ etc., which can be shown either directly or simply by noticing that the $\N4$ generators in the adjacent groups are related by the Weyl reflection \req{N4Weyl} \footnote{and the relations~\req{svectcond} are Weyl-symmetric as well.} (and the groups become shorter as one moves from right to left, in accordance with the decreasing $J^0_{N=4}$-spin). The vectors~\req{generalmffn4} are therefore singular in our $(\N4)_1$ module. To compare with the formulation of ref.~\cite{[ET]}, we notice that a singular vector in the module built over the `proper' highest weight~\req{EThighest} can be constructed by means of the following procedure. One dresses the state $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ \req{EThighest} with $\Delta=-{\textstyle{1\over2}} {\,\ssf j\,}({\,\ssf j\,}-1)$ using the formula \req{dressinjpl} and obtains a $\ket{{\,\ssf j\,},+}$; then one builds over $\ket{{\,\ssf j\,},+}$ the singular vector \req{generalmffn4} and obtains $\ket{{\rm MFF}\{{2{\,\ssf j\,}-s-1},s\},-}_{\cal A}$.
Finally, dressing it as in \req{dressfromjmin}, one obtains a new highest weight obeying~\req{EThighest} with a new dimension $\Delta=-{\textstyle{1\over2}}({\,\ssf j\,}-2s)({\,\ssf j\,}-2s-1)$. Therefore, non-unitary $(\N4)_1$ modules are embedded into one another according to the pattern described above, with the massless representation ($\Delta=0$) being embedded into all the others. \medskip The ${\cal Q}_{\cal A}$-primitives of the singular states thus constructed are given in the same way as in \req{primitiveN4}, whence one obtains representatives in the cohomology of the irreducible modules. \medskip Note that once the Weyl group action on the $\N4$ generators has been identified, the mechanism behind the above derivation is very similar to the one underlying the MFF construction. The analogies with the $\SL2$ MFF formula are rather straightforward; however, we have not carried out an `analytic' continuation of the construction~\req{generalmffn4} off the positive integer points. Recall that the power of the MFF construction is that it can be given meaning for {\it all\/} values of the parameters, when the exponents in~\req{mff} are complex numbers. The $\N4$ analogue of the `continued' formula would require introducing operators with non-integer modding (which is the easy part of the construction) and replacing the products of modes with intertwiners (which is somewhat more involved). We actually expect a close analogy with the case of (twisted) $\N2$ algebra, for which the `analytically' (in fact, {\it algebraically\/}) continued construction can indeed be built~\cite{[ST2]}. Yet we have not tried to extend, in $\N4$ terms, the formula \req{generalmffn4} to the case of $r$ and $s$ being arbitrary positive integers. We have only checked in a number of lower-level cases that the MFF singular vectors taken in the {\it polynomial\/} form do rewrite, upon dressing with ghosts, as $\N4$ descendants.
That they satisfy $\N4$ highest-weight conditions can be proven in general, and therefore we conjecture a 1:1 correspondence between $\SL2_{k=-4}$ singular vectors and those in the $(\N4)_{k=1}$ module. As explained above, they give rise to cohomology in factor-modules. \section{${\cal Q}^{(0)}$ cohomology by bosonization \label{CBB}}\leavevmode\hbox to\parindent{\hfill} We have observed the $\N4$ algebra in the cohomology of ${\cal Q}^{(0)}$. It is very useful to `straighten' the construction by explicitly projecting onto the cohomology of ${\cal Q}^{(0)}$. This amounts to projecting out (normal-ordered) operator monomials that contain ${\cal B}^0$ and ${\widehat J}^0$ (and their derivatives), since ${\cal B}^0$ is not ${\cal Q}^{(0)}$-closed while ${\widehat J}^0$ is ${\cal Q}^{(0)}$-exact (see~\req{hatJ},\req{hatJexact}). Effectively solving the constraint ${\widehat J}^0\sim0$ will now be achieved by using a particular representation of the $s\ell(2)$ currents and will result in a `strong' $N\!=\!4$ algebra (rather than the one modulo ${\cal Q}^{(0)}$-exact terms). \subsection{Bosonizing $s\ell(2)$ currents and ghosts}\leavevmode\hbox to\parindent{\hfill} In this subsection, we will introduce a representation of the $\SL2$ currents and the associated ghosts that would allow us to extract the $\N4$ algebra at the level of operator products, not only in cohomology. We start with the representation from ref.~\cite{[S-sing]} which is `induced' from a Verma module of a minimal conformal theory. Generally, this representation can be considered as an embedding $s\ell(2)\hookrightarrow{\cal M}_d\oplus {\cal L}\oplus [B,C]\oplus U(1)_v$ where ${\cal M}_d$ denotes the minimal model, ${\cal L}$ is the theory of a free scalar $\varphi$, $[B,C]$ is a ghost system, and $U(1)_v$ is an additional scalar theory of a field $v$ whose signature is opposite to that of $\varphi$. 
The minimal-model central charge $d$ is taken to satisfy the Hamiltonian reduction formula~\cite{[BO1]} \BE d=13 -6(k+2) - {6\over k+2} \end{equation} where $k$ is the level of the $s\ell(2)$ algebra. Setting, in our case, $k=-4$, leads to $d=28$. Such a $d=28$ model is naturally viewed as a `Liouville' theory; similarly, the scalar $\varphi$ that used to be a `Liouville' in ref.~\cite{[S-sing]} is redefined by absorbing the imaginary unit into it, which gives $\varphi$ a matter-like signature (and restores, in our conventions, a real background charge). The signature of the additional scalar $\partial v$ changes similarly from matter-like in the formulae of \cite{[S-sing]} to Liouville-like. After all these redefinitions, the formulae of ref.~\cite{[S-sing]} for $s\ell(2)_{-4}$ currents take the form \BE\oldfalse\BA{rcl} J^{+}&=&e^{\varphi-v},\qquad J^0~{}={}~ BC - \partial\varphi + 2\partial v, \\ J^{-}&=&(T_m+{\textstyle{1\over2}}\partial\varphi\partial\varphi-{\textstyle{3\over2}}\partial^2\varphi-\partial B C -2B\partial C + BC\partial\varphi)e^{-(\varphi-v)} \end{array} \label{SemikhBos} \end{equation} where \BE v(z)v(w)=-\log(z-w)\,,\qquad \varphi(z)\varphi(w)=\log(z-w)\,. \label{OPE2} \end{equation} The ghost, $\varphi$-, and $v$-energy-momentum tensors read \BE T_{BC}=- B\partial C\,,\qquad T_\varphi={\textstyle{1\over2}}(\partial\varphi)^2 - {\textstyle{3\over2}}\partial^2\varphi\,,\qquad T_v=-{\textstyle{1\over2}}(\partial v)^2 + {\textstyle{3\over2}}\partial^2v \end{equation} Then the currents $J^+$, $J^0$, and $J^-$ are given dimensions 0, 1, and 2, respectively, with respect to the energy-momentum tensor\ $T_{BC}+T_\varphi+T_v+T_{\rm m}$.
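As a quick check of these dimension assignments (a computation not written out above), the exponentials acquire, from the OPEs \req{OPE2} and the background charges in $T_\varphi$ and $T_v$, the dimensions

```latex
\BE
\dim e^{\alpha\varphi}={\textstyle{1\over2}}\alpha^2+{\textstyle{3\over2}}\alpha\,,
\qquad
\dim e^{\beta v}=-{\textstyle{1\over2}}\beta^2+{\textstyle{3\over2}}\beta\,,
\end{equation}
```

so that $\dim e^{\varphi-v}=2-2=0$ for $J^+$, while the exponential in $J^-$ has dimension $-1+1=0$ and the dimension $2$ of $J^-$ is carried entirely by the prefactor.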
In addition to the $s\ell(2)$ currents, the system of $T_{\rm m}$, $\varphi$, $v$ and $BC$ ghosts allows one to represent an independent fermionic ghost system, which we are going to identify with ${\cal B}^-{\cal C}_-$: \BE {\cal B}^-=B\,e^{-(\varphi-v)},\qquad{\cal C}_-=C\,e^{\varphi-v} \label{BCbosonization}\end{equation} Evaluating the twisted Sugawara and ${\cal B}^-{\cal C}_-$ ghosts' energy-momentum tensors in terms of the `elementary' fields $T_{\rm m}$, $\varphi$, $v$ and $BC$, we arrive at the relation \BE \widetilde{T^{\rm S}} - {\cal B}^-\partial{\cal C}_-= T_{\rm m} + {\textstyle{1\over2}}(\partial\varphi)^2 - {\textstyle{3\over2}}\partial^2\varphi - B\partial C -{\textstyle{1\over2}}(\partial v)^2 + {\textstyle{3\over2}}\partial^2v \label{Tidentity} \end{equation} which shows that we have an equal number of degrees of freedom in the ${\cal M}_d\oplus {\cal L}\oplus [B,C]\oplus U(1)_v$ system and in the $s\ell(2)_{-4}$ currents with one ghost pair (${\cal B}^-{\cal C}_-$). In section~2, however, we had more ghost pairs. Our next objective is to extend the representation~\req{SemikhBos}, \req{BCbosonization} so as to incorporate the other ghosts and then derive a realization for the $N\!=\!4$ algebra. This would require changing coordinates on the field space so as to effectively solve the constraint ${\widehat J}^0\sim0$. To this end, we first notice that since ${\widehat J}^0$ is OPE-isotropic, ${\widehat J}^0(z){\widehat J}^0(w)=0$, it can be represented by a complex scalar $\partial\phi$, with \BE \phi(z)\overline\phi(w)=\log(z-w)\,,\quad {\widehat J}^0=\partial\phi\,. \end{equation} In the formulae \req{SemikhBos} and \req{BCbosonization}, we did have an isotropic combination $\varphi-v$, and from the $s\ell(2)$ algebra we see that this must be conjugate to $\partial\phi={\widehat J}^0$. We thus take $\overline\phi$ to be equal to $\varphi-v$, which allows us to have $J^+=\exp\overline\phi$.
\ Next we extend the field space by an independent ghost system denoted by $bc$, and express $(\partial v, \partial\varphi, {\cal B}^+,{\cal C}_+)$ through $(\partial\phi, \partial\overline\phi, b,c)$ via \begin{eqnarray} \partial v&=&bc+\partial\phi-\partial\overline\phi,\\ \partial\varphi &=&bc+\partial\phi,\\ {\cal B}^+&=&c\,e^{\overline\phi}\,,\qquad {\cal C}_+~{}={}~b\,e^{-\overline\phi}\label{BCplus} \end{eqnarray} after which the $s\ell(2)_{-4}$ currents take the form: \begin{eqnarray} J^{+}&=&e^{\overline{\phi}} \nonumber\\ J^{0}&=& bc + BC + \partial\phi - 2\partial \overline{\phi} \label{BosSL2}\\ J^{-}&=&(T_{\rm m} + {\textstyle{1\over2}}\partial\phi\partial\phi - {\textstyle{3\over2}}\partial^2\phi + bc\partial\phi + BC\partial\phi - 2b\partial c - \partial b c - 2B\partial C - \partial B C + BCbc ) e^{-\overline{\phi}}\nonumber \end{eqnarray} while the ${\cal B}^+{\cal C}_+$ and ${\cal B}^-{\cal C}_-$ ghosts are now given by: \BE\oldfalse\BA{rclcrcl} {\cal B}^-&=&Be^{-\overline{\phi}}\,,&{}&{\cal C}_-&=&Ce^{\overline{\phi}}\,,\\ {\cal B}^+&=&c\,e^{\overline\phi}\,,&{}&{\cal C}_+&=&b\,e^{-\overline\phi}\,. \end{array}\label{pmghosts} \end{equation} Note that the ${\cal B}^0{\cal C}_0$ ghosts `decouple' -- they do not participate in field redefinitions. Using formulae \req{BosSL2}, \req{pmghosts}, we can now evaluate the energy-momentum tensor~\req{EMT} in terms of the new fields $\phi,\overline\phi,bc,BC,T_{\rm m}$: \BE {\cal T}_{\cal A}= \underbrace{T_{\rm m}-b\partial c-2B\partial C-\partial B C} +\partial\phi\partial\overline\phi -\partial^2\phi - {\cal B}^0\partial{\cal C}_0 \label{BOSEMT} \end{equation} It follows that $\dim B=2$ while $\dim b=1$; \ the $bc$ system thus represents a $c=-2$ matter and, together with $T_{\rm m}$ (which plays the r\^ole of a Liouville) and the $BC$ ghosts, makes up a realization of the bosonic string.
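As a consistency check of this change of variables (spelled out here for convenience), the current $J^0$ of \req{SemikhBos} indeed reproduces the form given in \req{BosSL2}:

```latex
\BE
J^0=BC-\partial\varphi+2\partial v
   =BC-(bc+\partial\phi)+2(bc+\partial\phi-\partial\overline\phi)
   =bc+BC+\partial\phi-2\partial\overline\phi\,.
\end{equation}
```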
\subsection{Constructing the states}\leavevmode\hbox to\parindent{\hfill} Let us see now how the representation space can be constructed for the realization \req{BosSL2}, \req{pmghosts}. The vacuum is found by translating the vacuum determined by \req{sl2highest}, \req{sl2ghost} to the present picture. This gives for~\req{jdressed} (assuming $j>0$ for definiteness) \BE \ket{j}\otimes\underbrace{\ket{j+1}_+\otimes\ket{0}_0\otimes\ket{1}_- }_{{\cal B}{\cal C}\ {\rm ghosts}} =\underbrace{\ket{\Delta(r,s)}_{\rm m}\otimes\ket{-j-1}_{bc}\otimes\ket{1}_{BC} }_{c=-2\ \rm bosonic\ string} \otimes\ket{0}_{0}\otimes\ket{\{0,0\}}_{\phi\,\overline\phi} \label{vacua}\end{equation} where $\ket{\{0,0\}}_{\phi\,\overline\phi}$ is the trivial vacuum in the complex scalar theory. The $bc$ and $BC$ ghost vacua from the RHS of \req{vacua} are characterized by creation/annihilation conditions depending (for $bc$) on the $s\ell(2)$ spin $j$: \BE\oldfalse\BA{lclclcl} c_n \ket{-j-1}_{b,c}&=&0\quad n\geq j+2\,,&{}& b_n \ket{-j-1}_{b,c}&=&0\quad n\geq-j-1\,,\\ C_n \ket1_{B,C}&=&0\quad n\geq1\,,&{}& B_n \ket1_{B,C}&=&0\quad n\geq0\,.
\label{caoper} \end{array}\end{equation} The conformal dimension of the primary matter state $\ket{\Delta(r,s)}_{\rm m}$ is taken from the Ka\v{c} table for $d=28$: \BE \Delta(r,s)={\textstyle{1\over4}}(-{\textstyle{1\over2}}(r^2-1)-2(s^2-1)-2rs+2)= -{\textstyle{1\over8}}\Bigl( (r+2s)^2-9\Bigr)\,, \label{dimensions}\end{equation} which rewrites as \BE \Delta(r,s)=-{\textstyle{1\over2}} j(r,s)(j(r,s)+3) \label{dimmatter} \end{equation} where $j(r,s)=j_+(r,s)$, i.e.\ \BE j(r,s)={r-1\over2}+2{s-1\over2}\quad\hbox{$r$ and $s$ are integers} \end{equation} For $j=j_-(r,s)$, we can proceed by analogy with the previous case; for example, the formula \req{vacua} rewrites as \BE \ket{j}\otimes\underbrace{\ket{0}_+\otimes\ket{0}_0\otimes\ket{-j}_- }_{{\cal B}{\cal C}\ {\rm ghosts}} =\underbrace{\ket{\Delta(r,s)}_{\rm m}\otimes\ket{0}_{bc}\otimes\ket{-j}_{BC} }_{c=-2\ \rm bosonic\ string} \otimes\ket{0}_{0}\otimes\ket{\{0,0\}}_{\phi\,\overline\phi} \label{vacuamin}\end{equation} The formula \req{dimmatter} for the dimension of the matter state $\ket{\Delta(r,s)}_{\rm m}$ remains the same. \medskip As mentioned after the formula~\req{dressfromjmin}, it is possible to build the $\N4$ states $\ket{{\textstyle{1\over2}},\Delta}_{N=4}$ in terms of matter $\ket{\Delta(r,s)}_{\rm m}$ and ghosts $BC$ and $bc$; the explicit formula reads \BE \ket{{\textstyle{1\over2}},\Delta}_{N=4}= \ket{\Delta(r,s)}_{\rm m}\otimes\ket{0}_{bc}\otimes\ket{1}_{BC} \label{n4statesboson} \end{equation} where $\Delta=\Delta(r,s)-1$, with $-1$ being accounted for by a $c$-ghost contribution. \medskip The subspace built on $\ket{\Delta(r,s)}_{\rm m}\otimes\ket{-j-1}_{bc}\otimes\ket{1}_{BC}$ can be thought of as the space of states of a non-critical bosonic string with $c=-2$ matter. We are going to consider it in more detail.
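Before doing so, we note for reference that the equivalence of \req{dimensions} and \req{dimmatter} is a matter of completing the square: with $j(r,s)={r+2s-3\over2}$,

```latex
\BE
-{\textstyle{1\over2}}\,j(j+3)
=-{\textstyle{1\over2}}\cdot\frac{r+2s-3}{2}\cdot\frac{r+2s+3}{2}
=-{\textstyle{1\over8}}\Bigl((r+2s)^2-9\Bigr)=\Delta(r,s)\,.
\end{equation}
```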
\subsection{Bosonization of spectral sequence and the LZ states}\leavevmode\hbox to\parindent{\hfill} Now we are going to translate the spectral sequence associated with the decomposition \req{Qdecomposition} to the representation in terms of the `elementary' fields $\phi,\overline\phi,bc,BC,T_{\rm m}$. The latter then have to be assigned the following degrees (cf.~\req{degrees}): \BE\oldfalse\BA{l} \mathop{\rm deg}\nolimits T_{\rm m}=\mathop{\rm deg}\nolimits{\cal C}_0=\mathop{\rm deg}\nolimits{\cal B}^0=\mathop{\rm deg}\nolimits\partial\phi=\mathop{\rm deg}\nolimits\partial\overline\phi=0\,,\\ \mathop{\rm deg}\nolimits C=-\mathop{\rm deg}\nolimits B=\mathop{\rm deg}\nolimits e^{-\overline\phi}=-\mathop{\rm deg}\nolimits e^{\overline\phi}=1\,,\\ \mathop{\rm deg}\nolimits b=-\mathop{\rm deg}\nolimits c=2\,. \end{array}\end{equation} The different parts \req{SPSEQU} of the BRST current now take the form \BE\oldfalse\BA{l} {\cal J}^{(0)}={\cal C}_0\partial\phi\,,\\ {\cal J}^{(1)}=C(T_{\rm m}-b\partial c)-CB\partial C +\partial^2 C-\partial(Cbc) +C({\textstyle{1\over2}}\partial\phi\partial\phi-{\textstyle{1\over2}}\partial^2\phi+\partial\phi bc)-\partial(\partial\phi C)\,,\\ {\cal J}^{(2)}=b\,,\\ {\cal J}^{(3)}=bC{\cal B}^0\,. \end{array}\label{Bosdecomposition}\end{equation} We can thus project onto the cohomology of ${\cal Q}^{(0)}$.
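One can check directly that every term of ${\cal J}^{(n)}$ carries degree $n$ under these assignments; for instance,

```latex
\BE
\mathop{\rm deg}\nolimits(Cbc)=1+2-2=1\,,\qquad
\mathop{\rm deg}\nolimits(b)=2\,,\qquad
\mathop{\rm deg}\nolimits(bC{\cal B}^0)=2+1+0=3\,.
\end{equation}
```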
Observe that the BRST current ${\cal J}^{(1)}$ and the energy-momentum tensor\ \req{BOSEMT} can be rewritten as \BE\oldfalse\BA{l} {\cal T}_{\cal A}={\cal T}_{\rm str} + [{\cal Q}^{(0)},\,{\cal B}^0\partial\overline\phi-\partial^2{\cal B}^0]\,,\\ {\cal J}^{(1)}={\cal J}_{\rm str} + [{\cal Q}^{(0)}\,, \,{\textstyle{1\over2}}{\cal B}^0(C\partial\phi+Cbc-2\partial C)-\textstyle{3\over2}\partial{\cal B}^0] \end{array}\end{equation} where fields in the cohomology can be taken as \BE\oldfalse\BA{l} {\cal T}_{\rm str}=T_{\rm m}-b\partial c-2B\partial C-\partial B C\,,\\ {\cal J}_{\rm str}=C(T_{\rm m}-b\partial c)-CB\partial C +\partial^2 C-\partial(Cbc)\,.\\ \end{array}\label{TQstring}\end{equation} These are identified with the energy-momentum tensor\ and the BRST current of a bosonic string with a $c=-2$ matter represented by the $bc$ system. Further, along with ${\cal T}_{\cal A}$ and ${\cal J}_{\cal A}$, the $N\!=\!4$ generators \req{GEN4ALGSL} can be projected onto the ${\cal Q}^{(0)}$-cohomology. This will produce a `strong' $N\!=\!4$ algebra, i.e.\ relations~\req{SUPERALGSL} will be satisfied exactly rather than modulo ${\cal Q}^{(0)}$-exact terms as was the case with the generators~\req{GEN4ALGSL}. Namely, the $N\!=\!4$ generators now take the form \BE\oldfalse\BA{rclcrcl} {\cal T}&=&{\cal T}_{\rm str} \,,&{}&{\cal G}^1&=&b\,, \\ J^+_{N=4}&=&Cb\,,&{}&{\cal G}^2&=&B\,,\\ J^0_{N=4} &=&{\textstyle{1\over2}}(bc-BC)\,,&{}&\overline{\cal G}_1&=& c(T_{\rm m}-B\partial C)+bc\partial c-\partial{(cBC)}+\partial^2 c\,,\\ J^-_{N=4} &=&Bc\,,&{}&\overline{\cal G}_2&=&{\cal J}_{\rm str} \end{array}\label{GEN4ALG}\end{equation} Here, ${\cal T}={\cal T}_{\rm str}$ and $\overline{\cal G}_2={\cal J}_{\rm str}$ are given by eqs.~\req{TQstring}, while ${\cal G}^2$ plays the r\^ole of a superpartner to ${\cal T}$. 
The system described by the energy-momentum tensor\ ${\cal T}_{\rm str}$ represents a $c=-2$ matter coupled to gravity, with $T_{\rm m}$ playing (in accordance with the value of its central charge) the r\^ole of a Liouville. The cohomology of the bosonized ${\cal Q}^{(0)}$ operator coincides with the $\N4$ algebra representation built on the ($\N4$) highest-weight state obtained by dressing the string vacuum $\ket{\Delta(r,s)}_{\rm m}\otimes\ket{0}_{bc}\otimes\ket{0}_{BC}$ with the appropriate number of ghosts, eq.~\req{n4statesboson}. \medskip The BRST-primitives of the states \req{generalmffn4} can now be written as BRST-primitives w.r.t.\ the BRST operator ${\cal Q}_{\rm str}= \oint{\cal J}_{\rm str}$. An important fact is that the MFF singular vectors \req{generalmffn4} take in our representation the form (see \cite{[S-sing]} where this claim was based on using the (general form of) representation \req{BosSL2}, or \cite{[GP]} where reductions of singular vectors were studied by other means) \BE ({\rm singular\ vector\ of\ }T_{\rm m}{\rm \ minimal\ model}) \otimes({\rm ghost\ part}) \label{split}\end{equation} Their BRST-primitives then become, upon factorization over the module generated by the null vector, the Lian-Zuckerman states in the theory of $c=-2$ matter dressed with gravity. The ghost number of a LZ state obtained in this way from an MFF vector $\ket{{\rm MFF}\{r,s\}}$ is equal to $r-j-1={\textstyle{1\over2}}(r+1)-s$ for $r>j$ (where the ghost number of the $c=-2$ string counts the number of $C$ operators minus the number of $B$ operators and is zero for the $s\ell(2)$-invariant ghost vacuum), and 0 for $r\leq j$. Such an accumulation of the LZ states at ghost number 0 is related to the embedding pattern of the $c=28$ Virasoro Verma modules shown in \req{picture} where, in the right column, every module is embedded into all the higher ones.
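The two expressions for the ghost number quoted above agree identically: with $j=j_+(r,s)={r+2s-3\over2}$,

```latex
\BE
r-j-1=r-\frac{r+2s-3}{2}-1=\frac{r-2s+1}{2}={\textstyle{1\over2}}(r+1)-s\,.
\end{equation}
```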
Moreover, the Verma module embedding diagrams then project as in \req{picture}, where the right column shows Virasoro Verma modules and dimensions of their ground states. Every Virasoro Verma module is embedded into {\it all\/} the higher ones. For the $\SL2$ modules this is not so; instead, there exist two $\SL2$ modules (corresponding to two different spins $j$) that project onto the same Virasoro module (modules on the same level in \req{picture}). Their embeddings add up to the `complete' set of embeddings for the Virasoro Verma modules. Our expression \req{generalmffn4} for the $\N4$ singular vectors now gives rise, in view of~\req{split}, to an expression for singular vectors in the $c=28$ Virasoro Verma module. Rather curiously, writing these in a closed form requires the introduction of ghost systems (which, as noted above, decouple in the course of the evaluation of \req{generalmffn4} in terms of the representation \req{GEN4ALG}). The possibility to arrive at such a representation for the Virasoro singular vectors rests on the fact that the representation \req{BosSL2} for the $\SL2_{k=-4}$ currents (or, more generally, the $\SL2_k$ representation from ref.~\cite{[S-sing]}), unlike conventional `bosonizations', does not imply the vanishing of any singular vector. The explicit construction \req{generalmffn4} does not, however, give all the singular vectors; as noted above, we have not continued it to all possible values of $r$ and $s$ in a closed form. This leaves aside a part of the $c=28$ singular vectors and the corresponding Lian-Zuckerman states. For these we only have explicit constructions in lower-level cases.
In particular, the state $\ket{{\rm MFF}\{1,2\}}_{\cal A}$ gives rise to the ground ring generator $x$, and its explicit construction in terms of the $N\!=\!4$ generators reads \BE x=BCb\,\Psi_{\rm m}-{\textstyle{1\over2}}\partial b\,\Psi_{\rm m} +{\textstyle{1\over2}} b\,\partial\Psi_{\rm m} =((J^0_{N=4})_{-1}{\cal G}^2_0 +{\textstyle{1\over2}}{\cal G}^2_{-1})\, (\overline{\cal G}_1)_1\ket{1}_{\cal A}\,, \end{equation} where $\Psi_{\rm m}$ is the operator that corresponds to the state $\ket{\Delta(1,2)}$ in the matter sector. It would be interesting to obtain this operator in a systematic way, by extending the formula~\req{generalmffn4} along the lines of~\cite{[ST2]}. \section{$N\!=\!4$ and matter{}${}+{}$gravity}\leavevmode\hbox to\parindent{\hfill} In this section we will comment on the relation of the $\N4$ algebra represented in the cohomology of ${\cal Q}^{(0)}$ to the known symmetries of matter+gravity systems. The $\N4$ algebra \req{GEN4ALG} contains two twisted $\N2$ subalgebras, realized on a common energy-momentum tensor\ ${\cal T}$ and $U(1)$ current ${\cal H}=2J^0_{N=4}$: \BE \left\{\oldfalse\BA{rcl} {\cal T}_1&=&{\cal T}\\ {\cal H}_1&=&{\cal H}\\ {\cal Q}_1&=&{\cal G}^1\\ {\cal G}_1&=&\overline{\cal G}_1 \end{array}\right. \qquad{\rm and}\qquad \left\{\oldfalse\BA{rcl} {\cal T}_2&=&{\cal T}\\ {\cal H}_2&=&{\cal H}\\ {\cal Q}_2&=&\overline{\cal G}_2\\ {\cal G}_2&=&{\cal G}^2 \end{array}\right. \label{subalgebras} \end{equation} In the first of these algebras, we can bosonize one of the ghost pairs as \BE B=e^{i\varphi}\,,\qquad C=e^{-i\varphi}\label{IDENT} \end{equation} and consider $\varphi$ as the Liouville scalar.
Then the construction becomes the $k\!=\!-4$-case of the known $N\!=\!2$ representation \cite{[GS3]} in terms of matter dressed with gravity: \BE\oldfalse\BA{rcl} {\cal Q}_1&=&b\\ {\cal G}_1&=&c(T-{\textstyle{1\over2}} (\partial\varphi)^2+{\alpha_++\alpha_-\over\sqrt{2}}\partial^2\varphi)+bc\partial c +{\sqrt 2\alpha_+}\partial c\partial\varphi+{\textstyle{1\over2}} (1-2\alpha_+^2)\partial^2c\\ {\cal H}_1&=&-bc-{\sqrt 2\alpha_+}\partial\varphi\\ {\cal T}_1&=&T-{\textstyle{1\over2}} (\partial\varphi)^2+{\alpha_++\alpha_-\over\sqrt{2}}\partial^2\varphi-b\partial c \label{spin1}\end{array}\end{equation} where $\alpha_-=-\sqrt{k+2}$, $\alpha_+=-{1/\alpha_-}$ and, in our case, $k$ is set to $-4$. \ $T$ is the energy-momentum tensor\ of matter with central charge equal to $1-{6(k+1)^2\over k+2}$ which becomes 28 when $k\!=\!-4$, in which case $T$ coincides with $T_{\rm m}$. Similarly, the other twisted $N\!=\!2$ subalgebra from \req{subalgebras} becomes, after the bosonization \BE b=e^{i\varphi},\qquad c=e^{-i\varphi}\end{equation} the other $\N2$ realization from \cite{[GS3]}: \BE\oldfalse\BA{rcl} {\cal Q}_2&=&C(T-{\textstyle{1\over2}} (\partial\varphi)^2+{\alpha_+-\alpha_-\over\sqrt 2} \partial^2\varphi)+BC\partial C +{\sqrt 2\alpha_+}\partial C\partial\varphi+{\textstyle{1\over2}} (1-2\alpha_+^2)\partial^2C\\ {\cal G}_2&=&B\\ {\cal H}_2&=&BC+{\sqrt 2\alpha_+}\partial\varphi\\ {\cal T}_2&=&T-{\textstyle{1\over2}} (\partial\varphi)^2+{\alpha_+-\alpha_-\over\sqrt 2}\partial^2\varphi -\partial BC-2B\partial C \label{spin2}\end{array}\end{equation} evaluated at $k=-4$. The two realizations~\req{spin1} and \req{spin2} are related by an involutive automorphism of the twisted $N\!=\!2$ algebra \cite{[GS3]}. Now we see that this automorphism lifts to the automorphism \req{automorphism} of the $N=4$ algebra \req{GEN4ALG}. 
It is induced, in the bosonized picture, by ghost permutations (cf.~\req{pairs1}) \BE b\leftrightarrow B\,,c\leftrightarrow C \label{AUT}\end{equation} This acts as identity on the $s\ell(2)_{-4}$ algebra \req{BosSL2}. More generally, one can construct a 3-parameter family of $N\!=\!2$ algebras out of the $\N4$ generators \req{GEN4ALG} so that all members of this family share the energy-momentum tensor\ ${\cal T}$ and the $U(1)$ current $2J^0_{N=4}$. These $N\!=\!2$ algebras are described by \BE\oldfalse\BA{rcl} {\cal T}&=&T_{\rm m}-b\partial c -2B\partial C-\partial BC, \\ {\cal H}&=&bc-BC,\\ {\cal Q}&=&a_1\overline{\cal G}_2+a_2{\cal G}^1,\\ {\cal G}&=&a_3{\cal G}^2+a_4\overline{\cal G}_1 \label{family}\end{array}\end{equation} where $a_1,a_2,a_3,a_4$ are arbitrary parameters subject to the equation \BE a_1a_3+a_2\,a_4=1\label{EQU} \end{equation} Different algebras from the set \req{family} can be connected by a combination of transformations of the form $e^{-\oint{\cal A}}(\ldots)e^{\oint{\cal A}}$ with ${\cal A}$ being equal to: \BE\oldfalse\BA{rcl} {\cal A}&=&p_1bc-p_2BC\\ {\rm or}&&{}\\ {\cal A}&=&p(T_{\rm m}C\,c-Cb\partial c\,c-B\partial C\,C\,c+C\partial^2c) \end{array}\end{equation} On the set of parameters $a_1,a_2,a_3,a_4$ these transformations act as \BE\oldfalse\BA{rcl} (a_1,a_2,a_3,a_4)&\to& (e^{p_2}a_1,e^{p_1}a_2,e^{-p_2}a_3,e^{-p_1}a_4)\\ {}{\rm and}\hfill &&{}\\ (a_1,a_2,a_3,a_4)&\to&(a_1+pa_2,a_2,a_3,a_4+pa_3) \end{array}\end{equation} respectively. As one can see, not every two algebras of the family \req{family} can be mapped into each other by such a transformation. The space of parameters $a_1,a_2,a_3,a_4$ falls into three domains: \BE\oldfalse\BA{rcl} {}&(a_1,0,a_3,a_4)&{}\\ {}&(a_1,a_2,0,a_4)&{}\\ {}&(a_1,a_2,a_3,a_4)&{} \end{array} \label{threesets} \end{equation} where in the latter case neither $a_2$ nor $a_3$ is equal to zero. 
For an algebra that belongs to the first domain (that is $a_2=0$, hence $a_3\neq0$ from \req{EQU}) the cohomology of ${\cal G}$ is trivial, because a dimension-$(-1)$ field exists: \BE \Psi^{{\cal G}}_1={1\over a_3}\,C-{a_4\over a_3^2}\,c\,\partial C\,C \end{equation} that is conjugate to ${\cal G}$: \BE {\cal G}(z)\Psi^{{\cal G}}_1(w)={1\over z-w} \end{equation} In the case of domain 2, the situation is reversed. A dimension-$0$ field: \BE \Psi^{{\cal Q}}_2={1\over a_2}\,c-{a_1\over a_2^2}\,C\,\partial c\,c\,,\qquad a_2\neq0 \end{equation} is conjugate to ${\cal Q}$, while the one conjugate to ${\cal G}$ does not exist. In this case the cohomology of ${\cal Q}$ is trivial. In the last case (with neither $a_2$ nor $a_3$ being zero) both $\Psi^{{\cal Q}}_3=\Psi^{{\cal Q}}_2$ and $\Psi^{{\cal G}}_3=\Psi^{{\cal G}}_1$ exist. Hence the cohomologies of both ${\cal Q}$ and ${\cal G}$ are trivial. Therefore the would-be transformations connecting different domains do not exist, since such transformations are required to preserve the OPEs. The set of transformations of the form $e^{-\oint{\cal A}}(\ldots)e^{\oint{\cal A}}$ acts transitively on each domain. The above algebras \req{spin1} and \req{spin2} correspond, obviously, to $(1,0,1,0)$ and $(0,1,0,1)$. At the same time, the $N\!=\!2$ algebra \req{N2}, restricted to the cohomology of ${\cal Q}^{(0)}$ in the bosonized description, is identified as the $(1,1,{\textstyle{1\over2}},{\textstyle{1\over2}})$ algebra.
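As a simple consistency check, each of the specific assignments $(1,0,1,0)$, $(0,1,0,1)$ and $(1,1,{\textstyle{1\over2}},{\textstyle{1\over2}})$ indeed satisfies the constraint \req{EQU}: \BE 1\cdot1+0\cdot0\;=\;0\cdot0+1\cdot1\;=\;1\cdot{\textstyle{1\over2}}+1\cdot{\textstyle{1\over2}}\;=\;1\,. \end{equation}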
\medskip The appearance of field operators such as the above $\Psi$ is often characteristic of bosonized pictures; thus, for example, the cohomology of ${\cal Q}_{\cal A}$ is trivialized in the bosonized picture, since there exists a dimension-$0$ field \BE \Psi=c-C\partial c\,c+C\,c\,{\cal B}^0 \label{Psi} \end{equation} that is conjugate to the BRST current ${\cal J}_{\cal A}$: \BE {\cal J}_{\cal A}(z)\Psi(w)={1\over z-w} \end{equation} whence $\{{\cal Q}_{\cal A},\,\Psi_0\}=1$ and we conclude that every ${\cal Q}_{\cal A}$-closed state is ${\cal Q}_{\cal A}$-exact. This vanishing of the cohomology is an example of the Koszul trivialization, described, e.g., in \cite{[A]}. It occurs when extending the algebra ${\cal A}$ \req{A} to an algebra $\overline{\cal A}$ and extending the BRST operator appropriately, in such a way that any ${\cal Q}_{\cal A}$-closed state is given by ${\cal Q}_{\cal A}$ acting on one of the `new' states. The appearance of the above $\Psi$ is due to explicitly solving the condition ${\widehat J}^0\sim0$. Indeed, taking ${\widehat J}^0$ to be an `elementary' field and parametrizing the fields orthogonal to ${\widehat J}^0$ in terms of other `elementary' fields, one has to introduce, one way or another, $(J^+)^{-1}$ (in our realization, this was simply $e^{-\overline\phi}$). Allowing $(J^+)^{-1}$ to appear leads to the existence of an operator such as the above $\Psi$ that trivializes the cohomology. To restore the `original' cohomology of ${\cal Q}_{\cal A}$, one has to project from the bosonized algebra $\overline{\cal A}$ to ${\cal A}$ itself. Recall that, generally, the cohomology of ${\cal Q}_{\cal A}$ can be evaluated using the spectral sequence associated with the decomposition \req{Qdecomposition}.
While the cohomology of ${\cal Q}^{(0)}$ and ${\cal Q}^{(1)}={\cal Q}_{\rm str}$ do not necessarily vanish, it is the cohomology of ${\cal Q}^{(2)}=\oint{\cal G}^1$ that undergoes the Koszul trivialization due to the appearance of the $c$ field in the bosonized picture. Note, however, that the $c$ ghost is {\it not\/} a part of the $\N4$ algebra, and thus it is possible to keep the non-trivial cohomology by working solely in the representation of the $\N4$ algebra. \section{Concluding remarks}\leavevmode\hbox to\parindent{\hfill} We have presented arguments showing that the $k=-4$ $\SL2$ WZW model is cohomologically equivalent to the bosonic string with $c=-2$ matter. We have observed the presence of an $\N4$ symmetry in this system, which points to an $\N4$ origin of the Lian--Zuckerman states in the $c=-2$ bosonic string. The derivation involves a spectral sequence on the $\SL2_{-4}$ BRST complex and can be viewed as an extension of the Universal string ideology to include theories with Ka\v{c}--Moody symmetries\footnote{With the help of the explicit realization for the $\SL2$ currents (in terms of matter, Liouville and ghosts) one can also construct a homotopy transformation relating the $\SL2$ space of states in the chosen realization with states of the matter+gravity theories.}. While the $\N4$ algebra is specific to $c=-2$ matter, the relation of Lian--Zuckerman states to the $\SL2_k$ algebra is likely to hold in general, since a twisted $\N2$ is always present in non-critical strings, and on the other hand the relevant $\N2$ singular vectors are isomorphic with $\SL2$ singular vectors~\cite{[ST2]}. The appearance of the $\N4$ algebra would also be interesting to understand in terms of geometry of flag manifolds, along the lines of ref.~\cite{[FS]}. \bigskip \noindent {\sc Acknowledgements}. We are grateful to B.~Feigin for useful discussions. 
We also thank O.~Andreev, J.M.~Figueroa-O'Farrill, S.~Hwang, O.~Khudaverdyan, A.~Marshakov, A.~Taormina, I.V.~Tyu\-tin, M.A.~Vasil\-iev, and B.L.~Vo\-ro\-n\-ov. The research described in this publication was made possible in part by Grant \#MQM300 from the International Science Foundation and the Government of the Russian Federation, and by RFFI grant 94-02-06338-a.
\section{Introduction} Demand Response (DR), in which a utility company or an aggregator motivates customers to curtail their power usage, has now become an acceptable method in situations where high peaks in demand occur, transmission congestion increases, or some power plants are not available to generate enough power~\cite{imp4, va, den, imp6}. Per the Federal Energy Regulatory Commission (FERC), demand response is the change in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity or any other incentive~\cite{balijepalli2011review}. In general, DR programs may be divided into two main categories: Price-Based Programs (PBP) and Incentive-Based Programs (IBP). PBPs refer to schemes in which the electricity price varies as a function of variables such as the time of usage or the total demand, with the expectation that the consumers will adjust their demand in response to such a price profile. On the other hand, IBPs offer a constant price for power to every user; however, customers are offered a reward if they reduce their demand when the utility company desires. Classically, these incentives were proposed to be constant and based only on customer participation in the program; however, market-based incentives that offer a reward that varies with the amount of load reduction that a customer achieves have also been proposed. There exists a rich literature on IBPs (e.g., see~\cite{mo,sam,ro, wa} and the references therein) studying the design of suitable incentives with aims such as social welfare maximization, minimization of electricity generation and delivery costs, and reducing renewable energy supply uncertainty for demand response. In this paper, we consider an incentive-based program for demand response in which the customers are rewarded financially by a demand response aggregator (a role that can also be filled by a utility company) for their load reduction during DR events.
When called upon to reduce their loads, each customer puts in some effort to achieve a true value of load reduction. The effort is costly to the customers since it causes them discomfort. Further, the amount of effort expended is private knowledge for each customer. Incentivizing customers to put in effort in this setting is the problem of \textit{moral hazard}~\cite[Chapter~4]{laf}. Following the rich literature going back to Holmstrom~\cite{holmstrom1979moral}, as a means to incentivize the customer to put in ample effort in the presence of moral hazard, the demand response aggregator (DRA) must pay each customer in proportion to the effort that the customer puts in. However, by taking advantage of the fact that the DRA must supply as much power as the customer desires and by anticipating the demand response call, a strategic customer can artificially inflate her base load before an expected DR event. In other words, the true amount of load reduction is also private knowledge for the customer. By artificially inflating the base load, for the same \textit{nominal} load reduction, the customer can report more \textit{measured} load reduction and gain more financial reward from the DRA~\cite{chao20, chao}. This implies that the problem of \textit{adverse selection}~\cite[Chapter~3]{laf} is also present. That such strategic behavior by customers to exploit IBPs is not idle speculation has been pointed out multiple times~\cite{chao}, \cite{wolak}. In 2013, it was revealed that the Federal Energy Regulatory Commission issued large civil penalties to customers for exactly this sort of strategic behavior~\cite{FERC2013}. For instance, Enerwise paid a civil penalty of \$780,000 for wrongly claiming on behalf of its client, the Maryland Stadium Authority (MSA), that it reduced the baseline electricity usage in 2009 and 2010 at Camden Yards.
It may also be noted that the possibility of behavior in which ``phantom DR occurs through inflated baseline'' to obtain ``payments for fictitious reductions'' was pointed out in a related but different context by California ISO in its opinion on FERC order 745~\cite{caliso}. To avoid this {\em phantom demand response}, a payment structure that incentivizes a rational customer to provide maximal effort and low (or no) misreporting is needed. While there is much literature that uses competitive game theory in smart grids, particularly for solutions based on concepts such as pricing (e.g., \cite{saad},~\cite{fadlu} and the references therein), much of this literature assumes the users to be truthful and non-anticipatory. While this is often a good assumption in cases where users are price-taking and either unable or unwilling to transmit false information, it can lead to overly optimistic results in the framework discussed above. We consider anticipatory and strategic customers that maximize their own profit by predicting the impact of their actions and possibly falsifying any information they transmit. Of more interest to our setting is the literature on contract design for DR with information asymmetry and strategic behavior. For instance,~\cite{chao20} proposed a DR contract that avoids inefficiencies in the presence of a strategic sensor; however, the possibility of baseline inflation was not considered. The work in~\cite{nguy} proposed a DR market to maximize the social welfare; however, the baseline consumption levels were assumed to be known. The works closest to ours in this stream are~\cite{chen20} and~\cite{pra}. The work in~\cite{chen20} considered a two-stage game for DR.
Assuming knowledge of the utility function of the consumer, the authors proposed using a linear penalty function for the deviation of the usage level from the reported baseline to induce users to report their true baselines, while at the same time adjusting the electricity price appropriately to realize the desired load reduction. The work in~\cite{pra} designed a two-stage mechanism to induce truth-telling by the customer irrespective of the utility function of the DRA. The proposed mechanism relied on assuming a linear utility function for the customers, a deterministic baseline, and a low probability of occurrence of the DR event. Unlike these works, we design a contract which maximizes the utility function of the DRA (which includes the payment to the customer) and a more general utility function for the customer that includes falsification and effort costs, as well as constraints of individual rationality. In economics, contract design with either moral hazard or adverse selection alone has a vast literature (for a summary, see, e.g.,~\cite[Chapter~14B]{mas1} and~\cite[Chapter~14C]{mas1}). In the problem we consider, moral hazard followed by adverse selection arises. This combination is much less discussed in the literature and is significantly more difficult since incentives to solve moral hazard (for instance, through payments that are an increasing function of the reported effort) may, in fact, exacerbate the problem of adverse selection by incentivizing larger falsification of the reported effort. A notable exception is~\cite[Chapter~7]{laf}, which considers a specific buyer-seller framework with two hidden actions and two hidden pieces of information.
In this stream, the closest works to our setup are~\cite{imp3, imp2}, which study the problem of incentivizing a single manager (a single customer in our framework) by the owner of a firm (the DRA in our framework) and propose a contract by assuming accurate revelation of the private information of the manager to the owner in the long run. Our formulation includes the more general case of multiple customers with the DRA obtaining inaccurate knowledge of the load reduction by the customers even in the long run. The chief contribution of this paper is the design of a contract to maximize the utility function of the DRA while incentivizing rational customers to expend costly effort to reduce their load. The contract addresses the issue of moral hazard followed by adverse selection that is enabled by the fact that knowledge of the effort put in as well as that of the true load savings realized are both private to the customers. The contract that we propose consists of two parts: one part that pays the customer based on the (possibly falsified) reported load reduction, and another that provides a share of the profit that accrues to the DRA through the demand response event to the customer. One interesting result is that the optimal contract may lead to both \textit{under-reporting} and \textit{over-reporting} of load reduction by the customer depending on the true load reduction realized by the customer. In other words, if a strategic customer wishes to maximize her profit, she may sometimes decrease her base load before the DR event to under-report her power reduction as a part of the DR event. We also show that the DRA can realize any arbitrary demand reduction by contracting with an appropriate number of customers. The rest of the paper is organized as follows. In Section \ref{sec1}, the problem statement is presented. In Section \ref{sec4}, we propose a contract structure for the DR problem.
Next, in Section \ref{sec3}, we derive the optimal strategy chosen by the DRA and the customers in response, discuss the interactions among customers, and study several extensions of the problem. In Section \ref{illus}, numerical examples are provided to illustrate the results. Section \ref{concl} concludes the paper and presents some avenues for future work. \paragraph*{Notation} $f_{X|A}(x|a)$ (which is often simplified to $f(x|a)$ when the meaning is clear from the context) denotes the probability distribution function (pdf) of random variable $X$ given the event $A=a$. A Gaussian distribution is denoted by $\mathcal N(m,\sigma^2 )$ where $m$ is the mean and $\sigma $ is the standard deviation. For two functions $g$ and $h$, $g*h$ denotes the convolution between $g$ and $h$. $\E_X[f]$ specifies that the expectation of function $f$ is taken with respect to the random variable $X$; when $X$ is clear from the context, we abbreviate the notation to $\E[f]$. Given $N$ variables $x_1, \cdots, x_N$, the set defining their collection is denoted by $\{x_i\}_{i=1}^{N}$, or sometimes simply by $\{x_i\}$. \section{Problem Statement} \label{sec1} During a DR event, the DRA calls on the customers to decrease their power consumption. A contract that pays the customers merely for the act of reducing the load will not incentivize the customers to exert maximal effort to reduce the load by as much as possible. To solve this problem, the DRA may offer a contract that makes the payment to the customer proportional to the load reduction. However, with such a contract, a strategic customer will try to anticipate the DR event and increase her base load, i.e., the load before the demand response event begins.
This pre-increase allows the customer to reduce the load during the DR event by a larger amount than would have been possible in the absence of such an increase, thus receiving a larger payment even though the DRA accrues the benefit of only a smaller true load reduction. The central problem considered in this paper is to design a contract that is free from both these problems. \begin{remark} It is worth pointing out that the falsification of the load reduction claimed by the customer may happen even though the load at the customer is being monitored constantly and accurately. Further, the DRA cannot find the `true' base load by considering the load used by a customer at some arbitrary time before the DR event. For one, this simply shifts the problem of customer manipulation of the load to an earlier time. Second, some of the increase in the base load may be due to true shifts in customer need due to, e.g., increased temperature. \end{remark} \subsection{Timeline} \begin{figure}[tb] \centering \includegraphics[width=9cm, height=4.5cm]{plot1} \caption{Timeline of the DR event and the proposed contract.} \label{pic_2} \end{figure} Consider $N$ customers denoted by $i=1, \cdots , N$ that are contracted with a DRA. We refer to the timeline shown in Figure \ref{pic_2} to explain the sequence of events. At time $t_1$, strategic customers anticipate that a DR event is likely to begin at time $t_{2}$. Accordingly, at this time, each customer $i$ calculates the effort $a_{i}$ she is willing to put in for the load reduction during the DR event. We assume that this effort costs the customer $h(a_i)$. Further, this effort leads to a reduction in the load by an amount $x_i$ that can depend on local conditions that are private knowledge for the $i$-th customer. For instance, a factory might be able to induce a large load reduction with a small effort based on its assembly line requirements given the orders it has to fulfill.
The DRA is assumed to know the probability density function $f(x_i|a_i)$ according to which $x_i$ is realized, while the customer knows the local conditions and can calculate the value of $x_i$ that will be realized. After this calculation, each customer $i$ at time $t_{1}$ may increase (or decrease) the load by an amount $I_i$ in anticipation of the DR event. \begin{asum} \label{assum11} The random variables describing the load reductions are conditionally independent given the actions taken by all the customers, so that \[ f(x_1,x_2,\cdots, x_N|a_1,a_2, \cdots, a_N)=f(x_1|a_1)f(x_2|a_2)\cdots f(x_N|a_N).\] \end{asum} \begin{asum} \label{asum3} For ease of computation, we assume that $x_i$ is a noisy signal of the action, i.e., \begin{equation} x_i=a_i+e_i, \label{ae} \end{equation} where the random variables $\{e_i\}$ are i.i.d with mean $m_e$ and variance $\sigma^2$. Assumption~\ref{assum11} can thus be stated as \[ f(e_1,e_2,\cdots, e_N)=f(e_1)f(e_2)\cdots f(e_N).\] We first consider the case when $m_e=0$ and extend the results to the case when $m_e\neq0$ in Section \ref{ex1}. \end{asum} At time $t_2$, the DR event begins and the DRA calls on the customers to decrease their loads. Each customer $i$ now makes the predetermined effort $a_i$ leading to a reduction of her load by $x_i$. The DR event ends at $t_3$ with each customer $i$ having reported that she decreased the load by an amount $R_i$. Note that the true reduction in the load for the $i$-th customer is $x_i= R_i-I_i$, while the false report\footnote{We wish to emphasize again that the load at the customer is being accurately monitored at all times.} is $R_i$. We also show the times $t_0$ and $t_4$ in the timeline in Figure \ref{pic_2}. At time $t_0$ (much before $t_{1}$), the contract specifying the payment structure is signed between the DRA and the customers. 
We assume that $t_{0}$ is sufficiently early, so that at $t_{0}$, the customers too do not know the local conditions and must consider their expected utility according to the probability density functions $f(x_i|a_i)$ (or, equivalently, $f(e_i)$). At time $t_4$, at least a part of the payment $P_i$ as specified by the contract to the customers is paid by the DRA to incentivize them to participate in the DR event. The time $t_{4}$ is sufficiently close to the DR event, so that the realized value of $x_i$ is not known at time $t_4$ to the DRA. The contract may specify that the rest of the payment is made at some later time $t_{5}$, when the DRA may have more knowledge of the true value of $x_i$. \subsection{Utility Functions} The effort cost suffered by the $i$-th customer for an effort $a_i$ is given by a function $h(a_{i})$ that is known to all the customers and the DRA. Further, for a true reduction $x_i$, if the customer manipulates her base load and reports the reduction to be $R_{i},$ she suffers a falsification cost $g_{i}(R_i-x_i)$. This can model, e.g., any extra payment incurred by the customer for boosting her consumption when she manipulates the load prior to the DR event. For simplicity, we assume that $g_{i}(R_{i}-x_{i}) = \beta_{i}\frac{(R_i-x_i)^2}{2},$ $\forall i,$ where $\beta_{i}>0$. Thus, with a payment $P_i$, the utility of the $i$-th customer is given by \[ V_i=-h(a_i)-\beta_{i}\frac{(R_i-x_i)^2}{2}+P_i,\qquad i=1,\cdots, N. \] The utility of the DRA is given by its net profit, which is the difference of the gross profit that occurs due to the load reduction by the customers and the payments made to the customers as part of the contract. For simplicity, we assume that the gross profit made due to reduction of load $x_i$ is equal to $x_i$; more complicated cases can be easily considered. With a total payment $\sum_{i=1}^{N} P_i$, the utility of the DRA is given by \begin{equation*} \Pi=\sum_{i=1}^{N} (x_i-P_i).
\end{equation*} \subsection{Problem Formulation} We assume that the customers and the DRA are rational and risk neutral, so that they seek to maximize the expected value of their utility functions. The problem we consider in this paper is for the DRA to design a contract that maximizes its own utility when rational customers choose \textit{actions} $\{a_i\}$ and \textit{reports} $\{R_i\}$ to optimize their own utility functions. Denote by $\mathcal X$ the set of random variables describing the actual load reductions generated by the customers, i.e., $\mathcal X \triangleq\{X_1, \cdots,X_{N}\}$ and by $\mathcal{E}$ the set $\{E_1, \cdots,E_{N}\}$. Further, denote by $\mathcal X_{-i}$ the set of random variables describing the load reductions of all customers except $i$, i.e., $\mathcal X_{-i}\triangleq \mathcal X\backslash \{X_i\}$ and by $\mathcal{E}_{-i}$ the set $\mathcal E \backslash \{E_i\}$. Thus, the optimization problem $\mathcal{P}_{1}$ to be solved by the DRA is given by \begin{equation*} \textrm{$\mathcal{P}_{1}$:} \begin{cases} &\underset{\{P_i\}}\max \E_\mathcal E[\Pi]\\ s.t. &\textrm{$\{a_{i}, R_{i}\}$ is chosen to maximize $\E_\mathcal E[V_i]$ by each}\\ &\textrm{ customer $i$}\\ &\textrm{individual rationality and incentive}\\&\textrm{ compatibility constraints for all the customers}. \end{cases} \end{equation*} As stated in problem $\mathcal{P}_1$, we impose two constraints on the contract. \paragraph{Individual rationality} We assume that the DRA cannot force customers to participate in the load reduction program due to political or social reasons. Instead, the contract should be individually rational so that a rational customer chooses to participate. We impose this constraint in the form of ex ante individual rationality.
This constraint requires that no customer chooses to walk away from the contract at time $t_0$ before she knows either her own load saving or the savings of the other customers; thus, $\E_\mathcal E [V_i]\geq 0, \forall{i}.$ \paragraph{Incentive compatibility} Incentive compatibility is a standard constraint imposed in mechanism design that is used to limit the space of the contracts we need to optimize over (see, e.g.,~\cite{myerson1979incentive}). Specifically, this constraint implies that the utility of the consumers does not increase if they calculate their report $R_i$ based on any arbitrary quantity other than the true value of their load reduction $x_i$. Further, this constraint also implies that without loss of generality, a customer with private information of load reduction $x_i$ would always prefer the payment $P_{i}(x_{i})$ over the alternatives $P_{i}(\hat{x}_i)$ for any $\hat{x}_i\neq x_i$. We make the following further assumptions: \begin{asum} \label{asum4} \begin{enumerate}[label=(\roman*)] \item \textbf{(Deterministic Policies)} The customers choose effort $a_i$ according to deterministic policies. Stochastic policies would imply additional stochasticity in $\mathcal{P}_{1}$ that we do not consider in this paper. \item \textbf{(Communication Structure)} Individual customers cannot communicate with each other, so that the load reduction $R_i$ claimed by the $i$-th customer as well as the true profit $x_i$ and hidden action $a_i$ for this customer are not known to the other customers. The DRA does not have access to $x_i$ and $a_i$ until possibly a much later time $t_{5}\gg t_{4}.$ \item \textbf{(Public Knowledge of Functional Forms)} The functional forms of $h_i$, $g_i$, the probability distribution functions $\{f(e_i)\}$, the weights $\{\beta_i\}$, and the contracts offered are known to all the customers and the DRA. \end{enumerate} \end{asum} We now proceed to present our solution to the problem $\mathcal{P}_{1}$.
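Before turning to the contract design, the ingredients above (the timeline, Assumption~\ref{asum3}, and the utility functions) can be exercised in a short simulation of one DR event. The sketch below is purely illustrative: the quadratic effort cost $h(a)=a^2/2$, the per-unit payment rule, and all numerical values are assumptions made here for concreteness and are not part of the formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(a):
    # Effort cost; the paper leaves h unspecified, so a quadratic form
    # is assumed here purely for illustration.
    return 0.5 * a**2

def simulate(a, beta, P_of_R, R_of_x, sigma=0.1):
    """Simulate one DR event for N customers.

    a, beta : arrays of efforts a_i and falsification-cost weights beta_i
    P_of_R  : payment rule mapping the reported reduction R_i to P_i
    R_of_x  : reporting strategy mapping the true reduction x_i to R_i
    """
    e = rng.normal(0.0, sigma, size=len(a))  # Assumption 2: x_i = a_i + e_i
    x = a + e                                # true load reductions
    R = R_of_x(x)                            # (possibly falsified) reports
    P = P_of_R(R)                            # payments made at t_4
    V = -h(a) - beta * (R - x) ** 2 / 2 + P  # customer utilities V_i
    Pi = float(np.sum(x - P))                # DRA utility
    return V, Pi

# Two truthful customers under a hypothetical per-unit payment P_i = 0.5 R_i:
V, Pi = simulate(a=np.array([1.0, 2.0]), beta=np.array([1.0, 1.0]),
                 P_of_R=lambda R: 0.5 * R, R_of_x=lambda x: x)
```

Any reporting strategy, including an inflating one such as `R_of_x=lambda x: x + 1.0`, can be plugged in to see how falsification redistributes utility between the customers and the DRA.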
\section{Structure of the Proposed Contract} \label{sec4} In this section, we present a contract as a solution of the problem $\mathcal{P}_{1}$. To this end, we begin by discussing why some intuitive contracts may fail. \subsection{Some Intuitive Contracts} For simplicity, in this section, we restrict our attention to the scenario when only one customer is present. For notational ease, when $N=1$, we drop the subscript $i$ referring to the $i$-th customer. \begin{example} Consider a contract that provides a constant payment $c$ to the customer for decreasing her load. Then, the utility function of the customer is given by: \begin{displaymath} V = cu(R)-\beta \frac{(R-x)^2}{2}-h(a), \end{displaymath} where $u(\cdot)$ is the unit step function. In this case, the customer, seeking to maximize her utility, will choose $a=0$ (i.e., no action) but $R=0^{+}$ (i.e., minimal load reduction reported irrespective of the true value of $x$), independently of the value of $c$. The utility function of the DRA is given by \begin{displaymath} \Pi=x-cu(R). \end{displaymath} Thus, if zero action leads to zero true load reduction, the DRA ends up making a payment in spite of not achieving any load reduction. Hence, this contract is unsuitable for the DRA. \end{example} The contract proposed in Example $1$ fails because it does not account for the fact that the amount of effort is known only to the customer and not the DRA. Since the effort is costly, this generates the problem of moral hazard~\cite[Chapter~4]{laf}. To induce a positive load reduction in spite of the presence of moral hazard, the contract must make at least part of the payment proportional to the amount of the load reduction. Otherwise, as discussed above, a rational customer will not choose any non-zero effort. \begin{example} Consider a contract in which the DRA provides an incentive $cR$ to the customer in response to the reported reduction $R$ at time $t_{4}$.
Then, the utility function of the customer is given by: \begin{displaymath} V = cR-\beta \frac{(R-x)^2}{2}-h(a), \end{displaymath} while the utility function of the DRA is given by \begin{displaymath} \Pi=x-cR. \end{displaymath} Especially if $\beta$ is small, this contract would result in the customer choosing $a=0$ and misreporting a large $R$ to maximize $\E[V]$. Once again, the contract will be unsuitable for the DRA. \end{example} The reason the contract in Example $2$ fails is that the DRA does not have access to the true load reduction $x$ at $t_{4}$ when it has to make at least part of the payment. This creates the problem of {\em adverse selection}~\cite[Chapter~3]{laf}. If the DRA relies on the reported value $R$ for the payment, this creates an incentive for the customer to misreport $R$ as high as possible to gain maximal payment (modulo the falsification cost). \begin{remark} \label{remark22} If the problem is one that displays only one of moral hazard or adverse selection, optimal contracts can be designed using standard methods from the literature. However, such contracts are unsuitable for the problem $\mathcal{P}_{1}$ since we face the problem of moral hazard followed by adverse selection. \end{remark} We conclude this discussion with the following result. \begin{theorem} \label{lem0} Assume that the DRA has accurate knowledge of the true load reduction $x_i$ at time $t_4$. \begin{itemize} \item The level of effort $a_i$ by the $i$-th customer which maximizes the utility of the DRA is given by \[ a_i^{\star}=\argmax_{a_i}\E[x_i-h(a_i)]. \] \item The DRA can ensure that the effort $a_i^{\star}$ is expended by each customer $i$ by offering a contract that specifies payments of the form \begin{equation} P_i=\E[ x_i- a_i^{\star}+h(a_{i}^{\star})]. \label{puremhz} \end{equation} \end{itemize} \end{theorem} \begin{proof} See Appendix. \end{proof} Next, we propose a contract structure for the problem $\mathcal{P}_{1}$ using a two-part payment structure.
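The failure in Example $2$ can also be quantified. For any realized $x$ and effort $a$, maximizing $V = cR-\beta(R-x)^2/2-h(a)$ over the report $R$ gives the first-order condition $c=\beta(R-x)$, i.e., an over-report of exactly $c/\beta$ on top of the true reduction, which grows without bound as the falsification cost weight $\beta$ shrinks. A minimal numerical check of this closed form, with illustrative parameter values:

```python
import numpy as np

def best_report(x, c, beta):
    # argmax over R of c*R - beta*(R - x)**2 / 2; the first-order
    # condition c - beta*(R - x) = 0 gives an inflation of c/beta.
    return x + c / beta

# Verify the closed form against a brute-force grid search.
x, c = 2.0, 1.0
grid = np.linspace(0.0, 200.0, 2_000_001)
for beta in (1.0, 0.1, 0.01):
    payoff = c * grid - beta * (grid - x) ** 2 / 2
    assert abs(grid[np.argmax(payoff)] - best_report(x, c, beta)) < 1e-3
```

Note that the inflation $c/\beta$ does not depend on the effort $a$, so this payment rule rewards misreporting rather than effort.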
\subsection{Proposed Contract Structure} The contract that the DRA offers to the customers should at once incentivize them to put in costly effort and to report the load reduction truthfully. We propose a contract in which the payment to the $i$-th customer is given by a pair of the form $\{B_{i}(R_i), \alpha_i\}$, where \begin{itemize} \item $B_{i}(R_i)$ is a bonus which is rewarded to the $i$-th customer at $t_{4}$ after the customer reports load reduction $R_i$, and \item $\alpha_i$ is the share of its own gross profit that the DRA realizes due to the demand reduction by the customers and pays back to the $i$-th customer at a much later time $t_{5}\gg t_{4}$. \end{itemize} Note that the payment of the share supposes that the DRA knows the profit it obtains as a result of the load reductions by the customers at time $t_5$. We first consider the case when this profit is known to the DRA perfectly. We then extend the results to the case when the gross profit can only be estimated (possibly with some error) in Section \ref{ex2}. The proposed contract results in the payment function for the $i$-th customer as given by \begin{equation} P_i = \alpha_i x_i+B_{i}(R_i). \label{c1} \end{equation} Further, the utility function of the customer can be written as \begin{equation} V_i=\alpha_i x_i+B_{i}(R_i)-h(a_i)-\beta_{i}\frac{(R_i-x_i)^2}{2}, \label{v1} \end{equation} while the utility function for the DRA is given by \begin{equation} \Pi = \sum _{i=1}^{N}(x_i-P_i)=\sum _{i=1}^{N}(1-\alpha_i)x_i- \sum _{i=1}^{N}B_{i}(R_i). \label{v2} \end{equation} By invoking the revelation principle~\cite{laf}, without loss of optimality, we restrict attention to direct mechanisms (where $\hat{x_i}=x_i$) that are incentive compatible. 
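The payment \eqref{c1} and the utilities \eqref{v1}--\eqref{v2} can be transcribed directly into code, which is convenient for the numerical checks that follow. The sketch below (with the quadratic effort cost $h(a)=a^2/2$ and all parameter values as illustrative assumptions) verifies the accounting identity that the DRA's utility plus the total payments equals the total realized load reduction.

```python
# Sketch of the two-part contract:
#   P_i = alpha_i*x_i + B_i(R_i)                    -- payment (c1)
#   V_i = P_i - h(a_i) - beta_i*(R_i - x_i)^2 / 2   -- customer utility (v1)
#   Pi  = sum_i (x_i - P_i)                         -- DRA utility (v2)
# h(a) = a^2/2 and the numbers below are illustrative assumptions.
def h(a):
    return a ** 2 / 2

def payment(alpha, x, bonus):
    return alpha * x + bonus

def customer_utility(alpha, x, bonus, a, R, beta):
    return payment(alpha, x, bonus) - h(a) - beta * (R - x) ** 2 / 2

customers = [dict(alpha=0.2, x=1.0, bonus=0.3, a=0.9, R=1.1, beta=1.0),
             dict(alpha=0.1, x=0.8, bonus=0.2, a=0.7, R=0.8, beta=2.0)]
total_pay = sum(payment(c["alpha"], c["x"], c["bonus"]) for c in customers)
dra_utility = sum(c["x"] for c in customers) - total_pay      # (v2)
```

By construction, the DRA's utility plus the payments exhausts the realized reduction $\sum_i x_i$; effort and falsification costs are borne by the customers only.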
Further, to emphasize the dependence of the utilities on the bonus function and the share, we will sometimes write $V_{i}$ as $V_{i}(B_{i}(R_{i}),\alpha_{i})$ and $\Pi$ as $\Pi(\{B_{i}(R_{i})\},\{\alpha_{i}\}).$ Finally, to ensure that the problem $\mathcal{P}_{1}$ is non-trivial with this contract, we will impose the following further constraints on the problem. \begin{asum} \label{asum1} The DRA does not provide all the profit back to the customers, i.e., $\sum _{i=1}^{N} \alpha_i < 1$. \end{asum} \begin{asum} \label{asum2} The bonus is always positive, i.e., $B_{i}(R_i)\geq0$. $B_{i}(R_i)<0$ would imply that the DRA can fine the customers, which we disallow in keeping with the individual rationality constraints. We will also assume that $B_{i}(R_i)$ is twice differentiable and concave in $R_i$ and, further, that $B_i(R_i)$ is designed such that $\E_{\mathcal{E}}\left[V_i(B(R^*_i), \alpha_i)\right]$ is concave in $a_i$, where $R^*_i$ is the optimal report by the $i$-th customer as a function of her load reduction. \end{asum} \section{Design of the Contract} \label{sec3} In this section, we design the optimal contract by solving the problem $\mathcal{P}_{1}$. We begin by exploring the design space in terms of identifying the properties that any contract should satisfy. \subsection{An Impossibility Result} The first question that arises is whether we can design a contract that incentivizes the customers not to misreport, i.e., to set $R_i=x_i$ for all $i$. While naive applications of the revelation principle may suggest that such contracts are not only possible, but that limiting our consideration to such contracts is without loss of generality, this is not the case if the revelation principle is interpreted properly in our context. Note that if a contract that ensures $R_i=x_i$ were possible, Lemma \ref{lem0} states that the optimal efforts as desired by the DRA are given by $\{a^{\star}_i\}_{i=1}^{N}$.
\begin{thm} \label{thfirst} Under Assumptions~\ref{asum1} and \ref{asum2}, there exists no contract for problem $\mathcal{P}_{1}$ which simultaneously guarantees elicitation of the truth from the customers (in the sense that $R_i=x_i$) and the realization of the efforts $\{a_i^{\star}\}$.\end{thm} \begin{proof} See Appendix. \end{proof} \subsection{Contract Design} We now design the payment schemes for the contract to solve problem $\mathcal{P}_{1}$. We solve problem $\mathcal{P}_{1}$ in three steps: \begin{enumerate} \item First, we characterize what the optimal value of the load reduction claimed by each customer would be for a given contract $(\alpha_i, B_{i}(R_i))$. Thus, we find the optimal value $R_i^*$ of $R_i$, as the solution of the problem \begin{align} R_i^* &= \argmax_{R_{i}} \E_{\mathcal{E}_{-i}}[V(B_{i}(R_i), \alpha_i )].
\label{Rstar} \end{align} \item Then, for this value $R_i^*$, we calculate the optimal effort $a_i^*$ exerted by the customers, i.e., we solve the problem \begin{equation} a_i^*= \argmax_{a_{i}} \E_{\mathcal{E}}[V(B_{i}(R_i^*), \alpha_i )]. \label{effstar} \end{equation} \item Finally, having characterized the response of the customers, we optimize the parameters of the proposed contract for the DRA when the customers exert the efforts $\{a_i^{*}\}$ and report the reductions $\{R_i^{*}\}$. Thus, we solve \begin{equation} \{B_{i}^*(.),\alpha^*_i\} = \argmax_{\{B_{i}(.)\},\{\alpha_{i}\}} \E_{\mathcal{E}}[\Pi(\{B_{i}(R_i^*)\}, \{\alpha_{i}\})]. \label{optpay}\end{equation} \end{enumerate} We continue with the following result on the first step. \begin{thm} \label{pr0} Consider the optimization problem $\mathcal{P}_1$. The optimal choice of the reported load reduction $R_i$ obtained as a solution to the problem \eqref{Rstar} is given by the solution to the following equation \begin{equation} R^*_i-x_i=\frac{1}{\beta_{i}}\frac{\partial \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}. \label{Rstar2} \end{equation} \label{pro3} \end{thm} \begin{proof} See Appendix. \end{proof} This result characterizes the optimal reporting by the customer. We note the following interesting feature. \begin{cor} With the payment scheme $P_i{=} \alpha_i x_i+B_{i}(R_i)$ in problem $\mathcal{P}_{1}$, if $B_{i}(R_i)$ is decreasing (respectively, increasing) in $R_i$, then the customer underreports (respectively, overreports) her true load reduction.\label{corr1} \end{cor} \begin{proof} The proof follows directly from \eqref{Rstar2}. \end{proof} \begin{remark} Since $B_{i}(R_{i})$ may be decreasing or increasing over different ranges of $R_i$, the optimal contract may induce both \textit{under-reporting} and \textit{over-reporting} of the load reduction by the customer.
In other words, for some values of the true load reduction, it is possible that a strategic customer may decrease her base load before the DR event and under-report her power reduction to maximize her profit. \end{remark} Next, we characterize the optimal effort by solving the problem \eqref{effstar}. \begin{thm} \label{pro22} Consider the optimization problem $\mathcal{P}_1$. The optimal choice of the effort $a_i$ is obtained as the solution of the equation \begin{equation} a_i^*=\alpha_i+\frac{\partial \E_{\mathcal{E}}\left[B_{i}(R^*_i)-\frac{1}{2\beta_i}\left(\frac{\partial B_{i}(R_i)}{\partial R_i}\big\rvert_{R_{i}=R_{i}^{*}}\right)^2\right]}{\partial a_i}\Bigg\rvert_{a_{i}=a_{i}^{*}}, \label{effortstar} \end{equation} where $R_{i}^{*}$ is as specified in Theorem~\ref{pr0}. \end{thm} \begin{proof} See Appendix. \end{proof} Finally, having characterized the response of the customers, the third step is to optimize the parameters of the proposed contract by solving \eqref{optpay}. \begin{thm} \label{proposition_opt_contract} Consider the problem formulation in Section \ref{sec1} and the optimization problem $\mathcal{P}_1$. The optimal choice of the share assigned to the $i$-th customer and the optimal bonus function are defined implicitly through the equations \begin{align} \label{optalfa} \alpha_i^*&=1-\frac{a_i^*+\frac{\partial \E_{\mathcal{E}}[B_{i}(R_i)]}{\partial \alpha_i}\Big\rvert_{\alpha_{i}=\alpha_{i}^{*}}}{\frac{\partial a^*_i}{\partial \alpha_i}\Big\rvert_{\alpha_{i}=\alpha_{i}^{*}} }\\ B_{i}^*(R_i^{*})&= \argmax_{B_{i}(.)}\left[ (1-\alpha^*_i)a_i^*-\E_{\mathcal{E}}[B_{i}(R_i^*)]\right], \label{bp3} \end{align} where $R_i^*$ and $a_i^*$ are evaluated using \eqref{Rstar2} and \eqref{effortstar}, respectively. \end{thm} \begin{proof} The proof follows directly from \eqref{optpay}. \end{proof} \subsection{Example Contracts} Notice that equations \eqref{Rstar2}-\eqref{bp3} do not constrain the choices of the contract terms or the resulting actions of the customers to be unique.
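The reporting rule \eqref{Rstar2} can be verified numerically for a concrete bonus. The sketch below uses the concave bonus $B(R)=R(\lambda-R)$ (which reappears later in the specified-load case), a single customer, and deterministic illustrative values; the first-order condition then yields the closed form $R^*=(\lambda+\beta x)/(\beta+2)$.

```python
# Numerical check of the reporting rule (Rstar2) for the concave bonus
# B(R) = R*(lam - R), single customer, deterministic x (illustrative values):
# the FOC  R - x = B'(R)/beta  gives  R* = (lam + beta*x)/(beta + 2).
def best_report(x, lam, beta):
    """Grid search for the report maximizing B(R) - beta*(R - x)^2/2."""
    grid = [k * 0.0005 for k in range(0, 20001)]          # reports R in [0, 10]
    return max(grid, key=lambda R: R * (lam - R) - beta * (R - x) ** 2 / 2)

lam, beta, x = 4.0, 1.0, 2.0
R_star = best_report(x, lam, beta)
```

For these values the grid search returns $R^*\approx(4+2)/3=2$, matching the closed form; with $\beta=1$, the same expression $R^*=(\lambda+x)/3$ reappears in the extensions of Section~\ref{sec5}.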
We now make more assumptions on the problem and provide some example contracts that result. We consider two scenarios: \begin{itemize} \item {\em Unspecified load reduction:} In the first scenario, we consider the case when the DRA is interested in making the overall load reduction from all the customers as large as possible. In this case, we propose a bonus function of the form $B_i(R_i)=\mu (R_i -R_0)$ for every customer $i$, where $R_0\geq0$ is a specified constant. \item {\em Specified load reduction:} In the second case, we assume that the DRA wishes the overall load reduction to be equal to a given value $\Gamma$. In this case, the customers are in competition with each other for the load reduction they provide and the consequent payment they obtain. Thus, we must consider the bonus function to customer $i$ to be a function of not only her own report $R_i$, but also the reports from other customers. Following the classical Cournot game \cite{cournot}, we propose a bonus function of the form $B_i(\{R_{i}\})=R_i(\lambda- \sum_{j=1}^{N}R_j)$, where $\lambda$ is a designer-specified parameter that depends on $\Gamma$. \end{itemize} \subsubsection{Unspecified load reduction} We begin with the case when the bonus function is of the form $B_i(R_i)=\mu (R_i -R_0)$ for all customers. In this case, there is no competition among the customers. Our first result says that we can simplify the incentive compatibility constraint. \begin{lem} \label{pro1} Consider the problem $\mathcal{P}_{1}$ such that $\forall i$, the bonus function does not depend on $R_{j}, j\neq i$ and is further of the form $B_i(R_i)=B(R_i)$. If the proposed contract structure in \eqref{c1} is incentive compatible, then it holds that $\forall i,$ \begin{equation*} \frac{\partial B(R_i)}{\partial x_i}=\beta_i (R_i-x_i)\frac{\partial R_i}{\partial x_i} \qquad \textrm{ and }\qquad \frac{\partial R_i}{\partial x_i}\geq0.
\end{equation*} In particular for the contract $B(R_i)=\mu (R_i-R_0)$, these conditions reduce to $$\frac{\partial R_i}{\partial x_i}(\mu-\beta_i (R_i-x_i))=0\qquad \textrm{ and }\qquad\frac{\partial R_i}{\partial x_i}\geq0.$$ \end{lem} \begin{proof} See Appendix. \end{proof} \begin{remark} The result implies that an incentive compatible contract will associate a higher load reduction $x_i$ with a higher report $R_i$. \end{remark} With this result, we can restate the problem to be solved by the DRA as \begin{equation*} \textrm{$\mathcal{P}_{2}$:} \begin{cases} &\underset{{\{\mu,\{\alpha_{i}\}\}}}\max \E_\mathcal E[\Pi]\\ s.t. &\textrm{$\{a_{i}, R_{i}\}$ is chosen to maximize $\E_\mathcal E[V_i]$ by each}\\ &\textrm{ customer $i$}\\ & \mu \geq 0\\ & \textrm{Individual rationality constraint: } \E[V_i]\geq0\\ & \textrm{Incentive compatibility constraints: } \mu=\beta_{i} (R_i-x_i)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \frac{\partial R_i}{\partial x_i}\geq0. \end{cases} \end{equation*} The following result summarizes the optimal contract and the resulting actions under it for the problem $\mathcal{P}_{2}$. \begin{thm} \label{pro44} Consider the problem $\mathcal{P}_2$ posed above. \begin{itemize} \item The optimal contract obtained as a solution to the problem is specified by the relations \begin{align*} \mu^*&=\frac{R_0 \beta_i}{2}\\ \alpha^*_i&=0.5-\mu^*. \end{align*} \item In response to this optimal contract, every customer $i$ over-reports her true load reduction as $R^*_i=x_i+\frac{\mu^*}{\beta_i }$. Further, the customer exerts the effort $a^*_i=\mu^*+\alpha_i^*.$ \end{itemize} \end{thm} \begin{proof} See Appendix. \end{proof} \begin{remark} Note that the constraint that $\alpha^*_{i}\geq 0$ implies the condition $R_0\beta_{i}\leq 1.$ \end{remark} \subsubsection{Specified load reduction} We now consider the case when $B_i(R_i, \sum _{j=1}^{N}R_j)=R_i(\lambda-\sum _{j=1}^{N}R_j)$.
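Returning briefly to the unspecified-load case, Theorem~\ref{pro44} can be sanity-checked numerically for a single customer. The sketch below substitutes the customer's best responses $a=\alpha+\mu$ and $\E[R]=a+\mu/\beta$ (which follow from \eqref{Rstar2} with $\E[x]=a$, per Assumption~\ref{asum3}) into the DRA's expected utility and grid-searches over $(\alpha,\mu)$; the values of $\beta$ and $R_0$ are illustrative.

```python
# Grid-search check of the unspecified-load contract for one customer: with
# the best responses a = alpha + mu and E[R] = a + mu/beta substituted in,
# the DRA's expected utility is (1-alpha)*a - mu*(E[R] - R0).  The maximizer
# should be mu* = R0*beta/2 and alpha* = 0.5 - mu*.
beta, R0 = 0.5, 1.0                   # illustrative parameter values

def dra_utility(alpha, mu):
    a = alpha + mu                    # customer's optimal effort
    ER = a + mu / beta                # expected report, E[R*] = E[x] + mu/beta
    return (1 - alpha) * a - mu * (ER - R0)

grid = [k * 0.01 for k in range(0, 101)]
alpha_hat, mu_hat = max(((al, m) for al in grid for m in grid),
                        key=lambda p: dra_utility(*p))
```

For $\beta=0.5$ and $R_0=1$, the grid search recovers $\mu^*=0.25$ and $\alpha^*=0.25$, as predicted by the closed forms.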
Once again, we can simplify the incentive compatibility constraint according to the following result. \begin{lem} \label{pro551} Consider the problem $\mathcal {P}_1$ with a bonus function for the $i$-th customer that depends on the report $R_{i}$ submitted by the $i$-th customer and the sum of the reports $\sum_{j=1}^{N} R_j$ submitted by all the customers. If the proposed contract is incentive compatible, then it holds that \begin{align*} &\frac {\partial {\E_{\mathcal{E}_{-i}}\left[B(R_i, \sum_{j=1}^{N} R_j)\right]}}{\partial {x_i}}= \beta_i(R_i-x_i)\frac {\partial {R_i}}{\partial {x_i}}\\ \nonumber &\frac{\partial R_i}{\partial x_i}\geq0. \end{align*}In particular for the contract $B_i(R_i, \sum _{j=1}^{N}R_j)=R_i(\lambda-\sum _{j=1}^{N}R_j)$, these conditions reduce to\begin{align} &\frac{\partial R_i}{\partial x_i}\left[\lambda+\beta_i x_i-(\beta_i+2)R_i-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}_{-i}}[R_j]\right]= R_i \frac{\partial \sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}_{-i}}[R_j]}{\partial x_i} \label{IC2}, \\\nonumber& \frac{\partial R_i}{\partial x_i}\geq0. \end{align} \end{lem} \begin{proof} See Appendix. \end{proof} We can now restate the problem $\mathcal{P}_{1}$ to be solved by the DRA as follows. \begin{equation*} \textrm{$\mathcal{P}_{3}$:} \begin{cases} &\underset{\{\alpha_{i}\}}\max \E_\mathcal E[\Pi]\\ s.t. &\textrm{$\{a_{i}, R_{i}\}$ is chosen to maximize $\E_\mathcal E[V_i]$ by each}\\ &\textrm{ customer $i$}\\ & \textrm{Individual rationality constraint: } \E[V_i]\geq0\\ & \textrm{Incentive compatibility constraints specified by \eqref{IC2}} \\ &\textrm{expected overall load reduction = }\Gamma. \end{cases} \end{equation*} Since the bonus paid to the $i$-th customer is a function not only of $R_i$, but also of the reports from the other customers, the customers compete against each other to gain the maximum compensation possible. Thus, the optimal strategies of the players become interdependent.
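A minimal sketch of this interdependence (assuming deterministic, symmetric reductions $x_i$ and no noise, which removes the expectations) solves the reporting game by best-response iteration: each first-order condition gives $R_i=(\lambda+\beta x_i-\sum_{j\neq i}R_j)/(\beta+2)$, whose symmetric fixed point is $R=(\lambda+\beta x)/(\beta+1+N)$. All parameter values below are illustrative.

```python
# Best-response (Jacobi) iteration for the reporting game under the
# Cournot-style bonus B_i = R_i*(lam - sum_j R_j).  Assumes deterministic,
# symmetric true reductions x_i (an illustration, not the stochastic setting).
N, beta, lam = 3, 1.0, 6.0
x = [1.0] * N                         # symmetric true reductions (assumed)
R = [0.0] * N
for _ in range(200):                  # converges here since (N-1)/(beta+2) < 1
    R = [(lam + beta * x[i] - (sum(R) - R[i])) / (beta + 2) for i in range(N)]

R_symmetric = (lam + beta * x[0]) / (beta + 1 + N)   # closed-form fixed point
```

The iteration is a contraction whenever $(N-1)/(\beta+2)<1$; for the illustrative values it converges to the symmetric report $R=7/5$, and each report satisfies its own first-order condition.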
We analyze this interdependence in the usual Nash Equilibrium sense. For the following result, we make the simplifying assumption that all the parameters $\alpha_i$'s and $\beta_i$'s are constants with $\alpha_i=\frac{\alpha}{N}$ and $\beta_i=\beta$, $\forall i$. We can now state the following result. \begin{thm} \label{theorem2} Consider the problem $\mathcal P_{3}$. Define the variables \begin{align*} F&= \frac{\beta (N-1)}{(\beta+1+N)(\beta+1)}\\ A&=\frac{\beta+F}{\beta+1+N}\\ B&=\frac{\beta(\beta+F)}{(\beta+1)(\beta+1+N)}\\ C&=1+\frac{2(\beta+F)^2}{(\beta+2)^2}-\frac{2(\beta+F)F}{(\beta+2)}+\frac{\beta(4-4F+F^2)}{(\beta+2)^2}\\ D&=\frac{(\frac{\beta}{\beta+1+N})^2}{C+(N-1)B}\\ E&=\frac{N}{\beta+1+N}. \end{align*} There is a unique Nash Equilibrium among the customers and the DRA as given by the following: \begin{enumerate}[label=(\roman*)] \item The DRA selects the contract as \begin{align} \label{eq:share_shared} \alpha_{i}^*&=\frac{1-\lambda[A(1-2ND)+\frac{\beta}{\beta+1+N}(1-2E)]}{2(1-ND)}\\ \lambda^*&=\frac{(C+(N-1)B)\Gamma-N\alpha_{i}^*}{AN}. \label{eq:bonus_shared} \end{align} \item The customers exert the optimal effort and report as \begin{align} \label{eq:action_shared} a_i^*&= \frac{\alpha_i+A\lambda-B\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} a_j }{C}=\frac{\alpha_i+A\lambda}{C+(N-1)B}\\ R^*_i&=\frac{\lambda-\left(G-F a_i^*\right)+\beta x_i}{\beta+2}, \label{eq:report_shared} \end{align} where $$ G=\frac{\lambda(N-1)}{\beta+1+N}+ \frac{\beta(\beta+2)\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} a_j^*}{(\beta+1)(\beta+1+N)}.$$ \end{enumerate} This equilibrium always exists. \end{thm} \begin{proof} See Appendix.
\end{proof} \begin{remark} To obtain the conditions for $\alpha_{i}^*\geq0$, we can substitute \eqref{eq:share_shared} in \eqref{eq:bonus_shared} to obtain \begin{align*} \Gamma&=N\frac{\alpha_i^*+A\frac{1-2\alpha_i^*(1-ND)}{A(1-2ND)+\frac{\beta (1-2E)}{\beta+1+N}}}{C+(N-1)B}\\ \Rightarrow\alpha_i^*&=\frac{A-\Gamma \frac{(C+(N-1)B)[A(1-2ND)+\frac{\beta(1-2E)}{\beta+1+N}]}{N}}{\frac{\beta (2E-1)}{\beta+1+N}+A}. \end{align*} Note that the denominator evaluates to \begin{align*} &\frac{\beta (2E-1)}{\beta+1+N}+A\\ &=\frac{\beta}{\beta+1+N} \frac{N-\beta-1}{\beta+N+1}+\frac{\beta+F}{\beta+1+N}\\ &=\beta\left(\frac{2N}{(\beta+N+1)^2}+\frac{(N-1)}{(\beta+1+N)^2(\beta+1)} \right)\\& \geq 0. \end{align*} Thus, the condition for $\alpha^*_i\geq0$ is refined to the choice of $\Gamma$, $N$ and $\beta$ which satisfy \[ \Gamma\leq \frac{NA} {(C+(N-1)B)[A(1-2ND)+\frac{\beta(1-2E)}{\beta+1+N}]}. \] The condition implies that as the desired load reduction $\Gamma$ increases, the number of customers $N$ that the DRA contracts with must increase as well. \end{remark} \begin{remark} \label{rem.int} Note that the optimal level of the effort $a_i^*$ expended by the $i$-th customer is an increasing function of both the assigned share $\alpha$ and the bonus parameter $\lambda$, which is intuitively satisfying. \end{remark} \subsection{Extensions} \label{sec5} Although the above development was carried out under some specific assumptions, the contracts can be generalized to remove many of these assumptions. We provide some examples below. For notational ease, we consider the case when $N=1$ and drop the subscript $i$ referring to the $i$-th customer. Further, we assume that the parameter $\beta=1$ and the bonus function is given by $B(R)=R(\lambda-R).$ \subsubsection {Realization error with non-zero mean} \label{ex1} The effort $a$ by the customer is assumed to lead to the realization of load reduction $x$.
As specified by Assumption \ref{asum3}, in the development so far, we assumed that the realization error $e=x-a$ is a random variable with mean zero. If, instead, the error has mean $m_{e}$, then the following result summarizes the optimal contract. \begin{pro} Consider the problem $\mathcal{P}_{3}$ for $N=\beta=1$ and mean $m_{e}$ of the realization error. \begin{itemize} \item The optimal contract is given by \begin{align*} \alpha^*&=\frac{7.5-3 \lambda+4.3m_e}{14}\\ \lambda^*&=5\Gamma-3\left(\alpha^*+m_{e}\right). \end{align*} \item The optimal effort and the report by the customer are given by \begin{align*} a^*&=\frac{3\alpha+ \lambda-2m_e}{5}\\ R^*&=\frac{\lambda+ x}{3}. \end{align*} \end{itemize} \end{pro} \begin{proof} The proof follows in a straightforward manner along the lines of that of Theorem~\ref{theorem2}. \end{proof} \begin{remark} Note that the optimal reporting function $R^*$ does not depend on $m_{e}$. Further, as $m_e$ increases (resp. decreases), \begin{itemize} \item the expected load saving for the same contract increases (resp. decreases), \item the optimal effort exerted by the customer is lower (resp. higher), \item the optimal value $\alpha^*$ of the share provided by the DRA to the customer will increase (resp. decrease). \end{itemize} \end{remark} \subsubsection {Inexact knowledge of the true load reduction} \label{ex2} So far, we assumed that at $t_{5}$, the DRA has an accurate knowledge of the true load reduction $x$ due to the customer. In practice, it may only be able to estimate this reduction by, e.g., large-scale data analysis on all similar customers on that day or the historical behavior of the same customer. Let the DRA observe a noisy estimate $y=x+n$ of the load reduction at $t_{5}$, where $n$ denotes the estimation error. We assume that this error is independent of $x$ and has mean $m_n$. In this case, the share of the profit assigned to the customer changes to $\alpha y$.
In other words, the utility functions of the customer and the DRA from \eqref{v1} and \eqref{v2} alter to \begin{align} \label{v1_new}V&=\alpha y+B(R)-h(a)-\beta\frac{(R-x)^{2}}{2}\\ \label{v2_new}\Pi&=(1-\alpha) y-B(R). \end{align} We have the following result that can be proved along the lines of Theorem~\ref{theorem2}. \begin{pro} Consider the problem $\mathcal{P}_{3}$ for $N=\beta=1$ and with $n$ denoting the error in estimating the load reduction $x$ at $t_{5}$, so that the utility functions of the customer and the DRA are given by~(\ref{v1_new}) and~(\ref{v2_new}). \begin{itemize} \item The optimal contract is given by \begin{align*} \alpha^*&=\frac{7.5-3 \lambda-12.5m_n}{14}\\ \lambda^*&=5\Gamma-3\alpha^*. \end{align*} \item The optimal effort and the report by the customer are given by \begin{align*} a^*&=\frac{3\alpha+ \lambda}{5}\\ R^*&=\frac{\lambda+ x}{3}. \end{align*} \end{itemize} \end{pro} \begin{remark} Note that the optimal reporting function $R^*$ and the optimal effort $a^*$ do not depend on $m_{n}$. Further, as $m_n$ increases (resp. decreases), the optimal value $\alpha^*$ of the share provided by the DRA to the customer decreases (resp. increases). \end{remark} \section{Illustration and Discussion} \label{illus} \begin{figure}[!htb] \centering \includegraphics[width=7cm, height=4cm]{fig1unsp.png} \caption{Expected utility of the DRA for various values of $\beta$ as a function of the number of customers $N$ for the unspecified load reduction scenario. } \label{fig:digraph} \end{figure} \begin{figure}[h!]
\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm, height=4cm] {udra.png} \caption{} \label{uDRA(N)} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm, height=4cm] {alphan.png} \caption{} \label{ALPHA-N} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm, height=4cm]{falsn.png} \caption{} \label{fals-N} \end{subfigure} \caption{Expected utility of the DRA (a), optimal value of the share to the customers (b), and expected value of the falsification by the customer (c) as a function of $N$ for various values of $\Gamma$, under the bonus function $B_i=R_i(\lambda-\sum _{j=1}^{N}R_j)$, given $\beta=1$.} \end{figure} We now present some illustrative numerical examples. We first consider the unspecified load reduction scenario. We set $R_{0}=1$ and assume that $\beta_{i}=\beta$, $\forall i$. Figure \ref{fig:digraph} presents the expected utility of the DRA with the optimal contract as presented in Theorem~\ref{pro44} for various values of $\beta$ as we vary the number of customers $N$ that the DRA contracts with. As shown in this figure, for a fixed value of $\beta$, the expected utility of the DRA increases linearly with the number of customers. This is intuitively satisfying since, as specified by Theorem~\ref{pro44}, for a fixed $\beta$, the effort invested by each customer under the optimal contract is a constant independent of $N$ or $\mu$. Further, we observe that for a fixed $N$, as $\beta$ increases, the expected utility of the DRA increases. In other words, for the same expected load reduction, the DRA needs to pay less to the customers. Note that satisfying the condition for $\alpha^* \geq 0$ precludes the choice of an arbitrarily large $\beta$ by the DRA. Next, we consider the specified load reduction scenario. We set $\beta=1$.
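As a complementary check on the closed forms in Theorem~\ref{theorem2}, the sketch below solves \eqref{eq:share_shared}--\eqref{eq:bonus_shared} as a linear system in $(\alpha_i^*,\lambda^*)$ for illustrative values $\beta=1$, $N=5$, $\Gamma=2$, reading the factor in the remark's derivation as $1-2\alpha_i^*(1-ND)$, and verifies that the induced total expected reduction $N a_i^*$ equals $\Gamma$.

```python
# Consistency check of the equilibrium closed forms: compute the constants
# A..F, solve (eq:share_shared)-(eq:bonus_shared) as a 2x2 linear system in
# (alpha_i*, lambda*), and verify that N*a_i* recovers Gamma.
beta, N, Gamma = 1.0, 5, 2.0          # illustrative parameter values

F = beta * (N - 1) / ((beta + 1 + N) * (beta + 1))
A = (beta + F) / (beta + 1 + N)
B = beta * (beta + F) / ((beta + 1) * (beta + 1 + N))
C = (1 + 2 * (beta + F) ** 2 / (beta + 2) ** 2
     - 2 * (beta + F) * F / (beta + 2)
     + beta * (4 - 4 * F + F ** 2) / (beta + 2) ** 2)
D = (beta / (beta + 1 + N)) ** 2 / (C + (N - 1) * B)
E = N / (beta + 1 + N)

S = C + (N - 1) * B                                    # shorthand constant
K = A * (1 - 2 * N * D) + beta * (1 - 2 * E) / (beta + 1 + N)

# (eq:share_shared):  2*(1 - N*D)*alpha + K*lambda = 1
# (eq:bonus_shared):  N*alpha + A*N*lambda = S*Gamma
a11, a12, b1 = 2 * (1 - N * D), K, 1.0
a21, a22, b2 = float(N), A * N, S * Gamma
det = a11 * a22 - a12 * a21
alpha_star = (b1 * a22 - b2 * a12) / det
lam_star = (a11 * b2 - a21 * b1) / det

# Closed form for alpha_i* obtained by eliminating lambda (the remark).
alpha_closed = (A - Gamma * S * K / N) / (A + beta * (2 * E - 1) / (beta + 1 + N))

a_star = (alpha_star + A * lam_star) / S               # (eq:action_shared)
total_reduction = N * a_star                           # should equal Gamma
```

Both routes give the same share, and the total expected reduction matches the target, as required by the last constraint of $\mathcal{P}_{3}$.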
Figure \ref{uDRA(N)} shows how the expected utility of the DRA varies as a function of the number of customers contracted by the DRA for various values of the total expected load reduction $\Gamma$ that the DRA desires. We can observe that for a small number of customers, the expected utility of the DRA is a decreasing function of $\Gamma$, while for large enough $N$, it is an increasing function of $\Gamma$. Intuitively, if the number of customers that the DRA has contracted with is too small, it must pay too high a compensation for realizing the desired load reduction. In fact, as the expected load reduction that it wishes increases, the expected utility of the DRA may become negative unless the number of customers is also increased. Once a sufficient number of customers have been contracted with, the total payment once again decreases with the number of customers. Once again, this does not imply that the number of customers can be increased arbitrarily, given the constraints of $\alpha^* \geq 0$ and individual rationality for the customers. Viewed alternatively, for the same number of customers, the total expected load reduction $\Gamma$ is bounded by these constraints. For the same setting, the optimal value of the share $\alpha^*$ assigned to the customer as a function of the number of customers for various values of $\Gamma$ is illustrated in Figure \ref{ALPHA-N}. The plot indicates that the optimal value of the share assigned to the customer by the DRA is always positive, as desired. Further, it is a decreasing function of the expected load reduction $\Gamma$ desired by the DRA. This plot illustrates that the constraint $\alpha^*>0$ imposes an upper bound on the accepted value of $\Gamma$. For a given value of $\Gamma$, the variation of the optimal share is non-intuitive, although it should be noted that for a large enough number of customers, the share converges to the same value.
Figure \ref{fals-N} displays the expected value of the falsification by each customer as a function of the number of customers for various values of $\Gamma$. Figure \ref{fals-N} implies that the expected value of the falsification decreases when the DRA chooses a larger $\Gamma$; in fact, it becomes negative for a high enough $\Gamma$. The negative falsification is interesting since it implies that the customer {\em under-reports} her load reduction. Note that a larger value of $\Gamma$ can be interpreted as a higher expected utility of the DRA. Thus, as the expected utility of the DRA increases, the customers under-report their true load reduction since they can gain more compensation through their shares. Finally, we illustrate the impact of the realization error $m_e$ and the estimation error $m_n$. We consider a single customer and set $\beta=1$. Figure \ref{payment-me} plots the expected payment by the DRA as a function of $m_{e}$. As can be seen, the pattern of variation is quite complex. For a large enough $m_{e}$, the expected load reduction by the customer is large. Thus, the payment through the shares dominates and the expected payment also increases. Figure \ref{payment-mn} plots the expected payment by the DRA as a function of $m_{n}$. The figure illustrates that as the mean of the error with which the DRA estimates the true load reduction increases, it increases the compensation it provides to the customer. In addition, as the DRA wishes to realize a larger value of $\Gamma$, the expected value of the compensation also becomes larger. This is expected since a higher value of $m_{n}$ implies that the DRA observes a higher load reduction compared to the one realized in practice; consequently, it rewards the customer more based on its own observation. \begin{figure}[h!]
\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm, height=4cm]{payment-me.png} \caption{} \label{payment-me} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm, height=4cm]{paymen-mn.png} \caption{} \label{payment-mn} \end{subfigure} \caption{The expected payment to the customer $(a)$ as a function of $m_e$ and $(b)$ as a function of $m_n$, for different values of $\Gamma$, given $N=1$. } \end{figure} \section{Conclusion and Future Directions} \label{concl} In this paper, we designed an optimal contract between a demand response aggregator (DRA) and power grid customers for incentive-based demand response. We considered a setting in which the DRA asks the customers to reduce their electricity consumption and compensates them for this demand curtailment. However, given that the DRA must supply every customer with as much power as she desires, a strategic customer can temporarily increase her base load to report a larger reduction as part of the demand response event. The DRA wishes to incentivize the customers both to make costly effort to reduce load and to not falsify the reported load reduction. We modeled this problem as a contract design problem and presented a solution. The proposed contract consists of a part that depends on the (possibly inflated) load reduction as reported to the DRA and another that provides a share of the profit that accrues to the DRA through the demand response event to the customers. The contract design, its properties, and the interactions of the customers under the contract were discussed and illustrated. The paper opens many directions for future work. We can consider the dynamic case when the customers need to be incentivized to participate in demand response repeatedly. Particularly interesting is the case when the customers also gain a signal about the total load reductions on a particular day and can alter their strategies accordingly.
Another problem that should be considered is when the DRA is able to observe only the sum of the profit due to the effort by multiple customers and thus the payment cannot be based on individual efforts of each customer. This may lead to the problem of `free-riding' in which some customers seek payment in spite of not putting in any effort, by relying on the efforts of other customers. \appendices \section{} \label{appendix_1} \textbf{Proof of Lemma \ref{lem0}} \begin{proof} If the DRA has accurate knowledge of the true load reduction $x_i$ at time $t_4,$ the utility of the DRA is given by $\Pi=\sum_{i=1}^{N}(x_i-P_i)$ and that of the $i$-th customer is given by $V_i=P_i-h(a_i)$. Thus, for any effort $a_{i}$ exerted by the customer, the payment $P_i=h(a_i)$ would solve the problem $\mathcal{P}_{1}$ (notice, in particular, that the individual rationality constraints will be satisfied with this payment). Substituting this payment in the utility of the DRA, we see that the level of effort $a_i$ by the $i$-th customer which maximizes the expected utility of the DRA is given by \[ a_i^{\star}=\argmax \E[x_i-h(a_i)]. \] Further, with this effort, the expected utility of the DRA is given by \[ \E[\Pi^{\star}]=\sum_{i=1}^{N}\left(a_{i}^{\star}-h(a_{i}^{\star})\right), \] where we have used Assumption~\ref{asum3}. However, this payment can be implemented only if the DRA could observe $a_{i}$. We show that even if $a_i$ is unobservable for the DRA, and it can only observe $x_i$ at time $t_4$, it can incentivize the customer to exert the same effort and realize the maximal utility $\E[\Pi^{\star}]$ for itself. To this end, consider the payment specified by \begin{equation*} P_i=x_i- a_i^{\star}+h(a_{i}^{\star}). \end{equation*} With this payment, the expected utility of the $i$-th customer can be written as \[ \E[V_i]=\E[x_i-a_i^{\star}+h(a_{i}^{\star})]-h(a_i)=\E[x_i-h(a_i)]-\left(a_i^{\star}-h(a_{i}^{\star})\right). 
\] Given the definition of $a_i^{\star}$, it is easy to see that the customer $i$ chooses $a_i=a_i^{\star}$ to maximize her expected utility. Further, the expected utility of the DRA is given by \[ \Pi=\sum_{i=1}^{N}\E[x_{i}-P_{i}]=\sum_{i=1}^{N}\E[a_i^{\star}-h(a_{i}^{\star})]=\E[\Pi^{\star}]. \] Thus, the expected utility of the DRA is maximized with this choice of the payment. \end{proof} \textbf{Proof of Theorem \ref{thfirst} } \begin{proof} We prove by contradiction. Suppose that there exist a bonus function $B_{i}(R_i)$ and an allocation $\{\alpha_{i}\}$ which simultaneously satisfy two conditions: (i) $\mathcal{C}_{1}:$ it incentivizes each customer $i$ to choose the strategy $R(x_i)=x_i$, and (ii) $\mathcal{C}_{2}:$ it incentivizes each customer to choose $a_i=a_i^{\star}$. By $\mathcal{C}_{1}$, the utility of the $i$-th customer is maximized if she reports $R_i=x_i$. Since the portion $\alpha_i x_i$ of the payment does not depend on $R_i,$ we can write \begin{align} \nonumber&\frac{\partial{\E_{\mathcal E_{-i}}[V_i]}}{\partial{R_i}}\biggr\rvert_{R_i=x_i}=\left[\frac{\partial \E_{\mathcal E_{-i}}[B_{i}(R_i)]}{\partial R_i}-\beta_i(R_i-x_i)\right]\biggr\rvert_{R_i=x_i}=0\\ \label{eq:proof_thm1_cond1}&\qquad\qquad\qquad\Rightarrow \frac{\partial \E_{\mathcal E_{-i}}[B_{i}(R_i)]}{\partial R_i}=0\\ \nonumber&\frac{\partial^2{\E_{\mathcal E_{-i}}[V_i]}}{\partial{R_i}^2}\biggr\rvert_{R_i=x_i}=\left[\frac{\partial ^2\E_{\mathcal E_{-i}}[B_{i}(R_i)]}{\partial R_i^2}-\beta_i\right]\biggr\rvert_{R_i=x_i}\leq0\\ \label{eq:proof_thm1_cond2}&\qquad\qquad\qquad\Rightarrow \frac{\partial ^2\E_{\mathcal E_{-i}}[B_{i}(R_i)]}{\partial R_i^2}\leq \beta_i.
\end{align} Equation~(\ref{eq:proof_thm1_cond1}) implies that \[\E_{\mathcal E_{-i}}\bigg[\frac{\partial B_{i}(R_i)}{\partial R_i}\bigg]=0,\] or, in turn, that \[\int_{\mathcal E_{-i}}\frac{\partial B_{i}(R_i)}{\partial R_i}f_{X_{-i}}dX_{-i}=0.\] Since this equation should hold for all $R_{i}$, we must have $B_{i}(R_i)=c$ for some constant $c$. In other words, the DRA provides a fixed compensation to the customer irrespective of what she reports. Further, with truthful reporting, Lemma~\ref{lem0} implies that the payment \begin{equation} \label{eq:temp_thm_1} P=\sum_{i=1}^{N}P_{i}=\sum_{i=1}^{N}\left(x_i- a_i^{\star}+h(a_i^{\star})\right) \end{equation} maximizes the utility of the DRA while ensuring the choice of the desired action $a_i^{\star}$ by the customers. The fact that $B_{i}(R_i)=c$ and that the payment is given by~(\ref{eq:temp_thm_1}) implies that the following conditions must be met \begin{equation*} B_{i}(R_i)=-\sum_{j=1}^{N} a_j^{\star}\qquad\textrm{ and }\qquad \sum_{i=1}^{N}\alpha_i=1. \end{equation*} However, this allocation violates Assumptions \ref{asum1} and \ref{asum2}. Thus, our supposition is wrong and there does not exist a payment function that simultaneously guarantees $\mathcal{C}_{1}$ and $\mathcal{C}_{2}.$ \end{proof} \textbf{Proof of Theorem \ref{pr0}} \begin{proof} We can write~(\ref{Rstar}) as \begin{multline*} R_i^*=\argmax_{R_{i}} \E_{\mathcal{E}_{-i}}[V_{i}(B_{i}(R_i), \alpha_i)]\\ =\argmax_{R_{i}}[\alpha_i x_i+\E_{\mathcal{E}_{-i}}[B_{i}(R_i)]-\beta_i \frac{(R_i-x_i)^2}{2}- h(a_i)]. \end{multline*} For optimality, we set \begin{align*} &\frac{\partial \E_{\mathcal{E}_{-i}}[V_{i}(B_{i}(R_i), \alpha_i)]}{\partial R_i}=0\\ \Rightarrow &\frac{\partial \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{ \partial R_i}-\frac{\beta_i}{2} \frac{\partial (R_i-x_i)^2}{ \partial R_i}=0\\ \Rightarrow &R^*_i-x_i=\frac{1}{\beta_{i}}\frac{\partial \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}.
\end{align*} Note that the second derivative satisfies \[ \frac{\partial^2 \E_{\mathcal{E}_{-i}}[V_i]}{\partial R_i^2}=\frac{\partial^2 \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{\partial R_i^2}-\beta_i<0, \] given the concavity of $B_{i}(R_i)$. Thus, the $R_i^*$ that satisfies \eqref{Rstar2} is indeed a maximizer. \end{proof} \textbf{Proof of Theorem \ref{pro22}} \begin{proof} The effort $a_i$ is chosen to maximize the utility of the customer, given that the optimal report is calculated as in~\eqref{Rstar2}. Thus, \begin{align*} a_i^*&=\argmax_{a_{i}} \E_{\mathcal{E}}\left[V_i(B_{i}(R^*_i), \alpha_i)\right]\\ &=\argmax_{a_{i}} \E_{\mathcal{E}}\left[\alpha_i x_i+B_{i}(R_i^{*})-\beta_i \frac{(R_i^{*}-x_i)^2}{2} - \frac{a_i^2}{2}\right]\\ &=\argmax_{a_{i}} \left[\alpha_i a_i+\E_{\mathcal{E}}\left[B_{i}(R_i^{*})-\frac{\beta_i}{2} (R_i^{*}-x_i)^2\right] - \frac{a_i^2}{2}\right], \end{align*} where we have used Assumption~\ref{asum3}. For optimality, we set \[ \frac{\partial \E_{\mathcal{E}}\left[V_i(B_{i}(R^*_i), \alpha_i)\right]}{\partial a_i}=0.\] This condition yields \begin{equation} \alpha_i+\frac{\partial \E_{\mathcal{E}}\left[B_{i}(R_i^{*})-\frac{\beta_i}{2} (R_i^{*}-x_i)^2\right]}{ \partial a_i}-a_i=0. \label{astar11} \end{equation} Using Theorem~\ref{pr0}, we can write this condition as \[a_i^*=\alpha_i+\frac{\partial \E_{\mathcal{E}}\left[B_{i}(R^*_i)-\frac{1}{2\beta_i}\left(\frac{\partial B_{i}(R_i)}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}\right)^{2}\right]}{ \partial a_i}\Bigg\rvert_{a_{i}=a_{i}^{*}}.\] Finally, given the concavity of $\E_{\mathcal{E}}\left[V_i(B_{i}(R^*_i), \alpha_i)\right]$ in $a_i$, we note that $a_i^*$ is a maximizer. \end{proof} \textbf{Proof of Lemma \ref{pro1}} \begin{proof} Equation~\eqref{v1} implies that the utility of the $i$-th customer, $V(B(R_i), \alpha_i)$, depends on the parameter $x_{i}$ through the report $R_{i}$ and the bonus $B(R_{i})$.
For an incentive-compatible contract, the utility of the $i$-th customer is maximized when she chooses to calculate her report (and consequently receive the bonus) based on $\hat{x}_i=x_i$. The envelope theorem thus implies that the optimal choice of the parameter should satisfy \begin{equation} \frac{dV(B(R_i), \alpha_i)}{d x_i}=\frac{\partial V(B(R_i), \alpha_i)}{\partial x_i}. \label{I1} \end{equation} We note that \begin{align*} & \frac{dV(B(R_i), \alpha_i)}{d x_i}\\ &= \frac{\partial V(B(R_i), \alpha_i)}{\partial B} \frac{\partial B(R(x_i))}{\partial x_i}+\frac{\partial V(B(R_i), \alpha_i)}{\partial R_i} \frac{\partial R(x_i)}{\partial x_i}\\&\qquad\qquad\qquad\qquad+\frac{\partial V(B(R_i), \alpha_i)}{\partial x_i} \frac{\partial x_i}{\partial x_i}\\ &= \frac{\partial B(R_i)}{\partial x_i}-\beta_i(R_i-x_i)\frac{\partial R_i}{\partial x_i}+\frac{\partial V(B(R_i), \alpha_i)}{\partial x_i}. \end{align*} Thus, \eqref{I1} yields \[ \frac{\partial B(R_i)}{\partial x_i}= \beta_i(R_i-x_i)\frac{\partial R_i}{\partial x_i}.\] To evaluate the second-order condition, we start from the first-order incentive compatibility condition \begin{equation*} \frac{dV(B(R_i), \alpha_i)}{d \hat{x}_i}\Bigg\vert_{\hat{x}_{i}=x_{i}}=0, \end{equation*} and differentiate both sides with respect to $x_{i}$ to obtain \[ \frac{\partial^2 V(B(R_i), \alpha_i)}{\partial \hat{x}_i^2} \Bigg\vert_{\hat{x}_{i}=x_{i}} \frac{\partial \hat{x}_i}{\partial x_i}\Bigg\vert_{\hat{x}_{i}=x_{i}}+ \frac{\partial^2 V(B(R_i), \alpha_i)}{\partial x_i \partial \hat{x}_i}\Bigg\vert_{\hat{x}_{i}=x_{i}}=0.
\] The second-order condition for the optimal choice of $\hat{x}_i$ implies that $$\frac{\partial^2 V(B(R_i), \alpha_i)}{\partial \hat{x}_i^2} \Bigg\vert_{\hat{x}_{i}=x_{i}} \leq 0.$$ Thus, we can write \begin{align*} &\frac{\partial^2 V(B(R_i), \alpha_i)}{\partial x_i \partial \hat{x}_i}\Bigg\vert_{\hat{x}_{i}=x_{i}}\geq 0\\ \Rightarrow& \frac{\beta_{i}}{2}\frac{\partial^2 (R_i-x_i)^{2}}{\partial x_i^2} \frac{\partial R(x_i)}{\partial x_i} \geq0 \\ \Rightarrow &\frac{\partial R(x_i)}{\partial x_i}\geq0. \end{align*} In particular, for the contract $B(R_i)=\mu \left(R_i-R_{0}\right)$, it is straightforward to see that these conditions reduce to \[\frac{\partial R_i}{\partial x_i}(\mu-\beta_i (R_i-x_i))=0, \quad \frac{\partial R_i}{\partial x_i}\geq0.\] \end{proof} \textbf{Proof of Theorem \ref{pro44}} \begin{proof} First, the optimal report is specified by Theorem~\ref{pr0}. Thus, according to \eqref{Rstar2}, it is given by \begin{align*} R^*_i&=x_{i}+\frac{1}{\beta_{i}}\frac{\partial \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}\\ &=x_i+\frac{\mu}{\beta_i}. \end{align*} Using this report, we can calculate the optimal effort exerted by the customer via Theorem~\ref{pro22}. Thus, from~(\ref{effortstar}), we have \begin{align*} a_i^*=&\alpha_i+\frac{\partial \E_{\mathcal{E}}\left[B_{i}(R^*_i)-\frac{1}{2\beta_i}\left(\frac{\partial B_{i}(R_i)}{\partial R_i}\big\rvert_{R_{i}=R_{i}^{*}}\right)^2\right]}{\partial a_i}\Bigg\rvert_{a_{i}=a_{i}^{*}}\\ =&\alpha_i+\frac{\partial \E[\mu (R_i^*-R_0)-\frac{\mu^2}{2\beta_i}]}{\partial a_i}\\=&\alpha_i+\frac{\partial \E[\mu (x_i+\frac{\mu}{\beta_i}-R_0)-\frac{\mu^2}{2\beta_i}]}{\partial a_i}\\ =&\mu+\alpha_i. \end{align*} The optimal contract can now be specified using Theorem~\ref{proposition_opt_contract}.
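The report and effort derived above can be cross-checked numerically. The following sketch is purely illustrative: the parameter values are made up (not from the paper), the cost is taken as $h(a_i)=a_i^2/2$, and $x_i=a_i+\epsilon_i$ with zero-mean noise, as in Assumption~\ref{asum3}. A brute-force grid search over efforts and report offsets recovers $R_i^*-x_i=\mu/\beta_i$ and $a_i^*=\mu+\alpha_i$:

```python
import numpy as np

# Illustrative parameter values (not from the paper).
alpha, beta, mu, R0 = 0.3, 2.0, 0.5, 1.0
rng = np.random.default_rng(0)
eps = rng.normal(0.0, 0.1, 2000)          # zero-mean noise, so E[x] = a

def expected_utility(a, offset):
    """Customer's expected utility when she reports R = x + offset,
    under the linear bonus B(R) = mu * (R - R0) and h(a) = a^2 / 2."""
    x = a + eps
    R = x + offset
    V = alpha * x + mu * (R - R0) - 0.5 * beta * (R - x) ** 2 - 0.5 * a ** 2
    return V.mean()

a_grid = np.linspace(0.0, 2.0, 81)        # grid of efforts a
o_grid = np.linspace(-1.0, 1.0, 81)       # grid of report offsets R - x
vals = np.array([[expected_utility(a, o) for o in o_grid] for a in a_grid])
ia, io = np.unravel_index(vals.argmax(), vals.shape)
print(a_grid[ia], o_grid[io])             # a* = mu + alpha, R* - x = mu / beta
```

For these values the grid maximizer lands at $a_i^*=\mu+\alpha_i=0.8$ and $R_i^*-x_i=\mu/\beta_i=0.25$, matching the closed forms above.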
First,~\eqref{optalfa} implies the relation \begin{align} \nonumber \alpha_i^*&=1-\frac{a_i^*+\frac{\partial \E_{\mathcal{E}}[B_{i}(R_i)]}{\partial \alpha_i}\Big\rvert_{\alpha_{i}=\alpha_{i}^{*}}}{\frac{\partial a^*_i}{\partial \alpha_i}\Big\rvert_{\alpha_{i}=\alpha_{i}^{*}} }\\ \nonumber &=1-(\mu^*+\alpha^*_i)-\frac{\partial \E_{\mathcal{E}}[\mu^* (x_i+\frac{\mu^*}{\beta_i}-R_0)]}{\partial \alpha_i}\\ &=0.5-\mu^*. \label{eq:temp_1} \end{align} On the other hand,~(\ref{bp3}) implies the relation \begin{align} \nonumber \mu^*&= \argmax\left[ (1-\alpha^*_i)a_i^*-\E_{\mathcal{E}}[B^*(R_i^*)]\right]\\ \nonumber&=\argmax\left[ (1-\alpha_i)(\mu+\alpha_i)-\E[\mu (x_i+\frac{\mu}{\beta_i}-R_0)]\right]\\ \nonumber &=\argmax\left[(1-\alpha_i)(\mu+\alpha_i)-\mu (\mu+\alpha_i+\frac{\mu}{\beta_i}-R_0)\right]\\ &=\frac{0.5-\alpha^*+\frac{R_0}{2}}{1+\frac{1}{\beta}}. \label{eq:temp_2} \end{align} From~(\ref{eq:temp_1}) and~(\ref{eq:temp_2}), solving for $\mu^*$ yields $\mu^*=\frac{c \beta_i}{2}.$ Finally, it is straightforward to check that the optimal contract satisfies all the constraints in the problem. \end{proof} \textbf{Proof of Lemma \ref{pro551}} \begin{proof} With the proposed bonus function, the utility of customer $i$ depends on the true load reduction by the other customers since $V_i$ is a function of $x_j$, where $j=1, \cdots, N$, $j\neq i$. Since Assumption \ref{asum4} states that customer $i$ does not have access to the load savings ${\mathcal{E}_{-i}}$ by other customers at the time of generating the report and obtaining the consequent bonus, the contract is incentive compatible if the expected utility of customer $i$ (with expectation taken with respect to ${\mathcal{E}_{-i}}$) is maximized when customer $i$ calculates the report based on her true load saving $x_{i}$.
Now, following the proof of Lemma~\ref{pro1}, we obtain the necessary and sufficient conditions for incentive compatibility as \begin{align} \label{second_incentive_first} &\frac {\partial {\E_{\mathcal{E}_{-i}}\left[B(R_i, \sum_{j=1}^{N} R_j)\right]}}{\partial {x_i}}= \beta_i(R_i-x_i)\frac {\partial {R_i}}{\partial {x_i}}\\ \nonumber &\frac{\partial R_i}{\partial x_i}\geq0. \end{align} In particular, for the contract $B_i(\cdot)=R_i(\lambda-\sum_{j=1}^{N}R_j)$, it is straightforward to see that these conditions reduce to \begin{align*} &\frac{\partial R_i}{\partial x_i}\left[\lambda+\beta_i x_i-(\beta_i+2)R_i-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}_{-i}}[R_j]\right]= R_i \frac{\partial \sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}_{-i}}[R_j]}{\partial x_i}, \\& \frac{\partial R_i}{\partial x_i}\geq0. \end{align*} \end{proof} \textbf{Proof of Theorem \ref{theorem2}} \begin{proof} We first prove that the contract structure, the effort, and the report specified in the theorem statement constitute a Nash equilibrium, and then show that the equilibrium exists and is unique. To this end, we start by identifying the optimal report as specified by Theorem~\ref{pr0} with the specified bonus function and the assumption $\beta_{i}=\beta$.
\textit{Proof of~(\ref{eq:report_shared}):} We have \begin{align} \nonumber R^*_i&=x_{i}+\frac{1}{\beta}\frac{\partial \E_{\mathcal{E}_{-i}}[B_{i}(R_i)]}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}\\ \nonumber &=x_{i}+\frac{1}{\beta}\frac{\partial \E_{\mathcal{E}_{-i}}[R_i(\lambda-R_i-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N}R_j)]}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}\\ \nonumber &=x_{i}+\frac{1}{\beta}\frac{\partial\left(R_i(\lambda-R_i-\E_{\mathcal{E}_{-i}}[\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N}R_j])\right)}{ \partial R_i}\bigg\rvert_{R_{i}=R^*_{i}}\\ \nonumber &=x_{i}+\frac{1}{\beta}\left(\lambda-R_i^{*}-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N}\E_{\mathcal{E}_{-i}}[R_j]- R_i^{*}\right)\\ \label{14}\Rightarrow R^{*}_{i} &=\frac{\lambda-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}_{-i}}[R_j]+\beta x_i}{\beta+2}, \end{align} where we have used the fact that according to Assumption~\ref{asum4}, $R_{j}$ is a function of $x_{j}$ only and $\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N}\E_{\mathcal{E}_{-i}}[R_j]$ is not a function of $R_i.$ We take the expectation of both sides of~(\ref{14}) with respect to $\mathcal{E}$ to obtain \begin{align} \nonumber\E_{\mathcal{E}}(R_i^*)&=\frac {\lambda-\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}}(R_j^*)+\beta a_i}{\beta+2}\\ &=\frac {\lambda-\sum\limits_{j=1}^{N} \E_{\mathcal{E}}(R_j^*)+\beta a_i}{\beta+1}\label{ERN}\\ \Rightarrow \sum\limits_{j=1}^{N} \E_{\mathcal{E}}(R_j^*)&= \frac{N\lambda +\beta \sum\limits_{j=1}^{N} a_j }{\beta+1+N}.\label{ERB} \end{align} Subtracting \eqref{ERN} from \eqref{ERB} thus yields \begin{align} \nonumber &\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} \E_{\mathcal{E}}(R_j^*)\\ \nonumber&=\frac{\lambda(N-1)}{\beta+1+N}+\frac{\beta}{(\beta+1)}\left( \frac{(\beta+2)\sum\limits_{j=1}^{N} a_j-a_i(\beta+1+N) }{(\beta+1+N)}\right)\\ \nonumber&=\frac{\lambda(N-1)}{\beta+1+N}+\frac{\beta}{(\beta+1)}\left(\frac{(\beta+2)\sum\limits_{\substack{{j=1}\\{j\neq 
i}}}^{N} a_j-(N-1)a_i }{(\beta+1+N)}\right)\\ &=G-Fa_{i}.\label{1j} \end{align} Substituting this value in \eqref{14}, we obtain~(\ref{eq:report_shared}). \textit{Proof of~(\ref{eq:action_shared}):} The optimal choice of effort $a^*_i$ is specified by Theorem~\ref{pro22}. With the given bonus function and the assumptions $\beta_{i}=\beta$ and $\alpha_{i}=\alpha/N,$ we obtain \begin{align*} a^*_i&=\argmax_{a_{i}} \E_{\mathcal{E}}\left[\frac{\alpha}{N} x_i+ R^*_i(\lambda-\sum\limits_{j=1}^{N} R^*_j)-\frac{\beta}{2}(R^*_i-x_i)^2-\frac{a_i^2}{2}\right]\\ &=\argmax_{a_{i}} \left(\frac{\alpha}{N} a_i+ \E_{\mathcal{E}}\left[R^*_i(\lambda-\sum\limits_{j=1}^{N} R^*_j)-\frac{\beta}{2}(R^*_i-x_i)^2\right]-\frac{a_i^2}{2}\right). \end{align*} Using the first-order condition to evaluate the optimal choice of $a_i,$ we set \begin{align} \nonumber a_i^{*}-\frac{\alpha}{N}&=\frac{\partial{\E_{\mathcal{E}} \left[R_i^*(\lambda-\sum\limits_{j=1}^{N} R^*_j)\right]}}{\partial a_i}\Bigg\vert_{a_{i}=a_i^{*}}\\&\qquad\qquad-\frac{\beta}{2}\frac{\partial{\E_{\mathcal{E}}\left[(R_i^*-x_i)^2\right]}}{\partial a_i}\Bigg\vert_{a_{i}=a_i^{*}}. \label{EVV} \end{align} We evaluate the terms on the right-hand side as follows.
\begin{align} \nonumber&\frac{\partial{\E_{\mathcal{E}} \left[R_i^*(\lambda-\sum\limits_{j=1}^{N} R^*_j)\right]}}{\partial a_i}\\ \nonumber&=\lambda\frac{\partial{\E_{\mathcal{E}} \left[R_i^*\right]}}{\partial a_i}-\frac{\partial{\E_{\mathcal{E}} \left[\left(R_i^*\right)^{2}\right]}}{\partial a_i}-\frac{\partial{\E_{\mathcal{E}} \left[R_i^*\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} R^*_j\right]}}{\partial a_i}\\ &=\lambda\frac{\partial{\E_{\mathcal{E}} \left[R_i^*\right]}}{\partial a_i}-\frac{\partial{\E_{\mathcal{E}} \left[\left(R_i^*\right)^{2}\right]}}{\partial a_i}-\frac{\partial{\E_{\mathcal{E}} \left[R_i^*\right]\E_{\mathcal{E}}\left[\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} R^*_j\right]}}{\partial a_i}\label{eq:ind_expectations_1}\\ \nonumber&=\lambda\frac{\partial{\E_{\mathcal{E}} \left[\frac{\lambda-\left(G-F a_i\right)+\beta x_i}{\beta+2}\right]}}{\partial a_i}-\frac{\partial{\E_{\mathcal{E}} \left[\left(\frac{\lambda-\left(G-F a_i\right)+\beta x_i}{\beta+2}\right)^{2}\right]}}{\partial a_i}\\ &\qquad-\frac{\partial{\E_{\mathcal{E}} \left[\frac{\lambda-\left(G-F a_i\right)+\beta x_i}{\beta+2}\right]\left(G-Fa_{i}\right)}}{\partial a_i} \label{eq:ind_expectations_2}\\ \nonumber&=\lambda\frac{F +\beta}{\beta+2}-\frac{2\left((F+\beta)(\lambda-G)+(F+\beta)^{2}a_i\right)}{\left(\beta+2\right)^{2}}\\ &\qquad+ \frac{\left(\lambda-G+\left(F +\beta\right)a_i\right)F+\left(G-Fa_{i}\right)\left(F +\beta\right)}{\beta+2} \label{eq:ind_expectations_3} \end{align} where~(\ref{eq:ind_expectations_1}) follows from Assumption~\ref{assum11} and the fact that the report $R_{i}$ does not depend on $x_{j}$ for $j\neq i$, (\ref{eq:ind_expectations_2}) follows from~(\ref{eq:report_shared}) and~(\ref{1j}), and~(\ref{eq:ind_expectations_3}) follows from Assumption~\ref{asum3} and straightforward algebraic manipulation.
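The best-response report~(\ref{14}) that enters these expectations admits a quick numerical sanity check. In the sketch below (the parameter values are illustrative, not from the paper), the other customers' expected reports are frozen at a constant $S=\sum_{j\neq i}\E[R_j]$ and the report-dependent part of the expected utility is maximized by brute force:

```python
import numpy as np

# Illustrative values only; S = sum_{j != i} E[R_j] is held fixed.
lam, beta, x_i, S = 3.0, 2.0, 0.6, 1.1

def report_utility(R):
    # Report-dependent part of customer i's expected utility under the
    # shared bonus B_i = R_i (lam - sum_j R_j), plus the misreporting penalty.
    return R * (lam - R - S) - 0.5 * beta * (R - x_i) ** 2

R_grid = np.linspace(-2.0, 4.0, 60001)
R_num = R_grid[np.argmax(report_utility(R_grid))]
R_closed = (lam - S + beta * x_i) / (beta + 2.0)   # closed form, eq. (14)
print(R_num, R_closed)
```

The grid maximizer agrees with the closed form $R_i^*=(\lambda-S+\beta x_i)/(\beta+2)$.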
Similar manipulation yields \begin{equation} \frac{\partial{\E_{\mathcal{E}}\left[(R_i^*-x_i)^2\right]}}{\partial a_i}=\frac{(2(F-2)(\lambda-G)-8F a_i+8a_i+ 2F^2 a_i)}{(\beta+2)^2}. \label{eq:ind_expectations_4} \end{equation} Substituting \eqref{eq:ind_expectations_3} and \eqref{eq:ind_expectations_4} in \eqref{EVV} and solving for $a_{i}^{*}$ yields \begin{align} \label{p44} a_i^*&=\frac{\frac{\alpha}{N}+\frac{\beta+F}{\beta+2}(\lambda-G)}{1+\frac{2(\beta+F)^2}{(\beta+2)^2}-\frac{2(\beta+F)F}{(\beta+2)}+\frac{\beta(4-4F+F^2)}{(\beta+2)^2}}\\ \label{aopt12}&=\frac{\frac{\alpha}{N}+A\lambda-B\sum\limits_{\substack{{j=1}\\{j\neq i}}}^{N} a_j^* }{C}. \end{align} Summing \eqref{aopt12} over $i=1,\cdots,N,$ we obtain \begin{align*} (C+(N-1)B)\sum_{j= 1}^{N} a^*_j &=N(\frac{\alpha}{N}+A\lambda)\\ \Rightarrow \sum_{j= 1}^{N} a^*_j =\frac{N(\frac{\alpha}{N}+A\lambda)}{C+(N-1)B}&=\frac{\alpha+NA\lambda}{C+(N-1)B}. \end{align*} Substituting in \eqref{aopt12} finally yields~(\ref{eq:action_shared}). The second-order condition implies that $a_{i}^*$ is indeed a maximizer. \textit{Proof of~(\ref{eq:bonus_shared}):} Given~(\ref{eq:action_shared}) and Assumption~\ref{asum3}, we have that the overall load reduction \begin{align*} \Gamma&=\sum_{i=1}^{N}\E_{\mathcal{E}}\left[x_{i}\right]\\ &=Na_{i}^*\\ &=N\frac{\alpha_i^*+A\lambda^*}{C+(N-1)B}. \end{align*} A simple rearrangement yields~(\ref{eq:bonus_shared}). \textit{Proof of~(\ref{eq:share_shared}):} Given~(\ref{eq:bonus_shared}), the optimal share is specified by~(\ref{optpay}). With the specified bonus function, we have \begin{align} \nonumber \alpha^*_i&= \argmax_{\alpha_{i}} \E_{\mathcal{E}}\left[\Pi\left(\{B_{i}(R_i^*)\}, \{\alpha_i^*\}\right)\right]\\ &= \frac{1}{N}\argmax_{\alpha}\E_{\mathcal{E}}\left[(1-\frac{\alpha}{N})\sum_{i=1}^{N}x_i^*-\sum_{i=1}^{N}R^*_{i} (\lambda^{*}- \sum_{j=1}^{N}R^*_{j})\right].
\label{ps34} \end{align} From \eqref{14} and \eqref{ERB}, we can write \begin{align} \nonumber \sum_{i=1}^{N}R_{i}^*=&\frac{N\lambda^*-(N-1)\sum\limits_{j=1}^{N} \E\left[R_j^*\right]+\sum\limits_{j=1}^{N}\beta x_j}{\beta+2}\\ =&E\lambda^*+B_1\sum_{j= 1}^{N} a^*_j +C_1\sum_{j= 1}^{N} x_j, \label{ps341} \end{align} where $E=\frac{N}{\beta+1+N}$, $C_1=\frac{\beta}{\beta+2}$, and $B_1=-\frac{(N-1)\beta}{(\beta+2)(\beta+1+N)}$. Substituting~(\ref{ps341}) in \eqref{ps34}, we obtain \begin{align} \nonumber \alpha^*_i&=\frac{1}{N}\argmax_{\alpha}\Biggl((1-\frac{\alpha}{N}) \sum_{j= 1}^{N} a^*_j \\ \nonumber&- \E_{\mathcal{E}}\Biggl[ \left(E\lambda^*+B_1\sum_{j= 1}^{N} a^*_j +C_1\sum_{j= 1}^{N} x_j\right)\\&\qquad\qquad\left((1-E)\lambda^*-B_1\sum_{j= 1}^{N} a^*_j-C_1\sum_{j= 1}^{N} x_j\right)\Biggr]\Biggr). \label{w95} \end{align} The first-order condition implies that $\alpha^*$ is given by the equation \begin{multline*} -\frac{\sum_{j= 1}^{N} a^*_j}{N}+\frac{1}{C+(N-1)B}\Bigl[(1-\frac{\alpha^*}{N})\\+(B_1+C_1)\left(-\lambda^*+2(E\lambda^* (B_1+C_1)\sum_{j= 1}^{N} a^*_j)\right)\Bigr]=0, \end{multline*} which yields \begin{align} \nonumber \alpha_{i}^*&=\frac{1-\lambda^*[A(1-2ND)+(B_1+C_1)(1-2E)]}{2(1-ND)}\\ \nonumber &=\frac{1-\lambda^*[A(1-2ND)+\frac{\beta(1-2E)}{\beta+1+N}]}{2(1-ND)}. \end{align} \textit{Proof of optimality of the contract:} $\alpha_{i}^*$ and $\lambda_{i}^*$ have been chosen to satisfy~(\ref{optpay}) and the constraint on the total load reduction. It is easy to verify that the individual rationality and incentive compatibility constraints are met. Thus, the contract is optimal in the sense of solving Problem $\mathcal{P}_{3}$. That the Nash equilibrium always exists and is unique is clear from the above derivation of the contract and the optimal actions and reports. \end{proof} \bibliographystyle{IEEEtran}
\section{Introduction} Integrability has been the driving force behind the recent years' progress in the study of the spectral problem of the $AdS_5/CFT_4$ correspondence~\cite{Minahan:2002ve,Beisert:2003tq, Bena:2003wd}. Integrability is conjectured to hold in all sectors to all loop orders~\cite{Beisert:2003tq,Beisert:2005fw} and impressive tests involving quantities extrapolating from weak to strong coupling have been performed~\cite{Beisert:2006ez, Beisert:2006ib,Basso:2007wd,Bajnok:2008bm}. Recently a novel explicit example of a gauge/string duality of type $AdS_4/CFT_3$ has emerged~\cite{Aharony:2008ug}, and one could hope that integrability would play an equally important role there. So far, however, integrability is on a much less firm footing in the $AdS_4/CFT_3$ correspondence. The gauge theory dilatation generator has been proved to be integrable in the scalar sector at leading two-loop order~\cite{Minahan:2008hf,Bak:2008cp} and the string theory has been proved to be classically integrable in certain subsectors~\cite{Stefanski:2008ik,Arutyunov:2008if,Gomis:2008jt}. Investigations probing integrability at the quantum level of the string theory have been carried out in various regimes such as the BMN limit~\cite{Nishioka:2008gz,Gaiotto:2008cg,Grignani:2008is}, the giant magnon regime~\cite{Grignani:2008is,Grignani:2008te} and the near BMN and near flat-space limits~\cite{Astolfi:2008ji,Kreuzer:2008vd}. There exist conjectures about integrability of the full $AdS_4/CFT_3$ system in all sectors to all loops~\cite{Gromov:2008qe} and a number of tests have come out affirmative~\cite{Astolfi:2008ji,Ahn:2008aa,McLoughlin:2008ms,McLoughlin:2008he} but certain problems still seem to require resolution~\cite{McLoughlin:2008he}. The spectral information constitutes only one part of the information encoded in the gauge and string theories. Eventually, one would like to go beyond the spectral problem and study interacting string theory and, correspondingly, non-planar gauge theory.
A widespread expectation is that integrability cannot persist beyond the planar limit. In reference~\cite{Beisert:2003tq} a way to characterize and quantify the deviation from integrability was presented for ${\cal N}=4$ SYM. In this case one observed at the planar level some a priori unexpected degeneracies in anomalous dimensions between certain pairs of operators with opposite parity. These degeneracies could be explained by the existence of an extra conserved charge and thus eventually by the integrability of the theory. When non-planar corrections were taken into account these degeneracies were found to disappear. Notice, however, that the degeneracies observed at planar one-loop order persisted when planar higher loop corrections were taken into account. This observation was in fact the seed that led to the conjecture about all loop integrability of ${\cal N}=4$ SYM~\cite{Beisert:2003tq}. In the present paper we will study non-planar corrections to ${\cal N}=6$ superconformal Chern--Simons--matter theory, the three-dimensional field theory entering the $AdS_4/CFT_3$ correspondence, in order to investigate whether one observes a similar lifting of spectral degeneracies related to integrability when one goes beyond the planar level. Our investigations will be carried out in the $SU(2)\times SU(2)$ sector at two-loop level and will thus not rely on or involve any conjectures. Using a method based on effective vertices we will derive the full two-loop dilatation generator in this sector involving all non-planar corrections. 
For short operators the action of this dilatation generator can easily be written down, resulting in a mixing matrix of low dimension which can be diagonalized explicitly.\footnote{For ${\cal N}=4$ SYM, explicit diagonalization at the non--planar level for a range of operators of this type was carried out in \cite{Beisert:2003tq}, see also \cite{Bellucci:2004ru}.} Another type of operators for which the mixing matrix can easily be written down are BMN--type operators~\cite{Berenstein:2002jq} which contain a large (infinite) number of background fields and a small (finite) number of excitations. We will look into the nature of the BMN quantum mechanics~\cite{Beisert:2002ff} of ${\cal N}=6$ superconformal Chern--Simons--matter theory and will find that in the BMN scaling limit the two-loop ${\cal N}=6$ theory resembles the one loop ${\cal N}=4$ SYM theory. Away from the scaling limit the ${\cal N}=6$ dilatation generator has additional terms. The mixing problem of the BMN limit of ${\cal N}=4$ SYM was never solved beyond the planar limit even perturbatively in $\frac{1}{N}$ due to complications arising from huge degeneracies in the planar spectrum~\cite{Freedman:2003bh}. A third type of operators one could dream of studying beyond the planar limit are operators dual to spinning strings. Such operators typically contain $M$ excitations and $J$ background fields where $J,M\rightarrow \infty$ with $\frac{M}{J}$ finite. For such operators, however, acting with the dilatation generator involves evaluating infinitely many terms and writing down the dilatation generator exactly seems intractable. In reference~\cite{Casteill:2007td} it was suggested that non-planar corrections to operators dual to spinning strings could be treated using a coherent state formalism. Non-planar effects in ${\cal N}=6$ superconformal Chern--Simons--matter theory should reflect interactions in the dual type IIA string theory. 
Directly comparable quantities are, however, not straightforward to write down, not least because the $AdS_4/CFT_3$ duality implies the following relation between the string coupling constant and the gauge theory parameters~\cite{Aharony:2008ug} \begin{equation} g_s=\frac{\lambda^{5/4}}{N}. \end{equation} This should be compared to the similar relation for ${\cal N}=4$ SYM, which took the form $g_s=\frac{\lambda}{N}$ and at least gave the hope that interacting BMN string states could be studied by perturbative gauge theory computations. The comparison between the perturbative non-planar gauge theory and the interacting string theory, described in terms of light cone string field theory on a plane wave, however, remained inconclusive. For a recent review, see~\cite{Grignani:2006en}. It is thus primarily with the purpose of investigating the role of integrability beyond the planar limit, and the structural similarities and differences between ${\cal N}=4$ SYM and ${\cal N}=6$ superconformal Chern--Simons--matter theory, that we engage in the present investigation. We start in section~\ref{summary} by giving an ultra-short summary of ${\cal N}=6$ superconformal Chern--Simons--matter theory, i.e.\ ABJM theory. Subsequently in section~\ref{derivation1} we derive the full two-loop dilatation generator in the $SU(2)\times SU(2)$ sector, deferring the details to Appendix~\ref{derivation2}. After a short discussion of the structure of the dilatation generator in section~\ref{structure} we explain in section~\ref{charges} the relation between planar degeneracies and conserved charges. Then we proceed to apply the dilatation generator to short operators in section~\ref{short} and to BMN operators in section~\ref{BMNsection}, respectively. Finally, section~\ref{conclusion} contains our conclusion. \section{ABJM theory \label{summary}} Our notation will follow that of references~\cite{Benna:2008zy,Bak:2008cp}.
ABJM theory is a three-dimensional superconformal Chern--Simons--matter theory with gauge group $U(N)_k\times U(N)_{-k}$ and $R$-symmetry group $SU(4)$. The parameter $k$ denotes the Chern--Simons level. The fields of ABJM theory consist of gauge fields $A_m$ and $\bar{A}_m$, complex scalars $Y^I$ and Majorana spinors $\Psi_I$, $I\in \{1,\ldots, 4\}$. The two gauge fields belong to the adjoint representation of the two $U(N)$'s. The scalars $Y^I$ and the spinors $\Psi_I$ transform in the $N\times \bar{N}$ representation of the gauge group and in the fundamental and anti-fundamental representation of $SU(4)$, respectively. For our purposes it proves convenient to write the scalars and spinors explicitly in terms of their $SU(2)$ component fields, i.e.~\cite{Benna:2008zy} \begin{eqnarray} Y^I &= &\{Z^A,W^{\dagger A}\}, \hspace{0.7cm} Y^\dagger_I=\{Z_A^\dagger,W_A\},\nonumber \\ \Psi_I&=& \{\epsilon_{AB}\,\xi^B\, e^{i\pi/4}, \epsilon_{AB}\,\omega^{\dagger B}\, e^{-i\pi/4}\}, \nonumber\\ \Psi^{I \dagger}& = &\{-\epsilon^{AB}\,\xi_B^\dagger\, e^{-i\pi/4}, -\epsilon^{AB}\,\omega_B\, e^{i\pi/4} \}, \nonumber \end{eqnarray} where now $A,B\in \{1,2\}$. Expressed in terms of these fields the action reads \begin{eqnarray} S &= &\int d^3x \left [\frac{k}{4\pi} \epsilon^{m n p} \mbox{Tr} ( A_m \partial_n A_p+\frac{2i}{3} A_m A_n A_p )- \frac{k}{4\pi} \epsilon^{m n p} \mbox{Tr} ( \bar{A}_m \partial_n \bar{A}_p+\frac{2i}{3} \bar{A}_m \bar{A}_n \bar{A}_p ) \nonumber \right. \\ && \left. - \mbox{Tr} ( {\cal D}_m Z)^\dagger {\cal D}^m Z-\mbox{Tr} ({\cal D}_m W)^\dagger {\cal D}^m W +i \mbox{Tr} \xi^\dagger {\cal D}\hspace{-0.3cm}\slash \hspace{0.13cm} \xi +i\mbox{Tr} \omega^\dagger {\cal D}\hspace{-0.3cm}\slash \hspace{0.13cm}\omega -V^{ferm}-V^{bos}\right].
\nonumber \end{eqnarray} Here the covariant derivatives are defined as \begin{equation} {\cal D}_m Z^A = \partial_m Z^A + i A_m Z^A-i Z^A \bar{A}_m, \hspace*{0.7cm}{\cal D}_m W_A = \partial_m W_A + i \bar{A}_m W_A-i W_A A_m, \end{equation} and similarly for ${\cal D}_m \xi^B$ and ${\cal D}_m \omega_B$. The bosonic as well as the fermionic potential can be separated into D-terms and F-terms which read \begin{equation} \nonumber \begin{split} V^{ferm}_D =& \frac{2\pi i}{k} \mbox{Tr} \left[ (\xi^A \xi_A^\dagger\!-\!\omega^{\dagger A} \omega_A)(Z^BZ_B^\dagger\!-\!W^{\dagger B}W_B) \!-\! (\xi_A^\dagger \xi^A\!-\!\omega_A \omega^{\dagger A})(Z_B^\dagger Z^B\!-\!W_BW^{\dagger B} ) \right] \\ & +\frac{4\pi i}{k} \mbox{Tr}\left[ (\xi^A Z_A^\dagger\!-\!W^{\dagger A} \omega_A)(Z^B \xi_B^\dagger\!-\!\omega^{\dagger B} W_B) \!-\! (Z_A^\dagger \xi^A\!-\!\omega_A W^{\dagger A})(\xi_B^\dagger Z^B\!-\!W_B \omega^{\dagger B}) \right], \end{split} \end{equation} \begin{equation}\nonumber \begin{split} V_F^{ferm}&=\frac{2\pi}{k} \epsilon_{AC} \epsilon^{BD}\, \mbox{Tr}\Big[ 2\xi^A W_B Z^C \omega_D\!+\!2\xi^A \omega_B Z^C W_D\!+\!Z^A \omega_B Z^C \omega_D \!+\!\xi^A W_B \xi^C W_D\Big] \\ & \!\!\!\!+\!\frac{2\pi}{k} \epsilon^{AC}\epsilon _{BD}\, \mbox{Tr} \left[ 2 \xi^\dagger_A W^{\dagger B} Z_C^\dagger\omega^{\dagger D} \!+\!2\xi_A^\dagger \omega^{\dagger B} Z_C^\dagger W^{\dagger D} \!+\!Z_A^\dagger \omega^{\dagger B} Z_C^\dagger \omega^{\dagger D}\!+\!\xi_A^\dagger W^{\dagger B}\xi_C^\dagger W^{\dagger D}\right], \end{split} \end{equation} \begin{equation} \begin{split} V_D^{bos}= \left(\frac{2\pi}{k}\right)^2 \mbox{Tr} &\left[ \left(Z^A Z_A^\dagger + W^{\dagger A} W_A\right) \left(Z^BZ_B^\dagger-W^{\dagger B}W_B\right) \left(Z^C Z_C^\dagger-W^{\dagger C}W_C\right)\right.\\ &+ \left(Z_A^{\dagger} Z^A + W_A W^{\dagger A}\right) \left(Z_B^\dagger Z^B -W_B W^{\dagger B}\right) \left(Z_C^\dagger Z^C - W_C W^{\dagger C}\right) \\ & - 2 Z_A^\dagger \left(Z^BZ_B^\dagger-W^{\dagger B}W_B 
\right) Z^A \left(Z_C^\dagger Z^C - W_C W^{\dagger C}\right) \\ &\left. -2 W^{\dagger A}\left(Z_B^\dagger Z^B -W_B W^{\dagger B}\right) W_A \left(Z^C Z_C^\dagger-W^{\dagger C}W_C\right) \right] \end{split} \end{equation} and \begin{equation} \begin{split} V_F^{bos}= -\left(\frac{4\pi}{k} \right)^2 \mbox{Tr} &\left[ W^{\dagger A} Z_B^\dagger W^{\dagger C} W_A Z^B W_C -W^{\dagger A} Z_B^\dagger W^{\dagger C}W_C Z^B W_A \right. \\ & \left. +Z_A^\dagger W^{\dagger B} Z_C^\dagger Z^A W_B Z^C-Z_A^\dagger W^{\dagger B} Z_C^\dagger Z^ C W_B Z^A \right]. \label{VFbos} \end{split} \end{equation} Introducing a 't Hooft parameter for the theory \begin{equation} \lambda=\frac{4\pi N}{k}, \end{equation} one can consider the 't Hooft limit \begin{equation} N\rightarrow \infty, \hspace{0.7cm} k\rightarrow \infty, \hspace{0.7cm} \lambda\,\, \mbox{ fixed.} \end{equation} Furthermore, the theory has a double expansion in $\lambda$ and $\frac{1}{N}$. In this paper we will be interested in studying non-planar effects for anomalous dimensions at the leading two-loop level. 
\section{The derivation of the full dilatation generator \label{derivation1}} \begin{figure}[t] \begin{center} \begin{picture}(80,140)(0,0) \put(0,10){ \BBox(0,10)(60,8) \SetColor{Blue} \Line(30,10)(30,90) \CArc(60,10)(50,125,180) \CArc(0,10)(50,0,55) \CArc(60,90)(50,180,235) \CArc(0,90)(50,-55,0) \SetColor{Green} \Vertex(30,50){2} \Text(30,-5)[c]{(a)} } \end{picture} \begin{picture}(80,140)(0,0) \put(0,10){ \BBox(0,10)(60,8) \SetColor{Blue} \Line(10,10)(10,90) \Line(52,10)(52,90) \SetColor{BrickRed} \PhotonArc(31,22)(35,51,129){2}{6} \PhotonArc(31,78)(35,-129,-51){-2}{6} \SetColor{Green} \Vertex(10,50){2} \Vertex(52,50){2} \Text(30,-5)[c]{(b)} } \end{picture} \begin{picture}(80,140)(0,0) \put(0,10){ \BBox(0,10)(60,8) \SetColor{Blue} \Line(10,10)(10,90) \Line(52,10)(52,90) \SetColor{Black} \DashCArc(31,22)(35,51,129){2} \DashCArc(31,78)(35,-129,-51){2} \SetColor{Green} \Vertex(10,50){2} \Vertex(52,50){2} \Text(30,-5)[c]{(c)} } \end{picture} \begin{picture}(80,140)(0,0) \put(0,10){ \BBox(0,10)(60,8) \SetColor{Blue} \Line(10,10)(10,35) \Line(50,10)(50,35) \Line(10,35)(50,35) \Line(30,70)(10,90) \Line(30,70)(50,90) \SetColor{BrickRed} \Photon(10,35)(30,70){2}{4} \Photon(50,35)(30,70){-2}{4} \SetColor{Green} \Vertex(30,70){2} \Vertex(10,35){2} \Vertex(50,35){2} \Text(30,-5)[c]{(d)} } \end{picture} \caption{The four types of two--loop diagrams contributing to anomalous dimensions. For operators in the $SU(2)\times SU(2)$ sector diagrams in class (d) do not contribute.}\label{Figure} \end{center} \end{figure} In~\cite{Minahan:2008hf,Bak:2008cp} an expression for the planar dilatation generator acting on operators of the type \begin{equation} {\cal O}=\mbox{Tr}(Y^{A_1} Y_{B_1}^\dagger Y^{A_2} Y_{B_2}^\dagger\ldots Y^{A_L} Y_{B_L}^\dagger), \end{equation} where $A_i,B_i \in \{1,2\}$ was derived and proved to be identical to the Hamiltonian of an integrable alternating $SU(4)$ spin chain. 
Here we will restrict ourselves to considering scalar operators belonging to an $SU(2)\times SU(2)$ sub-sector, i.e.\ operators of the following type \begin{equation} \label{operators} {\cal O}= \mbox{Tr}\left(Z^{A_1} W_{B_1} \ldots Z^{A_L} W_{B_L} \right), \end{equation} and their multi-trace generalizations. For this class of operators we wish to derive the full dilatation generator including non-planar contributions. In order to do so we employ the method of effective vertices from reference~\cite{Beisert:2002bb}. An effective vertex is a vertex which encodes the combinatorics of a given type of Feynman diagram. For instance, the scalar D-terms give rise to the following effective vertex contributing to the dilatation generator acting on operators of the type given in eqn.~\rf{operators} \begin{equation} \label{VD} \begin{split} \left(V_D^{bos}\right)^{eff}= \gamma \, :\,\mbox{Tr} &\left[ \left(Z^A Z_A^\dagger + W^{\dagger A} W_A\right) \left(Z^BZ_B^\dagger\!-\!W^{\dagger B}W_B\right) \left(Z^C Z_C^\dagger\!-\!W^{\dagger C}W_C\right)\right.\\ & +\left(Z_A^{\dagger} Z^A + W_A W^{\dagger A}\right) \left(Z_B^\dagger Z^B \!-\!W_B W^{\dagger B}\right) \left(Z_C^\dagger Z^C \!-\! W_C W^{\dagger C}\right) \\ & \!-\! 2 Z_A^\dagger \left(Z^BZ_B^\dagger\!-\!W^{\dagger B}W_B \right) Z^A \left(Z_C^\dagger Z^C \!-\! W_C W^{\dagger C}\right) \\ & \left. \!-\!2 W^{\dagger A}\left(Z_B^\dagger Z^B \!-\!W_B W^{\dagger B}\right) W_A \left(Z^C Z_C^\dagger\!-\!W^{\dagger C}W_C\right) \right] : \end{split} \end{equation} where each daggered field is supposed to be contracted with a field inside ${\cal O}$, the omission of self-contractions of the vertex being encoded in the normal-ordering symbol $:\,\, :$. All contractions of $(V_D^{bos})^{eff}$ with the operator ${\cal O}$ multiply the same Feynman integral whose value we denote as $\gamma$. The relevant integral is represented by the Feynman diagram in Fig.~1a.
The dilatation generator also gets contributions from the bosonic $F$-terms, gluon exchange (Fig.~1b), fermion exchange (Fig.~1c) and scalar self-interactions~\cite{Minahan:2008hf,Bak:2008cp}. Notice, however, that for operators belonging to the $SU(2)\times SU(2)$ sector there are no contributions involving paramagnetic interactions (Fig.~1d). If things work as in ${\cal N}=4$ SYM, the contribution from the D-terms in the sixth-order scalar potential should cancel against contributions from gluon exchange, fermion exchange and self-interactions to all orders in the genus expansion. We show explicitly in Appendix A that this is indeed the case. The full two-loop dilatation generator thus takes the form \begin{equation} \label{normalVFbos} D=:V_F^{bos}: \end{equation} It is easy to see that the dilatation generator vanishes when acting on an operator consisting of only two of the four fields from the $SU(2)\times SU(2)$ sector. Accordingly we will denote two of the fields, say $Z_1$ and $W_1$, as background fields and $Z_2$ and $W_2$ as excitations. It is likewise easy to see that operators with only one type of excitation, say $W_2$'s, form a closed set under dilatations. For operators with only $W_2$-excitations the dilatation generator takes the form \begin{equation} \begin{split} D= -\left(\frac{4\pi}{k} \right)^2 :\mbox{Tr} &\left[ W^{\dagger 2} Z_1^\dagger W^{\dagger 1} W_2 Z^1 W_1 -W^{\dagger 2} Z_1^\dagger W^{\dagger 1}W_1 Z^1 W_2 \right. \\ & \left. \ +W^{\dagger 1} Z_1^\dagger W^{\dagger 2} W_1 Z^1 W_2 -W^{\dagger 1} Z_1^\dagger W^{\dagger 2}W_2 Z^1 W_1 \right]: \label{oneexcitation} \end{split} \end{equation} In the case of two different types of excitations, i.e.\ both $W_2$'s and $Z_2$'s, the dilatation generator has 16 terms. It is obtained from the one in~\rf{oneexcitation} by adding similar terms with 1 and 2 interchanged and subsequently adding the same operator with $Z$ and $W$ interchanged.
In both cases $D$ is easily seen to reduce to that of~\cite{Minahan:2008hf,Bak:2008cp} in the planar limit \begin{equation} D_{planar}\equiv \lambda^2 D_0= \lambda^2 \sum_{k=1}^{2L} (1-P_{k,k+2}), \end{equation} where $P_{k,k+2}$ denotes the permutation between sites $k$ and $k+2$ and $2L$ denotes the total number of fields inside an operator. As explained in~\cite{Minahan:2008hf,Bak:2008cp}, this is the Hamiltonian of two Heisenberg magnets living respectively on the odd and the even sites of a spin chain. The two magnets are coupled via the constraint that the total momentum of their excitations should vanish, which is needed to ensure the cyclicity of the trace. \section{The structure of the dilatation generator \label{structure}} As proved in the previous section and in Appendix~\ref{derivation2} the two-loop dilatation generator in the $SU(2)\times SU(2)$ sector takes the form given in eqn.~\rf{normalVFbos}. When acting on a given operator we have to perform three contractions as dictated by the three hermitian conjugate fields. It is easy to see that by acting with the dilatation generator one can change the number of traces in a given operator by at most two.\footnote{Acting with the dilatation generator involves performing three contractions. Performing the first of these does not change the number of traces. Each of the subsequent contractions, on the other hand, can lead to an increase or decrease of the trace number by one.} More precisely, the two-loop dilatation generator has the expansion \begin{equation} D= \lambda^2 \left( D_0+\frac{1}{N} D_+ +\frac{1}{N} D_- +\frac{1}{N^2}D_{00}+ \frac{1}{N^2} D_{++}+\frac{1}{N^2} D_{--}\right). \label{Dexpansion} \end{equation} Here $D_+$ and $D_{++}$ increase the number of traces by one and two, respectively, and $D_-$ and $D_{--}$ decrease the number of traces by one and two. Finally, $D_0$ and $D_{00}$ do not change the number of traces.
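Since the planar piece $D_0$ is just a pair of Heisenberg magnets, its spectrum is easy to examine numerically for short chains. The following {\tt Python} sketch (our illustration, not part of the derivation; it diagonalizes $D_0$ on the full spin-chain Hilbert space and ignores the zero-momentum constraint implied by cyclicity of the trace) treats an operator of total length $2L=6$:

```python
import numpy as np

def swap_bits(state, i, j):
    # exchange the spin (bit) values at sites i and j of a basis state
    bi, bj = (state >> i) & 1, (state >> j) & 1
    return state if bi == bj else state ^ ((1 << i) | (1 << j))

def planar_dilatation(n_sites):
    # D0 = sum_k (1 - P_{k,k+2}) on a cyclic chain of n_sites SU(2) sites,
    # i.e. two decoupled Heisenberg magnets on the odd and even sublattices
    dim = 2 ** n_sites
    D = np.zeros((dim, dim))
    for k in range(n_sites):
        for s in range(dim):
            D[s, s] += 1.0
            D[swap_bits(s, k, (k + 2) % n_sites), s] -= 1.0
    return D

D0 = planar_dilatation(6)          # an operator of total length 2L = 6
evals = np.linalg.eigvalsh(D0)
```

For $2L=6$ each sublattice is a three-site Heisenberg chain with spectrum $\{0,3\}$, so $D_0$ should have eigenvalues $\{0,3,6\}$, the lowest one containing the protected (chiral-primary-like) states.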
We notice that in ${\cal N}=4$ SYM the two-loop dilatation generator in the $SU(2)$ sector has a similar expansion~\cite{Beisert:2003tq} whereas the much-studied one-loop dilatation generator involves only two contractions and does not contain any $\frac{1}{N^2}$ terms~\cite{Kristjansen:2002bb,Constable:2002hw,Beisert:2002bb,Constable:2002vq}. Let us assume that we have found an eigenstate of the planar dilatation generator $D_0$, i.e. \begin{equation} D_0 |{\cal O}\rangle = E_{\cal O} |{\cal O} \rangle, \end{equation} and let us treat the terms sub-leading in $\frac{1}{N}$ as a perturbation. First, let us assume that there are no degeneracies between $n$-trace states and $(n+1)$-trace states in the spectrum or that the perturbation has no matrix elements between such degenerate states. If that is the case, we can proceed by using non-degenerate quantum mechanical perturbation theory. Clearly, the leading $\frac{1}{N}$ terms do not have any diagonal components, so the energy correction for the state $|{\cal O}\rangle$ reads: \begin{equation} \delta E_{\cal O}= \frac{\lambda^2}{N^2}\,\sum_{{\cal K}\neq {\cal O}} \frac{\langle {\cal O} | D_+ + D_-| {\cal K}\rangle \langle{\cal K}|D_+ + D_-|{\cal O}\rangle}{E_{\cal O}-E_{\cal K}} +\frac{\lambda^2}{N^2}\, \langle {\cal O}| D_{00}| {\cal O}\rangle. \end{equation} If there are degeneracies between $n$-trace states and $(n+1)$-trace states, we have to diagonalize the perturbation in the subset of degenerate states and the corrections will typically be of order $\frac{1}{N}$. \section{Planar parity pairs, conserved charges and integrability \label{charges}} In the previous sections we derived the two--loop non--planar dilatation generator for the $SU(2)\times SU(2)$ sector and analyzed its structure. From the work of \cite{Minahan:2008hf,Bak:2008cp} we know that the planar part of the dilatation generator can be identified as the Hamiltonian for an integrable $SU(2)\times SU(2)$ spin chain.
It is then interesting to ask what happens to integrability once non-planar corrections are taken into account. One approach to answering this question is to consider \emph{planar parity pairs}, as we will now review. As part of their analysis of the dilatation generator of ${\cal N}=4$ SYM, the authors of \cite{Beisert:2003tq} considered its action on short scalar operators. They observed an a priori unexpected degeneracy in the resulting spectra, between operators with the same trace structure but opposite \emph{parity}, where the latter is defined as the operation that reverses the order of all generators within each trace (in other words, complex conjugation of the gauge group generators)~\cite{Doikou:1998jh}. Parity commutes with the action of the dilatation generator (and is thus a conserved quantity); one therefore expects that the various operators will organize themselves into distinct sectors according to their (positive or negative) parity. Positive and negative parity sectors do not mix with each other and there is no reason to expect any relation between their spectra. However, in \cite{Beisert:2003tq} it was observed that whenever there exist operators which have the same trace structure and belong to the same global $SO(6)$ representation but have opposite parity, their \emph{planar} anomalous dimensions turn out to be equal. This degeneracy can be simply understood as a consequence of parity symmetry and planar integrability: Recall that one of the hallmarks of integrability is the existence of a tower of commuting conserved charges $Q_n$ (the Hamiltonian $Q_2$ being just one of them). For the ${\cal N}=4$ SYM spin chain there exists such a charge $Q_3$ which (being conserved) commutes with the dilatation generator but \emph{anticommutes} with the operation of parity. This clearly implies the existence of pairs of operators with opposite parity and equal anomalous dimension at the planar level.
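The interplay between $Q_3$ and parity is easy to exhibit numerically for a single periodic Heisenberg chain. The sketch below (our illustration; we use the standard expression $Q_3\propto\sum_i [P_{i,i+1},P_{i+1,i+2}]$ for the third charge of the integrable chain) checks that $Q_3$ commutes with the Hamiltonian while anticommuting with site reversal:

```python
import numpy as np

def swap_matrix(n_sites, i, j):
    # permutation matrix P_{ij} exchanging the spins at sites i and j
    dim = 2 ** n_sites
    P = np.zeros((dim, dim))
    for s in range(dim):
        bi, bj = (s >> i) & 1, (s >> j) & 1
        t = s if bi == bj else s ^ ((1 << i) | (1 << j))
        P[t, s] = 1.0
    return P

n = 5                                   # a single periodic SU(2) chain of 5 sites
P = [swap_matrix(n, i, (i + 1) % n) for i in range(n)]
H = sum(np.eye(2 ** n) - p for p in P)  # H = sum_i (1 - P_{i,i+1})
Q3 = sum(P[i] @ P[(i + 1) % n] - P[(i + 1) % n] @ P[i] for i in range(n))

# site-reversal (parity) operator R: site i -> n-1-i
R = np.zeros((2 ** n, 2 ** n))
for s in range(2 ** n):
    t = 0
    for i in range(n):
        t |= ((s >> i) & 1) << (n - 1 - i)
    R[t, s] = 1.0
```

One finds $[H,Q_3]=0$, $RHR=H$ and $RQ_3R=-Q_3$, which is exactly the algebra behind the pairing argument.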
Thus planar integrability manifests itself in the spectrum of short operators through the appearance of degeneracies between planar parity pairs. Moving beyond the planar level, it was observed in \cite{Beisert:2003tq} that all these degeneracies are lifted: There is no apparent relation between the different parity sectors in the spectrum of the non--planar dilatation generator. This was taken as an indication (though by no means a proof) that integrability is lost once one considers non--planar corrections. In this connection, it is worth noticing that the degeneracies observed at planar one-loop order remain when planar higher-loop corrections are taken into account~\cite{Beisert:2003tq}. Returning to ${\cal N}=6$ ABJM theory, it is interesting to ask whether the same pattern of planar degeneracies which are lifted at the non--planar level arises in the present context. We begin by defining a parity operation which inverts the order of all fields within each trace, for example: \begin{equation} \mbox{Tr}\left[Z_1W_1Z_1W_2Z_2W_1\right]\; \longrightarrow \mbox{Tr}\left[W_1Z_2W_2Z_1 W_1Z_1\right]= \mbox{Tr}\left[Z_1W_1Z_1W_1Z_2W_2\right]. \end{equation} Obviously, the Hamiltonian of the $SU(2)\times SU(2)$ spin chain is parity symmetric. Furthermore, from the work of~\cite{Minahan:2008hf,Bak:2008cp} we know that the conserved charges of the $SU(2)\times SU(2)$ spin chain are nothing but the sum of the charges of the two $SU(2)$ Heisenberg spin chains. In particular, the third charge $Q_3$ again anti-commutes with parity while commuting with the Hamiltonian. Hence we conclude that we should expect to see parity pairs in the planar part of the spectrum. Furthermore, the intuition gained from ${\cal N}=4$ SYM points to these degeneracies being broken once non--planar corrections are taken into account.
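In practice parity is conveniently implemented on single-trace operators viewed as cyclic words. A small {\tt Python} sketch (ours) reproducing the example above:

```python
def canonical(word):
    # canonical representative of a cyclic word (a single trace):
    # the lexicographically smallest rotation
    return min(word[i:] + word[:i] for i in range(len(word)))

def parity(word):
    # parity reverses the order of all fields inside the trace;
    # the result is only defined up to cyclic rotations
    return canonical(word[::-1])

# the example from the text
w = ('Z1', 'W1', 'Z1', 'W2', 'Z2', 'W1')
target = ('Z1', 'W1', 'Z1', 'W1', 'Z2', 'W2')
```

Here `parity(w)` and `canonical(target)` agree, confirming that the reversed trace is cyclically equivalent to $\mbox{Tr}[Z_1W_1Z_1W_1Z_2W_2]$; applying parity twice gives back the original cyclic word.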
In the following section, by explicitly considering the action of the dilatation generator on a series of short operators, we will see that both these expectations are confirmed. \section{Short Operators \label{short}} In this section we determine non-planar corrections to a number of short operators. This is done by explicitly computing and diagonalizing the mixing matrix (aided by {\tt GPL Maxima} as well as {\tt Mathematica}). \subsection{Operators with only one type of excitation} Operators with only one type of excitation can, at the planar level, be described in terms of just a single Heisenberg spin chain and behave at the leading two--loop level very similarly to their ${\cal N}=4$ SYM cousins at one--loop level. Notice, however, that once one goes beyond the planar limit, the dilatation generator has novel $\frac{1}{N^2}$ terms. The simplest set of operators for which one observes degenerate parity pairs as well as non-trivial mixing between operators with different numbers of traces consists of operators of length 14 with three excitations. There are in total 17 such non-protected operators. Notice that due to the absence of the trace condition of ${\cal N}=4$ SYM, for which the gauge group is $SU(N)$, there are more operators here than the naive generalizations of the ${\cal N}=4$ SYM ones. Among the non-protected operators there are only 8 which are not descendants and which we list below. (To improve readability we suppress the background $Z_1$ fields.) Notice that only ${\cal O}_1$, ${\cal O}_3$ and ${\cal O}_6$ have analogues in ${\cal N}=4$ SYM.
\begin{equation} \begin{split} \mathcal{O}_1&=\mbox{Tr}([ W_1 W_1, W_1 W_2] W_1 W_2 W_2 )\\ \mathcal{O}_2&= \mbox{Tr}(W_1 ) \mbox{Tr}( W_1 [W_1, W_2] W_1 W_2 W_2)\\ \mathcal{O}_3&=2 \mbox{Tr}( W_1 W_1 W_1 W_1 W_2 W_2 W_2) - 3\mbox{Tr}( W_1 W_2 W_2 W_1 W_1 W_1 W_2) \\ &- 3\mbox{Tr} ( W_1 W_2 W_1 W_1 W_1 W_2 W_2) +2 \mbox{Tr}( W_1 W_2 W_1 W_2 W_1 W_1 W_2)\\ &+ 2 \mbox{Tr}( W_1 W_1 W_2 W_1 W_1 W_2 W_2)\\ \mathcal{O}_4&=4 (2\! +\! \sqrt5) \mbox{Tr}(W_2) \mbox{Tr}( W_1 W_1 W_1 W_2 W_1 W_2) \!-\! 2(1\! +\! \sqrt5) \mbox{Tr}(W_2) \mbox{Tr}(W_1 W_1 W_1 W_1 W_2 W_2)\\ &- 2 (3 \!+\! \sqrt5) \mbox{Tr}(W_2) \mbox{Tr}( W_1 W_1 W_2 W_1 W_1 W_2) \!+\!(3 \!+\! \sqrt5) \mbox{Tr}(W_1 ) \mbox{Tr}(W_1 W_1 W_2 W_1 W_2 W_2)\\ &+(3 \!+\! \sqrt5)\mbox{Tr}(W_1) \mbox{Tr}(W_1 W_2 W_1 W_1 W_2 W_2) \!-\! 2 \mbox{Tr}(W_1) \mbox{Tr}(W_1 W_1 W_1 W_2 W_2 W_2)\\ &\!-\! 2 (2\!+\! \sqrt5) \mbox{Tr}(W_1) \mbox{Tr}( W_2 W_1 W_2 W_1 W_2 W_1)\\ \mathcal{O}_5&=\!- 4 (2 \!-\! \sqrt5)\mbox{Tr}(W_2) \mbox{Tr}( W_1 W_1 W_1 W_2 W_1 W_2) \!+\!2 (1 \!-\! \sqrt5) \mbox{Tr}(W_2) \mbox{Tr}( W_1 W_1 W_1 W_1 W_2 W_2)\\ &\!+\!2 (3\! -\! \sqrt5) \mbox{Tr}(W_2) \mbox{Tr}( W_1 W_1 W_2 W_1 W_1 W_2) \!-\! (3 \!-\! \sqrt5) \mbox{Tr}(W_1) \mbox{Tr}(W_1 W_1 W_2 W_1 W_2 W_2)\\ &\!-\! (3\! -\! \sqrt5) \mbox{Tr}(W_1 ) \mbox{Tr}( W_1 W_2 W_1 W_1 W_2 W_2) +2 \mbox{Tr}(W_1) \mbox{Tr}( W_1 W_1 W_1 W_2 W_2 W_2)\\ &+2 (2\!-\!
\sqrt5) \mbox{Tr}(W_1) \mbox{Tr}( W_2 W_1 W_2 W_1 W_2 W_1)\\ \mathcal{O}_6&=\mbox{Tr}(W_1 W_1 ) \mbox{Tr}( W_1 [ W_2, W_1] W_2 W_2) +\mbox{Tr}( W_1 W_2)\mbox{Tr}( W_1 W_1 [ W_1, W_2] W_2) \\ \mathcal{O}_7&=\mbox{Tr}(W_1 )\mbox{Tr}( W_1) \mbox{Tr}( W_1 [ W_2, W_1] W_2 W_2) +\mbox{Tr}(W_2 ) \mbox{Tr}( W_1) \mbox{Tr}( W_1 W_1 [ W_1, W_2] W_2)\\ \mathcal{O}_8&= \mbox{Tr}(W_2 ) \mbox{Tr}( W_1 W_1) \mbox{Tr}( W_1 [W_2, W_1] W_2) +\mbox{Tr}( W_1) \mbox{Tr} ( W_1 W_2)\mbox{Tr}( W_1 [ W_1, W_2] W_2)\\ \end{split} \end{equation} \newpage The associated planar anomalous dimensions (in units of $\lambda^2$), trace structure and parity are \begin{center} \begin{tabular}{cccc} Eigenvector & Eigenvalue & Trace structure & Parity\\ \hline $\mathcal{O}_1$ & $5$ & (14) & $-$\\ $\mathcal{O}_2$ & $6 $ & (2)(12) & $-$\\ $\mathcal{O}_3$ & $5$ & (14) & $+$ \\ $\mathcal{O}_4$ &$5+\sqrt{5}$& (2)(12) & $+$\\ $\mathcal{O}_5$ & $ 5-\sqrt{5}$ & (2)(12) & $+$\\ $\mathcal{O}_{6}$& $4$ & (4)(10) & $+$\\ $\mathcal{O}_{7}$ &$4$ & (2)(2)(10) & $+$\\ $\mathcal{O}_{8}$ & $6$ & (2)(4)(8) & $+$\\ \end{tabular} \end{center} where by the parity of a multi-trace operator we mean the product of the parities of its single-trace components. The planar anomalous dimensions of ${\cal O}_1$, ${\cal O}_3$ and ${\cal O}_6$ agree (as they should) with those of the similar operators in ${\cal N}=4$ SYM, cf.~\cite{Beisert:2003tq}. We have one pair of degenerate single-trace operators with opposite parity, namely the operators ${\cal O}_1$ and ${\cal O}_3$.\footnote{We also observe a degeneracy between the negative-parity double-trace state ${\cal O}_2$ and the positive-parity triple-trace state $\mathcal{O}_8$ as well as a degeneracy between the double-trace state $\mathcal{O}_6$ and the triple-trace state $\mathcal{O}_7$, both of positive parity.
However, states with different numbers of traces cannot be connected via the conserved charge $Q_3$.} Expressing the dilatation generator in the basis above and taking into account all non-planar corrections we get \small \begin{eqnarray} \begin{pmatrix} 5+\frac{15}{N^2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \cr \frac{6}{N^2} & 6+\frac{24}{N^2} & 0 & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 5+\frac{35}{N^2} & 0 & 0 & -\frac{8}{N} & -\frac{4}{N^2} & -\frac{2}{N^2} \cr 0 & 0 & -\frac{\sqrt{5}}{N} & 5+\sqrt{5}+\frac{5\sqrt{5}+35}{N^2} & \frac{3\sqrt{5}-5}{N^2} & \frac{1}{N^2} & 0 & \frac{2}{N} \cr 0 & 0 & -\frac{\sqrt{5}}{N} & -\frac{5+3\sqrt{5}}{N^2} & 5-\sqrt{5}-\frac{5\sqrt{5}-35}{N^2} & -\frac{1}{N^2} & 0 & -\frac{2}{N} \cr 0 & 0 & -\frac{20}{N} & \frac{4\sqrt{5}+20}{N^2} & -\frac{20-4\sqrt{5}}{N^2} & 4+\frac{28}{N^2} & 0 & 0 \cr 0 & 0 & -\frac{10}{N^2} & \frac{4\sqrt{5}+20}{N} & \frac{4\sqrt{5}-20}{N} & 0 & 4+\frac{32}{N^2} & -\frac{2}{N^2} \cr 0 & 0 & -\frac{10}{N^2} & \frac{24\sqrt{5}+40}{N} & \frac{24\sqrt{5}-40}{N} & \frac{8}{N} & -\frac{8}{N^2} & 6+\frac{40}{N^2} \cr \end{pmatrix}\;. \end{eqnarray} \normalsize Notice the decoupling of positive and negative parity states and the presence of numerous $\frac{1}{N^2}$-terms which do not have analogues in one-loop ${\cal N}=4$ SYM.
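As a consistency check (ours, not part of the original computation), the leading $1/N^2$ shifts encoded in this matrix can be extracted with the second-order formula of section~\ref{structure}; the planar eigenvalues, $1/N^2$ diagonal pieces and $1/N$ off-diagonal entries below are transcribed from the matrix above:

```python
import math

r5 = math.sqrt(5)
# planar eigenvalues and 1/N^2 diagonal pieces, read off from the matrix
E     = [5, 6, 5, 5 + r5, 5 - r5, 4, 4, 6]
diag2 = [15, 24, 35, 5*r5 + 35, 35 - 5*r5, 28, 32, 40]
# nonzero 1/N off-diagonal entries: (i, j) -> N * M_ij  (0-based indices);
# 1/N^2 off-diagonal entries only contribute at order 1/N^4 and are dropped
VN = {(2, 5): -8, (5, 2): -20,
      (3, 2): -r5, (4, 2): -r5,
      (3, 7): 2, (4, 7): -2,
      (6, 3): 4*r5 + 20, (6, 4): 4*r5 - 20,
      (7, 3): 24*r5 + 40, (7, 4): 24*r5 - 40, (7, 5): 8}

def shift(i):
    # N^2 * delta E_i from second-order non-degenerate perturbation theory;
    # matrix elements between the degenerate states all vanish here
    s = diag2[i]
    for j in range(8):
        if abs(E[i] - E[j]) > 1e-9:
            s += VN.get((i, j), 0) * VN.get((j, i), 0) / (E[i] - E[j])
    return s
```

Running this reproduces the corrections quoted below, e.g.\ $N^2\,\delta E_3=195$ and $N^2\,\delta E_8=-120$.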
One observes that the states ${\cal O}_1$ and ${\cal O}_2$ are exact eigenstates of the full dilatation generator with non-planar corrections equal to \begin{equation} \delta E_1 = \frac{15}{N^2}, \hspace{0.7cm} \delta E_2 = \frac{24}{N^2}. \end{equation} For the remaining operators we observe that all matrix elements between degenerate states vanish. Thus the leading non-planar corrections to the anomalous dimensions can be found using second-order non-degenerate perturbation theory. The results read \begin{eqnarray} \delta E_3 &= & \frac{195}{N^2}, \hspace{0.7cm} \delta E_4 = \frac{115+37\sqrt{5}}{N^2}, \hspace{0.7cm} \nonumber\\ \delta E_5 &=& \frac{115-37\sqrt{5}}{N^2}, \hspace{0.7cm} \delta E_6 = -\frac{132}{N^2}, \\ \delta E_7 & =& \frac{32}{N^2}, \hspace{0.7cm} \delta E_8 = -\frac{120}{N^2}.\nonumber \end{eqnarray} We observe that all degeneracies found at the planar level get lifted when non-planar corrections are taken into account. This in particular holds for the degeneracies between the members of the planar parity pair $({\cal O}_1,{\cal O}_3)$. Notice that whereas the planar eigenvalues of the operators ${\cal O}_1$, ${\cal O}_3$ and ${\cal O}_6$ are identical to those of their ${\cal N}=4$ SYM cousins, the non-planar corrections are not. \subsection{Operators with two types of excitations} An operator with two excitations of different type corresponds in spin chain language to the situation where each of the two coupled spin chains has one excitation. Such an operator does not immediately have an analogue in ${\cal N}=4$ SYM. (One can indeed consider scalar ${\cal N}=4$ SYM operators with two types of excitations $\Phi$ and $\Psi$ on a background of $Z$ fields, but these operators should be organized into representations of $SO(6)$, and not of $SU(2)\times SU(2)$ as here, and thus always come in symmetrized or antisymmetrized versions.)
\subsubsection{Length 8 with 2 excitations \label{twoexcitations}} Let us analyze the simplest set of operators with two excitations of different types that exhibits some of the above-mentioned non-trivial features of the $\frac{1}{N}$-expansion, namely operators of length eight with one excitation of each type. There are in total 7 such non-protected operators. The planar non-protected eigenstates of the two-loop dilatation generator read \begin{equation} \begin{split} \mathcal{O}_{ 1 }=&\mbox{Tr}(Z_1 W_1 \{ Z_1 W_2, Z_2 W_1\} Z_1 W_1)-\mbox{Tr}( W_1 Z_1 \{W_1 Z_2, W_2 Z_1\} W_1 Z_1 )\\ \mathcal{O}_{ 2 }=& -\mbox{Tr}(Z_1 W_1 [Z_1 W_2, Z_2 W_1] Z_1 W_1)+\mbox{Tr}( W_1 Z_1 [W_1 Z_2, W_2 Z_1] W_1 Z_1)\\ \mathcal{O}_{ 3 }=& \mbox{Tr}(Z_1 W_1 Z_1 W_1)\left[\mbox{Tr}( Z_1 W_2 Z_2 W_1) - \mbox{Tr}( W_1 Z_2 W_2 Z_1) \right]\\ \mathcal{O}_{ 4 }=& \mbox{Tr}(W_1 Z_1) \left[\mbox{Tr}(W_1 Z_1 W_2 Z_2 W_1 Z_1)- \mbox{Tr}( Z_1 W_1 Z_2 W_2 Z_1 W_1)\right]\\ \mathcal{O}_{ 5 }=& \mbox{Tr}(W_1 Z_1) \mbox{Tr}(W_1 Z_1) \left[\mbox{Tr}(W_2 Z_1 W_1 Z_2) -\mbox{Tr}(Z_2 W_1 Z_1 W_2 )\right]\\ \mathcal{O}_{ 6 }=& - \mbox{Tr}(Z_1 W_1 [Z_1 W_2, Z_2 W_1] Z_1 W_1)- \mbox{Tr}( W_1 Z_1 [W_1 Z_2, W_2 Z_1] W_1 Z_1)\\ \mathcal{O}_{ 7 }=& -\mbox{Tr}(W_1 Z_1)\left[ \mbox{Tr}( W_1 Z_1 [W_1 Z_2, W_2 Z_1])+ \mbox{Tr}(Z_1 W_1 [Z_1 W_2, Z_2 W_1])\right]\\ \end{split} \end{equation} and the associated planar anomalous dimensions (in units of $\lambda^2$), trace structure and parity are \begin{center} \begin{tabular}{cccc} Eigenvector & Eigenvalue & Trace Structure &Parity\\ \hline $\mathcal{O}_1$ & 8 & (8)&$-$\\ $\mathcal{O}_2$ & 4 & (8)&$-$\\ $\mathcal{O}_3$ & 8 & (4)(4)&$-$\\ $\mathcal{O}_4$ & 6 & (2)(6)&$-$\\ $\mathcal{O}_5$ & 8 & (2)(2)(4)&$-$ \\ $\mathcal{O}_6$ & 4& (8)&$+$\\ $\mathcal{O}_7$ & 6 & (2)(6)&$+$\\ \end{tabular} \end{center} Notice that we have two pairs of degenerate operators with opposite parity, namely the single-trace operators $\mathcal{O}_2$ and $\mathcal{O}_6$ and the double-trace
operators $\mathcal{O}_4$ and $\mathcal{O}_7$.\footnote{The double-trace operators ${\cal O}_4$ and ${\cal O}_7$ can be related via $Q_3$ when letting $Q_3$ act only on the longer of the two constituent traces of the operators.} Expressing the dilatation generator in the basis given above and taking into account all non--planar corrections we get \begin{equation} \begin{pmatrix} 8&\frac{8}{\:N^2}&\frac{16}{N}&\frac{4}{N}&-\frac{8}{\:N^2}&0&0\cr \frac{8}{\:N^2}&\!4\!-\!\frac{12}{N^2}&0&-\frac{2}{N}&-\frac{4}{\:N^2}&0&0 \cr \frac{16}{N}&-\frac{8}{N}&8 &0&0&0&0\cr 0&-\frac{16}{N}&-\frac{8}{\:N^2}&\!6\!-\!\frac{8}{\:N^2}&-\frac{12}{N}&0&0 \cr 0&\frac{8}{\:N^2}&0&-\frac{12}{N}&\!8\!-\!\frac{8}{\:N^2}&0&0\cr 0&0&0&0&0&\!4\!+\!\frac{4}{N^2}&\frac{2}{N}\cr 0&0&0&0&0 &\frac{8}{N}&\!6\!+\!\frac{8}{\:N^2}\cr \end{pmatrix}\;. \end{equation} The non-planar corrections for ${\cal O}_6$ and ${\cal O}_7$ can be found exactly and read \begin{equation} \delta E_{6,7}= \frac{6}{\:N^2}\mp \left(\sqrt{1+\frac{20}{\:N^2}+\frac{4}{\:N^4}} - 1\right). \end{equation} The corrections to the eigenvalues of the remaining operators are instead found using perturbation theory, as described in section~\ref{structure}. First we notice that most matrix elements between degenerate states vanish. The only exceptions are the matrix elements between the states $\mathcal{O}_1$ and $\mathcal{O}_3$. To find the non--planar correction to the energy of these states we diagonalize the Hamiltonian in the corresponding subspace and find \begin{equation} \delta E_{1,3}=\mp \, \frac{16}{N}. \end{equation} For the remaining operators the leading non-planar corrections to the energy can be found using second-order non-degenerate perturbation theory. The results read \begin{equation} \delta E_2 = -\frac{28}{N^2}, \hspace{0.7cm} \delta E_4 = -\frac{64}{N^2}, \hspace{0.7cm} \delta E_5 = \frac{64}{N^2}\;.
\end{equation} We again notice that all degeneracies observed at the planar level get lifted when non-planar corrections are taken into account. This in particular holds for the degeneracies between the members of the two parity pairs. \subsubsection{Length 8 with 3 excitations \label{threeexcitations}} We now consider operators with three excitations, one of type $Z_2$ and two of type $W_2$. Among operators of this type one finds 7 which are descendants of the 7 operators considered in the previous section. Among the highest-weight states one has the following four planar eigenstates: \begin{equation} \begin{split} \mathcal{O}_1=&\mbox{Tr}( Z_1W_2) \left[\mbox{Tr}(Z_1 W_1 Z_2 W_2 Z_1 W_1)-\mbox{Tr}(W_1 Z_1 W_2 Z_2 W_1 Z_1)\right]\\ &- \mbox{Tr}(Z_1W_1 ) \left[\mbox{Tr}(Z_1 W_1 Z_2 W_2 Z_1 W_2)-\mbox{Tr}(Z_1 W_2 Z_2 W_1 Z_1 W_2)\right]\\ \mathcal{O}_2=&\mbox{Tr}(Z_1 W_1 [Z_2 W_1, Z_1 W_2] Z_1 W_2)+\mbox{Tr}( Z_1 W_2 [Z_1 W_2, Z_2 W_1] Z_1 W_1) \\ &+\mbox{Tr}(Z_1 W_1 [Z_1 W_1, Z_2 W_2] Z_1 W_2)+\mbox{Tr}( Z_1 W_2 [Z_2 W_2 ,Z_1 W_1] Z_1 W_1)\\ \mathcal{O}_3=&-\mbox{Tr}(W_2 Z_1 [W_1 Z_1, W_1 Z_2] W_2 Z_1 )+ \mbox{Tr}( W_1 Z_1 [W_2 Z_2, W_2 Z_1] W_1 Z_1)\\ \mathcal{O}_4=&\mbox{Tr}(Z_1W_2 )\left[ \mbox{Tr}( W_1 Z_1 [W_1 Z_2, W_2 Z_1])+ \mbox{Tr}(Z_1 W_1 [Z_1 W_2, Z_2 W_1] )\right]\\ &+\mbox{Tr}(Z_1W_1 ) \left[\mbox{Tr}( Z_1 W_2 [Z_1 W_1, Z_2 W_2])+\mbox{Tr}( W_2 Z_1 [W_2 Z_2, W_1 Z_1])\right] \end{split} \end{equation} Their planar anomalous dimensions (in units of $\lambda^2$), trace structure and parity are tabulated below. \begin{center} \begin{tabular}{cccc} Eigenvector & Eigenvalue & Trace Structure & Parity\\ \hline $\mathcal{O}_1$ & $6 $ &$(2)(6)$ & $-$\\ $\mathcal{O}_2$ & $6 $ & $(8)$ & $+$\\ $\mathcal{O}_3$ & $6 $ & $(8)$ & $+$\\ $\mathcal{O}_4$ & $6 $ & $(2)(6)$ & $+$\\ \end{tabular} \end{center} We observe one planar parity pair with trace structure $(2)(6)$.
The full mixing matrix for this set of states takes the following form: \begin{equation} \left(\begin{array}{cccc} \!6\!-\frac{16}{\:N^2}&0&0&0 \cr 0&\!6\!+\!\frac{12}{\:N^2}&0&0 \cr 0&0&6-\frac{4}{\:N^2}&-\frac{12}{N} \cr 0&0&-\frac{4}{N}& 6 \cr \end{array}\right) \end{equation} and the exact non-planar corrections to the energy are \begin{eqnarray} \delta E_1 &=& - \frac{16}{N^2},\hspace{0.7cm} \delta E_2= \frac{12}{N^2}, \nonumber \\ \delta E_{3,4}&=& -\frac{2}{N^2}\pm 2\sqrt{\frac{12}{N^2}+\frac{1}{N^4}}. \end{eqnarray} Also in this case it turns out that all planar degeneracies are lifted. Obviously, there is another three-excitation sector with one $W_2$-excitation and two $Z_2$-excitations. The results for that sector can of course easily be read off from those of the present one. \subsubsection{Length 8 with 4 excitations} Let us turn to the case of operators of length eight with two excitations of type $W_2$ and two excitations of type $Z_2$. In this sector we find seven operators which descend from the operators treated in section~\ref{twoexcitations} as well as eight operators which descend from operators with three excitations. 
The remaining non-protected operators are \begin{equation} \begin{split} \mathcal{O}_{ 1 }=& - \mbox{Tr}(Z_1 W_1 Z_1 W_1 Z_2 W_2 Z_2 W_2)+\mbox{Tr}(W_1 Z_1 W_1 Z_1 W_2 Z_2 W_2 Z_2) \\ &+\mbox{Tr}(W_2 Z_1 W_2 Z_1 W_1 Z_2 W_1 Z_2)- \mbox{Tr}(W_1 Z_2 W_1 Z_1 W_2 Z_1 W_2 Z_2)\\ \mathcal{O}_{ 2 }=& \mbox{Tr}(W_1 Z_2)\left[ \mbox{Tr}(Z_1 W_2 Z_1 W_1 Z_2 W_2) - \mbox{Tr}(W_1 Z_1 W_2 Z_1 W_2 Z_2)\right] \\ &+ \mbox{Tr}(Z_2 W_2)\left[ \mbox{Tr}(Z_1 W_1 Z_1 W_2 Z_2 W_1)- \mbox{Tr}(W_2 Z_1 W_1 Z_1 W_1 Z_2)\right] \\ &+\mbox{Tr}(Z_1 W_2)\left[ \mbox{Tr}(Z_1 W_1 Z_2 W_1 Z_2 W_2)-\mbox{Tr}(W_1 Z_2 W_1 Z_1 W_2 Z_2)\right]\\ & +\mbox{Tr}(W_1 Z_1)\left[ \mbox{Tr}(W_1 Z_1 W_2 Z_2 W_2 Z_2) - \mbox{Tr}(W_2 Z_1 W_1 Z_2 W_2 Z_2)\right]\\ \mathcal{O}_{ 3 }=& \mbox{Tr}(W_1 Z_1) \mbox{Tr}(Z_2 W_2)\left[\mbox{Tr}(W_1 Z_1 W_2 Z_2) - \mbox{Tr}(W_2 Z_1 W_1 Z_2)\right]\\ &+\mbox{Tr}(W_1 Z_2) \mbox{Tr}(Z_1 W_2) \left[\mbox{Tr}(Z_1 W_1 Z_2 W_2)-\mbox{Tr}(W_1 Z_1 W_2 Z_2)\right] \\ \mathcal{O}_{ 4}=& \mbox{Tr}(Z_1 W_1 \{Z_1 W_1, Z_2 W_2\} Z_2 W_2)+\mbox{Tr}(Z_2 W_1\{ Z_2 W_1, Z_1 W_2\} Z_1 W_2 )\\ &+\mbox{Tr}(W_1 Z_1 \{W_1 Z_1, W_2 Z_2\} W_2 Z_2) +\mbox{Tr}(W_2 Z_1\{ W_2 Z_1, W_1 Z_2\} W_1 Z_2)\\ &- 2 \mbox{Tr}( W_1 Z_1 \{W_1 Z_2, W_2 Z_1\} W_2 Z_2)- 2 \mbox{Tr}(Z_1 W_1\{Z_1 W_2, Z_2 W_1\} Z_2 W_2 )\\ \mathcal{O}_{ 5}=&- \mbox{Tr}(Z_2 W_2)\left[ \mbox{Tr}([W_2 Z_1, W_1 Z_1] W_1 Z_2)+ \mbox{Tr}([Z_1 W_1, Z_1 W_2] Z_2 W_1)\right]\\ &-\mbox{Tr}(W_1 Z_2)\left[\mbox{Tr}([Z_1 W_2, Z_1 W_1] Z_2 W_2) + \mbox{Tr}([W_1 Z_1, W_2 Z_1] W_2 Z_2)\right]\\ &-\mbox{Tr}(Z_1 W_2)\left[ \mbox{Tr}([Z_1 W_1, Z_2 W_1] Z_2 W_2) + \mbox{Tr}([W_1 Z_2, W_1 Z_1] W_2 Z_2)\right]\\ &- \mbox{Tr}( Z_1 W_1)\left[\mbox{Tr}( [Z_1 W_2, Z_2 W_2] Z_2 W_1) + \mbox{Tr}([W_2 Z_1, W_1 Z_2] W_2 Z_2)\right]\\ \mathcal{O}_{ 6}=& 2\mbox{Tr}(W_1 Z_1 W_2 Z_2) \mbox{Tr}(Z_1 W_1 Z_2 W_2) - \mbox{Tr}(W_2 Z_1 W_1 Z_2) \mbox{Tr}(Z_1 W_1 Z_2 W_2) \\ &- \mbox{Tr}(W_1 Z_1 W_2 Z_2) \mbox{Tr}(W_1 Z_1 W_2 Z_2) \end{split} \end{equation} with planar 
eigenvalues (in units of $\lambda^2$), trace structure and parity given by \begin{center} \begin{tabular}{cccc} Eigenvector & Eigenvalue & Trace Structure & Parity\\ \hline $\mathcal{O}_1$ & $4 $ &$ (8)$ & $-$\\ $\mathcal{O}_2$ & $6 $ & $(2)(6)$ & $-$\\ $\mathcal{O}_3$ & $8 $ & $(2)(2)(4)$ & $-$\\ $\mathcal{O}_4$ & $12 $ &$ (8)$ & $+$\\ $\mathcal{O}_5$ & $6 $ & $(2)(6)$ & $+$\\ $\mathcal{O}_6$ & $16 $ & $(4)(4)$ & $+$\\ \end{tabular} \end{center} We notice one planar parity pair with trace structure $(2)(6)$. In the subspace of negative parity operators the dilatation generator reads \begin{equation} \left(\begin{array}{ccc} \!4\!-\!\frac{12}{\:N^2}&\frac{12}{N}& \frac{12}{\:N^2} \cr \frac{12}{N}&\!6\!& \frac{6}{N} \cr \frac{8}{\:N^2}&\frac{24}{N}&\!8\!-\!\frac{\!8\!}{\:N^2} \cr \end{array}\right)\;. \end{equation} The leading corrections to the eigenvalues can be found to be \begin{equation} \delta E_1= -\frac{84}{\:N^2}, \hspace{0.7cm} \delta E_2= -\frac{1728}{\:N^4} , \hspace{0.7cm} \delta E_3= \frac{64}{\:N^2}. \end{equation} The mixing matrix in the subspace of positive parity eigenvalues looks as follows: \begin{equation} \left(\begin{array}{ccc} \!12\!-\!\frac{12}{\:N^2}&-\frac{12}{N}& -\frac{8}{N} \cr 0&\!6\!& -\frac{8}{\:N^2} \cr -\frac{72}{N}&0 &\!16\! \cr \end{array}\right)\;. \end{equation} For these states we find the following leading corrections: \begin{equation} \delta E_4= -\frac{156}{\:N^2},\hspace{0.7cm} \delta E_5 = -\frac{576}{5\:N^4},\hspace{0.7cm} \delta E_6 = \frac{144}{\:N^2}. 
\end{equation} Again we see that all planar degeneracies are lifted.\footnote{However, it is worth noting that the resolution of the degeneracy between ${\cal O}_2$ and ${\cal O}_5$ happens at order $1/N^4$ and would thus not be visible purely within second-order perturbation theory.} Summarizing, in all sectors considered we have observed a degeneracy between operators with the same trace structure but opposite parity -- a degeneracy which, as explained earlier, could be attributed to the existence of an extra conserved charge and thus to the integrability of the planar dilatation generator. The lifting of these degeneracies can be taken as an indication (but not a proof) that integrability breaks down beyond the planar level. In any case, the concept of integrability, when formulated in terms of spin chains and their associated conserved charges, has to be reformulated when multi-trace operators are taken into account, but it is clear that some symmetries are lost when we go beyond the planar limit. \section{BMN operators \label{BMNsection}} In the previous section we analyzed the case of short operators in ABJM theory. Another important class of operators that played a crucial role in the context of the $AdS_5/CFT_4$ correspondence is that of the so-called BMN operators \cite{Berenstein:2002jq}. It is not difficult to see that BMN operators of ABJM theory can be constructed analogously to BMN operators of ${\cal N}=4$ SYM \cite{Berenstein:2002jq}. In this section we compute non-planar corrections to the anomalous dimensions of BMN-type operators in the $SU(2)\times SU(2)$ sector of ABJM theory~\cite{Nishioka:2008gz, Gaiotto:2008cg,Grignani:2008is}. We will restrict ourselves to considering BMN operators with two excitations. There are two types of such operators:\footnote{As pointed out in \cite{Minahan:2008hf}, these operators resemble scalar operators in the orbifolds of ${\cal N} = 4$ SYM theory in four dimensions.
Non-planar corrections for operators in the orbifolded ${\cal N} = 4$ SYM theory have been computed in~\cite{Bertolini:2002nr, DeRisi:2004bc}.} \begin{equation} \mathcal{A}_l^{{J}_0,J_1,\ldots,J_k}= \mbox{Tr}\;\!\!\left[Z_2\left(W_1Z_1\right)^l W_2\left(Z_1W_1\right)^{{J}_0-l}\right]\mbox{Tr}\;\!\!\left[\left(Z_1W_1\right)^{J_1}\right] \ldots \mbox{Tr}\;\!\!\left[\left(Z_1W_1\right)^{J_k}\right], \label{OAB} \end{equation} \begin{equation} \mathcal{B}_l^{{J}_0,J_1,\ldots,J_k}= \mbox{Tr}\;\!\!\left[\left(Z_1W_1\right)^lZ_1 W_2\left(Z_1W_1\right)^{{J}_0-l}Z_1W_2\right]\mbox{Tr}\;\!\!\left[\left(Z_1W_1\right)^{J_1}\right] \ldots \mbox{Tr}\;\!\!\left[\left(Z_1W_1\right)^{J_k}\right]. \label{OBB} \end{equation} There are in total $J_0+1$ independent operators of type $\mathcal{A}$ and $[J_0/2]+1$ independent operators of type $\mathcal{B}$. The associated bare conformal dimensions are \begin{equation} \Delta_{\mathcal{A}}= J_0+\ldots +J_k+1, \hspace{0.7cm}\Delta_{\mathcal{B}}= J_0+\ldots+J_k+2. \end{equation} In the spin chain language the $\mathcal{B}$-operators have two excitations on the same spin chain whereas the $\mathcal{A}$-operators have one excitation on each spin chain. As already mentioned, the $\mathcal{A}$-operators do not have an analogue in the scalar sector of ${\cal N}=4$ SYM\footnote{This was first pointed out in \cite{Astolfi:2008ji} from the analysis of the dual string theory state.} where operators have to organize into representations of $SO(6)$ (and not into representations of $SU(2)\times SU(2)$ as here). In ${\cal N}=4$ SYM two--excitation operators always appear in a symmetrized or anti-symmetrized version. We wish to study the non-planar corrections to both types of operators. As in ${\cal N}=4$ SYM we find that the set of two--excitation operators above is closed under the action of the dilatation generator, i.e. two--excitation operators with the two excitations in two different traces are never generated when the dilatation generator acts.
In the next two sub-sections we consider separately the two sets of operators $\mathcal{A}_l^{{J}_0,J_1,\ldots,J_k}$ and $\mathcal{B}_l^{{J}_0,J_1,\ldots,J_k}$. Introducing $J=J_0+J_1+\ldots+J_k$ we define the BMN limit as the double scaling limit~\cite{Kristjansen:2002bb,Constable:2002hw} \begin{equation} J\rightarrow \infty, \hspace{0.7cm} N\rightarrow \infty, \hspace{0.7cm}\lambda' \equiv \frac{\lambda^2}{J^2},\hspace{0.5cm} g_2=\frac{J^2}{N}, \hspace{0.3cm} \mbox{fixed}. \label{BMN} \end{equation} The BMN limit of the ${\cal N}=6$ superconformal Chern--Simons--matter theory is expected to correspond to the Penrose limit of the type IIA string theory on $AdS_4\times CP^3$. The string theory states dual to the BMN operators $\mathcal{A}_l^{{J}_0,J_1,\ldots,J_k}$ and $\mathcal{B}_l^{{J}_0,J_1,\ldots,J_k}$ have been studied in~\cite{Grignani:2008is, Astolfi:2008ji}. Notice, however, that due to different dispersion relations of excitations in the spin chain and string theory language~\cite{Grignani:2008is} the correct definition of $\lambda'$ at leading order in a strong coupling expansion is $\lambda'=\lambda/J^2$~\cite{Gaiotto:2008cg,Grignani:2008is}. \subsection{BMN operators with only one type of excitation} For operators with only one type of excitation the dilatation generator is given by the expression in eqn.~\rf{oneexcitation}. 
Using the notation of eqn.~\rf{Dexpansion} we find \begin{equation} D_0\circ\mathcal{B}_{p}^{J_0,J_1,\ldots,J_k}= -2 \left(\delta_{p\neq J_0}\mathcal{B}_{p+1}^{J_0,J_1,\ldots,J_k}+\delta_{p\neq 0}\mathcal{B}_{p-1}^{J_0,J_1,\ldots,J_k} -(\delta_{p\neq 0}+\delta_{p\neq J_0})\mathcal{B}_{p}^{J_0,J_1,\ldots,J_k}\right),\label{H0B} \end{equation} \begin{equation} \begin{split} D_+\circ\mathcal{B}_{p}^{J_0,J_1,\ldots,J_k}= - 4& \left[\sum_{J_{k+1}=1}^{p-1}\left(\mathcal{B}_{p-J_{k+1}-1}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}} -\mathcal{B}_{p-J_{k+1}}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}}\right)\right.\cr &\left.- \sum_{J_{k+1}=1}^{J_0-p-1}\left(\mathcal{B}_{p}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}} -\mathcal{B}_{p+1}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}}\right) \right] \label{H+B} \end{split} \end{equation} and \begin{equation} \begin{split} D_-\circ\mathcal{B}_{p}^{J_0,J_1,\ldots,J_k}= - 4 &\left[\sum_{i=1}^{k}J_i\left(\mathcal{B}_{J_i+p-1}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k} -\mathcal{B}_{J_i+p}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k}\right.\right.\cr &\qquad\quad\; \left.-\mathcal{B}_{p}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k} +\mathcal{B}_{p+1}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k}\right)\bigg]. \label{H-B} \end{split} \end{equation} The terms resulting from the action of $D_{++}$, $D_{--}$ and $D_{00}$ are rather involved and we have deferred them to Appendix B. We notice that the form of $D_0$, $D_+$ and $D_-$ are exactly as for ${\cal N}=4$ SYM at one loop order, written down in the same notation in~\cite{Beisert:2003tq}, except for the fact that $D_+$ and $D_-$ in the present case have an additional factor of 2 compared to $D_0$. Thus for this type of operators the analysis up to order $\frac{1}{N}$ can be directly carried over from~\cite{Beisert:2003tq}. 
At order $\frac{1}{N^2}$ one has to take into account the novel terms $D_{00}$, $D_{++}$ and $D_{--}$ appearing in Appendix~\ref{Boperators}. However, as explained there once one imposes the BMN limit defined in eqn.~\rf{BMN} these terms become sub-dominant. The BMN quantum mechanics is therefore (up to trivial factors of two) identical to that of ${\cal N}=4$ SYM at one loop level. In particular one encounters the same problem that the huge degeneracies make the perturbative treatment of the non-planar corrections intractable. \subsection{BMN operators with two different types of excitations} For operators with two different types of excitations the dilatation generator is given by the expression~\rf{oneexcitation} where we add the similar terms with 1 replaced by 2 and subsequently add the same operator with $Z$ and $W$ interchanged. Thus, in this case the dilatation generator consists of 16 terms. Using the notation of eqn.~\rf{Dexpansion} we find \begin{equation} D_0\circ\mathcal{A}_{p}^{J_0,J_1,\ldots,J_k}= -2 \left(\delta_{p\neq J_0}\mathcal{A}_{p+1}^{J_0,J_1,\ldots,J_k}+\delta_{p\neq 0}\mathcal{A}_{p-1}^{J_0,J_1,\ldots,J_k} -(\delta_{p\neq J_0}+\delta_{p\neq 0})\mathcal{A}_{p}^{J_0,J_1,\ldots,J_k}\right), \end{equation} \begin{equation} \label{H+} \begin{split} D_+\circ\mathcal{A}_{p}^{J_0,J_1,\ldots,J_k}= - &\left[4\sum_{J_{k+1}=1}^{p-1}\left(\mathcal{A}_{p-J_{k+1}-1}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}} -\mathcal{A}_{p-J_{k+1}}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}}\right)\right. 
\\ &\left.-4\sum_{J_{k+1}=1}^{J_0-p-1}\left(\mathcal{A}_{p}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}} -\mathcal{A}_{p+1}^{J_0-J_{k+1},J_1,\ldots,J_k,J_{k+1}}\right)\right.\\ &+2\delta_{p\neq 0}\left(\mathcal{A}_{0}^{p,J_1,\ldots,J_k,J_0-p} -\mathcal{A}_{p}^{p,J_1,\ldots,J_k,J_0-p}\right)\\ &+ 2\delta_{p\neq J_0}\left(\mathcal{A}_{J_0-p}^{J_0-p,J_1,\ldots,J_k,p} -\mathcal{A}_{0}^{J_0-p,J_1,\ldots,J_k,p}\right) \bigg] \end{split} \end{equation} and \begin{equation}\label{H-} \begin{split} D_-\circ\mathcal{A}_{p}^{J_0,J_1,\ldots,J_k}= - 4 \sum_{i=1}^{k}J_i&\left[(\mathcal{A}_{J_i+p-1}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k} -\mathcal{A}_{J_i+p}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k})\right.\cr &- \left.(\mathcal{A}_{p}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k} -\mathcal{A}_{p+1}^{J_0+J_i,J_1,\ldots, \makebox[0pt]{\,\,\,\,$\times$}J_i, \ldots,J_k})\right]. \end{split} \end{equation} The contributions arising from the action of $D_{++}$, $D_{--}$ and $D_{00}$ can be found in Appendix B. Formally $D_0$, $D_+$ and $D_-$ are similar to the ones one obtains when applying the one-loop dilatation generator of ${\cal N}=4$ SYM to an operator containing two different excitations (i.e. $\Psi$ and $\Phi$ in a background of $Z$'s). The only differences are that the quantities $D_+$ and $D_-$ in the present case have an additional factor of 2 compared to $D_0$ and that there appear two Kronecker $\delta$'s in $D_+$. However, as already mentioned, in ${\cal N}=4$ SYM operators with two excitations of different types have to organize into representations of $SO(6)$ and therefore always come in a symmetrized or anti-symmetrized form. For symmetrized operators, the last line of eqn.~\rf{H-} vanishes. Taking the BMN limit we observe as before that the terms $D_{++}$, $D_{--}$ and $D_{00}$ become sub-dominant, cf.\ Appendix~\ref{Aoperators}. 
\section{Conclusion \label{conclusion}} We have derived and studied the full two-loop dilatation generator in the $SU(2)\times SU(2)$ sector of ${\cal N}=6$ superconformal Chern--Simons--matter theory. In contrast to the situation at leading order in ${\cal N}=4$ SYM theory, the leading order dilatation generator of ABJM theory implies a mixing not only between $n$ and $(n+1)$ trace states but also between $n$ and $(n+2)$ trace states. The latter mixing becomes sub-dominant when the BMN limit is considered. By acting with the dilatation generator on short operators we observed at the planar level pairs of degenerate operators belonging to the same representation but having opposite parity. As in planar ${\cal N}=4$ SYM these degenerate parity pairs could be explained by the existence of an extra conserved charge, the first of the tower of conserved charges of the alternating $SU(2)\times SU(2)$ spin chain. When non-planar corrections were taken into account these degeneracies disappeared, indicating (but not proving) the breakdown of integrability. It would of course be interesting to investigate the mixing problem for higher representations of $SU(2)\times SU(2)$ than the ones considered here to see if other types of symmetries will reveal themselves. It is clear, however, that once one allows for mixing between operators with different numbers of traces one needs to re-think the entire concept of integrability. The simple spin chain picture breaks down and the concept of local charges becomes inadequate. In fact, it would be interesting to try to construct a toy example of what one could call an integrable model involving splitting and joining of traces, perhaps along the lines of the simple solvable toy model of reference~\cite{Casteill:2007td} which describes the splitting and joining of ${\cal N}=4$ SYM operators dual to the folded Frolov--Tseytlin string~\cite{Frolov:2003xy}.
Another interesting and important line of investigation would be to explicitly relate non-planar contributions in the ${\cal N}=6$ superconformal Chern--Simons--matter theory to observables in the dual type IIA string theory. \vspace*{0.5cm} \noindent {\bf Acknowledgments:} We thank G.\ Grignani, T.\ Harmark, S.\ Hirano and A.\ Wereszczinsky for useful discussions. CK and KZ were supported by FNU through grant number 272-06-0434. MO acknowledges FNU for financial support through grant number 272-08-0050.
\section{Introduction} A proxy, or wrapper, is an object that mediates access to an arbitrary target object. Proxies are widely used to perform resource management, access remote objects, impose access control \cite{VanCutsem:2010:PDP:1869631.1869638,Keil:2013:EDA:2508168.2508176}, restrict the functionality of an object \cite{DBLP:conf/oopsla/StricklandTFF12}, or enhance the interface of an object. Ideally, a proxy is not distinguishable from other objects, so that running a program with an interposed proxy should lead to the same outcome as running the program with the target object, unless the proxy imposes restrictions. Proxies introduce a subtle problem. Because a target object may have any number of proxy objects, which are all different from the target, a single target object may obtain multiple identities---it suffers from schizophrenia! Even worse, it turns out that there is no single cure for this schizophrenia because the desired behavior depends on the use case. Unfortunately, current proxy implementations are committed to particular use cases, which makes it hard to adapt them to uses with different requirements. We discuss two such use cases in the context of the JavaScript proxy API \cite{VanCutsem:2010:PDP:1869631.1869638}, identify its shortcomings, and propose a solution. \subsection{JavaScript Proxies} The JavaScript proxy API \cite{VanCutsem:2010:PDP:1869631.1869638} provides a proxy constructor that takes the proxy's target object and a handler object: \begin{lstlisting} var p = new Proxy (target, handler); \end{lstlisting} The handler object provides optional trap methods that are invoked when operations are applied to the proxy. For example, a property get like \texttt{p.foo} invokes the trap \texttt{handler.get(target,'foo',p)} if that trap is present. Untrapped operations are forwarded to the \texttt{target} object.
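As a concrete illustration (a minimal example of ours, not taken from the API documentation), the following handler traps property reads and forwards them to the target, while untrapped operations pass through unchanged:

```javascript
// Hypothetical logging handler: the `get` trap records every property
// read and then forwards the access to the target object.
var target = { foo: 42 };
var reads = [];
var handler = {
  get: function (t, name, receiver) {
    reads.push(name);  // observe the access
    return t[name];    // forward to the target
  }
};
var p = new Proxy(target, handler);
console.log(p.foo);    // invokes handler.get(target, 'foo', p) and yields 42
p.bar = 7;             // `set` is untrapped, so the write reaches the target
console.log(target.bar, reads); // 7 [ 'foo' ]
```

Since the handler defines only a \texttt{get} trap, the assignment to \texttt{p.bar} is forwarded to \texttt{target} by the default behavior.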
The JavaScript proxy API treats proxies as \emph{opaque}: each proxy object has its own identity different from all other (proxy) objects and this difference is observable with the JavaScript equality operators \texttt{==} and \texttt{===}. When applied to two objects, both operators compare the object references.\footnote{If one argument has a primitive type, \texttt{==} attempts to convert the other argument to the same primitive type, whereas \texttt{===} returns false if the types are different. If both arguments are objects, then both operators do the same.} The use of equality has one consequence: comparing distinct proxies returns false even though the underlying target is the same. Similarly, an unwrapped target object is not equal to any of its proxies. \subsection{Use Case: Access Control} \label{sec:use-case-access-control} JavaScript proxies implement access control wrappers like revocable references and membranes in a library \cite{VanCutsem:2010:PDP:1869631.1869638}. The idea of a revocable reference is to only ever pass a proxy to an untrusted piece of code, e.g., a mashup. Once the host application deems that the mashup has finished its job, it revokes the reference which detaches the proxy from its target. Membranes extend this method recursively to all objects reachable from the object passed to a mashup. Opaque proxies are required for implementing this library. The JavaScript proxy API is tailored to uses where access is strictly compartmentalized. The host application only sees the original objects whereas the mashup only sees proxies. Furthermore, the implementation of revocable references and membranes ensures that there is at most one proxy for each original object. For this reason, each compartment has a consistent view where object references are unique. 
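Both aspects can be observed directly (a sketch of ours using only the standard API): two proxies of the same target carry three distinct identities, and \texttt{Proxy.revocable} produces the revocable references described above.

```javascript
// Opacity: distinct proxies for the same target are mutually unequal.
var target = {};
var p1 = new Proxy(target, {});
var p2 = new Proxy(target, {});
console.log(p1 === p2);      // false -- each proxy has its own identity
console.log(p1 === target);  // false -- a proxy never equals its target

// Revocable reference: after revoke() the proxy is detached from its
// target and every further operation on it throws a TypeError.
var ref = Proxy.revocable({ secret: 1 }, {});
console.log(ref.proxy.secret);  // 1
ref.revoke();
try {
  ref.proxy.secret;
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
```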
\subsection{Use Case: Contracts} \label{sec:use-case-contracts} Proxies implement contracts in Racket \cite{DBLP:conf/oopsla/StricklandTFF12} and in JavaScript \cite{Disney2011,Keil:2013:EDA:2508168.2508176}. Contracts impose restrictions that the programmer regards as preconditions for the correct execution of a program. For example, a contract may require a method to be called with a particular type or an object property to always contain positive numbers. During maintenance, the programmer may add contracts to a program as understanding improves. Clearly, the addition of a new contract must not change a program execution that respects the contract already. In this scenario, the program executes in a mix of original objects and proxy objects. Furthermore, there may be more than one proxy (implementing different contracts) for the same target. If introducing proxies affected the object identity, then some true comparisons would flip to false, thus changing the semantics. Consequently, the Racket implementation provides \emph{transparent} proxies \cite{DBLP:conf/oopsla/StricklandTFF12}, which are indistinguishable from their target object, recursively. \subsection{Assessment} \label{sec:assessment} Neither the opaque nor the transparent proxy implementation can be labeled as right or wrong without further qualification. Each is appropriate for a particular use case and leads to undesirable behavior in another use case. It is also clear that the behavior of equality is not something that should be left to the whim of the programmer. For example, equality on objects should be an equivalence relation, which means that the equality operations \texttt{==} and \texttt{===} must not be trapped \cite{DBLP:conf/ecoop/CutsemM13}. Thus, the current state of affairs in JavaScript is fully justified, but it is not well suited to implement contract systems. Hence, we explore some alternative designs that would suit both use cases.
\section{Alternative Designs} \paragraph{Proxy-aware equality} One way to obtain transparent proxies with an implementation of opaque proxies is to provide proxy-aware equality functions like \mbox{\texttt{Proxy.isEqual()}} and \mbox{\texttt{Proxy.isIdentical()}} to replace all uses of \texttt{==} and \texttt{===}, respectively, in an application program. This approach preserves the previous behavior and retains the possibility to distinguish proxies from target objects in library code implementing proxy abstractions. However, it would require the application code to be transformed (at run time to support \texttt{eval}), which is not feasible in an application like access control \cite{Keil:2013:EDA:2508168.2508176} that must work with unmodified foreign code. \paragraph{Transparent Proxies} Making proxies generally transparent makes it impossible to test whether a reference is a proxy or an original object. However, there are abstractions that require such a test. For example, our implementation of access permissions \cite{Keil:2013:EDA:2508168.2508176} extracts the current permission from a proxy to construct a new proxy with an updated permission. This improves the efficiency of the implementation, which would otherwise generate long chains of proxy objects. Thus, for implementing proxy abstractions it must be possible to break the transparency. \paragraph{More equality operators} Another possible solution would be to reinterpret the JavaScript equality operators \texttt{==} and \texttt{===} as proxy-transparent and introduce new variants, say, \texttt{:==:} and \texttt{:===:} for their opaque cousins. The former operators are supposed to be used in application code whereas the implementation of proxy abstractions could make use of the opaque operators where needed. No code transformation is required with this approach. However, it is not clear how to ensure that application code does not use the opaque operators. 
It is not even clear if it \emph{should not} use them. While proxy abstractions can be implemented, the distinction between application and library seems too rigid. Given both operations, application code can test if one object is a proxy for another: \begin{lstlisting} var isProxy = ((objA==objB) != (objA:==:objB)); \end{lstlisting} \paragraph{Trapping the equality operation} We already discussed that trapping the equality operation is not appropriate. However, there is a twist that enables modifying the equality without destroying its properties. Essentially, the handler is extended with a boolean trap: \begin{verbatim} isTransparent : function () -> boolean \end{verbatim} If the handler's trap returns false or if it is not present, the associated proxy behaves opaquely; otherwise it behaves transparently. The implementation is an extension of the equality comparison in the VM. Before testing reference identity as the last step in a comparison of two objects, the equality comparison calls a new internal \texttt{GetEqualityObject} method. For a standard object, this method returns its receiver. For a proxy object, if \texttt{isTransparent()} on the handler returns false, then \texttt{GetEqualityObject} returns the reference to the current object. Otherwise, it recursively invokes \texttt{GetEqualityObject} on the proxy's target. For consistency, the \texttt{GetEqualityObject} method also needs to be called in other computations that depend on object identity, for instance the WeakMap abstraction provided by some JavaScript implementations. This design enables both scenarios described in Sections~\ref{sec:use-case-access-control} and~\ref{sec:use-case-contracts} by configuring the handler appropriately. It also guarantees that equality is an equivalence relation in application code that does not have access to the handlers. To implement proxy-based abstractions, it may be necessary to temporarily make proxies opaque.
But opaqueness can be obtained by reconfiguring the handler in the library code, analogous to the implementation of revocable references. To maintain consistency at the application level, it may be necessary to restrict modifications to this configuration to a certain scope, analogously to dynamic variables \cite{DBLP:conf/pldi/HansonP01}. \section{Conclusion} We have shown that neither the transparent nor the opaque implementation of proxies is appropriate for all use cases. We discuss several amendments and propose a flexible solution that enables applications requiring transparency as well as opacity. We are currently implementing this solution in a JavaScript VM and expect to report results soon. \bibliographystyle{abbrvnat}
\section{Introduction} Let $G$ be a split semisimple linear algebraic group over a field $F$. The purpose of the present paper is to relate three different topics: the {\em geometry} of twisted $G$-flag varieties, the theory of {\em cohomological invariants} of $G$ and the {\em representation theory} of $G$. \medskip As for the first, let $U/G$ be a {\em classifying space} of $G$ in the sense of Totaro, that is, $U$ is an open $G$-invariant subset in some representation of $G$ with $U(F)\neq \emptyset$ and $U\to U/G$ is a $G$-torsor. Consider the generic fiber $\gU$ of $U$ over $U/G$. It is a $G$-torsor over the quotient field $K$ of $U/G$ called the {\em versal} $G$-torsor \cite[Ch.~I,~\S 5]{GMS}. We denote by $\gX$ the respective flag variety $\gU/B$ over $K$, where $B$ is a Borel subgroup of $G$, and call it the {\em versal} flag. The variety $\gX$ can be viewed as the `most twisted' form of the `most complicated' $G$-flag variety and, hence, is the most natural object to study. In particular, understanding its geometry via studying the {\em Chow group} $\CH(\gX)$ of algebraic cycles modulo the rational equivalence relation leads to understanding the geometry of all other $G$-flag varieties. Recall that the group $\CH(X)$ of a twisted flag variety $X$ has been a subject of intensive investigation for decades: it started with fundamental results by Grothendieck, Demazure, and Bernstein-Gelfand-Gelfand in the 1970s describing its {\em free part}, was inspired by its close connections to the motivic cohomology discovered in the 1990s by Voevodsky, and continued with numerous results by Karpenko, Peyre, and many others (including the authors of the present paper) estimating its {\em torsion part}. \medskip Our second ingredient, the theory of cohomological invariants, has been mainly inspired by the works of J.-P.~Serre and M.~Rost. Given a field extension $L/F$ and a positive integer $d$ we consider the Galois cohomology group $H^{d+1}(L,\Q/\Z(d))$ denoted by $H^{d+1}(L,d)$.
Following~\cite[Ch.~II,~\S 1]{GMS} a degree $d$ {\em cohomological invariant} is a natural transformation of functors \[ a\colon H^1(\,\text{---}\,,G)\to H^d(\,\text{---}\,,d-1) \] on the category of field extensions over $F$. We denote the group of degree $d$ invariants by $\Inv^d(G,d-1)$. Following \cite[\S1]{Merkurjev} an invariant $a$ is called {\em normalized} if it sends the trivial torsor to zero. We denote the subgroup of normalized invariants by $\Inv^d(G,d-1)_\norm$. An invariant $a$ is called {\em decomposable} if it is given by a cup-product with an invariant of degree $2$. We denote the subgroup of decomposable degree $3$ invariants by $\Inv^3(G,2)_\dec$. The factor group $\Inv^3(G,2)_\norm/\Inv^3(G,2)_\dec$ is denoted by $\Inv^3(G,2)_\ind$ and is called the group of {\em indecomposable} invariants. This group has been studied by Garibaldi, Kahn, Levine, Rost, Serre and others in the simply-connected case and is closely related to the celebrated Rost-Serre invariant. In recent work \cite{Merkurjev} it was shown how to compute it in general using new results on motivic cohomology obtained in \cite{MerkurjevBG}. In particular, it was computed for all adjoint split groups in \cite{Merkurjev} and for split simple groups in \cite{BR}. \medskip As for the last ingredient, the representation theory of $G$, it established itself long ago, originating from the theory of Lie algebras in the middle of the last century. Recall that the classical {\em character map} identifies the representation ring of $G$ with the subring $\Z[T^*]^W$ of $W$-invariant elements of the integral group ring $\Z[T^*]$, where $W$ is the Weyl group which acts naturally on the group of characters $T^*$ of a split maximal torus $T$ of $G$, hence providing a straightforward link to classical Invariant theory.
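To fix ideas with the simplest case (a standard example, included here for the reader's convenience), consider $G=SL_2$: the character group of a maximal torus is $T^*=\Z\omega$ for the fundamental weight $\omega$, the Weyl group $W=\Z/2$ acts by $\omega\mapsto -\omega$, and the character map identifies the representation ring with

```latex
\[
\Z[T^*]^W \;=\; \Z\big[e^{\omega}+e^{-\omega}\big],
\]
```

the polynomial ring on the character of the two-dimensional standard representation.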
\medskip We glue all these ingredients together by introducing a new subgroup of {\em semi-decomposable} invariants $\Inv^3(G,2)_\sdec$ which consists of invariants $a \in \Inv^3(G,2)_\norm$ such that for every field extension $L/F$ and a $G$-torsor $Y$ over $L$ \[ a(Y)=\sum_{i\; \text{finite}} \phi_i\cup b_i(Y)\text{ for some }\phi_i\in L^{\times}\text{ and }b_i\in \Inv^2(G,1)_\norm. \] Roughly speaking, it consists of invariants that are `locally decomposable'. Observe that by definition $\Inv^3(G,2)_\dec \subseteq \Inv^3(G,2)_\sdec \subseteq \Inv^3(G,2)_\norm$. \medskip Our main result then says that \begin{thmm} Let $G$ be a split semisimple linear algebraic group over a field $F$ and let $\gX$ denote the associated versal flag. There is a short exact sequence \[ 0\to \tfrac{\Inv^3(G,2)_\sdec}{\Inv^3(G,2)_\dec} \to \Inv^3(G,2)_{\operatorname{ind}}\to\CH^2(\gX)_\tors\to 0, \] together with a group isomorphism $\tfrac{\Inv^3(G,2)_\sdec}{\Inv^3(G,2)_\dec}\simeq \tfrac{c_2((\ssI^W)\cap\Z[T^*])}{c_2(\Z[T^*]^W)}$, where $c_2$ is the second {\em Chern class} map (e.g. see \cite[\S 3c]{Merkurjev}) and $(\ssI^W)$ denotes the ideal generated by classes of augmented representations of the simply-connected cover of $G$. In addition, if $G$ is simple, then $\Inv^3(G,2)_\sdec= \Inv^3(G,2)_\dec$, so there is an isomorphism $\Inv^3(G,2)_\ind \simeq \CH^2(\gX)_\tors$. \end{thmm} Observe that if $G$ is not simple, then $\Inv^3(G,2)_\sdec$ does not necessarily coincide with $\Inv^3(G,2)_\dec$ (see Example~\ref{counterO4}). \medskip The nature of our result suggests that it should have applications in several directions, e.g. for cohomological invariants and algebraic cycles on twisted flag varieties. In the present paper we discuss only a few of them.
\medskip For instance, since the group $\Inv^3(G,2)_{\ind}$ has been computed for all simple split groups in \cite{Merkurjev} and \cite{BR}, it immediately gives computation of the torsion part of $\CH^2(\gX)$, hence, extending previous results by \cite{Ka98} and \cite{Peyre}. As another straightforward consequence, using the coincidence $\Inv^3(G,2)_\sdec=\Inv^3(G,2)_\dec$ we construct non-trivial cohomological classes for indecomposable central simple algebras, hence, answering questions posed in \cite{GPT09} and \cite{Demba13}. \medskip The paper is organized as follows: In section~\ref{direct} we construct an exact sequence relating the groups of invariants with the torsion part of the Chow group, hence, proving the first part of the theorem. In section~\ref{coincidence} we compute this exact sequence case by case for all simple groups, hence, proving the second part. In the last section we discuss applications. \section{Semi-decomposable invariants and the Chow group}\label{direct} Let $G$ be a split semisimple linear algebraic group over a field $F$. We fix a split maximal torus $T$ of $G$ and a Borel subgroup $B$ containing $T$. Consider the $T$-equivariant structure map $U\to \Spec F=pt$, where $U$ is the open $G$-invariant subset from the introduction. \subsubsection*{Characteristic maps and classes} By~\cite{EG} the induced pullback on $T$-equivariant Chow groups $\CH_T(pt) \to \CH_T(U)$ is an isomorphism. Since $\CH_T(U)\simeq \CH(U/T)\simeq \CH(U/B)$ and $\CH_T(pt)$ can be identified with the symmetric algebra $\Sym(T^*)$ of the group of characters of $T$, it gives an isomorphism \begin{equation}\label{CHisoU} \cs^{\tCH}\colon \Sym(T^*) \xrightarrow{\simeq} \CH(U/B). 
\end{equation} Similarly, by the homotopy invariance and localization property of the equivariant $K$-theory~\cite[Theorems~8 and~11]{MerkurjevK} the induced pull-back on $T$-equivariant $K$-groups gives a surjection \[ \cs^{\tK}\colon \Z[T^*]\twoheadrightarrow K_0(U/B), \] where the integral group ring $\Z[T^*]$ can be identified with $K_T(pt)$ and $K_0(U/B)\simeq K_0(U/T)\simeq K_T(U)$. \medskip Let $\tau^i(X)$ denote the $i$-th term of the {\em topological filtration} on $K_0$ of a smooth variety $X$ and let $\tau^{i/i+1}$, $i\ge 0$ denote its $i$-th subsequent quotient. Let $I$ denote the augmentation ideal of $\Z[T^*]$. \begin{lem}\label{isoquot} The map $\cs^{\tK}$ induces isomorphisms on subsequent quotients \[ I^i/I^{i+1} \xrightarrow{\simeq} \tau^{i/i+1}(U/B),\quad \text{ for }0\le i\le 2, \] and, its restriction $\cs^{\tK}\colon I^2 \to \tau^2(U/B)$ is surjective. \end{lem} \begin{proof} By \cite[Ex.~15.3.6]{Fu} the Chern class maps induce isomorphisms $c_i \colon \tau^{i/i+1}(X) \xrightarrow{\simeq} \CH^i(X)$ for $0\le i\le 2$. Since the Chern classes commute with pullbacks and $\Sym^i(T^*)\simeq I^i/I^{i+1}$, the isomorphisms then follow from~\eqref{CHisoU}. Finally, since $I/I^2 \simeq \tau^{1/2}(U/B)$, $\cs^{\tK}(x)\in \tau^2(U/B)$ implies that $x\in I^2$. \end{proof} Consider the natural inclusion of the versal flag $\imath\colon \gX=\gU/B \hookrightarrow U/B$. Since $\imath$ is a limit of open embeddings, by the {\em localization property} of Chow groups, the induced pullback gives surjections \[ \imath^{\tCH}\colon\CH^i(U/B)\twoheadrightarrow \CH^i(\gX). \] Moreover, the induced pullback in $K$-theory restricted to $\tau^i$ also gives surjections \[ \imath^{\tK}\colon \tau^i(U/B) \twoheadrightarrow \tau^i(\gX). 
\] Indeed, by definition $\tau^i(\gX)$ is generated by the classes $[\Os_Z]$ for closed subvarieties $Z$ of $\gX$ with $\operatorname{codim} Z\geqslant i$ and each $[\Os_Z]$ is the pullback of the element $[\Os_{\bar{Z}}]$ in $\tau^i(U/B)$, where $\bar{Z}$ is the closure of $Z$ inside $U/B$. \medskip Let $L$ be a splitting field of the versal torsor $\gU$. According to~\cite[Thm.~4.5]{GiZa} composites \[ \Sym(T^*) \xrightarrow{\cs^{\tCH}} \CH(U/B) \xrightarrow{\imath^{\tCH}} \CH(\gX) \xrightarrow{res} \CH(\gX_L)\quad \text{ and} \] \[ \Z[T^*] \xrightarrow{\cs^{\tK}} K_0(U/B) \xrightarrow{\imath^{\tK}} K_0(\gX) \xrightarrow{res} K_0(\gX_L) \] give the classical characteristic maps for the Chow groups and for the $K$-groups respectively (here we identify the rightmost groups with the Chow group and the $K$-group of the split flag $G/B$ respectively). Restricting the latter to $I^2$ and $\tau^2$ we obtain the map \[ \cs\colon I^2 \xrightarrow{\cs^{\tK}} \tau^2(U/B) \xrightarrow{\imath^{\tK}} \tau^2(\gX) \xrightarrow{res} \tau^2(\gX_L)=\tau^2(G/B). \] From this point on, we denote by $\cs^{\tCH}$, $\imath^{\tCH}$ and by $\cs^{\tK}$, $\imath^{\tK}$ the respective restrictions to $\Sym^2$, $\CH^2$ and $I^2$, $\tau^2$. Let $\Lambda$ be the weight lattice. Consider the integral group ring $\Z[\Lambda]$. Let $\ssI$ denote its augmentation ideal. The Weyl group $W$ acts naturally on $\Z[\Lambda]$. Let $(\ssI^W)$ denote the ideal generated by $W$-invariant elements in $\ssI$. \begin{lem}\label{l2} The kernel of the composite $I^2\stackrel{\cs^{\tK}}\to \tau^2(U/B)\stackrel{\imath^{\tK}}\to \tau^2(\gX)$ is $(\ssI^W)\cap I^2$. \end{lem} \begin{proof} By the results of Panin~\cite{Panin}, $K_0(\gX)$ is the direct sum of $K_0(A_i)$ for some central simple algebras $A_i$ over $K$. So $K_0(\gX)$ is a free abelian group and, hence, the restriction $\tau^2(\gX)\to \tau^2(\gX_L)$ is injective. 
Therefore, $\ker (\imath^{\tK}\circ\cs^{\tK})$ coincides with the kernel of the characteristic map $\cs \colon I^2\to \tau^2(G/B)$. Since $\cs$ factors as $I^2\hookrightarrow \ssI^2 \to \tau^2(G/B)$ and the kernel of the second map is $(\ssI^W)\cap \ssI^2$ by the theorem of Steinberg \cite{St}, we get $\ker \cs= (\ssI^W) \cap I^2$ (here we used that $\Z[T^*]\cap \ssI^i=I^i$). \end{proof} Consider the second Chern class map $c_2\colon \tau^2(U/B)\to\CH^2(U/B)$. \begin{lem}\label{l1} We have $c_2(\ker \imath^{\tK})=\ker \imath^{\tCH}$. \end{lem} \begin{proof} Consider the diagram \[ \xymatrix{ \tau^3(U/B)\ar[r]\ar[d]_{\imath^{\tK}|_{\tau^3}} & \tau^2(U/B)\ar[r]^-{c_2}\ar[d]^{\imath^{\tK}} & \CH^2(U/B)\ar[r]\ar[d]^{\imath^{\tCH}} & 0\\ \tau^3(\gX)\ar[r] & \tau^2(\gX)\ar[r]^-{c_2} & \CH^2(\gX)\ar[r] & 0\\ } \] Its vertical maps are surjective and the rows are exact by \cite[Ex.~15.3.6]{Fu}. The result then follows by a diagram chase. \end{proof} Consider the composite $\cs_2\colon I^2\xrightarrow{\cs^{\tK}} \tau^2(U/B)\xrightarrow{c_2}\CH^2(U/B)$. Observe that it coincides with the Chern class map defined in~\cite[\S 3c]{Merkurjev}. \begin{lem}\label{kernel} We have $\ker \imath^{\tCH}=\cs_2((\ssI^W)\cap I^2)$. \end{lem} \begin{proof} Since $\cs^{\tK}$ is surjective by lemma~\ref{isoquot}, by lemmas~\ref{l2} and~\ref{l1} we have \[ \cs_2((\ssI^W)\cap I^2)=\cs_2(\ker(\imath^{\tK}\circ\cs^{\tK}))=c_2(\ker \imath^{\tK})=\ker \imath^{\tCH}. \qedhere \] \end{proof} Following~\cite{Merkurjev} we denote \[ \Dec(G):=(\cs^{\tCH})^{-1}\circ \cs_2(\Z[T^*]^W) \] and we set \[ \SDec(G):=(\cs^{\tCH})^{-1}\circ \cs_2((\ssI^W)\cap\Z[T^*]). \] Since the action of $W$ on $\Lambda$ is essential, i.e. $\Lambda^W=0$, we have $(\ssI^W)\subseteq\ssI^2$. Therefore, for any $x\in(\ssI^W)$ there is $x'\in\Z[\Lambda]^W$ with $x\equiv x' \text{ mod }\ssI^3$ and, hence, $\cs_2(x)=\cs_2(x')$, where $\Z[\Lambda]^W$ is the subring of $W$-invariants.
So there are inclusions \begin{equation}\label{modulo3} \Dec(G)\subseteq\SDec(G)\subseteq \Sym^2(T^*)^W. \end{equation} \begin{lem} We have $\CH^2(\gX)\simeq \Sym^2(T^*)/\SDec(G)$. \end{lem} \begin{proof} By \eqref{CHisoU} and lemma~\ref{kernel} we have \[ \CH^2(\gX)\simeq \CH^2(U/B)/\cs_2((\ssI^W)\cap I^2) \simeq \Sym^2(T^*)/\SDec(G). \qedhere \] \end{proof} \begin{cor}\label{chowtwo} We have $\CH^2(\gX)_\tors\simeq \Sym^2(T^*)^W/\SDec(G)$. \end{cor} \begin{proof} By the lemma it remains to show that \[ (\Sym^2(T^*)/\SDec(G))_\tors=\Sym^2(T^*)^W/\SDec(G). \] Indeed, suppose that $x\in \Sym^2(T^*)$ and $nx\in \SDec(G)$. Then $nx$ lies in $\Sym^2(T^*)^W$ by~\eqref{modulo3}. So for every $w\in W$ we have $n(wx-x)=0$. Since $\Sym^2(T^*)$ has no torsion, $x\in \Sym^2(T^*)^W$. Conversely, let $x\in \Sym^2(T^*)^W$. Since the second Chern class map $c_2\colon I^2\to \Sym^2(T^*)$ is surjective, there is a preimage $y\in I^2$ of $x$. Take $y'=\sum_{w\in W}w\cdot y\in \Z[T^*]^W\subseteq(\ssI^W)\cap\Z[T^*]$. Since $c_2$ is $W$-equivariant and coincides with the composite $(\cs^{\tCH})^{-1}\circ \cs_2$, we get $(\cs^{\tCH})^{-1}\circ \cs_2(y')=|W|\cdot x\in\SDec(G)$. \end{proof} \subsubsection*{Cohomological Invariants} For a smooth $F$-scheme $X$ let $\sH^3(2)$ denote the Zariski sheaf on $X$ associated to a presheaf $W\mapsto H^3_{\text{\'et}}(W,\Q/\Z(2))$. The Bloch-Ogus-Gabber theorem~(see \cite{CTHK} and \cite{GrSu}) implies that its group of global sections $H^0_{\text{Zar}}(X,\sH^3(2))$ is a subgroup in $H^3(F(X),2)$. \medskip Consider the versal $G$-torsor $\gU$ over the quotient field $K$ of the classifying space $U/G$. By \cite[Thm.~A]{BM} the map $\Theta\colon \Inv^3(G,2)\to H^3(K,2)$ defined by $\Theta(a):=a(\gU)$ gives an inclusion \[ \Inv^3(G,2)\hookrightarrow H^0_{\text{Zar}}(U/G,\sH^3(2)). \] \begin{lem}\label{theta} We have $a(\gU)\in\ker[H^3(K,2)\to H^3(K(\gX),2)]$ for any $a\in \Inv^3(G,2)_\norm$. 
\end{lem} \begin{proof} Consider the composite $q\colon\Spec K(\gU)\to \gU \to U/G$. Observe that the pullback $q^*$ factors as \[ q^*\colon H^0_{\text{Zar}}(U/G,\sH^3(2))\to H^3(K(\gX),2)\to H^3(K(\gU),2). \] Since $\gU\to \gX$ is a $B$-torsor, $K(\gU)$ is purely transcendental over $K(\gX)$, so the last map of the composite is injective. Since $\gU$ becomes trivial over $K(\gU)$, we have $q^*(a(\gU))=a(\gU\times_K K(\gU))=0$. Therefore, $a(\gU)\in\ker[H^3(K,2)\to H^3(K(\gX),2)]$. \end{proof} \begin{lem}\label{trivact} Let $Y\to\Spec L$ be a $G$-torsor and $X=Y/B$. Let $L^{sep}$ denote the separable closure of $L$, $\Gamma_L$ its Galois group and $X^{sep}=X\times_L L^{sep}$. Then the $\Gamma_L$-action on $\Pic X^{sep}$ is trivial. \end{lem} \begin{proof} This follows from \cite[Prop.~2.2]{MT}. \end{proof} \subsubsection*{The Tits map} Consider a short exact sequence of $F$-group schemes \[ 1\to C\to\sG \stackrel{\pi}\to G\to 1. \] Given a character $\chi\in C^*$ of the center and a field extension $L/F$, consider the {\em Tits map}~\cite[\S4,5]{Ti71} \[ \alpha_{\chi,L}\colon H^1(L,G)\xrightarrow{\partial} H^2(L,C)\xrightarrow{\chi_*} H^2(L,\Gm), \] where $\partial$ is the connecting homomorphism (if $C$ is non-smooth, we replace it by $\Gm$ and $G$ by the respective push-out as in~\cite[II, Example 2.1]{GMS}). This gives rise to a cohomological invariant $\beta_\chi$ of degree two \[ \beta_{\chi}\colon Y\mapsto\alpha_{\chi,L}(Y)\quad\text{ for every }G\text{-torsor }Y\in H^1(L,G). \] \cite[Theorem~2.4]{BM} shows that the assignment $\chi\mapsto\beta_\chi$ provides an isomorphism $C^*\to \Inv^2(G,1)$. For a $G$-torsor $Y$ over $L$ there is an exact sequence studied in~\cite{Merkurjev95},~\cite{Peyre} and~\cite[II, Thm. 8.9]{GMS}: \[ A^1((Y/B)^{sep},K_2)^{\Gamma}\xrightarrow{\rho}\ker[H^3(L,2)\to H^3(L(Y/B),2)]\xrightarrow{\delta_Y}\CH^2(Y/B). \] The multiplication map $(L^{sep})^{\times}\otimes\CH^1((Y/B)^{sep})\to A^1((Y/B)^{sep},K_2)$ is an isomorphism.
By lemma~\ref{trivact} we obtain an exact sequence \begin{equation}\label{sequence} L^{\times}\otimes\Lambda\xrightarrow{\rho_Y}\ker[H^3(L,2)\to H^3(L(Y/B),2)]\xrightarrow{\delta_Y}\CH^2(Y/B). \end{equation} According to~\cite{Merkurjev95} the map $\rho_Y$ acts as follows: \[ \rho_Y(\phi\otimes\lambda)=\phi\cup\beta_{\overline{\lambda}}(Y),\quad\text{ where }\phi\in L^{\times},\; \lambda\in\Lambda\text{ and} \] $\overline{\lambda}$ denotes the image of $\lambda$ in $\Lambda/T^*=C^*$. \medskip We define the subgroup $\Inv^3(G,2)_\sdec$ of {\em semi-decomposable} invariants as follows: \begin{dfn} An invariant $a \in \Inv^3(G,2)_\norm$ is called semi-decomposable, if there is a finite set of invariants $b_i\in \Inv^2(G,1)_\norm$ such that for every field extension $L/F$ and every torsor $Y\in H^1(L,G)$ we have \[ a(Y)=\sum_i \phi_i\cup b_i(Y)\text{ for some }\phi_i\in L^{\times}. \] \end{dfn} Observe that, by definition, we have \[ \Inv^3(G,2)_\dec\subseteq \Inv^3(G,2)_\sdec\subseteq \Inv^3(G,2)_\norm \] and $a\in \Inv^3(G,2)_\sdec$ if and only if $a(Y)\in\im(\rho_Y)=\ker(\delta_Y)$ for every torsor $Y$. \begin{lem}\label{semi} We have $a\in\Inv^3(G,2)_\sdec$ if and only if $a(\gU)\in \ker(\delta_{\gU})$. \end{lem} \begin{proof} If $a$ is a semi-decomposable invariant, then $a(\gU)=\sum_{\chi\in C^*} \phi_\chi\cup\beta_\chi(\gU)$ lies in the image of $\rho_{\gU}$, hence, $\delta_{\gU}(a(\gU))=0$. On the other hand, let $a$ be a degree $3$ invariant such that $\delta_{\gU}(a(\gU))=0$ and let $Y$ be a $G$-torsor over a field extension $L/F$. We show that $\delta_Y(a(Y))=0$. We may assume that $L$ is infinite (replacing $L$ by $L(t)$ if needed). Choose a rational point $y\in (U/G)_L$ such that $Y$ is isomorphic to the fiber of $U\to U/G$ over $y$. Let $R$ be the completion of the regular local ring $\Os_{(U/G)_L,y}$ and let $\hat K$ be its quotient field. Then $R$ is a regular local ring with residue field $L$.
By the theorem of Grothendieck $Y_R$ is a pullback of $Y$ via the projection $\Spec R\to \Spec L(y)$. Then the $G$-torsors $Y_{\hat K}$ and $\gU_{\hat K}$ over $\hat K$ are isomorphic. We have \[ \delta_Y(a(Y))_{\hat K}=\delta_{Y_{\hat K}}(a(Y_{\hat K}))=\delta_{\gU_{\hat K}}(a(\gU_{\hat K}))=\delta_{\gU}(a(\gU))_{\hat K}=0. \] The restriction $\CH^2(Y/B)\to \CH^2((Y/B)_{\hat K})$ is injective, since it is split by the specialization map with respect to a system of local parameters of $R$. Therefore, $\delta_Y(a(Y))=0$ for every $Y$, hence, $a$ is semi-decomposable. \end{proof} Now we are ready to prove the first part of the main theorem: \begin{thm}\label{exactseq} The map $\delta_{\gU}$ induces a short exact sequence \[ 0\longrightarrow \tfrac{\Inv^3(G,2)_\sdec}{\Inv^3(G,2)_\dec} \longrightarrow \Inv^3(G,2)_\ind\xrightarrow{g}\CH^2(\gX)_\tors\longrightarrow 0, \] and there is a group isomorphism \[ \tfrac{\Inv^3(G,2)_\sdec}{\Inv^3(G,2)_\dec}\simeq \tfrac{c_2((\ssI^W)\cap\Z[T^*])}{c_2(\Z[T^*]^W)}. \] \end{thm} \begin{proof} Consider the following diagram. The rows are exact sequences given by~\cite[Thm.~1.1]{Kahn} and vertical arrows are pullbacks: \[ \xymatrix{ 0\ar[r] & \CH^2(U/G)\ar[r]\ar[d] & \mathbb{H}^4_{\text{\'et}}(U/G,\Z(2))\ar[r]\ar[d] & H^0_{\text{Zar}}(U/G,\sH^3(2))\ar[r]\ar[d] & 0\\ 0\ar[r] & \CH^2(U/B)\ar[r]\ar[d]_{\imath^{\tCH}} & \mathbb{H}^4_{\text{\'et}}(U/B,\Z(2))\ar[r]\ar[d] & H^0_\text{Zar}(U/B,\sH^3(2))\ar[r]\ar[d] & 0\\ 0\ar[r] & \CH^2(\gX)\ar[r] & \mathbb{H}^4_{\text{\'et}}(\gX,\Z(2))\ar[r] & H^0_\text{Zar}(\gX,\sH^3(2))\ar[r] & 0 } \] Since $F(U/B)=K(\gX)$, lemma~\ref{theta} implies that the composite \[ \Inv^3(G,2)_\norm\to H^0_\text{Zar}(U/G,\sH^3(2))\to H^0_\text{Zar}(U/B,\sH^3(2)) \] is zero. By the diagram chase there is a homomorphism \[ \Inv^3(G,2)_\norm\to\CH^2(U/B)/\CH^2(U/G). 
\] The map $\gX\to U/B\to U/G$ factors as $\gX\to\Spec K\to U/G$, hence the composite of pullbacks $\CH^2(U/G)\to\CH^2(U/B)\xrightarrow{\imath^{\tCH}}\CH^2(\gX)$ coincides with the composite $\CH^2(U/G)\to\CH^2(\Spec K)\to\CH^2(\gX)$ which is zero. This gives a homomorphism $g\colon \Inv^3(G,2)_\norm\to \CH^2(U/B)/\CH^2(U/G) \to \CH^2(\gX)$ which, by the proof of the theorem of B.~Kahn (see~\cite[II,\,\S 8, 8.1-8.5]{GMS}), factors through the map $\delta_{\gU}$ of~\eqref{sequence}. By~\cite[3.9]{Merkurjev} the map $g$ also factors through $\Inv^3(G,2)_\ind\xrightarrow{\simeq} \tfrac{\Sym^2(T^*)^W}{\Dec(G)}$. So there is a commutative diagram \begin{equation}\label{maindiagr} \xymatrix{ \Inv^3(G,2)_\norm\ar[r]^{g}\ar[d] & \CH^2(\gX)_\tors\\ \tfrac{\Sym^2(T^*)^W}{\Dec(G)}\ar[r] & \tfrac{\Sym^2(T^*)^W}{\SDec(G)}\ar[u]^{\simeq}_{\text{Cor. }\ref{chowtwo}} }. \end{equation} The bottom row of~\eqref{maindiagr} gives a short exact sequence \[ 0\to \tfrac{\SDec(G)}{\Dec(G)}\to \tfrac{\Sym^2(T^*)^W}{\Dec(G)}\to\CH^2(\gX)_\tors\to 0. \] Lemma~\ref{semi} and the exact sequence~\eqref{sequence} give an exact sequence \[ 0\to \Inv^3(G,2)_\sdec\to \Inv^3(G,2)_\norm\xrightarrow{g}\CH^2(\gX)_\tors. \] Combining these and factoring modulo $\Inv^3(G,2)_\dec$, we obtain an isomorphism \[ \tfrac{\Inv^3(G,2)_\sdec}{\Inv^3(G,2)_\dec}\cong \tfrac{\SDec(G)}{\Dec(G)}. \qedhere \] \end{proof} \section{Semi-decomposable invariants vs. decomposable invariants}\label{coincidence} In this section we prove case by case that the groups of decomposable $\Inv^3(G,2)_\dec$ and semi-decomposable $\Inv^3(G,2)_\sdec$ invariants coincide for all split simple~$G$, thus proving the second part of our main theorem. More precisely, we show that \[\Dec(G)=\cs_2(\Z[T^*]^W)=\cs_2((\ssI^W)\cap\Z[T^*])=\SDec(G)\text{ in }\Sym^2(T^*)^W\] (here we denote $(\cs^{\tCH})^{-1}\circ \cs_2$ simply by $\cs_2$).
Observe that in the simply connected case $\Sym^2(\Lambda)^W=\Z q$, where $q$ corresponds to the normalized Killing form from \cite[\S1B]{GaZa}, and $\Dec(G)\subseteq \SDec(G)\subseteq \SDec(\sG)=\Dec(\sG)$. \begin{ex}\label{counterO4} If $G$ is not simple, then $\Dec(G) \neq \SDec(G)$ in general. Indeed, consider a quadratic form $q$ of dimension 4 with trivial discriminant (it corresponds to an $\operatorname{\mathbf{SO}}_4$-torsor). According to \cite[Example~20.3]{GMS} there is an invariant given by $q\mapsto \alpha \cup \beta \cup \gamma$, where $\alpha$ is represented by $q$ and $\langle\!\langle \beta,\gamma\rangle\!\rangle=\langle \alpha\rangle q$ is the 2-Pfister form. By definition this invariant is semi-decomposable (this fact was pointed out to us by Vladimir Chernousov). Since it is non-trivial over an algebraic closure of $F$, it is not decomposable. \end{ex} \subsection{\it Adjoint groups of type $A_n$ ($n\ge 1$), $B_n$ ($n\geq 2$), $C_n$ ($n\ge 3$, $4\nmid n$), $D_n$ ($n\ge 5$, $4\nmid n$), $E_6$, $E_7$ and special orthogonal groups of type $D_n$ ($n\ge 4$)} \ \medskip For classical adjoint types we have $\Inv^3(G,2)_\norm=\Inv^3(G,2)_\dec$ by~\cite[\S 4b]{Merkurjev}, so we immediately obtain $\Inv^3(G,2)_\dec=\Inv^3(G,2)_\sdec$. For exceptional types by~\cite[p.135]{GMS} and~\cite[\S4b]{Merkurjev} we have $\Dec(G)=\Dec(\sG)=6\Z q$ for $E_6$ and $\Dec(G)=\Dec(\sG)=12\Z q$ for $E_7$. For special orthogonal groups $G=\operatorname{\mathbf{SO}}_{2n}$ by~\cite[\S15]{GMS} we have $\Dec(\operatorname{\mathbf{SO}}_{2n})=\Dec(\operatorname{\mathbf{Spin}}_{2n})=2\Z q$ (here $\tilde G=\operatorname{\mathbf{Spin}}_{2n}$), hence, $\Dec(G)=\SDec(G)$. \subsection{\it Non-adjoint groups of type $A_{n-1}$ ($n\ge 4$)}\label{nonadjA} \ \medskip Let $p$ be a prime integer and $G=\operatorname{\mathbf{SL}}_{p^s}/\gmu_{p^r}$ for some integers $s\geq r>0$. If $p$ is odd, we set $k=\min\{r, s-r\}$ and if $p=2$ we assume that $s\geq r+1$ and set $k=\min\{r, s-r-1\}$.
It is shown in \cite[\S4]{BR} that the group $\Inv^3(G,2)_\ind$ is cyclic of order $p^k$. On the other hand, by~\cite[Example~4.15]{Ka98} if $X$ is the Severi-Brauer variety of a generic algebra $A^{\mathrm{gen}}$, then $\CH^2(X)_\tors$ is also a cyclic group of order $p^k$. The canonical morphism $\gX \to X$ is an iterated projective bundle, hence, $\CH^2(\gX)_\tors\simeq \CH^2(X)_\tors$ is a cyclic group of order $p^k$. It follows from the exact sequence of theorem~\ref{exactseq} that $\Inv^3(G,2)_\sdec=\Inv^3(G,2)_\dec$. \medskip More generally, let $G=\operatorname{\mathbf{SL}}_n/\gmu_m$, where $m\mid n$. Let $p^s$ and $p^r$ be the highest powers of a prime integer $p$ dividing $n$ and $m$ respectively. Consider the canonical homomorphism $H=\operatorname{\mathbf{SL}}_{p^s}/\boldsymbol{\mu}_{p^r}\to G$. We claim that it induces an isomorphism between the $p$-primary component of $\Inv^3(G,2)_\ind$ and the group $\Inv^3(H,2)_\ind$. \medskip Indeed, let $H'=\operatorname{\mathbf{SL}}_{n}/\gmu_{p^r}$. It follows from \cite[Theorem 4.1]{BR} that the natural homomorphism $\Inv^3(H',2)_\ind\to \Inv^3(H,2)_\ind$ is an isomorphism. Thus, it suffices to show that the pull-back map for the canonical surjective homomorphism $H'\to G$ with kernel $\gmu_t$, where $t:=m/p^r$ is relatively prime to $p$, induces an isomorphism between the $p$-primary component of $\Inv^3(G,2)_\ind$ and $\Inv^3(H',2)_\ind$. Let $\Lambda\subset \Lambda'$ be the character groups of maximal tori of $G$ and $H'$ respectively. The factor group $\Lambda'/\Lambda$ is isomorphic to $\gmu_t^*=\Z/t\Z$. Since the functor $\Lambda\mapsto \tfrac{\Sym^2(\Lambda)^W}{\Dec(\Lambda)}$ is quadratic in $\Lambda$, the kernel and the cokernel of the homomorphism \[ \Inv^3(G,2)_\ind=\tfrac{\Sym^2(\Lambda)^W}{\Dec(\Lambda)}\to \tfrac{\Sym^2(\Lambda')^W}{\Dec(\Lambda')}=\Inv^3(H',2)_\ind \] are killed by $t^2$. As $t$ is relatively prime to $p$, the claim follows. 
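This prime-by-prime description can be cross-checked numerically: assembling the local exponents $k=\min\{r,s-r\}$ (for odd $p$) and $k=\min\{r,s-r-1\}$ (for $p=2$) over all primes dividing $n$ reproduces the closed form for the order of $\Inv^3(G,2)_\ind$ recorded in~\eqref{an} below. The following Python script is a sanity check only, with helper names of our own choosing; it is not part of the argument.

```python
from math import gcd

def vp(p, x):
    """The p-adic valuation of x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

def prime_divisors(x):
    """Prime divisors of x by trial division."""
    ps, p = [], 2
    while p * p <= x:
        if x % p == 0:
            ps.append(p)
            while x % p == 0:
                x //= p
        p += 1
    if x > 1:
        ps.append(x)
    return ps

def order_ind(n, m):
    """Order of Inv^3(SL_n/mu_m, 2)_ind assembled prime by prime: the
    p-primary component is cyclic of order p^k with k = min(r, s-r) for
    odd p and k = min(r, s-r-1) for p = 2, where s = v_p(n), r = v_p(m)."""
    order = 1
    for p in prime_divisors(n):
        s, r = vp(p, n), vp(p, m)
        if r == 0 or (p == 2 and s == r):
            continue  # trivial p-primary component
        order *= p ** (min(r, s - r) if p != 2 else min(r, s - r - 1))
    return order

# Compare with the closed form: the order equals gcd(n/m, m) if n/m is odd,
# and gcd(n/(2m), m) if n/m is even.
for n in range(2, 200):
    for m in range(2, n + 1):
        if n % m:
            continue
        q = n // m
        closed = gcd(q, m) if q % 2 else gcd(q // 2, m)
        assert order_ind(n, m) == closed, (n, m)
```

The loop confirms that the two descriptions agree for all $m\mid n$ in the tested range, including primes $p$ dividing $n$ but not $m$, whose contribution is trivial on both sides.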
\medskip Since the $p$-primary component of $\CH^2(\gX)_\tors$ and the group $\CH^2(\gX_H)_\tors$ are isomorphic by \cite[Prop.~1.3]{Ka98} (here $\gX_H$ denotes the versal flag for $H$), we obtain that $\Inv^3(G,2)_\ind\simeq \CH^2(\gX)_\tors$ and, therefore, by the exact sequence of theorem~\ref{exactseq}, $\Inv^3(G,2)_\sdec=\Inv^3(G,2)_\dec$. \subsection{\it Adjoint groups of type $C_{4m}$ $(m\geq 1)$}\label{Cn} \ \medskip By~\cite[\S4b]{Merkurjev} we have $\Sym^2(T^*)^W=\Z q$ and $\Dec(G)=\cs_2(\Z [T^*]^W)=2\Z q$. We want to show that $\cs_2(x)\in 2\Z q$ for every element $x\in(\ssI^W)\cap\Z[T^*]$. \medskip Given a weight $\chi\in \Lambda$ we denote by $W(\chi)$ its $W$-orbit and we define $\widehat{e^{\chi}}:=\sum_{\lambda\in W(\chi)}(1-e^{-\lambda})$. By definition, the ideal $(\ssI^W)$ is generated by the elements $\{\widehat{e^{\omega_i}}\}_{i=1,\ldots,4m}$ corresponding to the fundamental weights $\omega_i$. An element $x$ can be written as \begin{equation}\label{alphaid} x=\sum_{i=1}^{4m}(n_i+\delta_i)\widehat{e^{\omega_i}},\quad\text{ where }n_i\in \Z\text{ and }\delta_i\in\ssI. \end{equation} Similarly to \cite[\S3]{Za}, consider the ring homomorphism $f\colon\Z[\Lambda]\to\Z[\Lambda/T^*]$ induced by taking the quotient $\Lambda\to\Lambda/T^*=C^*$. We have $\Lambda/T^*\simeq \Z/2\Z$ and $\Z[\Lambda/T^*]=\Z[y]/(y^2-2y)$, where $y=f(1-e^{\omega_1})$. Observe that $C^*$ is $W$-invariant. \medskip By definition, $f(I)=0$, so $f(x)=0$. Since $\omega_i\in T^*$ for all even $i$, $f(\widehat{e^{\omega_i}})=d_iy$ for all odd $i$, where $d_i=2^i\tbinom{4m}{i}$ is the cardinality of $W(\omega_i)$, and $f(\delta_i)=m_iy$ for some $m_i\in\Z$ (as $f(\ssI)=(y)$), we get \[ 0=f(x)=\sum_{i \text{ is odd}}\bigl(n_i d_i y+ m_id_i y^2\bigr)=\Bigl(\sum_{i \text{ is odd}}(n_i + 2 m_i)d_i\Bigr)y, \] which implies that $\sum_{i \text{ is odd}}(n_i + 2 m_i)d_i=0$. Dividing this sum by the g.c.d.
of all $d_i$'s and taking the result modulo 2 (here one uses the fact that $\tfrac{n}{\gcd(n,k)}\mid \tbinom{n}{k}$), we obtain that the coefficient $n_1$ in the presentation~\eqref{alphaid} has to be even. \medskip We now compute $\cs_2(x)$. Let $\Lambda=\Z e_1\oplus\ldots\oplus\Z e_{4m}$. The root lattice is given by $T^*=\{\sum a_ie_i\mid \sum a_i \text{ is even}\}$ and \[ \omega_1=e_1,\;\omega_2=e_1+e_2,\;\omega_3=e_1+e_2+e_3,\ldots,\omega_{4m}=e_1+\ldots +e_{4m}. \] By \cite[\S 2]{GaZa} we have $\cs_2(x)=\sum_{i=1}^{4m} n_i \cs_2(\widehat{e^{\omega_i}})$ and $\cs_2(\widehat{e^{\omega_i}})=N(\widehat{e^{\omega_i}})q$, where \[ N(\sum a_je^{\lambda_j})=\tfrac{1}{2}\sum a_j\langle\lambda_j,\alpha^{\vee}\rangle^2\text{ for a fixed long root }\alpha. \] If we set $\alpha=2e_{4m}$, then $\langle \lambda,\alpha^{\vee}\rangle =(\lambda,e_{4m})$ and \[ N(\widehat{e^{\omega_i}})=\tfrac{1}{2}\sum_{\lambda\in W(\omega_i)}\langle\lambda,\alpha^{\vee}\rangle^2=\tfrac{1}{2}\sum_{{\lambda\in W(\omega_i)}}(\lambda,e_{4m})^2=2^{i-1}\tbinom{4m-1}{i-1}, \] which is even for $i\ge 2$ (here we used the fact that the Weyl group acts by permutations and sign changes on $\{e_1,\ldots,e_{4m}\}$). Since $n_1$ is even, we get that $\cs_2(x)\in 2\Z q$. \subsection{\it Half-spin and adjoint groups of type $D_{2m}$ ($m\ge 2$)} \ \medskip We first treat the half-spin group $G=\operatorname{\mathbf{HSpin}}_{4m}$. As in the $C_n$-case all even fundamental weights are in $T^*$ and all odd fundamental weights correspond to a generator of $\Lambda/T^*\simeq \Z/2\Z$. Therefore, the map $f\colon \Z[\Lambda] \to \Z[\Lambda/T^*]$ applied to the element $x=\sum_{i=1}^{2m}(n_i+\delta_i)\widehat{e^{\omega_i}}$ gives the same equality $\sum_{i \text{ is odd}}(n_i + 2 m_i)d_i=0$, where $m_i\in \Z$, $d_i=2^i\binom{2m}{i}$ for $i\le 2m-2$ and $d_{2m-1}=2^{2m-1}$. Dividing by the g.c.d. of the $d_i$'s and taking the result modulo $2$ we obtain that $n_1$ is even if $m>2$ and $n_1+n_3$ is even if $m=2$.
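Both parity arguments rest on the orbit cardinalities $d_i$ (and, for type $C_{4m}$, on the values $N(\widehat{e^{\omega_i}})$ computed above); for small rank these can be confirmed by a brute-force enumeration of the Weyl orbits. The following Python script is a sanity check only and is not part of the proof; the function names are ours.

```python
from math import comb

def orbit(v, even_signs_only):
    """BFS orbit of the weight v under the Weyl group generated by adjacent
    transpositions and one sign change: all sign changes (type C) or
    sign changes of two coordinates at a time (type D)."""
    n = len(v)
    seen, frontier = {v}, [v]
    while frontier:
        nxt = []
        for w in frontier:
            imgs = []
            for i in range(n - 1):          # adjacent transpositions
                u = list(w)
                u[i], u[i + 1] = u[i + 1], u[i]
                imgs.append(tuple(u))
            u = list(w)                      # sign-change generator
            u[-1] = -u[-1]
            if even_signs_only:              # type D: flip two signs
                u[-2] = -u[-2]
            imgs.append(tuple(u))
            for u in imgs:
                if u not in seen:
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return seen

# Type C_4 (m = 1): omega_i = e_1 + ... + e_i.
for i in range(1, 5):
    O = orbit(tuple(1 if j < i else 0 for j in range(4)), False)
    assert len(O) == 2 ** i * comb(4, i)                # d_i = 2^i binom(4m, i)
    assert sum(w[-1] ** 2 for w in O) // 2 == 2 ** (i - 1) * comb(3, i - 1)  # N

# Half-spin cases of rank 2m = 4 and 6 (coordinates doubled so that the
# half-integral spin weights stay integral).
assert len(orbit((2, 0, 0, 0), True)) == 8              # d_1 = 2 binom(4, 1)
assert len(orbit((1, 1, 1, -1), True)) == 8             # d_3 = 2^3
assert len(orbit((2, 0, 0, 0, 0, 0), True)) == 12       # d_1 = 2 binom(6, 1)
assert len(orbit((1, 1, 1, 1, 1, -1), True)) == 32      # d_5 = 2^5
```

The enumeration uses only a generating set of each Weyl group (adjacent transpositions together with a single sign-change element), which keeps the search space small.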
\medskip We now compute $\cs_2(x)$. Take a long root $\alpha=e_{2m-1}+e_{2m}$. Then $(\alpha,\alpha)=2$ and $\langle \lambda,\alpha^{\vee}\rangle=(\lambda,e_{2m-1})+(\lambda,e_{2m})$. For $i\le 2m-2$ we have \[ N(\widehat{e^{\omega_i}})=\sum_{\lambda\in W(\omega_i)}\bigl((\lambda,e_{2m})^2+(\lambda,e_{2m})(\lambda,e_{2m-1})\bigr)=\sum_{\lambda\in W(\omega_i)}(\lambda,e_{2m})^2= 2^i\tbinom{2m-1}{i-1}, \] and $N(\widehat{e^{\omega_{2m-1}}})=N(\widehat{e^{\omega_{2m}}})=2^{2m-3}$ (here we used the fact that $W$ acts by permutations and even sign changes). \medskip Finally, if $m>2$, we obtain $\cs_2(x)=\sum_{i}n_i N(\widehat{e^{\omega_i}})q \in 4\Z q$, where $4\Z q = \Dec(\operatorname{\mathbf{HSpin}}_{4m})$ by~\cite[\S5]{BR}. If $m=2$, then $N(\widehat{e^{\omega_{4}}})=2$, hence, $\cs_2(x) \in 2\Z q$, where $2\Z q=\Dec(\operatorname{\mathbf{HSpin}}_{8})$ again by~\cite[\S5]{BR}. \medskip If $m>2$, for the adjoint group $G=\operatorname{\mathbf{PGO}}_{4m}$ by \cite[\S4]{Merkurjev} and the respective half-spin case we obtain \[ 4\Z q=\Dec(\operatorname{\mathbf{PGO}}_{4m})\subseteq \SDec(\operatorname{\mathbf{PGO}}_{4m})\subseteq \SDec(\operatorname{\mathbf{HSpin}}_{4m})=4\Z q. \] If $G=\operatorname{\mathbf{PGO}}_8$, direct computations (see \cite{Ne}) show that $\Dec(G)=\SDec(G)$. \section{Applications} Observe that $H^3(F,\Z/n\Z(2))$ is the $n$-torsion subgroup of $H^3(F,2)$ for every $n$ and $H^3(F,\Z/n\Z(2))= H^3(F,\mu_n^{\otimes 2})$ if $\charac(F)$ does not divide $n$. \subsection{\it Type $C_n$}\label{typec} Let $G=\operatorname{\mathbf{PGSp}}_{2n}$ be the split projective symplectic group. For a field extension $L/F$, the set $H^1(L,G)$ is identified with the set of isomorphism classes of central simple $L$-algebras $A$ of degree $2n$ with a symplectic involution $\sigma$ (see \cite[\S 29]{Book}). A decomposable invariant of $G$ takes an algebra with involution $(A,\sigma)$ to the cup-product $\phi \cup [A]$ for a fixed element $\phi\in F^\times$.
In particular, decomposable invariants of $G$ are independent of the involution. \medskip Suppose that $4\mid n$. It is shown in \cite[Theorem 4.6]{Merkurjev} that the group of indecomposable invariants $\Inv^3(G,2)_\ind$ is cyclic of order $2$. If $\charac(F)\neq 2$, Garibaldi, Parimala and Tignol constructed in \cite[Theorem A]{GPT09} a degree $3$ cohomological invariant $\Delta_{2n}$ of the group $G$ with coefficients in $\Z/2\Z$. They showed that if $a\in A$ is a $\sigma$-symmetric element of $A^\times$ and $\sigma'=\operatorname{Int}(a)\circ\sigma$, then \begin{equation}\label{conj} \Delta_{2n}(A,\sigma')=\Delta_{2n}(A,\sigma)+\operatorname{Nrp}(a)\cup[A], \end{equation} where $\operatorname{Nrp}$ is the pfaffian norm. In particular, $\Delta_{2n}$ does depend on the involution and, therefore, is not decomposable. Hence the class of $\Delta_{2n}$ in $\Inv^3(G,2)_\ind$ is nontrivial. \medskip It follows from \eqref{conj} that the class $\Delta_{2n}(A)\in \tfrac{H^3(L, \Z/2\Z)}{L^\times\cup [A]}$ of $\Delta_{2n}(A,\sigma)$ depends only on the $L$-algebra $A$ of degree $2n$ and exponent $2$ but not on the involution. Since $\Delta_{2n}(A,\sigma)$ is not decomposable, it is not semi-decomposable by our main theorem. The latter implies that $\Delta_{2n}(A)$ is {\em nontrivial generically}, i.e. there is a central simple algebra $A$ of degree $2n$ and exponent $2$ over a field extension of $F$ such that $\Delta_{2n}(A)\neq 0$. This answers a question raised in \cite{GPT09}. (See \cite[Remark 4.10]{Demba13} for the case $n=4$.)
Given a field extension $L/F$ the natural surjection $G\to \operatorname{\mathbf{PGL}}_{n}$ yields a map \[ \alpha:H^1(L,G)\to H^1(L,\operatorname{\mathbf{PGL}}_{n})\subset \operatorname{Br}(L) \] taking a $G$-torsor $Y$ over $L$ to the class of a central simple algebra $A(Y)$ of degree $n$ and exponent dividing $m$. By definition, a decomposable invariant of $G$ is of the form $Y\mapsto \phi \cup [A(Y)]$ for a fixed $\phi \in F^\times$. \medskip The map $\operatorname{\mathbf{SL}}_m\to \operatorname{\mathbf{SL}}_n$ taking a matrix $M$ to the tensor product $M\otimes I_{n/m}$ with the identity matrix, gives rise to a group homomorphism $\operatorname{\mathbf{PGL}}_m\to G$. The induced homomorphism (see \cite[Theorem 4.4]{Merkurjev}) \[ \varphi:\Inv^3(G,2)_\norm\to \Inv^3(\operatorname{\mathbf{PGL}}_m,2)_\norm=F^\times/F^{\times m} \] is a splitting of the inclusion homomorphism \[ F^\times/F^{\times m}=\Inv^3(G,2)_\dec\hookrightarrow \Inv^3(G,2)_\norm. \] Collecting descriptions of $p$-primary components of $\Inv^3(G,2)_\ind$ (see~\ref{nonadjA}) we get \begin{equation}\label{an} \Inv^3(G,2)_\ind\simeq \tfrac{m}{k}\Z q/m\Z q,\quad\text{ where } k=\left\{ \begin{array}{ll} \gcd(\frac{n}{m},m), & \hbox{if $\frac{n}{m}$ is odd;} \\ \gcd(\frac{n}{2m},m), & \hbox{if $\frac{n}{m}$ is even.} \end{array} \right. \end{equation} Let $\Delta_{n,m}$ be a (unique) invariant in $\Inv^3(G,2)_\norm$ such that its class in $\Inv^3(G,2)_\ind$ corresponds to $\tfrac{m}{k}q+m\Z q$ and $\varphi(\Delta_{n,m})=0$. Note that the order of $\Delta_{n,m}$ in $\Inv^3(G,2)_\norm$ is equal to $k$. Therefore, $\Delta_{n,m}$ takes values in $H^3(-,\Z/k\Z(2))\subset H^3(-,2)$. Fix a $G$-torsor $Y$ over $F$ and consider the twists $^Y\! G$ and $\operatorname{\mathbf{SL}}_1(A(Y))$ by $Y$ of the groups $G$ and $\operatorname{\mathbf{SL}}_{n}$ respectively. The group $F^\times$ acts transitively on the fiber over $A(Y)$ of the map $\alpha$. 
If $\phi\in F^\times$, we write $^\phi Y$ for the corresponding element in the fiber. By~\eqref{an} the image of $\Delta_{n,m}$ under the natural composition \[ \Inv^3(G,2)_\norm\simeq \Inv^3(^Y\! G,2)_\norm\longrightarrow \Inv^3(\operatorname{\mathbf{SL}}_1(A(Y)),2)_\norm \] is a $\tfrac{m}{k}$-multiple of the Rost invariant. Recall that the Rost invariant takes the class of $\phi$ in $F^\times/\Nrd(A(Y)^\times)=H^1(F,\operatorname{\mathbf{SL}}_1(A(Y)))$ to the cup-product $\phi\cup [A(Y)]\in H^3(F,2)$. So we get \begin{equation}\label{diff} \Delta_{n,m}(^\phi Y)-\Delta_{n,m}(Y)\in F^\times\cup \tfrac{m}{k}[A(Y)]. \end{equation} \medskip Given a central simple $L$-algebra $A$ of degree $n$ and exponent dividing $m$, we define an element \[ \Delta_{n,m}(A)\in \tfrac{H^3(L,\Z/k\Z(2))}{L^\times\cup \tfrac{m}{k}[A]} \] as follows. Choose a $G$-torsor $Y$ over $L$ with $A(Y)\simeq A$ and set $\Delta_{n,m}(A)$ to be the class of $\Delta_{n,m}(Y)$ in the factor group. It follows from \eqref{diff} that $\Delta_{n,m}(A)$ is independent of the choice of $Y$. \begin{prop} Let $A$ be a central simple $L$-algebra of degree $n$ and exponent dividing $m$. Then the order of $\Delta_{n,m}(A)$ divides $k$. If $A$ is a generic algebra, then the order of $\Delta_{n,m}(A)$ is equal to $k$. \end{prop} \begin{proof} If $k'$ is a proper divisor of $k$, then the multiple $k'\Delta_{n,m}$ is not decomposable. By our theorem $k'\Delta_{n,m}$ is not semi-decomposable and, hence, $k'\Delta_{n,m}(A)\neq 0$. \end{proof} \begin{ex} Let $A$ be a central simple $F$-algebra of degree $2n$ divisible by $8$ and exponent $2$. Choose a symplectic involution $\sigma$ on $A$. The group $\operatorname{\mathbf{PGSp}}_{2n}$ is a subgroup of $\operatorname{\mathbf{SL}}_{2n}/\gmu_2$, hence, if $\charac(F)\neq 2$, the restriction of the invariant $\Delta_{2n,2}$ on $\operatorname{\mathbf{PGSp}}_{2n}$ is the invariant $\Delta_{2n}(A,\sigma)$ considered in subsection~\ref{typec}. 
It follows that $\Delta_{2n,2}(A)=\Delta_{2n}(A)$ in the group $H^3(F, \Z/2\Z)/(F^\times\cup [A])$. \end{ex} The class $\Delta_{n,m}$ is trivial on decomposable algebras: \begin{prop}\label{decomp} Let $n_1, n_2, m$ be positive integers such that $m$ divides $n_1$ and $n_2$. Let $A_1$ and $A_2$ be two central simple algebras over $F$ of degree $n_1$ and $n_2$ respectively and of exponent dividing $m$. Then $\Delta_{n_1n_2,m}(A_1\otimes_F A_2)=0$. \end{prop} \begin{proof} The tensor product homomorphism $\operatorname{\mathbf{SL}}_{n_1} \times \operatorname{\mathbf{SL}}_{n_2} \to \operatorname{\mathbf{SL}}_{n_1n_2}$ yields a homomorphism \[ \Sym^2(T_{n_1n_2}^*)\to \Sym^2(T_{n_1}^*)\oplus \Sym^2(T_{n_2}^*), \] where $T_{n_1}$, $T_{n_2}$ and $T_{n_1n_2}$ are maximal tori of the respective groups. The image of the canonical Weyl-invariant generator $q_{n_1n_2}$ of $\Sym^2(T_{n_1n_2}^*)$ is equal to $n_2 q_{n_1} + n_1 q_{n_2}$. Since $n_1$ and $n_2$ are divisible by $m$, the pull-back of the invariant $\Delta_{n_1n_2,m}$ under the homomorphism $(\operatorname{\mathbf{SL}}_{n_1}/\gmu_m) \times (\operatorname{\mathbf{SL}}_{n_2}/\gmu_m) \to \operatorname{\mathbf{SL}}_{n_1n_2}/\gmu_m$ is trivial. \end{proof} \section{Appendix} The aim of this section is to verify that the groups of decomposable and semi-decomposable cohomological invariants of the group $\operatorname{\mathbf{PGO}}_8$ coincide. Following the notation of~\cite{Merkurjev} we have \begin{itemize} \item $\Lambda=\Z e_1\oplus\ldots\oplus\Z e_4+\Z e$ where $e=\frac{1}{2}(e_1+e_2+e_3+e_4)$, \item $T^*$ consists of all $\sum a_ie_i$ with $\sum a_i$ even.
\item $S^2(\Lambda)^W=\Z q$, where $q=\frac{1}{2}(e_1^2+e_2^2+e_3^2+e_4^2)$, \item Fundamental weights are \[\omega_1=e_1,\; \omega_2=e_1+e_2,\; \omega_3=e-e_4,\;\omega_4=e.\] \item Simple roots are \[\lambda_1=e_1-e_2,\;\lambda_2=e_2-e_3,\;\lambda_3=e_3-e_4,\;\lambda_4=e_3+e_4.\] \item The Weyl group $W=S_4\rightthreetimes (C_2)^3$ consists of permutations of the $e_i$ and sign changes of an even number of variables. \item The $W$-orbits of the fundamental weights are given by: $W(\omega_1)=\{e_1,e_2,e_3,e_4,-e_1,-e_2,-e_3,-e_4\}$, $W(\omega_2)=\{\pm(e_1+e_2),\ldots,\pm(e_3+e_4),\pm(e_1-e_2),\ldots,\pm(e_3-e_4)\}$, $W(\omega_3)=\{\pm(e-e_1),\pm(e-e_2),\pm(e-e_3),\pm(e-e_4)\}$ and $W(\omega_4)=\{e,-e,e-e_1-e_2,e-e_1-e_3,e-e_1-e_4,e-e_2-e_3,e-e_2-e_4,e-e_3-e_4\}$. \end{itemize} \medskip Let $\widehat{e^{\omega_i}}$ denote the sum $\sum_{\lambda\in W(\omega_i)}(e^{\lambda}-1)$. Then the ideal $\widetilde{I}^W$ is generated by $\widehat{e^{\omega_i}}$, $i=1,\ldots, 4$. Note that $\Z[\Lambda]$ is the Laurent polynomial ring $\Z[e^{\pm e_1},\ldots, e^{\pm e_4},e^{\pm e}]$, so we represent it as a quotient of a polynomial ring: \[\Z[\Lambda]=\Z[u_1,v_1,\ldots, u_4,v_4,u_5,v_5]/(u_1v_1-1,\ldots, u_5v_5-1,\, u_1u_2u_3u_4-u_5^2),\] where $u_i= e^{e_i}$, $v_i= e^{-e_i}$ for $i=1,\ldots, 4$ and $u_5= e^e$, $v_5=e^{-e}$. Let $r_i=e^{\lambda_i}$ and $s_i=e^{-\lambda_i}$ for the simple roots $\lambda_i$.
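The orbit data above can be double-checked by brute force: the four orbits have cardinalities $8$, $24$, $8$ and $8$, which are exactly the constants subtracted in the generators of $\pi^{-1}(\widetilde{I}^W)$ displayed below. The following Python script is a sanity check only and plays no role in the Maple computation.

```python
from itertools import product

def orbit(v):
    """BFS orbit of v under W(D_4): coordinate permutations and sign changes
    of an even number of coordinates. Coordinates are doubled, so that
    e = (e_1+e_2+e_3+e_4)/2 becomes the integral vector (1, 1, 1, 1)."""
    seen, frontier = {v}, [v]
    while frontier:
        nxt = []
        for w in frontier:
            imgs = []
            for i in range(3):               # adjacent transpositions
                u = list(w)
                u[i], u[i + 1] = u[i + 1], u[i]
                imgs.append(tuple(u))
            u = list(w)                       # flip the last two signs
            u[2], u[3] = -u[2], -u[3]
            imgs.append(tuple(u))
            for u in imgs:
                if u not in seen:
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return seen

# |W(omega_i)| equals the constant subtracted in the corresponding generator:
assert len(orbit((2, 0, 0, 0))) == 8      # omega_1 = e_1
assert len(orbit((2, 2, 0, 0))) == 24     # omega_2 = e_1 + e_2
assert len(orbit((1, 1, 1, -1))) == 8     # omega_3 = e - e_4
assert len(orbit((1, 1, 1, 1))) == 8      # omega_4 = e

# W(omega_3) consists exactly of the vectors ±(e - e_i), i.e. of all sign
# vectors with an odd number of minus signs:
expected = {v for v in product((1, -1), repeat=4) if v.count(-1) % 2 == 1}
assert orbit((1, 1, 1, -1)) == expected
```

Doubling the coordinates avoids rational arithmetic while preserving the orbit structure, since the $W$-action is linear.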
We have an exact sequence \[ 0\to J_1\to \Z [u_1,\ldots, u_5,v_1,\ldots, v_5,r_1,\ldots, r_4,s_1,\ldots ,s_4]\stackrel{\pi}\to\Z[\Lambda]\to 0,\] where \[J_1=(u_1v_1-1,\ldots, u_5v_5-1,r_1s_1-1,\ldots, r_4s_4-1, \] \[ r_1-u_1v_2,r_2-u_2v_3,r_3-u_3v_4,r_4-u_3u_4, u_1u_2u_3u_4-u_5^2)\] and the preimage \[ \pi^{-1}(\widetilde{I}^W)=J_1+(u_1+\ldots+u_4+v_1+\ldots+v_4-8, \] \[ u_1u_2+\ldots+u_3u_4+v_1v_2+\ldots+v_3v_4+u_1v_2+\ldots+u_3v_4+v_1u_2+\ldots+v_3u_4-24, \] \[ u_5v_1+u_5v_2+u_5v_3+u_5v_4+u_1v_5+u_2v_5+u_3v_5+u_4v_5-8, \] \[ u_5+v_5+u_5v_1v_2+u_5v_1v_3+u_5v_1v_4+u_5v_2v_3+u_5v_2v_4+u_5v_3v_4-8). \] Note that $\Z[T^*]=\pi(\Z[r_1,\ldots, r_4,s_1,\ldots, s_4])$ and \[\widetilde{I}^W\cap\Z[T^*]=\pi(\pi^{-1}(\widetilde{I}^W)\cap\Z[r_i,s_i]).\] Now we use Maple to compute a basis of the bigger intersection \[\pi^{-1}(\widetilde{I}^W+\widetilde{I}^4)\cap\Z[r_i,s_i].\] We will prove that \[ c_2(\pi(x))\in 4q\Z=\Dec(G) \] for any $x$ in the basis of this intersection, where $\Dec(G)=4q\Z$ by~\cite[\S 4b, p.~19]{Merkurjev}. Since $c_2(\widetilde{I}^3)=0$, it is enough to consider the generators that are not contained in $\pi^{-1}(\widetilde{I}^3)$.
We compute the basis of the intersection using the following code: { \tt with(PolynomialIdeals):\\ \# relations ideal: J1 := $\langle u_1v_1-1, u_2v_2-1, u_3v_3-1, u_4v_4-1, u_5v_5-1, r_1s_1-1, r_2s_2-1, r_3s_3-1, r_4s_4-1, u_1u_2u_3u_4-u_5^2, r_1-u_1v_2, r_2-u_2v_3, r_3-u_3v_4, r_4-u_3u_4\rangle$\\ \# preimage of $\widetilde{I}^W$: J2 := $\langle u_1+u_2+u_3+u_4+v_1+v_2+v_3+v_4-8, u_5v_1+u_5v_2+u_5v_3+u_5v_4+u_1v_5+u_2v_5+u_3v_5+u_4v_5-8, u_5+v_5+u_5v_1v_2+u_5v_1v_3+u_5v_1v_4+u_5v_2v_3+u_5v_2v_4+u_5v_3v_4-8, -24+u_1v_2+u_2v_3+u_3v_4+u_3u_4+u_1u_2+u_1v_3+u_1v_4+u_2v_4+v_1u_2+u_1u_3+u_1u_4+u_2u_3+u_2u_4+v_1v_2+v_1v_3+v_1v_4+v_2v_3+v_2v_4+v_3v_4+v_1u_3+v_1u_4+v_2u_3+v_2u_4+v_3u_4\rangle$\\ \# preimage of the augmentation ideal:\\ augL := $\langle u[1]-1, v[1]-1, u[2]-1, v[2]-1, u[3]-1, v[3]-1, u[4]-1, v[4]-1, u[5]-1, v[5]-1\rangle$;\\ \# preimages of the square,cube and fourth power of the augmentation ideal: squarL := Add(Multiply(augL, augL), J1);\\ cubL := Add(Multiply(augL, Multiply(augL, augL)), J1);\\ quadL := Add(Multiply(augL, cubL), J1); \\ \# preimage of $\widetilde{I}^W+\widetilde{I}^4$:\\ J := Add(Add(J1, J2), quadL)\\ \# intersection with the subring $\Z[r_i,s_i]$:\\ K := EliminationIdeal(J, {r[1], r[2], r[3], r[4], s[1], s[2], s[3], s[4]}):\\ \# basis of the intersection\\ Gen := IdealInfo[Generators](K):\\ \# print out the elements of the basis that do not lie in $\pi^{-1}(\widetilde{I}^3)$\\ for x in Gen do \\ if not(IdealMembership(x, cubL)) then print(x) end if \\ end do\\ } This gives a list of 18 polynomials: \begin{itemize} \item $ -34 - r_1 s_4 - 2 r_2 s_4 + 6 r_1 + 6 s_1 + 10 r_2 + 10 s_2 + 4 r_3 + 4 s_3 + 4 r_4 + 4 s_4 - 2 s_2 r_4 - s_1 r_4 - 2 s_2 r_3 - s_1 r_3 + r_4 r_3 - 2 s_3 r_2 - 2 s_1 r_2 - s_3 r_1 - 2 s_2 r_1 + s_3 s_4$; \item $ 38 + r_1 s_4 + 2 r_2 s_4 + r_3 s_4 - 6 r_1 - 6 s_1 - 10 r_2 - 10 s_2 - 6 r_3 - 6 s_3 - 6 r_4 - 6 s_4 + s_3 r_4 + 2 s_2 r_4 + s_1 r_4 + 2 s_2 r_3 + s_1 r_3 + 2 s_3 r_2 + 2 s_1 r_2 + s_3 r_1 + 2 s_2 r_1$; \item 
$-37 - r_1 s_4 - 2 r_2 s_4 - 3 r_3 s_4 + 6 r_1 + 6 s_1 + 10 r_2 + 10 s_2 + 7 r_3 + 5 s_3 + 4 r_4 + 7 s_4 - 2 s_2 r_4 - s_1 r_4 - 2 s_2 r_3 - r^2_3 - s_1 r_3 + r_4 r_3 - 2 s_3 r_2 - 2 s_1 r_2 - s_3 r_1 - 2 s_2 r_1 + r^2_3 s_4$; \item $35 + r_1 s_4 + 2 r_2 s_4 + r_3 s_4 - 6 r_1 - 6 s_1 - 10 r_2 - 10 s_2 - 3 r_3 - 5 s_3 - 3 r_4 - 6 s_4 + 2 s_2 r_4 + s_1 r_4 + 2 s_2 r_3 - r^2_3 + s_1 r_3 - 3 r_4 r_3 + 2 s_3 r_2 + 2 s_1 r_2 + s_3 r_1 + 2 s_2 r_1 + r^2_3 r_4$; \item $-118 - 6 r_2 s_4 + 14 r_1 + 9 s_1 + 74 r_2 + 46 s_2 + 14 r_3 + 9 s_3 + 14 r_4 + 9 s_4 + r^2_4 - 10 s_2 r_4 - 10 s_2 r_3 + r^2_3 + r_4 r_3 - 6 s_3 r_2 - 6 s_1 r_2 - 8 r_4 r_2 - 8 r_3 r_2 + r^2_1 - 8 r^2_2 - 10 s_2 r_1 + r_4 r_1 + r_3 r_1 - 8 r_2 r_1 + 3 r_1 r_3 r_4$; \item $-92 + 28 r_1 + 18 s_1 + 52 r_2 + 26 s_2 + 34 r_3 + 12 s_3 - 8 r_4 + 2 r^2_4 - 2 s_2 r_4 - 8 s_2 r_3 - r^2_3 - 9 s_1 r_3 + 2 r_4 r_3 - 6 s_3 r_2 - 12 s_1 r_2 + 2 r_4 r_2 - 16 r_3 r_2 - r^2_1 - 4 r^2_2 - 3 s_3 r_1 - 8 s_2 r_1 + 2 r_4 r_1 - 4 r_3 r_1 - 10 r_2 r_1 + 6 r_2 r_3 s_1$; \item $-92 + 34 r_1 + 12 s_1 + 46 r_2 + 32 s_2 + 34 r_3 + 12 s_3 - 8 r_4 + 2 r^2_4 - 2 s_2 r_4 - 14 s_2 r_3 - r^2_3 - 3 s_1 r_3 + 2 r_4 r_3 - 6 s_3 r_2 - 6 s_1 r_2 + 2 r_4 r_2 - 10 r_3 r_2 - r^2_1 - 4 r^2_2 - 3 s_3 r_1 - 14 s_2 r_1 + 2 r_4 r_1 - 10 r_3 r_1 - 10 r_2 r_1 + 6 r_1 r_3 s_2$; \item $-92 + 34 r_1 + 12 s_1 + 52 r_2 + 26 s_2 + 28 r_3 + 18 s_3 - 8 r_4 + 2 r^2_4 - 2 s_2 r_4 - 8 s_2 r_3 - r^2_3 - 3 s_1 r_3 + 2 r_4 r_3 - 12 s_3 r_2 - 6 s_1 r_2 + 2 r_4 r_2 - 10 r_3 r_2 - r^2_1 - 4 r^2_2 - 9 s_3 r_1 - 8 s_2 r_1 + 2 r_4 r_1 - 4 r_3 r_1 - 16 r_2 r_1 + 6 r_1 r_2 s_3$; \item $-92 - 9 r_1 s_4 - 12 r_2 s_4 + 34 r_1 + 12 s_1 + 52 r_2 + 26 s_2 - 8 r_3 + 28 r_4 + 18 s_4 - r^2_4 - 8 s_2 r_4 - 3 s_1 r_4 - 2 s_2 r_3 + 2 r^2_3 + 2 r_4 r_3 - 6 s_1 r_2 - 10 r_4 r_2 + 2 r_3 r_2 - r_1^2 - 4 r^2_2 - 8 s_2 r_1 - 4 r_4 r_1 + 2 r_3 r_1 - 16 r_2 r_1 + 6 r_1 r_2 s_4$; \item $-92 - 3 r_1 s_4 - 6 r_2 s_4 + 28 r_1 + 18 s_1 + 52 r_2 + 26 s_2 - 8 r_3 + 34 r_4 + 12 s_4 - r^2_4 - 8 s_2 r_4 - 9 
s_1 r_4 - 2 s_2 r_3 + 2 r^2_3 + 2 r_4 r_3 - 12 s_1 r_2 - 16 r_4 r_2 + 2 r_3 r_2 - r^2_1 - 4 r^2_2 - 8 s_2 r_1 - 4 r_4 r_1 + 2 r_3 r_1 - 10 r_2 r_1 + 6 r_2 r_4 s_1$; \item $-92 - 3 r_1 s_4 - 6 r_2 s_4 + 34 r_1 + 12 s_1 + 46 r_2 + 32 s_2 - 8 r_3 + 34 r_4 + 12 s_4 - r^2_4 - 14 s_2 r_4 - 3 s_1 r_4 - 2 s_2 r_3 + 2 r^2_3 + 2 r_4 r_3 - 6 s_1 r_2 - 10 r_4 r_2 + 2 r_3 r_2 - r^2_1 - 4 r^2_2 - 14 s_2 r_1 - 10 r_4 r_1 + 2 r_3 r_1 - 10 r_2 r_1 + 6 r_1 r_4 s_2$; \item $80 - 22 r_1 - 12 s_1 - 40 r_2 - 26 s_2 - 22 r_3 - 12 s_3 + 8 r_4 - 2 r^2_4 + 2 s_2 r_4 + 8 s_2 r_3 + r^2_3 + 3 s_1 r_3 - 2 r_4 r_3 + 6 s_3 r_2 + 6 s_1 r_2 - 2 r_4 r_2 + 4 r_3 r_2 + r^2_1 + 4 r^2_2 + 3 s_3 r_1 + 8 s_2 r_1 - 2 r_4 r_1 - 2 r_3 r_1 + 4 r_2 r_1 + 6 r_1 r_2 r_3$; \item $80 + 3 r_1 s_4 + 6 r_2 s_4 - 22 r_1 - 12 s_1 - 40 r_2 - 26 s_2 + 8 r_3 - 22 r_4 - 12 s_4 + r^2_4 + 8 s_2 r_4 + 3 s_1 r_4 + 2 s_2 r_3 - 2 r^2_3 - 2 r_4 r_3 + 6 s_1 r_2 + 4 r_4 r_2 - 2 r_3 r_2 + r^2_1 + 4 r^2_2 + 8 s_2 r_1 - 2 r_4 r_1 - 2 r_3 r_1 + 4 r_2 r_1 + 6 r_1 r_2 r_4$; \item $-34 - 3 r_1 s_4 + 26 r_1 + 18 s_1 - 10 r_2 + 4 s_2 - 4 r_3 + 6 s_3 - 4 r_4 + 6 s_4 + r^2_4 + 2 s_2 r_4 - 3 s_1 r_4 + 2 s_2 r_3 + r^2_3 - 3 s_1 r_3 - 2 r_4 r_3 - 6 s_1 r_2 + 4 r_4 r_2 + 4 r_3 r_2 - 2 r^2_1 + 4 r^2_2 - 3 s_3 r_1 - 4 s_2 r_1 - 2 r_4 r_1 - 2 r_3 r_1 - 2 r_2 r_1 + 6 r_2 r_3 r_4$; \item $22 + 3 r_1 s_4 - 26 r_1 - 18 s_1 + 16 r_2 + 2 s_2 + 16 r_3 - 6 s_3 + 16 r_4 - 6 s_4 - r^2_4 - 8 s_2 r_4 + 3 s_1 r_4 - 8 s_2 r_3 - r^2_3 + 3 s_1 r_3 - 10 r_4 r_3 + 6 s_1 r_2 - 10 r_4 r_2 - 10 r_3 r_2 + 2 r^2_1 - 4 r^2_2 + 3 s_3 r_1 + 4 s_2 r_1 + 2 r_4 r_1 + 2 r_3 r_1 + 2 r_2 r_1 + 6 r_3 r_4 s_2$; \item $112 - 3 r_1 s_4 + 6 r_2 s_4 - 3 r_3 s_4 - 8 r_1 - 9 s_1 - 74 r_2 - 46 s_2 - 8 r_3 - 9 s_3 - 11 r_4 - 6 s_4 - r^2_4 + 10 s_2 r_4 + 10 s_2 r_3 - r^2_3 - 4 r_4 r_3 + 6 s_3 r_2 + 6 s_1 r_2 + 8 r_4 r_2 + 8 r_3 r_2 - r^2_1 + 8 r^2_2 + 10 s_2 r_1 - 4 r_4 r_1 - 7 r_3 r_1 + 8 r_2 r_1 + 3 r_1 r_3 s_4$; \item $112 + 6 r_2 s_4 - 11 r_1 - 6 s_1 - 74 r_2 - 46 s_2 - 8 r_3 - 9 s_3 - 8 
r_4 - 9 s_4 - r^2_4 + 10 s_2 r_4 - 3 s_1 r_4 + 10 s_2 r_3 - r^2_3 - 3 s_1 r_3 - 7 r_4 r_3 + 6 s_3 r_2 + 6 s_1 r_2 + 8 r_4 r_2 + 8 r_3 r_2 - r^2_1 + 8 r^2_2 + 10 s_2 r_1 - 4 r_4 r_1 - 4 r_3 r_1 + 8 r_2 r_1 + 3 r_3 r_4 s_1$; \item $22 + 3 r_1 s_4 - 6 r_2 s_4 - 6 r_3 s_4 - 26 r_1 - 18 s_1 + 22 r_2 - 4 s_2 + 16 r_3 - 6 s_3 + 10 r_4 - r^2_4 - 2 s_2 r_4 + 3 s_1 r_4 - 2 s_2 r_3 - r^2_3 + 3 s_1 r_3 - 4 r_4 r_3 + 6 s_1 r_2 - 10 r_4 r_2 - 16 r_3 r_2 + 2 r^2_1 - 4 r^2_2 + 3 s_3 r_1 + 4 s_2 r_1 + 2 r_4 r_1 + 2 r_3 r_1 + 2 r_2 r_1 + 6 r_2 r_3 s_4$; \end{itemize} Take the first element of the list \[y=-34 - r_1 s_4 - 2 r_2 s_4 + 6 r_1 + 6 s_1 + 10 r_2 + 10 s_2 + 4 r_3 + 4 s_3 + 4 r_4 + \] \[ 4 s_4 - 2 s_2 r_4 - s_1 r_4 - 2 s_2 r_3 - s_1 r_3 + r_4 r_3 - 2 s_3 r_2 - 2 s_1 r_2 - s_3 r_1 - 2 s_2 r_1 + s_3 s_4.\] We compute $c_2(y)$ as the second term in the power series expansion of \[ (1+(l_1-l_4)t)^{-1}(1+(l_2+l_4)t)^{-2}(1+l_1t)^6(1-l_1t)^6(1+l_2t)^{10}\] \[ (1-l_2t)^{10}(1+l_3t)^4(1-l_3t)^4(1+l_4t)^4(1-l_4t)^4(1+(l_2+l_4)t)^{-2}\] \[ (1+(-l_1+l_4)t)^{-1}(1+(-l_2+l_3)t)^{-2}(1+(-l_1+l_3)t)^{-1}(1+(l_4+l_3)t)\] \[ (1+(-l_3+l_2)t)^{-2}(1+(-l_1+l_2)t)^{-2}(1+(-l_3+l_1)t)^{-1}(1+(-l_2+l_1)t)^{-2} (1+(-l_3+l_4)t) \] where $l_1 = e_1-e_2, l_2 = e_2-e_3, l_3 = e_3-e_4, l_4 = e_3+e_4$. Computation shows that $c_2(y)=-2(e_1^2+e_2^2+e_3^2+e_4^2)=-4q$. As the last step we show that for every generator $x$ that does not lie in $\pi^{-1}(\widetilde{I}^3)$, at least one of $x-y$, $x+y$, $x-2y$ or $x+2y$ lies in $\pi^{-1}(\widetilde{I}^3)$. To do this we use the following Maple code: { \tt for x in Gen do \\ if not IdealMembership(x, cubL) and IdealMembership(x, squarL) and \\ (IdealMembership(x-y, cubL) or IdealMembership(x+y, cubL) or \\ IdealMembership(x+2*y, cubL) or IdealMembership(x-2*y, cubL) ) \\ then print(x) end if \\ end do } It returns the same list of 18 polynomials, so we see that for every generator $x$ we have $c_2(x)\in 4q\Z,$ so $\SDec(G)\subseteq\Dec(G).$
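For reference, the second term of such a product is extracted using the standard expansion: if the weights $a_i$ occur with (possibly negative) multiplicities $m_i$, then \[ \prod_i(1+a_it)^{m_i}=1+\Big(\sum_i m_ia_i\Big)t+\frac{1}{2}\Big[\Big(\sum_i m_ia_i\Big)^2-\sum_i m_ia_i^2\Big]t^2+O(t^3), \] which follows by expanding $\exp\big(\sum_i m_i\log(1+a_it)\big)$.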
\section{Introduction} The radio emission of core-dominated, radio-loud Active Galactic Nuclei (AGN) is synchrotron radiation generated in the relativistic jets that emerge from the nucleus of the galaxy, presumably along the rotational axis of a central supermassive black hole. One important source of information about the physical conditions in the radio-emitting regions is the distribution of the spectral index $\alpha$ over the source ($S_{\nu}\propto\nu^{\alpha}$, where $S_{\nu}$ is the source flux at frequency $\nu$). The core region is typically observed to be at least partially optically thick, with a nearly flat or inverted spectrum, while the jets are optically thin, with negative spectral indices. The spectrum may also flatten in regions of the jet in which there is re-acceleration of electrons or low-frequency absorption (e.g. Gabuzda, Pushkarev \& Garnich 2000; Gabuzda, G\'omez \& Agudo 2001). Synchrotron radiation can be highly linearly polarised, to $\simeq 75\%$ in the case of a uniform magnetic field (Pacholczyk 1970), and linear polarisation observations can yield unique information about the orientation and degree of order of the magnetic field in the synchrotron source, as well as the distribution of thermal electrons and the magnetic-field geometry in the immediate vicinity of the AGN (e.g., via Faraday rotation of the plane of polarisation). The compact radio emission of such AGN can be probed with high resolution using Very Long Baseline Interferometry (VLBI). The radio telescopes in the interferometric array can be separated by hundreds or thousands of kilometres, making it infeasible to physically link (synchronise) them electronically, and high-accuracy timing signals must be recorded together with the data, so that the signals obtained at different antennas can be accurately synchronised during correlation. 
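As a minimal illustration (hypothetical numbers, not taken from the data analysed below), the two-point spectral index implied by $S_{\nu}\propto\nu^{\alpha}$ can be estimated from flux measurements at two frequencies:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    # Two-point spectral index alpha, defined by S_nu proportional to nu**alpha:
    # alpha = log(S1/S2) / log(nu1/nu2).
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# A steep (optically thin) spectrum: the flux halves when the frequency doubles.
alpha = spectral_index(10.0, 5.0, 5.0, 10.0)   # -> -1.0
```

A positive index from the same formula would indicate an optically thick (inverted) spectrum, as in the core region discussed above.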
In practice, the amplitudes and, especially, phases of the measured complex visibility data unavoidably contain unknown errors, which can conveniently be expressed via antenna-based complex gain factors: \begin{eqnarray*} V^{obs}_{ij} & = & G_iG_j^*V^{true}_{ij} \end{eqnarray*} \noindent where $V^{obs}_{ij}$ and $V^{true}_{ij}$ are the observed and true visibility functions on the baseline between antennas $i$ and $j$, and $G_i$, $G_j$ are the complex gain factors for antennas $i$ and $j$. The complex gains $G$ must be determined and removed from the data in order to achieve the highest-quality images possible for the radio-telescope array used. This is normally done iteratively, via alternate application of self-calibration (Fort \& Yee 1976, Cotton 1979, Readhead \& Wilkinson 1978, Readhead et al. 1980, Cornwell \& Wilkinson 1981) and a deconvolution method, such as CLEAN (H\"ogbom 1974). \section{Matching and Aligning Images at Different Frequencies} Since AGN are variable, multi-frequency data to be compared must correspond to epochs separated by time intervals appreciably less than the timescales for variability of the source. When preparing data for techniques involving comparison of multi-frequency data, various instrumental differences between the datasets must also be taken into account. VLBI datasets at different frequencies will have different angular resolutions and sensitivities to structures on various scales due to the different baseline coverages of the observations. One approach to reducing these differences when comparing multi-frequency images is to match the baseline coverages at different frequencies by giving relatively low weights to the longest baselines at higher frequencies and to the shortest baselines at lower frequencies, e.g., via tapering of the visibility data. Alternatively, the images can be obtained without such weighting, but then all be convolved with the same CLEAN beam before comparison. 
The size of the CLEAN beam to be used in this case is ordinarily roughly equal to the central lobe of the dirty beam for the lowest frequency. The accuracy to which the positions of the CLEAN components are known is limited by the resolution of the observing system, and convolving with a beam that was much smaller than the central lobe of the dirty beam would result in a ``superresolved'' image that may not be reliable. Iterative imaging via self-calibration and a deconvolution algorithm such as CLEAN is generally quite effective, but the absolute information about the coordinates of the source on the sky is lost during phase self-calibration, which essentially places the centre of gravity of the radio brightness distribution at the phase centre, which has coordinates $(0,0)$. Because most radio-loud AGN are highly core-dominated, directly comparing multi-frequency self-calibrated VLBI images of AGN essentially amounts to aligning these images on the observed VLBI core, which usually coincides very closely with the peak of the radio brightness distribution. However, the standard theory of extragalactic radio sources (e.g. Blandford \& K\"onigl 1979) predicts a frequency-dependent shift in the location of the VLBI core due to opacity effects in the core region. Reabsorption of synchrotron radiation takes place in the ultra-compact region near the central engine of an AGN, a mechanism which is more efficient at low frequencies. Consequently, the peak brightness appears further along the jet axis in lower frequency observations. Thus the alignment of multi-frequency images on their VLBI core results in a misalignment between images of different observing frequencies. 
This prediction is supported by observation: the frequency-dependent shift in core position has been measured for several quasars and micro-quasars (see Lobanov 1998 and references therein), and the effect is discussed in terms of its impact on high-precision astrometry by Fey (2000), Charlot (2002), Ros (2005) and Boboltz (2006). Lobanov (1998) explains in detail how this frequency-dependent shift depends on physical conditions near the central engine. It is thus necessary to correctly align images prior to applying multi-frequency data-analysis techniques. This can be achieved in one of two ways. The first is by phase-referenced observations, first employed by Marcaide \& Shapiro (1984), in which a nearby source (or sources) is observed along with the target source. The reference source would ideally be a point source, to eliminate structure effects including those discussed above, but this is rarely possible since most sources show extended structure on the milliarcsecond scales available with VLBI. The position of the target source relative to the reference source can then be determined. The second method involves aligning images according to the positions of optically thin jet components (i.e. components optically thin to synchrotron radiation, so that their positions are not affected by absorption effects such as those occurring in the core) that are present in both images to be compared. This can be non-trivial, particularly if the source has a complicated structure, but has been employed effectively by several authors, such as Paragi et al. (2000), who used this method to determine the radio core shift in 1823+568. This difficulty in aligning complex images without distinct optically thin components detected at all frequencies to be compared was the main motivation for us to consider alternative methods of image alignment. 
\section{Image Alignment via Cross-Correlation} The method we have developed to align multi-frequency images is based on the cross-correlation technique widely used in many fields, including biomedical signal processing and imaging (Panescu 1993, Frank \& McEwan 1992) and remote sensing (Hartl 1976). Cross-correlation provides a measure of how closely correlated (i.e. how alike) two functions are. The use of this measure in image alignment gives an objective, quantitative assessment of how well two images are aligned, and does not depend on the presence of very compact features. By applying different shifts between images and calculating the cross-correlation coefficient for each shift, it is possible to determine which shift results in the best alignment. The normalised cross-correlation coefficient we used (see, e.g., Dunn \& Clark 1974) is defined in two dimensions as: \begin{equation} r_{x y}=\frac{\sum_{i=1}^{n}\sum_{j=1}^{n}(I_{\nu 1,i j}-\overline{I_{\nu 1}}) (I_{\nu 2, i j}-\overline{I_{\nu 2}})}{\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} (I_{\nu 1, i j}-\overline{I_{\nu 1}})^{2}\sum_{i=1}^{n}\sum_{j=1}^{n} (I_{\nu 2, i j}-\overline{I_{\nu 2}})^{2}}} \label{corrcoef} \end{equation} \noindent where $n$ is the number of pixels in each direction in the two-dimensional images to be compared, $I_{\nu 1, ij}$ and $I_{\nu 2, ij}$ are the intensities for the maps at frequencies $\nu 1$ and $\nu 2$ at pixel $(RA_i, Dec_j)$, and $\overline{I_{\nu 1}}$ and $\overline{I_{\nu 2}}$ are the mean values of these two intensities over the region analysed. Although the source emission at a given location varies with frequency, all the radiation observed at radio frequencies is believed to be synchrotron radiation from the same population of relativistic electrons. 
Therefore the optically thin total-intensity ($I$) structures should be well-correlated despite local changes in the spectrum of the synchrotron emission due to variations in the local magnetic field, interaction with the surrounding medium and other effects. Using this (reasonable) assumption, it is possible to determine the shift to be applied between maps by comparing the structures of optically thin regions of the source at different frequencies. The highest correlation between dual-frequency images should therefore be obtained when the areas being compared correspond to the same physical region of the sky. This method has the advantage that it takes account of the optically thin emission from the \emph{entire} source, not just isolated compact components. \section{Implementation of the Cross-Correlation Technique} The most widely used software for the calibration, imaging and analysis of radio interferometric data is the National Radio Astronomy Observatory AIPS (Astronomical Image Processing System) package. We have written a C program to implement the cross-correlation technique, which is external to but compatible with the NRAO AIPS package. The input to the program are two images in the format produced by the AIPS task IMTXT, and the program outputs files which can be imported back into AIPS using the task FETCH. IMTXT allows the user to export an AIPS image as a text file containing an array of floats representing the map values at each pixel location. Conversely, FETCH imports a text file in the format exported by IMTXT as a map file which can be displayed by any AIPS tasks that work with images (e.g. KNTR, TVLOD). 
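Equation (1) is simply a Pearson correlation coefficient taken over pixels. As an illustrative sketch (the actual program is written in C, as described above), it can be expressed in a few lines of numpy:

```python
import numpy as np

def norm_cross_corr(map1, map2):
    # Normalised cross-correlation coefficient between two equal-sized
    # 2-D intensity maps, following equation (1): subtract each map's mean,
    # then divide the covariance by the product of the standard deviations.
    a = map1 - map1.mean()
    b = map2 - map2.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
```

The coefficient is $+1$ for identical maps (and is unaffected by an overall scaling or offset in intensity, which is convenient when the two maps are at different frequencies), and falls toward zero as the structures decorrelate.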
\begin{figure} \begin{picture}(360,360) \newsavebox{\map} \savebox{\map}(288,288)[bl]{ \multiput(0,0)(0,288){2}{\multiput(0,0)(18,0){16}{\line(1,0){12}}} \multiput(0,0)(288,0){2}{\multiput(0,0)(0,18){16}{\line(0,1){12}}} } \multiput(0,0)(12,12){7}{\usebox{\map}} \linethickness{0.5mm} \multiput(36,36)(0,288){2}{\line(1,0){288}} \multiput(36,36)(288,0){2}{\line(0,1){288}} \multiput(72,72)(0,216){2}{\line(1,0){216}} \multiput(72,72)(216,0){2}{\line(0,1){216}} \thicklines \put(234,126){\circle{36}} \thinlines \multiput(72,285)(3,3){2}{\line(1,-1){147}} \multiput(246,111)(3,3){2}{\line(1,-1){39}} \multiput(72,279)(9,9){2}{\line(1,-1){146}} \multiput(241,110)(9,9){2}{\line(1,-1){38}} \multiput(72,273)(15,15){2}{\line(1,-1){144}} \multiput(237,108)(15,15){2}{\line(1,-1){36}} \multiput(72,267)(21,21){2}{\line(1,-1){144}} \multiput(231,108)(21,21){2}{\line(1,-1){36}} \multiput(72,261)(27,27){2}{\line(1,-1){189}} \multiput(72,255)(33,33){2}{\line(1,-1){183}} \multiput(72,249)(39,39){2}{\line(1,-1){177}} \multiput(72,243)(45,45){2}{\line(1,-1){171}} \multiput(72,237)(51,51){2}{\line(1,-1){165}} \multiput(72,231)(57,57){2}{\line(1,-1){159}} \multiput(72,225)(63,63){2}{\line(1,-1){153}} \multiput(72,219)(69,69){2}{\line(1,-1){147}} \multiput(72,213)(75,75){2}{\line(1,-1){141}} \multiput(72,207)(81,81){2}{\line(1,-1){135}} \multiput(72,201)(87,87){2}{\line(1,-1){129}} \multiput(72,195)(93,93){2}{\line(1,-1){123}} \multiput(72,189)(99,99){2}{\line(1,-1){117}} \multiput(72,183)(105,105){2}{\line(1,-1){111}} \multiput(72,177)(111,111){2}{\line(1,-1){105}} \multiput(72,171)(117,117){2}{\line(1,-1){99}} \multiput(72,165)(123,123){2}{\line(1,-1){93}} \multiput(72,159)(129,129){2}{\line(1,-1){87}} \multiput(72,153)(135,135){2}{\line(1,-1){81}} \multiput(72,147)(141,141){2}{\line(1,-1){75}} \multiput(72,141)(147,147){2}{\line(1,-1){69}} \multiput(72,135)(153,153){2}{\line(1,-1){63}} \multiput(72,129)(159,159){2}{\line(1,-1){57}} \multiput(72,123)(165,165){2}{\line(1,-1){51}} 
\multiput(72,117)(171,171){2}{\line(1,-1){45}} \multiput(72,111)(177,177){2}{\line(1,-1){39}} \multiput(72,105)(183,183){2}{\line(1,-1){33}} \multiput(72,99)(189,189){2}{\line(1,-1){27}} \multiput(72,93)(195,195){2}{\line(1,-1){21}} \multiput(72,87)(201,201){2}{\line(1,-1){15}} \multiput(72,81)(207,207){2}{\line(1,-1){10}} \put(298,184){\vector(-1,0){10}} \put(298,182){$\mathbf{\Delta }$} \put(314,184){\vector(1,0){10}} \put(184,298){\vector(0,-1){10}} \put(180,302){$\mathbf{\Delta }$} \put(184,314){\vector(0,1){10}} \put(218,123){$CORE$} \end{picture} \caption{\small{Implementation of image alignment by cross-correlation. A sub-area (shaded region) of the first map (outlined in bold) is compared with the overlying region of the second map. This sub-area remains constant, while the corresponding region of the second map changes as it is shifted relative to the first map (some possible shifted positions are outlined with dashed lines). The cross-correlation coefficient between the two regions, given by equation (1), is computed each time to provide a measure of how well the areas are correlated. The method assumes that the highest correlation is achieved when the areas being compared refer to the same physical region of the sky. The circle labeled ``core'' represents a region with a specified elliptical or circular shape (corresponding to the beam shape) coincident with the position of the optically thick core, that is omitted from Map~1 during the cross-correlation calculations.}} \label{corrfig} \end{figure} The input images to the program must have the same pixel size and numbers of pixels (i.e. so that both the size of a pixel and the overall size of the images correspond to the same area on the sky), convolved with the same beam, and exported from AIPS using the task IMTXT. The user specifies the maximum trial shift to be applied between the images. The procedure used to calculate the shift that best aligns the input images is as follows. 
\begin{enumerate} \item Strips of width $\Delta $, where $\Delta $ is the maximum shift to be applied to the maps, are subtracted from each edge of the first map (Map 1). An area of the same shape as the restoring beam, but whose size is specified by the user, is removed from the area in Map~1 surrounding the (at least partially) optically thick core, whose position will be frequency-dependent and which usually corresponds to the peak of the map. In comparatively unusual cases, the brightest feature may not correspond to the core; to enable the program to produce reliable results for this situation, we have included the option of the user specifying the position of the core. \item The remaining area is used in the comparison (see Fig. 1). The second map (Map 2) is shifted so that different regions overlay this area each time. Since the area of Map~1 is not changed, the normalised cross-correlation coefficients can be compared directly to determine which part of Map~2 is best correlated with the selected region. \item Map~2 is first shifted so that its bottom left hand corner corresponds to the bottom left hand corner of the selected sub-area of Map~1. This corresponds to the maximum negative shift ($-\Delta , -\Delta $) applied. Map~2 is then shifted in right ascension, one pixel at a time, and the cross-correlation coefficient computed each time, until the maximum positive shift $\Delta $ is reached. The image is then shifted by one pixel in declination, and the correlation coefficients for the next row computed in the same way. This is repeated until the maximum shift in both directions ($\Delta , \Delta $) is reached, which occurs when the top right hand corner of Map~2 corresponds to the top right hand corner of the selected subarea of Map~1. \item The program outputs to the screen the maximum correlation $r_{max}$ and the shift ($dRA$,$dDec$) at which it occurs. 
\item Having found the best initial alignment between the two images, the program now applies the corresponding shift to Map 2, constructs a spectral-index map and blanks any optically thick points in this map, taken to be points with spectral indices $\alpha > 0$. The theoretical limiting optically thick spectral index $\alpha = \frac{5}{2}$ is rarely reached, but it is generally accepted that a positive spectral index implies some optically thick contribution in that region. The optically thick regions are blanked \emph{after} first calculating the shift with only the core region blanked, since the shift between the maps can result in a significant change in the spectral-index distribution, and thus in those regions which have $\alpha > 0$. The cross-correlation procedure is then repeated, taking into account only the optically thin (i.e. unblanked) regions of the source, since the position of these regions is not affected by absorption effects. \item The program again outputs to the screen the maximum correlation $r_{max}$ and the shift ($dRA$,$dDec$) at which it occurs. Positive shifts correspond to moving the second image downward (to the South) and to the left (East) relative to the first image. In other words, a feature whose pixel location is ($RA$,$Dec$) in the first map is located at ($RA + dRA$,$Dec + dDec$) in the second map. \item Three files are output. One is simply a text file containing the array of cross-correlation values computed, while the other two are text files of the format recognised by the AIPS task FETCH. One of these likewise contains the cross-correlation coefficients; importing this image into AIPS and plotting it (e.g. with KNTR) shows the shape of the cross-correlation function. The other image text file displays the subarea of Map~1 that was compared; displaying this image can be useful in verifying the area to be blanked around the optically thick core. 
Ideally this area should be large enough to cover virtually all of the core region, but not features in the inner jet; it should cover at least one beam size, and usually more, since the shape of the beam and the very high flux emanating from the core will dominate the VLBI $I$ structure here. \end{enumerate} In all cases we have tested, we found the cross-correlation function to fall off monotonically from its peak. This lends credence to the hypothesis that the optically thin jet structures in the maps should be well correlated when aligned properly (if small-scale variations were important the function could show secondary peaks where individual features align well rather than falling off uniformly, but we have never found this to be the case). Obviously, the accuracy of the estimated shift between a pair of images will depend to some extent on the pixel size in the images being compared. In practice, it may be expedient, within reason, to use images with slightly smaller pixel sizes in order to derive more refined shift estimates. The results should not depend on whether the higher-frequency or lower-frequency image is used as Map~1: in either case, it is the relative shift that is being determined, and the shifts obtained with the higher-frequency map as Map~1 will be the negative of those obtained with the lower-frequency map as Map~1. It may also be expedient to compare results obtained blanking out core areas of several different sizes (e.g. 2.0 times the beam area, 2.5 times the beam area etc.), to ensure that the optically thick core region is fully blanked, while as much of the optically thin inner jet is included in the correlation analysis as possible. 
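The grid search over trial shifts described in the steps above can be sketched as follows. This is a simplified illustration, not the actual C program: it assumes square maps with matched pixel scales, and an optional boolean mask stands in for the core blanking described in the first step.

```python
import numpy as np

def best_shift(map1, map2, delta, mask=None):
    # Grid search for the pixel shift of map2 that maximises the normalised
    # cross-correlation with the fixed central sub-area of map1.
    #   delta : maximum trial shift in pixels (strips of this width are
    #           removed from each edge of map1 to form the sub-area).
    #   mask  : optional boolean array, True = use pixel (e.g. with the
    #           optically thick core region set to False).
    n = map1.shape[0]
    sub = map1[delta:n - delta, delta:n - delta]
    if mask is None:
        mask = np.ones_like(sub, dtype=bool)
    else:
        mask = mask[delta:n - delta, delta:n - delta]
    a = sub[mask] - sub[mask].mean()
    best = (-2.0, (0, 0))
    for dy in range(-delta, delta + 1):
        for dx in range(-delta, delta + 1):
            # Region of map2 overlying the fixed sub-area of map1; the
            # recovered (dx, dy) is such that map2[row+dy, col+dx] best
            # matches map1[row, col].
            win = map2[delta + dy:n - delta + dy, delta + dx:n - delta + dx][mask]
            b = win - win.mean()
            r = np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
            if r > best[0]:
                best = (r, (dx, dy))
    return best   # (r_max, (dx, dy)) in pixels
```

Because the sub-area of map1 never changes, the coefficients for different trial shifts are directly comparable, exactly as in the procedure above.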
\section{Testing/Illustration of the Cross-Correlation Method} \begin{figure*} \begin{center} \includegraphics[width=8cm]{2007_UNALIGNED.PS} \includegraphics[width=8cm]{2007_ALIGNED.PS} \end{center} \vspace*{-3.0cm} \caption{\small{Maps of 2007+777 before (left) and after (right) alignment, showing the $I$ contours at 5.1~GHz in green, with $I$ contours at 7.9~GHz superposed in black. The convolving beam was $1.80\times 1.55$~mas in position angle $87.0^{\circ}$, and was the same in all cases. The bottom contour levels are $\pm0.20$, and the contours increase in steps of 1.98. The peak brightnesses are 657~mJy/beam (7.9~GHz) and 547~mJy/beam (5.1~GHz). The algorithm has clearly led to a very good alignment for the distinct, optically thin feature $\simeq 7$~mas from the core.}} \label{2007_maps} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=80truemm,angle=-90]{2007_5.9cm3.8cmSPIX.ps} \includegraphics[width=75truemm]{2007_5.9cm3.8cm_SPIXshift.ps} \end{center} \vspace*{-0.5cm} \caption{\small{Colour spectral-index maps of 2007+777 before (top) and after (bottom) alignment, with the $I$ contours at 7.9~GHz from Fig.~2 superposed. The convolving beam is shown in an upper corner of each image. The shown ranges of spectral indices are from $-1.89$ to $+1.35$ (top) and from $-1.38$ to $+1.96$ (bottom). 
Spectral artefacts due to misalignment are clearly visible in the top spectral-index map (false optically thin emission to the East of the $I$ peak and false optically thick emission at the Western end of the jet), which are absent from the bottom spectral-index map.}} \label{2007_spixmaps} \end{figure} We present two examples of applying the program in practice to align total-intensity images of AGN obtained with the NRAO Very Long Baseline Array at 7.9~GHz and 5.1~GHz: (i) for 2007+777, which displays a fairly distinct optically thin jet feature that could be used to align the two maps in the ``traditional'' way, and (ii) for 2200+420, which displays only fairly amorphous optically thin jet emission. The maps compared have matched cell sizes, map sizes and beam parameters. The VLBA observations were made over 24 hours in July 2006 in a snapshot mode, so that the baseline coverage obtained was spread out over the time the sources were visible with all or nearly all of the VLBA antennas. The data was obtained simultaneously at the different frequencies. The preliminary calibration and imaging of these data were carried out in AIPS using standard techniques; some initial results are presented by O'Sullivan \& Gabuzda (2008), and a more complete analysis is in preparation. For each of these two sources, we show a superposition of the 7.9~GHz and 5.1~GHz $I$ maps before and after applying the derived alignment shift (Figs.~2 and 4), and the spectral-index maps obtained before and after applying the derived alignment shift (Figs.~3 and 5). In both cases, we used a cell size of 0.1~mas during the initial total-intensity mapping, but used final maps made with a cell size of 0.05~mas as input to the program, to improve slightly the accuracy of the relative shifts obtained. In the case of 2007+777, the derived shift of the 5.1~GHz relative to the 7.9~GHz image was 4 pixels (0.20~mas) to the West and 0 pixels to the South, in the direction of the VLBI jet, as expected. 
The correctness of this shift is immediately obvious via a visual inspection of the superposed $I$ images (Fig.~2) and the spectral-index maps before and after applying this shift (Fig.~3); the algorithm has obviously aligned a distinct optically thin jet component located $\simeq 7$~mas to the West of the map peak. In the case of 2200+420, the calculated shift of the 5.1~GHz relative to the 7.9~GHz image was 4 pixels (0.20~mas) to the South and 0 pixels to the East, again in the direction of the VLBI jet, as expected. It is less straightforward to estimate the correctness of this shift directly from the superposed $I$ maps in Fig.~4, but the spectral-index map after applying this shift shows appreciably more regular behaviour, with a smooth gradient in the spectral index from North of the core region to the jet extending nearly directly to the South; in particular, a spurious region of optically thin emission to the North of the peak has disappeared. We also show the results of applying the cross-correlation technique to 4.8 and 1.6-GHz images of the AGN 1803+784 (Gabuzda \& Chernetskii 2003), as an example of the operation of the program when applied to two images at more widely separated frequencies. A superposition of the 4.8~GHz and 1.6~GHz $I$ maps before and after applying the derived alignment shift is shown in Fig.~6, and the spectral-index maps obtained before and after applying the derived alignment shift in Fig.~7. A cell size of 0.50~mas was used during the initial total-intensity mapping, but final maps with a cell size of 0.25~mas were used as input to the shift program. The derived shift of the 1.6~GHz relative to the 4.8~GHz image was 5 pixels (1.25~mas!) to the West and 1 pixel to the North, in the direction of the VLBI jet. 
Although examination of the superposed $I$ images (Fig.~6) does not enable an unambiguous estimate of the needed shift ``by eye,'' the correctness of the shift derived by our program is immediately obvious via a visual inspection of the spectral-index maps before and after applying this shift (Fig.~7). At first glance, the ``slanting'' boundary between the regions of optically thick and thin emission near the core seems suspicious, but in fact, the 4.8-GHz VSOP space-VLBI image for this epoch shows that the VLBI jet initially emerges to the Northwest (Gabuzda 1999), and this feature is likely real (the implied spectral-index gradient is roughly perpendicular to the direction of the small-scale jet). Finally, we show the two-dimensional plots of the cross-correlation functions output by the program for 2007+777, 2200+420 and 1803+784 (Fig.~8). The cross-correlation plots contain a single peak, with a monotonic fall-off in the correlation coefficient with distance from the derived optimal alignment shift. \begin{figure*} \begin{center} \includegraphics[width=8cm]{2200_UNALIGNED.PS} \includegraphics[width=8cm]{2200_ALIGNED.PS} \end{center} \vspace*{-2.0cm} \caption{\small{Maps of 2200+420 before (left) and after (right) alignment, showing $I$ contours at 5.1~GHz in green, with $I$ contours at 7.9~GHz superposed in black. The convolving beam was $2.25\times 1.59$~mas in position angle $-13.8^{\circ}$, and was the same in all cases. The bottom contour levels are $\pm0.20$, and the contours increase in steps of 1.98. The peak brightnesses are 1940~mJy/beam (7.9~GHz) and 1520~mJy/beam (5.1~GHz). 
In this case, there is no compact, distinct optically thin feature on which to base the derived shift between the two images, but the algorithm has nevertheless properly aligned the images, as is clear from a comparison of the corresponding spectral-index maps in Fig.~5.}} \label{2200_maps} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=75truemm]{2200_5.9cm3.8cmSPIX.ps} \includegraphics[width=75truemm]{2200_5.9cm3.8cm_SPIX.05px_shift.ps} \end{center} \caption{\small{Spectral-index maps of 2200+420 before (top) and after (bottom) alignment, with the contours of total intensity at 7.9~GHz from Fig.~4 superposed. The convolving beam is shown in an upper corner of each image. The shown ranges of spectral indices are from $-1.47$ to $1.13$ (top) and from $-1.76$ to $+1.45$ (bottom). False optically thin emission is visible to the North of the $I$ peak in the top spectral-index map. This artefact is absent from the bottom spectral-index map, and the range of spectral indices in the VLBI jet is more moderate.}} \label{2200_spixmaps} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=8cm]{1803_UNALIGNED.PS} \includegraphics[width=8cm]{1803_ALIGNED.PS} \end{center} \vspace*{-2.0cm} \caption{\small{Maps of 1803+784 before (left) and after (right) alignment, showing $I$ contours at 1.6~GHz in green, with $I$ contours at 4.8~GHz superposed in black as an example of the operation of the cross-correlation technique applied to images separated by a larger frequency difference. The convolving beam for both images was $5.49\times 4.49$~mas in position angle $51.2^{\circ}$. The bottom contour levels are $\pm0.15$ (4.8~GHz) and $\pm 0.28$~mJy (1.6~GHz), with the contours increasing in steps of 2.0 in both cases. The peak brightnesses are 2270~mJy/beam (4.8~GHz) and 1640~mJy/beam (1.6~GHz).
The 4.8 and 1.6~GHz ``peaks'' of the somewhat diffuse jet feature roughly 25~mas from the core are at appreciably different positions in the aligned maps, but an inspection of the spectral-index maps in Fig.~7 demonstrates that the algorithm has properly aligned the overall optically thin jet structure. (Maps adapted from Gabuzda \& Chernetskii 2003.)}} \label{1803_maps} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=75truemm]{1803_SPIX.25_noshift.ps} \includegraphics[width=75truemm]{1803_SPIX.25_shift-5-1.ps} \end{center} \caption{\small{Spectral-index maps of 1803+784 before (top) and after (bottom) alignment, with the contours of total intensity at 1.6~GHz from Fig.~6 superposed. The convolving beam is shown in an upper corner of each image. The shown ranges of spectral indices are from $-1.50$ to $0.50$ in both cases. False optically thin emission is visible to the East of the $I$ peak in the top spectral-index map. This artefact is absent from the bottom spectral-index map; the ``slanting'' boundary between the regions of optically thick and thin emission near the core seems suspicious, but in fact, the initial direction of the 4.8-GHz jet on small scales is to the Northwest (Gabuzda 1999).}} \label{1803_spixmaps} \end{figure} \begin{figure} \begin{center} \includegraphics[width=75truemm]{2007_CORRELATION.PS} \includegraphics[width=75truemm]{2200_CORRELATION.PS} \includegraphics[width=75truemm]{1803_CORRELATION.PS} \caption{\small{Cross-correlation functions obtained when the 7.9 \& 5.1-GHz $I$ images for 2007+777 (top) and 2200+420 (middle) and the 4.8 and 1.6-GHz images for 1803+784 (bottom) were compared. Contour levels are 90, 95, 98, 99.5, and 99.9\% of the peak values of 0.9991 (top), 0.9927 (middle) and 0.9185 (bottom).
The units plotted along the axes are pixels.}} \label{corrplot} \end{center} \end{figure} \section{Conclusion} We have developed a C program to determine the shift between two VLBI images based on a cross-correlation analysis of the images. In the past, it has been necessary to determine such shifts for images of AGN by aligning compact optically thin jet components, which can be a somewhat subjective and potentially ambiguous procedure. In addition, this method is difficult to apply to complex and/or extended AGN jets without compact optically thin features suitable for such an analysis. The great advantage of our new approach is that it provides a straightforward, objective means to determine the shift between two images that makes use of all optically thin regions in the source structure, not just individual chosen features. Our tests have shown that the program produces reliable shifts for images both with and without distinct optically thin features. The code can be compiled using a standard C compiler, and has been designed to take input files written by the AIPS task IMTXT, and to output files that can be read by the AIPS task FETCH, making it straightforward for radio astronomers familiar with AIPS to use the code. The code can be obtained by contacting D. C. Gabuzda ([email protected]). \section{Acknowledgements} We thank Shane P. O'Sullivan for allowing us to use unpublished images of 2007+777 and 2200+420 as examples of our alignment program, and for help in making the spectral index maps for 1803+784. SC thanks Ger Croke for helpful discussions on the cross-correlation technique. We are also grateful to the referee, Bob Campbell, for helpful comments that led to improvement of this paper.
\section{Introduction} Voltage controlled spin precession (see~\cite{koo_st_09} and references therein), proposed in 1990~\cite{datta_das_90}, posed two difficult challenges, namely (1) spin-polarized injection into a semiconducting channel, and (2) gate control of the Rashba spin-orbit interaction (RSO) in the channel~\cite{rso_ref}. The latter was demonstrated by Nitta {\it{et al.}} in 1997 using an inverted InGaAs/InAlAs quantum well with a top gate \cite{nitta_sdh_97}. But spin-polarized injection into a semiconductor proved to be a more difficult challenge which has only recently been overcome through the combined efforts of many groups around the world~\cite{spin_inj_ref}. Very recently, Koo {\it{et al.}}~\cite{koo_st_09} have combined both ingredients, spin-polarized injection and gate-controlled RSO, into a single experimental structure using a high mobility InAs heterostructure with a top gate interposed between the current contacts and the voltage contacts (Fig.~\ref{expt_bench}(a)). The non-local voltage signal~\cite{vanWees_nl_03} showed an oscillatory behavior when the contacts were magnetized along the direction of current flow, but not when they were magnetized perpendicular to the current flow (Fig.~\ref{expt_bench}(b)), as expected from the theory presented in~\cite{datta_das_90}. Furthermore, it was shown in~\cite{koo_st_09} that the oscillation (see Fig.~\ref{expt_bench}(b)) is described well by the expression \begin{equation} V_{exp}=A\cos\left(\frac{2m^*\alpha(V_G)L}{\hbar^2}+\phi\right) \label{vexp} \end{equation} where $m^*$ is the effective mass, $\alpha(V_G)$ is the RSO measured independently from the Shubnikov-de Haas (SDH) beating pattern, and $A$ and $\phi$ are fitting parameters.
The oscillation period $2m^*\alpha(V_G)L/\hbar^2$ was derived by Datta and Das~\cite{datta_das_90} for electrons with wavevectors purely along the direction of current flow (the $x$-direction, $k_Y=0$), noting that `in practice we have an angular spectrum of electrons' and the `effect is reduced as $\vec{k}$ turns away from the x-axis.' In this paper we will first (Section~\ref{simp_mod_sec}) describe a straightforward extension of the theory in~\cite{datta_das_90} to include the sum over the angular spectrum $k_Y$ of electrons. The results closely follow those obtained earlier in Ref.~\cite{zulicke}, which are more general since they include both electron and hole systems. We show that the results from this simple model are in good agreement with those (Fig.~\ref{expt_bench}(d)) from a non-equilibrium Green function (NEGF) based model with contact parameters adjusted to fit the experimental contact conductances (Section~\ref{negf_mod_sec}). We hope that a careful comparison of experiments will help refine the model proposed here and establish this effect on a firm footing, so that it can be used both for fundamental studies as well as for various proposed applications such as spin-filtering, magnetic recording and sensing or quantum computing~\cite{wolf_sbandy_dsarma}. \section{Simple model} \label{simp_mod_sec} We start from an effective mass Hamiltonian for a two-dimensional conductor with RSO interaction of the form ($\vec{\sigma}$: Pauli spin matrices) \begin{equation} H=-\frac{\hbar^2}{2m^*}\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)+\alpha(\sigma_X k_Y-\sigma_Y k_X) \label{Hamiltonian} \end{equation} We neglect Dresselhaus spin-orbit (DSO) coupling since this is believed to be small in structures of this type~\cite{dso_rso_ref}.
Eq.~\ref{Hamiltonian} leads to the dispersion relation \begin{equation} E=\frac{\hbar^2k^2}{2m^*}\pm \alpha k\,,\quad k=+\sqrt{k_X^2+k_Y^2} \label{e-k} \end{equation} with the upper and lower signs corresponding to eigenspinors of the form $\{1\quad\pm\exp(i\phi)\}^T$, where $\tan\phi\equiv-k_X/k_Y$. Here, $x$ and $y$ are the longitudinal (or transport) and transverse directions, respectively, following the co-ordinate system used in~\cite{koo_st_09}, which is different from that used in~\cite{datta_das_90}. Assuming periodic boundary conditions in the transverse direction, both $E$ and $k_Y$ are conserved in the absence of scattering and the two eigenmodes have different $k_X$'s so as to satisfy Eq.~\ref{e-k} with the upper and lower signs respectively. For small $\alpha$ we can write approximately \begin{equation} k_{X-}-k_{X+}\approx\frac{2m^*\alpha}{\hbar^2}\frac{k_0}{\sqrt{k_0^2-k_Y^2}} \label{theta} \end{equation} with $k_0\equiv\sqrt{2m^*E}/\hbar$. \begin{figure}[] \begin{center} \includegraphics[width=0.3\textwidth]{Fig_expt_bench_1_1.eps} \includegraphics[width=0.4\textwidth]{fig_1_c_new_1.eps} \includegraphics[width=0.4\textwidth]{fig_1_d_new_1.eps} \end{center} \caption{(a) Schematic structure and (b) experimentally observed non-local signal in~\cite{koo_st_09}. Calculated non-local signal for the structure in (a) using: (c) the simple analytical model (Section~\ref{simp_mod_sec}) and (d) the NEGF based model (Section~\ref{negf_mod_sec}).
Parameters: $P_C=6.8$\%$=\frac{G_M-G_m}{G_M+G_m}$ and $G_M+G_m = 4\times10^{10}/m^2/\Omega$ ($G_{M(m)}$ being the contact conductance per unit area for majority (minority) spins), carrier density $n_S=2.7\times10^{16}$m$^{-2}$, FM contact lengths $L_{Ci}=0.2\mu$m, $L_{Cd}=0.25\mu$m, FM contact spacing $L_{CH}=1.65\mu$m, width $W=8\mu$m, effective mass $m^*=0.05m_0$.}\label{expt_bench} \end{figure} We define the spin-voltage $V_X$ (or $V_Y$) as the difference between the voltages measured in the parallel and the anti-parallel configurations by X- (or Y-) directed injecting and detecting magnets. This is expected to be twice that measured in the parallel configuration alone using a setup like the one shown in Fig.~\ref{expt_bench}(a) \cite{takahashi}, which is exactly how the oscillatory signals $V_X$ in \cite{koo_st_09} are measured; the spin-valve signal, on the other hand, is measured the same way our $V_Y$ is defined. It is shown in Appendix A (supplementary information) that for a point injector located at $x=0$ and a point detector located at $x=L$, assuming ballistic transport, the voltage signals for X- and Y-directed magnets can be written as ($B$: constant) \begin{subequations} \begin{eqnarray} V_{X0}(E,k_Y)=B\left\{s^2+\left(1-s^2\right)\cos\left(\frac{\theta_L}{\sqrt{1-s^2}}\right)\right\} \label{vx_point} \\ V_{Y0}(E,k_Y)=B\left\{\left(1-s^2\right)+s^2\cos\left(\frac{\theta_L}{\sqrt{1-s^2}}\right)\right\} \label{vy_point} \end{eqnarray} \label{v_xy_point} \end{subequations} where $s\equiv k_Y/k_0=\hbar k_Y/\sqrt{2m^{*}E}$ and $\theta_L=2m^{*}\alpha L/\hbar^2$. The contributions from different $E$, $k_Y$ all act `in parallel' giving a voltage equal to the average.
At low temperatures we can average the contributions from all transverse wave-vectors $k_Y$ over the Fermi circle ($E=E_F$) to write \begin{equation} V_{X(Y)}=\int^{+k_0}_{-k_0}\frac{dk_y}{2\pi k_0} V_{X0(Y0)}(E_F,k_Y) \label{vxy_point} \end{equation} Interestingly, the results obtained from the integration in Eq.~\ref{vxy_point} look almost exactly like the single-cosine result in Eq.~\ref{vexp} that describes the experimental observations. This can be understood by noting that the argument $\theta_L/\sqrt{1-s^2}$ has a stationary point at $s=0$ \cite{stat_point} and we can use the method of stationary phase to write approximately, \begin{subequations} \begin{eqnarray} V_X &\simeq& \frac{B}{3\pi}+\frac{B}{\sqrt{2\pi\theta_L}}\cos(\theta_L+\frac{\pi}{4}) \label{vx_stat_phase} \\ V_Y &\simeq& \frac{2B}{3\pi} \label{vy_stat_phase} \end{eqnarray} \label{vxy_stat_phase} \end{subequations} As shown in Appendix B (supplementary information) these approximations describe the results from the exact integration quite well for $\theta_L\gtrsim 2\pi$, which is true for the range of $\alpha$ and $L$ involved in the experiment. This should help answer some of the concerns raised in a recent comment~\cite{sbandy_com_09}. Let us note that the simple results presented above are made possible by our assumption of periodic boundary conditions (PBC) in the y-direction making $k_Y$ a `good quantum number' like $E$. Most of the prior work on this topic \cite{hbc_ref} uses hard-wall boundary conditions (HBC), which do not seem to permit the simple decoupling of different transverse wavevectors ($k_Y$) due to non-trivial `boundary scattering'. We have checked numerically that the use of HBC does not change the conclusions described above in a significant way although some details are different.
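The quality of the stationary phase approximation can be checked directly by comparing Eq.~\ref{vxy_stat_phase} against a numerical evaluation of the angular average in Eq.~\ref{vxy_point}. The short Python sketch below is our illustration only, written in dimensionless form with $B=1$:

```python
import numpy as np

def v_exact(theta_L, which="X", n=400001):
    """Numerically average V_X0 or V_Y0 over s = k_Y/k_0 in (-1, 1)."""
    s = np.linspace(-1.0 + 1e-9, 1.0 - 1e-9, n)
    c = np.cos(theta_L / np.sqrt(1.0 - s**2))
    f = s**2 + (1 - s**2) * c if which == "X" else (1 - s**2) + s**2 * c
    # trapezoidal rule with the 1/(2*pi) prefactor from dk_Y/(2*pi*k_0)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)) / (2 * np.pi)

def vx_stationary_phase(theta_L):
    """Stationary phase result for V_X with B = 1."""
    return 1/(3*np.pi) + np.cos(theta_L + np.pi/4)/np.sqrt(2*np.pi*theta_L)
```

For $\theta_L\gtrsim 2\pi$ the two agree to within a small fraction of the oscillation amplitude, while $V_Y$ stays close to its non-oscillatory value $2/3\pi$, consistent with the statements above.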
Furthermore, one could argue that since the actual boundaries in the experimental structure of \cite{koo_st_09} are relatively far away ($W=8\mu m$) the physics is better captured by a model employing PBC like ours. However, the possible role of boundary scattering deserves further attention. It is interesting to note the similarities and differences between this simple model for the voltage controlled spinprecession signal and the usual model for the Hanle signal~\cite{hanle_ref} \begin{equation} V_H\sim\int^{\infty}_0 dt\frac{e^{-L^2/4Dt}}{\sqrt{4\pi Dt}}\cos\left(\frac{g\mu_B Bt}{\hbar}\right)e^{-t/\tau_S} \label{vhanle} \end{equation} The cosine functions in Eqs.~\ref{v_xy_point} can be written as $\cos(2\alpha k_0t/\hbar)$ where $t$ is the transit time $L/v_x=m^*L/\hbar k_0\sqrt{1-s^2}$, showing that $2\alpha k_0$ in our problem plays the role that $g\mu_B B$ plays in the Hanle precession signal. Since we assume ballistic rather than diffusive transport we have a different weighting function for different transit times. But the most important difference is that Hanle signals are typically observed within tens of Gauss around $B=0$ while the experiment we are analyzing has $\alpha$ varying between $8\times 10^{-12}$ and $12\times 10^{-12}$eV-m with $k_0=4.1\times 10^{8}$m$^{-1}$ corresponding to values of $|g|B$ close to $\sim 140$T as noted in~\cite{koo_st_09}. At such high values of $|g|B$, the Hanle signal is usually reduced essentially to zero because of the spread in the transit time `$t$' caused by diffusive transport. One would expect the same in the present case, were it not for ballistic transport. By contrast, the Hanle signal around $B=0$ is relatively robust and it would be interesting to look for an analogous voltage-controlled signal in shorter structures or perhaps in structures where the RSO, $\alpha(V_G)$, can be tuned through $\alpha=0$~\cite{awschalom}. 
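The strong suppression of the diffusive Hanle signal at large effective fields can be made concrete by evaluating Eq.~\ref{vhanle} numerically. A minimal Python sketch (our illustration, in dimensionless units $D=L=\tau_S=1$, with $\omega\equiv g\mu_B B/\hbar$):

```python
import numpy as np

def hanle_signal(omega, D=1.0, L=1.0, tau_s=1.0, n=200000, t_max=20.0):
    """Diffusive spin precession (Hanle) signal vs. precession frequency
    omega = g*mu_B*B/hbar, by trapezoidal integration over transit time t."""
    t = np.linspace(1e-6, t_max, n)
    integrand = (np.exp(-L**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
                 * np.cos(omega * t) * np.exp(-t / tau_s))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
```

At $\omega=0$ the integral reduces to the standard spin-diffusion result $\tfrac{1}{2}\sqrt{\tau_S/D}\,e^{-L/\sqrt{D\tau_S}}$, while for $\omega$ large compared to the inverse spread of transit times the signal is averaged essentially to zero, illustrating why the oscillatory signal discussed here relies on ballistic transport.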
The simple model here makes no prediction about the amplitude $B$, but it does suggest that the peak-to-peak amplitude of the oscillation in $V_X$ should be $3\pi/\sqrt{2\pi\theta_L}$ times the spin-valve signal $V_Y$. Using $m^{*}=0.05m_0$, $\alpha\simeq 10^{-11}$eV-m, $L=1.65\mu$m, this suggests $V_Y=1.2V_X$(p-p). Experimentally, the p-p oscillatory signal is $\sim 6\mu$V, which equals $V_X$(p-p)$/2$ (since the experiment measures only the parallel configuration, that is, half the parallel--antiparallel difference we have defined as $V_X$), but $V_Y$ is only $\sim 6\mu$V. Possible reasons for the discrepancy are discussed at the end of this paper, but here we would like to note that we expect a further reduction in the amplitude of the oscillatory component due to the extension of the injecting and the detecting contacts along $x$, giving rise to a spread in the values of $\theta_L$ in Eqs.~\ref{vx_point} and \ref{vy_point}. We can write \begin{equation} \widetilde{V}_x=C_iC_d\frac{B\cos(\theta_0+\theta_i+\theta_d+\pi/4)}{\sqrt{2\pi (\theta_0+\theta_i+\theta_d)}} \label{vx_contsum} \end{equation} where $C_i$ and $C_d$ are numbers less than one representing the averaging effects of the injecting and detecting contacts respectively, and $\theta_i$, $\theta_d$ are the additional phase shifts introduced by the injecting and detecting contacts, beyond $\theta_0$. To estimate $\theta_i$, $\theta_d$ or $C_i$, $C_d$ we need to know (1) the spatial uniformity of the injecting and detecting contacts, (2) how the electronic wavefunction evolves under the contacts, and (3) how the RSO $\alpha(V_G)$ varies under the contacts. Regarding point 1, we assume the contacts to be uniform, and the NEGF model described next should account for point 2 within this assumption. However, point 3 requires a careful treatment beyond the scope of this paper.
Here we simply note that Eq.~\ref{vx_contsum} describes the {\it{shape}} of the oscillatory $V_X(V_g)$ quite well with the following choice: $\theta_0=2m^{*}\alpha(V_G)L_{CH}/\hbar^2$, where $L_{CH}=1.65\mu$m is the experimental center-to-center distance between contacts, and $\theta_{i,d}=m^*\alpha(V_G=0)L_{Ci,d}/\hbar^2$, where $L_{Ci}=0.2\mu$m and $L_{Cd}=0.25\mu$m equal half the contact widths. The result from Eq.~\ref{vx_contsum} also matches that from the NEGF model (see Fig.~\ref{expt_bench}(d)), described next, in both shape and amplitude if we use $C_{i,d}=\sin(\theta_{i,d})/\theta_{i,d}$, which can be justified if the electronic wavefunction is assumed to remain constant under each contact. \section{Quantitative NEGF based model} \label{negf_mod_sec} One way to make the results from the quantum transport model quantitative is to use the NEGF-based method described in detail in \cite{datta3}. The inputs to this model are the Hamiltonian $[H]$ and the self-energy matrices $[\Sigma]$ (Fig.~\ref{schematic}). For $H$ we use a discrete version of the one used in Section~\ref{simp_mod_sec} (Eq.~\ref{Hamiltonian}), as described in~\cite{datta3}, assuming PBC along $y$ as discussed above. We neglect all scattering processes since both the mean free path and the spin-coherence length are believed to be longer than the longitudinal dimensions at low temperatures. To understand the signal decay at higher temperatures will require a consideration of both momentum and spin-relaxation processes, but we leave this for future work. The self-energies for the FM contacts ($\Sigma_2$,$\Sigma_3$) have the form $-(i/2)\gamma[I+P_C\vec{\sigma}\cdot\hat{n}]$ where the polarization $P_C=(G_M-G_m)/(G_M+G_m)$ and $\hat{n}$ is the unit vector in the direction of the magnet. The constant $\gamma=\pi (G_M+G_m)\hbar^3/e^2m^*$ is chosen to give a tunneling conductance equal to the experimental value. The NM contacts ($\Sigma_1$,$\Sigma_4$) are represented similarly with $P_C=0$.
Finally, the long extended regions outside the channel at the two ends (see Fig.~\ref{expt_bench}(a)) are represented by two semi-infinite contacts whose coupling is given by $\Sigma_{L(R)}=\tau_{L(R)}g_S\tau_{L(R)}^{\dagger}$, where $\tau$ is the spin-dependent coupling matrix between the contact and the channel and $g_S$ is the surface Green's function. The transmission functions are calculated from the NEGF model, and contacts 3, 4, L and R are treated as voltage probes with zero current (following the approach introduced by B\"uttiker; see Section 9.4 in~\cite{datta2}). Note that although we are not including scattering processes explicitly, the voltage probes introduce an effective spin-scattering that reduces the signal. Indeed both $V_X$ and $V_Y$ increase significantly if we remove the end regions represented by $\Sigma_L$ and $\Sigma_R$. \begin{figure}[] \begin{center} \includegraphics[width=6.5cm, height=3.0cm]{Fig_schematic.eps} \end{center} \caption{NEGF based lateral transport model for the structure in Fig.~\ref{expt_bench}(a), with $\Sigma_2$ and $\Sigma_3$ representing injecting and detecting ferromagnetic contacts, $\Sigma_1$ and $\Sigma_4$ representing non-magnetic contacts (NM), and $\Sigma_L$ and $\Sigma_R$ representing the semi-infinite regions outside the central region.}\label{schematic} \end{figure} For contacts 1 and 2 we adjust the applied potential difference $(\mu_1-\mu_2)$ to obtain a current level equal to the experimental value. The voltage signal is obtained as the difference between $\mu_{3P}$, measured with parallel contacts, and $\mu_{3AP}$, measured with anti-parallel contacts. We use a contact conductance of $G_M+G_m=4\times 10^{10}\Omega^{-1}$m$^{-2}$ based on the experimental parameters in~\cite{koo_st_09} and a $P_C=(G_M-G_m)/(G_M+G_m)=0.068$ to match the spin-valve signal, $V_Y$.
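As a consistency check on such a discretization, a minimal tight-binding version of Eq.~\ref{Hamiltonian} at $k_Y=0$ with PBC can be verified against the dispersion of Eq.~\ref{e-k}. The Python sketch below is our illustration only (units $\hbar=m^*=a=1$, so $t=1/2$); it does not reproduce the contact self-energies of the full NEGF calculation:

```python
import numpy as np

# 1D Rashba chain with periodic boundary conditions, k_Y = 0.
# Discretizing H = -(1/2) d^2/dx^2 - alpha*sigma_Y*k_X on a lattice gives
# on-site 2t*I and hopping -t*I + (i*alpha/2)*sigma_y, with t = 1/2.
N, t, alpha = 200, 0.5, 0.1
sig0 = np.eye(2)
sigy = np.array([[0, -1j], [1j, 0]])

H = np.zeros((2 * N, 2 * N), dtype=complex)
hop = -t * sig0 + 0.5j * alpha * sigy
for j in range(N):
    jp = (j + 1) % N                      # periodic boundary condition
    H[2*j:2*j+2, 2*j:2*j+2] = 2 * t * sig0
    H[2*j:2*j+2, 2*jp:2*jp+2] = hop
    H[2*jp:2*jp+2, 2*j:2*j+2] = hop.conj().T

evals = np.sort(np.linalg.eigvalsh(H))

# Lattice dispersion E(k) = 2t(1 - cos k) +/- alpha*sin k, k = 2*pi*n/N,
# which reduces to k^2/2 +/- alpha*k for small k.
k = 2 * np.pi * np.arange(N) / N
analytic = np.sort(np.concatenate([2*t*(1 - np.cos(k)) + alpha*np.sin(k),
                                   2*t*(1 - np.cos(k)) - alpha*np.sin(k)]))
```

For small $k$ the lattice dispersion reduces to $k^2/2\pm\alpha k$, i.e.\ Eq.~\ref{e-k} at $k_Y=0$, confirming that the discretization captures the Rashba splitting.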
Fig.~\ref{expt_bench}(d) shows the numerical results obtained using a channel with $\alpha(V_G)$ of length $L_{CH}=1.65\mu$m and contacts with fixed $\alpha(V_G=0)$ of lengths $L_{Ci}=0.2\mu$m and $L_{Cd}=0.25\mu$m. The oscillatory signal matches the experimental observation in shape but the amplitude is smaller. One possibility is that the $P_C$ we use has been calibrated for the spin-valve signals obtained with Y-directed magnets. The same magnets, when forced into the X-direction for the oscillatory signals, may have a higher effective $P_C$, especially since no anti-parallel measurements are involved. However, to account fully for the discrepancy a significant increase in $P_C$ would be needed, and other sources of discrepancy should be investigated. \section{Discussion} In summary, we have presented (1) a straightforward extension of the Datta-Das theory~\cite{datta_das_90} to include the angular spectrum of electrons and the extended contacts, and (2) a more elaborate model that treats the actual non-local experimental structure using a NEGF based approach widely used in nanoelectronics. The simpler theory provides a number of insights and is well-supported by the more elaborate model, identifying several features that deserve further investigation. Specifically, it seems that while the experimental oscillation period shows good agreement with theory, the amplitude relative to the spin-valve signal $V_Y$ is larger than predicted, showing essentially none of the reduction expected from averaging over the angular spectrum and the extended contacts. Possible reasons for this discrepancy deserve careful attention. On the theoretical side, it is possible that the contributions from high-$k_Y$ components are suppressed because they have shorter effective spin coherence lengths and a purely ballistic theory misses this aspect. Another possibility is that the effective polarization $P_C$ is lower in the antiparallel case, which affects only $V_Y$ and not $V_X$.
The structure and nature of the injecting and detecting contacts also require careful consideration. \section{Acknowledgement} This work is supported by the Office of Naval Research under Grant No. N0014-06-1-2005 and the Network for Computational Nanotechnology (NCN).
\section{Introduction} The experimental and theoretical studies on itinerant electron ferromagnetism address one of the crucial problems in condensed matter physics (for reviews, see Ref.\ \onlinecite{vollhardt1999metallic}). One of the most important experimental tools to get direct insight into the electronic structure of solids and surfaces is angle-resolved photoemission spectroscopy (ARPES). In particular, spin- and angle-resolved photoemission (SARPES) has been developed into a powerful method to study surface and thin film magnetism \cite{johnson1997spin}. Very recently, this technique has been used extensively to investigate the topological properties of solid state materials \cite{dil2019spin}. In the 1980s, the first experimental studies using SARPES were devoted to probing the existence of local magnetic moments at temperatures close to and above the Curie temperature $T_{\rm C}$. Pioneering SARPES experiments were performed in particular by Kisker et al. \cite{kisker1984photoemission,kisker1985spin,kisker1984temperature} on Fe(001). At that time two competing models had been proposed to describe the ferromagnetic to paramagnetic transition at the critical temperature. On the one hand, the so-called Stoner model proposed the breakdown of the exchange splitting of the bands, leading to the non-magnetic phase. On the other hand, the existence of locally fluctuating magnetic moments above the Curie temperature was suggested, in accordance with the Heisenberg model. SARPES studies on magnetic transition metals (Fe and Co) were able to clearly identify the exchange-split bands at lower temperatures and fluctuating moments at high temperatures \cite{johnson1997spin}. Unfortunately, after the pioneering SARPES experiments at elevated temperatures, most of the more recent SARPES studies were done at room or even at very low temperatures for a variety of materials including superconductors, topological materials, etc. \cite{DHS03,dil2019spin,BME18}.
The main reason for this is a possible contamination of the electron analyzer and UHV chamber after heating of the thin film samples, which leads to a significant increase of the pressure in the UHV chamber. However, the thermal vibrations in combination with spin fluctuations turned out to be a very important issue for photoemission spectra measured at high photon energies ranging from soft to hard X-rays \cite{woicik2016hard,fadley2012looking,GPU+11,VMB+08, PMS+08}. Going to higher photon energies has the advantage of a longer inelastic mean free path of the photoelectrons and turns ARPES into a bulk-sensitive technique. However, higher photon energies challenge the interpretation of the corresponding experimental spectra. In particular, even at very low temperatures (tenths of a Kelvin), indirect transitions occur, which in consequence lead to the XPS limit. The corresponding averaging over the Brillouin zone leads to density-of-states-like spectra for any emission angle, and the access to the ground state band structure is lost \cite{MBE13a}. Finally, spin fluctuations play an important role in the description of ultrafast processes measured by pump-probe angle-resolved photoemission and two-photon photoemission spectroscopy. Absorption of a very intense pump pulse leads, within the first femtoseconds, to an increase of the electronic temperature, and after several hundred femtoseconds the energy is dissipated into the lattice. Very recently, first time-dependent SARPES measurements have been performed for topological insulators \cite{CCB+15}. Furthermore, Eich et al. performed a detailed study on possible ultrafast demagnetization processes in ferromagnetic transition metals by SARPES \cite{EPR+17}. It is well known that density functional theory (DFT) in its local spin-density approximation is able to describe quantitatively the ground state and magnetic properties of transition metals at $T=0$~K.
This rigorous description can be extended also to finite temperatures. The most common multi-scale approach in this direction is based on the calculation of the so-called exchange coupling constants \cite{LKAG87} for a classical Heisenberg model on the basis of DFT, followed by subsequent Monte Carlo or spin dynamics simulations. On the other hand, it has been realized for many years that locally fluctuating magnetic moments are a consequence of local electronic correlations. A very successful method to go beyond the DFT-LSDA scheme is the dynamical mean field theory (DMFT) in combination with DFT. Liechtenstein et al. showed that such a DFT+DMFT approach can quantitatively describe temperature-dependent magnetism in Fe and Ni \cite{LKK01}. However, such an approach does not take into account lattice vibrations, which are present at all finite temperatures. On the other hand, a scheme to deal with thermal lattice vibrations is provided by the so-called alloy analogy model \cite{EMKK11}, which takes the necessary thermal average by means of the coherent potential approximation (CPA) alloy theory. This approach was already applied successfully to deal with ARPES of non-magnetic materials at finite temperatures \cite{BMM+13}. In addition, following the original idea behind the alloy analogy model, it was extended to account for thermally induced spin fluctuations in magnetic materials \cite{EMC+15} as well. This opens up the combination with various models for thermal spin fluctuations, as for example the disordered local moment approach \cite{SGP+84,SBS+14}. Another advantage of the approach is its possible combination with methods describing local correlations, as for example LSDA+U and LSDA+DMFT. This was demonstrated recently for Gd, where the temperature dependence of the longitudinal resistivity and the anomalous Hall effect was studied \cite{CKM17}.
It is widely accepted to interpret a measured photoelectron spectrum by referring to the results of band-structure calculations. Such an interpretation is questionable for moderately and even more so for strongly correlated systems. On the other hand, the most reliable theoretical approach to interpret ARPES spectra is provided by the so-called one-step model of photoemission. This approach was formulated first by Pendry and co-workers \cite{Pen76,HPT80} in the framework of multiple scattering theory and has been recently generalized to include various aspects like e.g.\ disorder, lattice vibrations, electronic correlations, the fully relativistic spin-density matrix formulation and time-dependent pump-probe aspects \cite{BME18,MBME11,MBE13}. Up to now, however, this scheme did not allow one to consider temperature-dependent spin fluctuations in combination with lattice vibrations. In this paper we generalize the one-step model of photoemission in order to include spin fluctuations and lattice vibrations on the same level of accuracy within the framework of the alloy analogy model. The paper is organized as follows: In Sec.~\ref{sec:theory} we describe the theoretical approach, the so-called alloy analogy model, which has been applied to the one-step model of photoemission in the framework of the SPR-KKR method. In Secs.~\ref{sec:dos} and \ref{sec:fe} we apply this formalism and calculate temperature-dependent, spin-polarized ARPES spectra for Fe(001). In Sec.~\ref{sec:sum} we summarize our results. \section{Theoretical approach: thermal effects} \label{sec:theory} Considering the electronic structure of a magnetic solid at finite temperature, its modification due to thermal lattice and magnetic excitations has to be taken into account.
The present approach is based on the adiabatic treatment of the non-correlated localized thermal displacements of atoms from their equilibrium positions (thermal lattice vibrations) in combination with a tilt of the local magnetic moments away from their orientation in the ground state (thermal spin fluctuations). Multiple scattering theory allows one to describe uncorrelated local thermal vibrations and spin fluctuations within the single-site CPA alloy theory. This implies the reduction of the calculation of a thermal average to the calculation of the configurational average, in complete analogy to the averaging for random, substitutional alloy systems. The impact of thermal effects on the electronic structure, taken into account within such an approach, was discussed previously in order to describe the temperature-dependent transport properties and Gilbert damping in magnetic systems \cite{MBE13}. The impact of thermal lattice vibrations was also studied in calculations of temperature-dependent photoemission of non-magnetic systems \cite{BMM+13}; however, the inclusion of thermal spin fluctuations for ferromagnetic systems has been missing, and in the following we generalize the one-step model of photoemission accordingly. \subsection{Alloy analogy model} \label{sec:aam} Within the alloy analogy model, lattice vibrations are described by a discrete set of $N_{v}$ displacement vectors $\Delta \vec{R}^q_v(T)$ for each atom in the unit cell. The temperature dependent amplitude of the displacements is taken to be equal to the root mean square displacement $\langle u_q^2\rangle_T^{1/2}$, i.e.\ $|\Delta \vec{R}^q _v(T)| = \langle u_q ^2 \rangle _T ^{1/2}$, with the probabilities $x_v = 1/N_{v}$ ($v=1,..,N_{v}$). $\langle u_q^2\rangle _T^{1/2}$ is evaluated here within the Debye model with the Debye temperature $\Theta_D$ taken from experiment.
Using the rigid muffin-tin approximation \cite{PZDS97,Lod76}, the displaced atomic potential is associated with a corresponding single-site t-matrix $\underline{t}$ that refers to the common global frame of reference. This quantity is obtained by a coordinate transformation from the local single-site t-matrix $\underline{t}^{\rm loc}$ via the expression: \begin{equation} \label{eq:U-trans} \underline{t} = \underline{U}(\Delta \vec{R})\,\underline{t}^{\rm loc}\, \underline{U}(\Delta \vec{R})^{-1} \;. \end{equation} In the following the underline symbol represents a matrix in the angular momentum representation. In the fully relativistic formulation adopted here, this implies a labelling of the matrix elements with the relativistic quantum numbers $\Lambda=(\kappa,\mu)$ \cite{Ros61}. The so-called U-transformation matrix $\underline{U}(\vec{s})$ in Eq.\ (\ref{eq:U-trans}) is given in its non-relativistic form by:\cite{Lod76,PZDS97} \begin{equation} \label{eq:U-trans-matrix} U_{LL'}(\vec{s}) = 4\pi \sum_{L''}i^{l+l''-l'}\, C_{LL'L''}\, j_{l''}(|\vec{s}|k)\, Y_{L''}(\hat{s}) \;. \end{equation} Here $L=(l,m)$ represents the non-relativistic angular momentum quantum numbers, $j_{l}(x)$ is a spherical Bessel function, $Y_{L}(\hat{r})$ a real spherical harmonic, $C_{LL'L''}$ a corresponding Gaunt number and $k=\sqrt{E}$ the magnitude of the electronic wave vector. The relativistic version of the U-matrix is obtained by a standard Clebsch-Gordan transformation.\cite{Ros61} \medskip To account for the impact of disorder caused by thermal spin fluctuations, the continuous distribution $P(\hat{e})$ for the orientation of the local magnetic moments is replaced by a discrete set of orientation vectors $\hat{e}_f$ (with $f=1,\ldots,N_f$) occurring with a probability $x_f$. The configurational average for this discrete set of orientations is performed using the CPA, leading to a periodic effective medium. 
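The discrete orientation average can be made concrete with a small sketch: a near-uniform set of directions $\hat{e}_f$ (here generated by a Fibonacci-sphere construction, one possible choice) is weighted according to the Boltzmann-like expression for $x_f$ given below in Eq.~(\ref{eq:xf}). All helper names are ours; the sign convention for the Weiss field parameter $w(T)$ follows that equation, so a negative $w$ favors orientations parallel to $\hat{e}_z$.

```python
import math

def fibonacci_sphere(n):
    """Near-uniform set of n unit vectors e_f, one possible way to
    discretize the continuous orientation distribution P(e)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * i
        pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts

def orientation_weights(e_vecs, w, kT):
    """Probabilities x_f ~ exp(-w (e_z . e_f) / kT), cf. Eq. (xf)."""
    expo = [math.exp(-w * e[2] / kT) for e in e_vecs]  # e_z . e_f = e[2]
    norm = sum(expo)
    return [p / norm for p in expo]

def mean_mz(e_vecs, weights):
    """Reduced magnetization <e_z . e_f> of the discrete ensemble."""
    return sum(x * e[2] for x, e in zip(weights, e_vecs))

evs = fibonacci_sphere(200)
# with the sign convention of Eq. (xf), a negative Weiss field
# parameter w favors orientations parallel to e_z
xs = orientation_weights(evs, w=-1.0, kT=0.25)
print("sum x_f =", sum(xs), "  <m_z> =", mean_mz(evs, xs))
```

In the limit of a dense orientation grid the resulting $\langle \hat{e}_z\cdot\hat{e}_f\rangle$ approaches the Langevin function of $|w|/kT$, while for $kT \gg |w|$ the distribution becomes uniform and the average magnetization vanishes.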
The rigid spin approximation \cite{LKAG87} used in the calculations implies that the spin-dependent part $B_{\rm xc}$ of the exchange-correlation potential does not change in the local frame of reference fixed to the magnetic moment when the moment is oriented along an orientation vector $\hat{e}_f$. As a result, the single-site t-matrix $\underline{t}_f^{\rm loc}$ considered in the local frame is the same for all orientation vectors. With respect to the common global frame that is used to deal with the multiple scattering problem (see Eq.~(\ref{eq:CPA3})), the t-matrix for a given orientation vector $\hat{e}_f$ is determined by: \begin{equation} \label{eq:R-trans} \underline{t}_f = \underline{R}(\hat{e}_f)\,\underline{t}^{\rm loc}\, \underline{R}(\hat{e}_f)^{-1} \;, \end{equation} with the transformation from the local to the global frame of reference expressed by the rotation matrices $\underline{R}_f = \underline{R}(\hat{e}_f)$.\cite{Ros61} The temperature-dependent probability $x_f=x(\hat{e}_f)$ for each orientation $\hat{e}_f$ is given in terms of an appropriate Weiss field parameter $w(T)$ by the expression \cite{Tik64}: \begin{equation} \label{eq:xf} x_f = \frac{\exp(-w(T) \hat{e}_{z} \cdot \hat{e}_{f}/kT)} {\sum_{f'} \exp(-w(T) \hat{e}_{z} \cdot \hat{e}_{f'}/kT)} \;. \end{equation} The various types of disorder discussed above may be combined with each other as well as with chemical, i.e.\ substitutional, disorder. In the most general case a pseudo-component $(vft)$ is characterized by its chemical atomic type $t$, the spin fluctuation $f$ and the lattice displacement $v$. Using the rigid muffin-tin and rigid spin approximations, the single-site t-matrix $\underline{t}^{\rm loc}_t$ in the local frame is independent of the orientation vector $\hat{e}_f$ and the displacement vector $\Delta \vec{R}_{v}$, and coincides with the t-matrix $\underline{t}_t$ for the atomic type $t$. 
With respect to the common global frame one has accordingly the t-matrix: \begin{equation} \label{eq:tvft} \underline{t}_{vft} = \underline{U}(\Delta \vec{R}_{v})\, \underline{R}(\hat{e}_{f})\, \underline{t}_t \, \underline{R}(\hat{e}_{f})^{-1} \underline{U}(\Delta \vec{R}_{v})^{-1} \;. \end{equation} With this, the resulting CPA equations are identical to the standard CPA Eqs.~(\ref{eq:CPA1}) to (\ref{eq:CPA3}) below, with the index $t$ identifying atom types replaced by the combined index $(vft)$. The corresponding pseudo-concentration $x_{vft}$ combines the concentration $x_t$ of the atomic type $t$ with the probabilities for the orientation vector $\hat{e}_f$ and the displacement vector $\Delta \vec{R}_{v}$. This leads to the site-diagonal configurational average, which can be determined by solving the multi-component CPA equations \cite{FS80}: \begin{eqnarray} \label{eq:CPA1} \underline{\tau}_{{\rm CPA}} &= & \sum_{vft} x_{vft}\, \underline{\tau}_{vft} \\ \label{eq:CPA2} \underline{\tau}_{vft}& = & \big[ (\underline{t}_{vft})^{-1} - (\underline{t}_{{\rm CPA}})^{-1} + (\underline{\tau}_{{\rm CPA}})^{-1} \big]^{-1} \\ \label{eq:CPA3} \underline{\tau}_{{\rm CPA}} & = & \frac{1}{\Omega_{{\rm BZ}}} \int_{\Omega_{\rm BZ} } d^{3}k \left[ (\underline{t}_{{\rm CPA}})^{-1} - \underline{G}(\vec{k},E) \right]^{-1} \; , \end{eqnarray} where again the underline symbol indicates matrices with respect to the angular momentum index $\Lambda$. \subsection{One step model of ARPES} The main idea of the one-step model of photoemission is to describe the excitation process, the transport of the photoelectron to the surface as well as the escape into the vacuum in a coherent way as a single quantum mechanical process \cite{Pen76}. The one-step model of ARPES is based on Fermi's golden rule and was originally implemented for ordered surfaces using the multiple scattering KKR formalism (for more details see the review in Ref.\ \onlinecite{Bra96}). 
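The structure of the self-consistency cycle in Eqs.~(\ref{eq:CPA1})--(\ref{eq:CPA3}) above can be illustrated by a minimal scalar (single-band) analogue, in which the matrices reduce to numbers and the Brillouin zone integral of Eq.~(\ref{eq:CPA3}) is replaced by the lattice Green function of a model semielliptic band. This is a textbook-style sketch under these simplifying assumptions, not the relativistic multi-component implementation used in SPR-KKR:

```python
import cmath
import math

def g_semielliptic(z, W=1.0):
    """Lattice Green function of a semielliptic DOS with half-width W;
    it plays the role of the BZ integral in Eq. (CPA3)."""
    s = cmath.sqrt(z * z - W * W)
    if s.imag * z.imag < 0.0:     # select the retarded branch (Im G <= 0)
        s = -s
    return 2.0 * (z - s) / (W * W)

def cpa_medium(E, comps, W=1.0, eta=1e-3, tol=1e-12, maxit=500):
    """Scalar analogue of the CPA cycle, Eqs. (CPA1)-(CPA3), at one
    energy; comps = [(concentration x_i, site energy eps_i), ...]."""
    z = complex(E, eta)
    sigma = sum(x * e for x, e in comps)      # VCA starting guess
    for _ in range(maxit):
        gc = g_semielliptic(z - sigma, W)
        # averaged excess scattering <T> (analogue of t_vft vs t_CPA)
        tav = sum(x * (e - sigma) / (1.0 - (e - sigma) * gc)
                  for x, e in comps)
        if abs(tav) < tol:                    # <T> = 0: CPA condition met
            break
        sigma = sigma + tav / (1.0 + tav * gc)
    return sigma, gc

comps = [(0.5, -0.3), (0.5, +0.3)]  # binary 'alloy' analogue
sigma, gc = cpa_medium(0.0, comps)
print("Sigma(E=0) =", sigma, "  DOS(0) =", -gc.imag / math.pi)
```

The converged coherent potential acquires a negative imaginary part, i.e.\ the effective medium describes states with a finite lifetime, which is exactly the mechanism exploited by the alloy analogy model for thermal disorder.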
This approach has been generalized to describe photoemission of disordered alloys by means of the CPA \cite{DTLN83,BMM+10}. Recently it was extended to deal with thermal lattice vibration effects, exploiting the alloy analogy model described above. This approach was successfully applied to describe indirect transitions which occur in soft- and hard-X-ray photoemission \cite{BMM+13}. Based on the CPA approach, the temperature-dependent spin-density matrix $\rho$ for a given kinetic energy $\epsilon_f$ and wave vector ${\bf k}_{\|}$ can be written in the following form: \begin{eqnarray} \langle{\overline\rho}_{ss'} (\epsilon_f,{\bf k}_{\|},T)\rangle~\propto~&& \langle{\overline\rho}^{at}_{ss'}(\epsilon_f,{\bf k}_{\|},T)\rangle +\langle{\overline\rho}^{c}_{ss'} (\epsilon_f, {\bf k}_{\|},T)\rangle \nonumber \\ &+&\langle{\overline\rho}^{inc}_{ss'} (\epsilon_f,{\bf k}_{\|},T)\rangle +\langle{\overline\rho}^{surf}_{ss'}(\epsilon_f,{\bf k}_{\|},T)\rangle, \end{eqnarray} with a purely atomic part ($at$), a coherent part ($c$) involving multiple scattering, and an incoherent ($inc$) part, as described in detail in Refs.\ \onlinecite{GDG89} and \onlinecite{Dur81} in the context of chemical disorder in alloys. The third, incoherent contribution, which appears due to the CPA-averaging procedure, represents an on-site quantity that behaves DOS-like \cite{GDG89}. The last contribution is the surface ($surf$) part of the spin-density matrix. 
As dispersing and non-dispersing contributions are clearly distinguishable, we can define the spin-density matrix which describes the angle-integrated (AIPES) photoemission \begin{eqnarray} \langle{\overline\rho}^{\rm AIPES}_{ss'} (\epsilon_f,{\bf k}_{\|},T)\rangle~\sim~ \langle{\overline\rho}^{at}_{ss'}(\epsilon_f,{\bf k}_{\|},T)\rangle+\langle{\overline\rho}^{inc}_{ss'} (\epsilon_f, {\bf k}_{\|},T)\rangle+ \langle{\overline\rho}^{surf}_{ss'}(\epsilon_f,{\bf k}_{\|},T)\rangle~, \end{eqnarray} where the ${\bf k}$-dependence in the atomic and incoherent contributions is only due to the final state. A ${\bf k}$-averaging is not necessary because the ${\bf k}$-dependence of the (SP)LEED-type final state is very weak and can be neglected in explicit calculations. Furthermore, when using the single-scatterer approximation for the final state the ${\bf k}$-dependence vanishes completely. In both cases this allows for a direct comparison to corresponding measurements. In terms of the spin-density matrix $\rho$, the intensity of the photocurrent can be written as: \begin{equation} I(\epsilon_f,{\bf k}_{\|},T)~=~Tr \left(~\rho_{ss'}(\epsilon_f,{\bf k}_{\|},T)~\right)~, \label{eq:spind3} \end{equation} with the corresponding spin polarization vector given by: \begin{equation} {\bf P}~=~\frac{1}{I}~Tr~\left(~ \mbox{\boldmath $\sigma$}~\rho~\right)~. \label{eq:spind4} \end{equation} Finally, the spin-projected photocurrent is obtained from the following expression: \begin{equation} I^{\pm}_{{\bf n}}~=~\frac{I}{2}~\left(~1~\pm~{\bf n} \cdot {\bf P}~\right)~, \label{eq:spind5} \end{equation} with the spin projection $(\pm)$ referring to the direction $\bf n$. Within our approach, we aim at a generalized spin-density matrix formalism for the photocurrent that includes spin fluctuations and thermal lattice vibrations on the same level of accuracy. 
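Since Eqs.~(\ref{eq:spind3})--(\ref{eq:spind5}) involve only the $2\times2$ spin-density matrix, they are easily checked numerically. A minimal sketch with the normalization $I^{+}_{\bf n}+I^{-}_{\bf n}=I$; all helper names are ours:

```python
# Pauli matrices as 2x2 nested tuples
SX = ((0.0, 1.0), (1.0, 0.0))
SY = ((0.0, -1j), (1j, 0.0))
SZ = ((1.0, 0.0), (0.0, -1.0))

def trace2(m):
    return m[0][0] + m[1][1]

def matmul2(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def intensity(rho):
    """Eq. (spind3): I = Tr(rho)."""
    return trace2(rho).real

def polarization(rho):
    """Eq. (spind4): P = Tr(sigma rho) / I."""
    I = intensity(rho)
    return tuple(trace2(matmul2(s, rho)).real / I for s in (SX, SY, SZ))

def spin_projected(rho, n):
    """Spin-projected currents with respect to direction n, normalized
    such that I+ + I- = I (cf. Eq. (spind5))."""
    I = intensity(rho)
    nP = sum(a * b for a, b in zip(n, polarization(rho)))
    return 0.5 * I * (1.0 + nP), 0.5 * I * (1.0 - nP)

# fully polarized spin-up state along z: rho = diag(1, 0)
rho_up = ((1.0, 0.0), (0.0, 0.0))
I_plus, I_minus = spin_projected(rho_up, (0.0, 0.0, 1.0))
print("P =", polarization(rho_up), " I+ =", I_plus, " I- =", I_minus)
```

For a fully polarized state the whole current ends up in one spin channel, while an unpolarized $\rho = \frac{1}{2}I\,\mathbf{1}$ splits equally for any choice of ${\bf n}$.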
The formalism presented in section \ref{sec:aam} provides us with the temperature-dependent single-site scattering matrix $\underline{t}_{vft}$, which enters the multiple scattering KKR formalism to calculate the photocurrent $I(\epsilon_f,{\bf k}_{\|},T)$. (A detailed description of the generalized fully relativistic one-step model for disordered magnetic alloys can be found in Ref.\ \onlinecite{BME18}.) Special care has to be taken concerning the temperature-dependent averaging of the photoemission matrix elements, in contrast to the previous work which did not account for spin fluctuations \cite{BMM+13}. Within the above-mentioned rigid spin approximation \cite{LKG84}, the regular $\underline M^{\rm loc}_{i'}$ and irregular $\underline I^{\rm loc}_{i',j'}$ dipole transition matrix elements are first calculated in the local frame of reference fixed to the magnetic moment oriented along an orientation vector $\hat{e}_f$. Here the components $i'$ and $j'$ of the light polarization vector refer to the local frame of reference ($x',y',z'$) with $\hat{e}_{z'}= \hat{e}_f$. In the case of spin fluctuations, the transformation of the matrix elements into the global frame of reference also includes a rotation of the light polarization. 
For the regular matrix elements one finds: \begin{equation} \underline{M}^{\rm vft}_{i}=\sum_{i'}D_{i i'}(\hat{e}_{f})\, \underline{U}(\Delta \vec{R}_{v})\, \underline{R}(\hat{e}_{f})\, \underline{M}^{\rm loc}_{i'} \, \underline{R}(\hat{e}_{f})^{-1} \underline{U}(\Delta \vec{R}_{v})^{-1} \;, \end{equation} and for the irregular matrix elements one has accordingly: \begin{equation} \underline{I}^{\rm vft}_{ij}=\sum_{i' j'}D_{i i'}(\hat{e}_{f})\, D_{j j'}(\hat{e}_{f})\, \underline{U}(\Delta \vec{R}_{v})\, \underline{R}(\hat{e}_{f})\, \underline{I}^{\rm loc}_{i' j'} \, \underline{R}(\hat{e}_{f})^{-1} \underline{U}(\Delta \vec{R}_{v})^{-1} \;, \end{equation} where the $3\times3$ matrix $D_{ij}$ represents the transformation of the polarization vector of the light from the local to the global frame of reference. \section{Computational details} The electronic structure of the investigated ferromagnet, bcc Fe, has been calculated self-consistently using the SPR-KKR band structure method \cite{SPR-KKR7.7,EKM11}. For the LSDA exchange-correlation potential the parametrization given by Vosko et al.\ \cite{VWN80} has been used, together with the experimental lattice parameter. For the angular momentum expansion within the KKR multiple-scattering method a cutoff of $l_{max}= 3$ was used. The temperature effects are treated within the alloy analogy scheme based on the CPA alloy theory. For the description of the magnetic spin fluctuations the temperature-dependent magnetization data were taken from experimental magnetization curves \cite{CG71}, and the lattice displacements as a function of temperature have been calculated using the Debye temperature $\Theta_D=420$~K. In addition to the LSDA calculations, a charge and self-energy self-consistent LSDA+DMFT scheme for correlated systems based on the KKR approach \cite{Min11,MCP+05} has been used. 
The many-body effects are described by means of dynamical mean field theory (DMFT) \cite{Hel07}, and the relativistic version of the so-called spin-polarized T-matrix fluctuation exchange approximation \cite{PKL05,MMC+09} was used as impurity solver. The realistic multiorbital interaction has been parametrized by the average screened Coulomb interaction $U$ and the Hund exchange interaction $J$. In our calculations of bcc Fe we used the values $U=1.5$~eV and $J=0.9$~eV for the Coulomb and exchange parameters, as found in our previous ARPES studies on Fe \cite{SFB+09,SBM+12}. In a second step the self-consistent potential and DMFT self-energy for bcc Fe have been used to calculate the photoemission response from the Fe(001) surface by means of the one-step model of photoemission as presented above. \section{Results and Discussion} \subsection{Temperature dependent ground state} \label{sec:dos} First, let us discuss the impact of thermal lattice vibrations and spin fluctuations on the ground state electronic structure of a magnetic solid, focusing on the temperature-induced modification of the density of states (DOS). In an ordered material, the spin ($s$) resolved density of states is given by the sum $n_s(E) = \frac{1}{N} \sum_{\vec{k}} \delta(E - E_s(\vec{k}))$, with $E_s(\vec{k})$ the energies of the electron states, which have an infinite lifetime at $T = 0$~K. At a finite temperature, $T > 0$~K, on the other hand, the electron scattering due to thermally induced lattice vibrations and spin fluctuations leads to a finite lifetime of the electronic states. This can be accounted for within the KKR Green function formalism by expressing the total DOS in terms of the Green function as \begin{eqnarray} \label{eq:DOS} n(E) &= & -\frac{1}{\pi} \mbox{Im } \mbox{Trace } \underline{G}(E) \;. \end{eqnarray} Thermally induced lattice vibrations are treated here as random atomic displacements from the equilibrium positions, with the amplitude dependent on temperature. 
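A minimal numerical illustration of Eq.~(\ref{eq:DOS}): replacing the $\delta$-peaks of the ordered, $T=0$~K spectrum by a Green function with a constant imaginary part $\Gamma$ (a crude stand-in for the thermal self-energy) turns each level into a Lorentzian of width $\Gamma$ while approximately preserving the integrated number of states. The toy level positions below are arbitrary:

```python
import math

def dos_from_green(E, levels, gamma):
    """n(E) = -(1/pi) Im Tr G(E) for a set of discrete levels E_k,
    with G(E) = sum_k 1/(E - E_k + i Gamma); the constant Gamma
    mimics the finite lifetime caused by thermal disorder."""
    g = sum(1.0 / complex(E - ek, gamma) for ek in levels)
    return -g.imag / math.pi

levels = [-2.0, -0.5, 0.7, 1.5]          # arbitrary toy 'band' energies
grid = [-4.0 + 8.0 * i / 2000 for i in range(2001)]
for gamma in (0.05, 0.4):                # weak vs strong scattering
    ns = [dos_from_green(e, levels, gamma) for e in grid]
    norm = sum(ns) * (8.0 / 2000)        # crude integral over the window
    print(f"Gamma = {gamma}: peak = {max(ns):.2f}, integral = {norm:.3f}")
```

Increasing $\Gamma$ lowers and broadens the peaks, which is qualitatively the effect of the thermal disorder on the DOS discussed below.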
The temperature-induced tilting of the atomic spin moments is treated in the same way. Together these create thermal disorder in the atomic positions and spin orientations, having a similar impact on the electronic structure as chemical disorder in an alloy. In particular, this disorder causes a broadening of the electronic states and a change of the exchange splitting of the states with opposite spin direction. Using the alloy analogy formalism described above, the Green function of the system, represented within multiple scattering theory, is given in terms of the configurational average of the scattering path operator $\underline{\tau}_{CPA}$ given by Eqs.\ (\ref{eq:CPA1}) to (\ref{eq:CPA3}). As will be shown below, spin fluctuations give the dominant contribution to the thermally induced modification of the electronic structure when the temperature approaches the critical temperature $T_{\rm C}$, where a transition to the paramagnetic (PM) state occurs. Thus, focusing on thermal spin fluctuations only, the scattering path operator averaged over spin fluctuations at a given temperature can be written as $\underline{\tau}_{CPA} = \sum_{f} x_{f}\, \underline{\tau}_{f}$, where $\underline{\tau}_{f}$ is associated with the spin orientation $\hat{e}_f$, giving access to a corresponding DOS contribution $n_{f,s}(E)$. The DOS $n^{loc}_{f,s}(E)$ projected on spin $s$, evaluated in the local frame of reference with $\hat{e}_{z'} = \hat{e}_f$, is different for the two spin channels in the case of a non-zero local magnetic moment. This holds even for the PM (i.e.\ magnetically disordered) state with $\langle \hat{m} \rangle = 0$ in the case of a non-vanishing local moment above $T_{\rm C}$, as occurs, e.g.\ for bcc Fe. However, the average spin-projected DOS functions calculated for the PM state in a common global frame of reference are equal; i.e.\ one has $\langle n_{\uparrow}\rangle (E) = \langle n_{\downarrow}\rangle (E)$. 
Here, the indices ${\uparrow}$ and ${\downarrow}$ stand for a spin orientation along the global $\hat{e}_{z}$ direction and opposite to it, respectively. Due to the random orientation of the atomic spin magnetic moments in the system, the $n_{\uparrow}$ and $n_{\downarrow}$ DOS projections receive equal contributions from electronic states with different spin quantum numbers, implying a mixed-spin character of the electronic states in such a system. Fig.~\ref{fig:DOS_T}(a) shows the DOS for bcc Fe calculated for the PM state ($\langle \hat{m} \rangle = 0$) in the local frame of reference (solid line), averaged over all possible orientations of the magnetic moment. This result is compared with the DOS at $T = 0$~K. One can see first of all a finite exchange splitting of the majority and minority spin states at $T > T_{\rm C}$. The main temperature effect is a significant broadening of the energy bands when compared to the case of $T = 0$~K. In the global frame of reference, however, the difference between the majority- and minority-spin states decreases when approaching the critical temperature $T_{\rm C} = 1024$~K. Above $T_{\rm C}$, in the PM state, the difference between the DOS for the different spin channels vanishes. However, this is not the case when only thermal lattice vibrations are taken into account (dashed line in Fig.~\ref{fig:DOS_T}(a) for $T = 1025$~K). In this case only a weak broadening of the energy bands occurs, much weaker than that due to spin fluctuations. \begin{figure}[h] \includegraphics[width=0.4\textwidth,angle=0,clip]{Fe_DOS_T_local.eps}\;(a) \includegraphics[width=0.4\textwidth,angle=0,clip]{Fe_DOS_T_global_mod.eps}\;(b) \caption{\label{fig:DOS_T} Total spin-resolved DOS for bcc Fe in the local (a) and the global (b) frames of reference. 
} \end{figure} \subsection{Angle resolved photoemission of bcc Fe(001)} \label{sec:fe} Although a large number of experimental spin-resolved ARPES studies on ferromagnetic transition metals are present in the literature, corresponding data for high temperatures are very rare. Experimental temperature-dependent studies have been carried out predominantly for Fe and Ni in the mid-1980s (for a review see Ref.\ \onlinecite{johnson1997spin}). On the other hand, there have been several attempts to account for temperature-dependent ARPES within various theoretical frameworks such as dynamical mean field theory \cite{LKK01} or the disordered local moment approach \cite{DSG84}. However, most theoretical models were limited either to $T=0$~K or to temperatures above the critical temperature $T_{\rm C}$, and are based on the ground state electronic structure only. Thus these approaches ignore matrix-element, surface and final-state effects. Therefore the question whether ARPES can distinguish between the different models describing finite-temperature spin correlations, such as the Stoner or the Heisenberg model, is still open \cite{EPR+17}. The alloy analogy model in combination with the one-step model of photoemission, presented in Sec.~\ref{sec:theory}, allows one to describe all the mentioned effects on the same level of accuracy. As a first illustration of an application of this approach we discuss results for temperature-dependent spin-resolved ARPES on Fe(001) and compare the calculated spectra with corresponding experimental data stemming from Kisker et al. \cite{kisker1985spin}. \begin{figure}[h] \includegraphics[width=1.0\textwidth,angle=0,clip]{60eV_LDA_Exp.eps} \caption{\label{fig:NormalEm_LDA} Comparison between experimental (right panel) and theoretical LSDA-based spectra (left panel) for temperature-dependent spin-resolved photoemission at $E_{\rm phot}=60$~eV and normal emission. The dashed lines are spectra calculated for $T=0$~K. 
} \end{figure} In Fig.~\ref{fig:NormalEm_LDA} we compare experimental and theoretical LSDA-based spin-resolved photoemission data for three different temperatures, namely $T=0$, $300$ and $900$~K. The data for $0$~K serve as a reference, obtained by using the standard one-step model of photoemission. All spectra have been calculated for normal emission geometry assuming s-polarized light with $60$~eV photon energy. Prior to these calculations we performed a photon energy scan ($k_z$-scan) in order to identify the $k_z$ position in the Brillouin zone. Due to the LSDA approximation the final states are usually shifted somewhat in energy with respect to the experimental spectra. In the case of Fe the photon energy of $60$~eV corresponds to emission from the $\Gamma$ point. The spin-resolved spectra reveal three main transitions with bulk states as initial states: a minority peak close to the Fermi level and a majority peak at $-2.4$~eV binding energy, both having T$_{2g}$ symmetry. The majority peak at $-0.9$~eV binding energy has E$_g$ symmetry. This transition should be suppressed for s-polarized light due to the selection rules. However, as mentioned by Kisker et al.\ \cite{kisker1985spin}, due to the finite acceptance angle of the analyzer this transition has nevertheless been observed in the corresponding measurements. In addition, a majority peak with strong surface character shows up around $-0.9$~eV; in fact it is a mixture of an E$_g$-like state and a d-like surface resonance. The minority surface states of Fe(001) close to the Fermi level have been studied in detail in the past \cite{plucinski2009surface} but could not be resolved in Kisker's work due to the limited experimental resolution. In Fig.~\ref{fig:NormalEm_LDA} (lower panel) results of finite-temperature calculations (see Sec.~\ref{sec:theory}) are compared with corresponding experimental data. As a reference, calculated spectra for $T=0$~K are given by dashed lines. 
Overall, we obtain reasonable agreement with the experimental spectra. At $T=900$~K the magnetization of Fe is reduced to roughly 60\% of its value at $T=300$~K. As one can see, at high temperature the E$_{g}$ states are shifted towards the Fermi level. The exchange splitting of the T$_{2g}$ states is reduced but still remains considerable. In particular, not only a reduction of the exchange splitting is observed but also an increase of the minority peak intensity at $-2.5$ and $-0.9$~eV is found, in accordance with the experimental findings. This results from an increasing contribution from the majority-spin states, in line with the discussion in Sec.~\ref{sec:dos}. The overall reduction in the minority-spin intensities at finite temperature is also a result of a varying contribution of the different spin channels to the 'spin-mixed' electronic states. In the calculations we can switch the lattice vibrations and spin fluctuations on and off separately. The main broadening effect in the spectra results from the spin fluctuations, while lattice vibrations have a minor effect on the spin polarization. However, as was shown in the case of soft- and hard-X-ray photoemission \cite{BMM+13}, lattice vibrations become more noticeable at higher photon energies. \begin{figure}[h] \includegraphics[width=1.0\textwidth,angle=0,clip]{60eV_DMFT_Exp.eps} \caption{\label{fig:NormalEm_DMFT} Comparison between experimental (right panel) and theoretical LSDA+DMFT-based calculations (left panel) for temperature-dependent spin-resolved photoemission as measured for $E_{\rm phot}=60$~eV and normal emission. Dashed lines give calculated spectra obtained by means of LSDA (taken from Fig.~\ref{fig:NormalEm_LDA}).} \end{figure} It can be seen from Fig.~\ref{fig:NormalEm_LDA} that the overall agreement between the experimental data and the LSDA-based calculations is quite reasonable. The temperature dependence is also well described by the LSDA calculations. 
However, LSDA-based calculations underestimate the energy-dependent broadening, and the E$_{g}$ peak is found at a higher binding energy. One of the most successful approaches to include many-body effects beyond LSDA is the LSDA+DMFT scheme. Various aspects concerning the self-energy obtained via self-consistent LSDA+DMFT calculations for bcc Fe have been discussed in detail recently in the context of ARPES \cite{SFB+09,SBM+12}. To find the best correspondence between the binding-energy positions and the energy-dependent broadening of the theoretical peaks we have used the value $U=1.5$~eV for the averaged on-site Coulomb interaction and $J=0.9$~eV for the exchange interaction. The chosen value for $U$ lies between the estimated value $U \approx 1$~eV based on experiment \cite{PhysRevB.45.13272} and the value $U\approx 2$~eV derived from theoretical studies \cite{CG05,CMK+08}. The most pronounced difference between the LSDA+DMFT calculations and the corresponding experimental results concerns the majority T$_{2g}$ state, which in the LSDA+DMFT calculations is shifted towards the Fermi level. On the other hand, the energetic position of this peak is better reproduced by plain LSDA calculations, as shown in Fig.~\ref{fig:NormalEm_LDA}. These differences may indicate a strong influence of nonlocal correlations in the case of Fe \cite{SFB+09,SBM+12}. In the following we address the question to what extent strongly correlated systems can be investigated by means of an implementation suited to deal with only moderately correlated systems. In general, local spin fluctuations and the corresponding correlations are formally included in the LSDA+DMFT calculations if a numerically exact DMFT impurity solver is used, e.g.\ the continuous-time quantum Monte Carlo method. 
On the other hand, the spin-polarized T-matrix fluctuation-exchange solver (SPTF) \cite{PKL05,KL02} used to calculate the spectra presented in Fig.~\ref{fig:NormalEm_DMFT} has been implemented to treat the problem of magnetic fluctuations in transition metals, and has been successfully applied to the ferromagnetic phases of Fe, Co, Ni \cite{KL02,BME+06,GDK+07}, to the anti-ferromagnetic phase of $\gamma$-Mn \cite{DMB+09}, as well as to half-metallic ferromagnets \cite{KIC+08}. This solver is quite stable, computationally rather cheap and deals with the complete four-index interaction matrix. On the other hand, its perturbative character restricts its use to relatively weakly, or moderately, correlated systems. Not surprisingly, the SPTF performs well when starting from a spin-polarized solution, since the spin splitting already contains the main part of the exchange and correlation effects. On the other hand, the direct application of SPTF to a non-magnetic reference state can create stability problems. This is because one tries to attribute the strong and essentially mean-field effect of the formation of a local magnetic moment to dynamical fluctuations around the non-spin-polarized state. Using a non-magnetic reference state causes no problems when one uses the quantum Monte Carlo (MC) method, which has no formal restrictions on the amplitude of fluctuations, but it seems problematic for perturbative approaches. As a way to reduce the limitations in the latter case, we propose a combination of SPTF with the disordered local moment approach \cite{GPS+85,Sta94}. As already shown for the case of actinides \cite{NWK+03}, the inclusion of the fluctuations of randomly oriented local moments can drastically improve the description of the energetics in the paramagnetic phase. 
Therefore, as demonstrated in Fig.~\ref{fig:NormalEm_DMFT}, one can hope that treating spin fluctuations within the alloy analogy model presented here, in combination with a perturbative DMFT solver, allows us to extend the range of applicability of SPTF. \begin{figure}[h] \includegraphics[width=1.0\textwidth,angle=0,clip]{Stoner_vs_Heisenberg.eps} \caption{\label{fig:Stoner_Vs_Heisenberg} Calculated spin-resolved ARPES spectra for $E_{\rm phot}=60$~eV and normal emission geometry. The results in the top panel are calculated spectra for $T=0$~K. Bottom left panel: calculated LSDA results based on the alloy analogy model (Heisenberg model). Bottom right panel: calculated LSDA results applying a modified exchange splitting (Stoner model).} \end{figure} In recent ultrafast pump-probe spin-resolved photoemission experiments on ferromagnetic materials \cite{EPR+17}, the time-dependent demagnetization is reflected by a corresponding change in the exchange splitting. Several mechanisms for this observation have been proposed in the literature. Among others, Eich et al.\ discussed two possible limiting physical models: the itinerant-electron Stoner-like approach and the localized-electron Heisenberg spin-fluctuation picture. While the first model allows only for a homogeneous longitudinal magnetization in the system, the latter one accounts for transverse spin fluctuations as well. Referring to a common spin quantization axis in the system, these lead to a band mirroring, i.e.\ to a transfer of spectral weight of majority- or minority-spin states to mirrored states located close in binding energy but with opposite spin. Here we point out that a point of view similar to the band mirroring picture has been introduced in a more formal way in the past when dealing with itinerant ferromagnets at finite temperatures \cite{MJK98,KMP77,Cap74,GPS+85}. 
The approach leads to so-called shadow bands and was used, among others, to discuss the temperature dependence of ARPES as well as magnetoresistance measurements \cite{MJK98}. Both of these models lead to different signatures in the spin-resolved ARPES data, and the main question is to what extent these two models are distinguishable by means of ab initio calculated ARPES spectra. The formalism presented in this manuscript allows one to model quantitatively and to predict in detail all possible differences in the corresponding ARPES spectra. In the left panel of Fig.~\ref{fig:Stoner_Vs_Heisenberg} we summarize spin-resolved spectra for the Heisenberg model as calculated by the alloy analogy model for $T=0$, $300$ and $900$~K (results taken from Fig.~\ref{fig:NormalEm_LDA}). In the right panel, we present calculated spectra for a modified exchange field $B(\vec r) \rightarrow \alpha B(\vec r)$, where $\alpha$ is a scaling factor which has been chosen in such a way that the local magnetic moment of Fe follows the experimental magnetization curve. We obtain significant differences between the two models. Within the Heisenberg model the minority-spin channel develops a second peak at higher binding energy, this way reflecting the shadow bands and the band mirroring picture. The Stoner model, in contrast, leads to a shift of the minority-spin states towards higher binding energies. Finally, as shown in Fig.~\ref{fig:NormalEm_paramag}, above $T_{\rm C}$ the Heisenberg picture still leads to a non-zero spin polarization in the spin-polarized ARPES spectra due to the photoemission process. On the other hand, the Stoner model leads to zero spin polarization above $T_{\rm C}$, and the main intensity is found at a binding energy of about $1$~eV. As a consequence, one may state that these explicit spectroscopic calculations provide an adequate tool to distinguish between the various physical mechanisms involved. 
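The determination of the scaling factor $\alpha$ for the Stoner-model spectra amounts to a one-dimensional root search: find $\alpha$ such that the calculated local moment matches $M(T)/M(0)$ from the experimental magnetization curve. A sketch with a toy monotone moment model standing in for the self-consistent KKR calculation (the function and its parameters are purely illustrative):

```python
import math

def moment_vs_scaling(alpha):
    """Toy, purely illustrative stand-in for the self-consistently
    calculated local moment as a function of the exchange-field
    scaling alpha (monotone, saturating at an Fe-like ~2.2 mu_B)."""
    return 2.2 * math.tanh(1.8 * alpha)

def find_alpha(m_target, f=moment_vs_scaling, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection for the scaling factor alpha with f(alpha) = m_target,
    i.e. scale B_xc such that the moment follows the experimental M(T)."""
    assert f(lo) <= m_target <= f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < m_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. at T = 900 K the magnetization of Fe is reduced to roughly 60%:
m0 = moment_vs_scaling(1.0)
alpha = find_alpha(0.6 * m0)
print(f"alpha = {alpha:.4f}, moment = {moment_vs_scaling(alpha):.4f}")
```

Since the moment responds monotonically to the exchange-field scaling, bisection is guaranteed to converge; in practice each function evaluation would be a full self-consistent calculation, so only a few iterations are affordable.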
\begin{figure}[h] \includegraphics[width=1.0\textwidth,angle=0,clip]{Febcc_PES60eV_HeiSto_1100K.eps} \caption{\label{fig:NormalEm_paramag} Left panel: Comparison of spin-resolved ARPES intensities for the Stoner- and Heisenberg-like models calculated at $T=1100$~K, close to the ferromagnetic-to-paramagnetic transition. Right panel: Corresponding spin difference $I_{maj}-I_{min}$.} \end{figure} \section{Conclusions} \label{sec:sum} We have introduced a generalization of the one-step model of photoemission for finite temperatures. The scheme is based on the alloy analogy model, which allows for the inclusion of thermal effects when calculating spin-resolved ARPES spectra. The technical details of the implementation using the spin-polarized relativistic coherent potential approximation within the one-step model of photoemission have been outlined. This formalism allows one to deal quantitatively with spin fluctuations as well as lattice vibrations on the same footing. In the present contribution we have discussed temperature-dependent, spin-resolved ARPES spectra of Fe(001). Our calculated photoemission spectra for Fe(001) were found to match the experimental data quantitatively. To overcome the limitations of calculations based on the local density approximation, applications of the LSDA+DMFT scheme have been presented and discussed. The inclusion of electronic correlations described by the perturbative SPTF-DMFT many-body solver in combination with randomly fluctuating local moments improves the description of the corresponding spectra in the paramagnetic phase. As was shown, the alloy analogy model can be used to describe and predict changes of the spin-polarized spectra due to the ultrafast processes observed in pump-probe photoemission. Here we showed that the Heisenberg-like band mirroring mechanism, which leads to the shadow bands, provides an adequate model to describe recent experimental findings. 
\section{Acknowledgements} Financial support by the DFG (Ebe154/32-1) is gratefully acknowledged. J.M. would like to thank the CEDAMNF project financed by the Ministry of Education, Youth and Sports of the Czech Republic, Project No. CZ.02.1.01/0.0/0.0/15\_003/0000358. The authors would like to thank Voicu Popescu for discussions and his support in preparing some of the graphs.
\section{Introduction} The conformal symmetry in one dimension is so powerful that in the one-particle case it completely fixes the potential to be $1/x^2$ \cite{AFF}. The supersymmetric extensions of conformal mechanics \cite{n2,IKL1} add to the theory some fermions interacting with the single bosonic field $x$, but in the bosonic sector the potential is still the same. So it seems that the question whether we can add something to the bosonic potential without breaking (super)conformal invariance has a unique answer: no. Nevertheless, this is not completely true. Indeed, the standard description of one-dimensional conformal invariance consists in the statement that the Hamiltonian \begin{equation}\label{i1} H= \frac{1}{2} p^2 +\frac{g^2}{2 x^2} \end{equation} forms the $so(1,2)$ algebra together with the generators of the dilatation $D$ and conformal boost $K$ defined as \begin{equation}\label{i2} D=xp, \qquad K= x^2, \end{equation} with respect to the canonical Poisson bracket \begin{equation}\label{i3} \left\{ x,p\right\}=1. \end{equation} Now it is completely clear that the modified Hamiltonian $\hat H$ \begin{equation}\label{i4} \hat H = H+m^2K \end{equation} will do the same job perfectly well, forming the same $so(1,2)$ algebra with the generators $D$ and $K$ \p{i2}. Thus, the additional harmonic oscillator potential is also admissible without breaking conformal symmetry. It was first shown in \cite{AP1} that this additional harmonic oscillator potential just modifies the realization of the conformal group, keeping the resulting action invariant under conformal group transformations. Unfortunately, the approach presented in \cite{AP1} is not fully appropriate, and the nice idea of modifying the conformal group transformations could not be immediately extended to the $N=2,4$ supersymmetric cases.
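That the set $\left\{ \hat H, D, K\right\}$ still closes into $so(1,2)$ can be checked directly from the definitions \p{i1}-\p{i4} and the bracket \p{i3}:
\begin{equation}
\left\{ D, \hat H\right\} = 2H - 2m^2 K = 2\hat H - 4m^2 K, \qquad
\left\{ \hat H, K\right\} = \left\{ H, K\right\} = -2D, \qquad
\left\{ D, K\right\} = -2K ,
\end{equation}
so all brackets lie in the linear span of $\left\{ \hat H, D, K\right\}$, which coincides with the span of $\left\{ H, D, K\right\}$. The harmonic term thus merely rotates the basis of the same $so(1,2)$ algebra.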
In the present paper (Sections 2-4) we will demonstrate that this additional harmonic oscillator term can be easily obtained within the nonlinear realizations framework, in which conformal mechanics (together with its $N=2,4$ superextensions) describes the motion along geodesics in the group space of the $d=1$ (super)conformal group \cite{IKL0,IKL1}. In this approach, the almost trivial bosonic case (Section 2) has natural and straightforward $N=2$ (Section 3) and $N=4$ (Section 4) supersymmetric analogs. Another issue we analyze in this paper is the generation of bosonic potentials in $N=4$ supersymmetric mechanics through its coupling to an auxiliary fermionic supermultiplet. This additional supermultiplet enters the action in a rather special manner: \begin{itemize} \item the fermionic components $\psi^a, \bar\psi_a$ appear in the action only through their time derivatives. Thus the corresponding equations of motion are nothing but conservation laws, and all fermionic components can be expressed in terms of the remaining components; \item the bosonic components $w^i, {\bar w}_i$ have a first-order kinetic term and therefore serve as spin degrees of freedom; \item the coupling constant $g$ in the bosonic potential is the square of the norm of the spin degrees of freedom $w^i$: $g=w^i{\bar w}_i=const$. \end{itemize} The idea to generate the bosonic potential by coupling with additional supermultiplets was first proposed in \cite{DI}. In that paper the authors introduced a coupling with a supermultiplet containing physical fermions, thus ending up with a system with a doubled number of fermions, in contrast to our case where all fermions are auxiliary. Our approach in this paper is very close to the one recently proposed in \cite{IFL}.
The component actions we construct in this paper have to coincide with the ones from \cite{IFL}, because the main ingredient - the action describing the coupling of the basic $(1,4,3)$ supermultiplet to the auxiliary fermionic $(0,4,4)$ one - is unique and is completely fixed by $N=4$ Poincar\'{e} supersymmetry (up to an overall constant). \section{Bosonic case: justification of the idea} The standard conformal algebra in $d=1$ is the $so(1,2)$ algebra spanned by the generators of translations ($P$), dilatations ($D$) and conformal boosts ($K$) \p{alg1}. One of the simplest ways to construct a one-dimensional system which is conformally invariant is to use the method of covariant reduction \cite{IKL0}. Applied to this simplest case, the method includes the following steps: \begin{itemize} \item realization of the conformal group $SO(1,2)$ in some coset; \item construction of the Cartan forms; \item imposing invariant constraints on the Cartan forms which result in the desired equations of motion. \end{itemize} Let us choose the following parametrization of the $SO(1,2)$ group space: \begin{equation}\label{g1} g = e^{itP}\; e^{izK}\; e^{iuD} \end{equation} where the coordinates $u$ and $z$ are functions of the time variable $t$. Thus, we are dealing with the nonlinear realization of $SO(1,2)$ in its group space. The Cartan forms for the group element $g$ \p{g1} read \begin{equation}\label{CF0} g^{-1}\; dg = i\omega_P P + i\omega_K K + i\omega_D D \;, \end{equation} where \begin{equation}\label{CF1} \omega_P=e^{-u} dt, \quad \omega_D = du-2z dt,\quad \omega_K=e^u\left[ dz +z^2 dt\right]. \end{equation} In the present case the coset coincides with the group space; therefore, all Cartan forms \p{CF1} are invariant under $SO(1,2)$ transformations realized as left multiplications of the group element $g$ \p{g1}.
The set of constraints we are going to impose on the forms \p{CF1} reads \cite{IKL1} \begin{equation}\label{con1} \omega_D=0, \qquad \omega_K=g^2 \omega_P , \end{equation} where $g$ is a free parameter with the dimension of the mass. The first constraint in \p{con1} \begin{equation}\label{IH1} \omega_D = du-2z dt=0 \quad \Rightarrow \quad z=\frac{1}{2}\dot{u} \end{equation} is just the simplest version of the Inverse Higgs phenomenon \cite{IO}. Its meaning is rather simple -- we do not need an independent field $z(t)$ in order to realize the conformal invariance in the group space. Instead we may use the time derivative of the dilaton $\dot{u}$, which has the same transformation properties with respect to the $SO(1,2)$ group. The second constraint in \p{con1} is a dynamical one: it reads $\dot{z}+z^2=g^2 e^{-2u}$, i.e. $\frac{1}{2}\ddot{u}+\frac{1}{4}\dot{u}{}^2=g^2 e^{-2u}$ after using \p{IH1}, and may be rewritten in the familiar form \begin{equation}\label{conf} \ddot{x}=\frac{g^2}{x^3}, \qquad \mbox{where} \qquad x\equiv e^{\frac{u}{2}}. \end{equation} Clearly, the equation of motion \p{conf} follows from the action of conformal mechanics \cite{AFF} \begin{equation}\label{L1} S=\int dt \left( \frac{ {\dot{x}}^2}{2}- \frac{g^2}{2 x^2}\right). \end{equation} It is not unexpected now that the action \p{L1} is invariant under the conformal transformations \begin{equation}\label{transf1} \delta t = f(t) = a+bt+ct^2, \qquad \delta u = \dot{f}. \end{equation} Let us stress that the explicit form of \p{transf1} simply follows from the left action of the conformal group $SO(1,2)$ on the group element $g$ \p{g1}. All this is not new and has been known for a long time. In order to learn something new, let us change the parametrization \p{g1} and associate the time variable $t$ with the generator $P+m^2K$, where $m$ is an additional parameter with the dimension of mass: \begin{equation}\label{g2} {\tilde g} = e^{it\left( P +m^2 K\right)}\; e^{izK}\; e^{iuD} \;.
\end{equation} The explicit relation between the coset element $\tilde g$ \p{g2} and $g$ \p{g1} is given by \begin{equation}\label{g1g2} e^{it\left(P+m^2 K\right)}e^{izK} e^{iuD}=e^{i \frac{\tan(m t)}{m}P}e^{i\left(z\cos^2(m t)+ m \cos(m t)\sin(m t)\right)K} e^{i\left( u-2\log\left(\cos(m t)\right)\right)D}. \end{equation} The Cartan forms \p{CF0} are slightly changed to be \begin{equation}\label{CF} \widetilde{\omega}_P=e^{-u} dt, \quad \widetilde{\omega}_D = du-2z dt,\quad \widetilde{\omega}_K=e^u\left[ dz +z^2 dt +m^2 dt\right]. \end{equation} Now we impose the same constraints as before \p{con1}. As a result, we end up with the following equation of motion: \begin{equation}\label{conf1} \ddot{x}=-m^2 x+\frac{g^2}{x^3}, \qquad \mbox{where again} \qquad x= e^{\frac{u}{2}}. \end{equation} Clearly, this equation follows from the action \begin{equation}\label{L2} \tilde{S}=\int dt \left[ \frac{ {\dot{x}}^2}{2}- \frac{m^2 x^2}{2} -\frac{g^2}{2 x^2}\right]. \end{equation} Thus, by construction, the action \p{L2}, which describes the conformal mechanics equipped with an additional harmonic oscillator term, has to be invariant under the ``conformal'' group $SO(1,2)$! It is not too hard to find the corresponding realization of this symmetry \begin{equation}\label{transf2} \delta t = \tilde{f}(t)=a\left( 1+ \cos(2m t)\right)+ \frac{b}{2m}\sin(2m t)+\frac{c}{2m^2} \left(1- \cos(2m t)\right) ,\quad \delta u=\dot{\tilde{f}}, \end{equation} where the parameters $(a,b,c)$ are, as before, the parameters of translations, dilatations and conformal boosts, respectively. The function ${\tilde f}(t)$, which collects all parameters of the $SO(1,2)$ transformations, obeys, in view of \p{transf2}, the constraint \begin{equation}\label{f1} \frac{d}{dt}\left[ \ddot{\tilde f}+ 4 m^2 {\tilde f} \right] =0.
\end{equation} Thus, the conformal mechanics with the added harmonic oscillator term (the 1-particle Calogero-Moser system) is invariant under the $SO(1,2)$ group, which has a non-canonical realization on the time variable $t$. Two comments have to be added here. Firstly, in the limit $m\rightarrow 0$ the transformations \p{transf2} reduce to the standard ones \p{transf1}, as they should; in addition, a direct computation gives $\ddot{\tilde f}+4m^2 {\tilde f}=4m^2 a+2c=const$, so the constraint \p{f1} is indeed satisfied. Secondly, the kinetic term in the action \p{L2} is invariant under \p{transf2} only together with the oscillator term, while the conformal potential is invariant by itself. The invariance of the action \p{L2} with respect to the $SO(1,2)$ transformations \p{transf2} was first demonstrated in \cite{AP1}. Although we do not use the realization of the Virasoro group in the corresponding coset, which was the key ingredient in \cite{AP1}, it makes sense to consider the present approach as a further development of the nice ideas of that paper. In the next Sections we will extend our description to the $N=2$ and $N=4$ supersymmetric cases. \setcounter{equation}0 \section{N=2 superconformal mechanics} The simplest nontrivial extension of conformal mechanics corresponds to the $N=2$ case with the $SU(1,1|1)$ superconformal group \cite{n2}. The geometric construction of $N=2$ superconformal mechanics within the framework of the nonlinear realization of $SU(1,1|1)$ in the coset $SU(1,1|1)/U(1)$ has been carried out in \cite{IKL1}. Without going deeply into the details, our extension of the consideration in \cite{IKL1} looks as follows. As usual, we start from the $N=2$ superconformal group $SU(1,1|1)$. Its superalgebra includes the four bosonic generators $\left\{ P,D,K, V_3\right\}$ and the four fermionic ones $\left\{ Q^1,{\overline Q}{}_1,S^1, {\overline S}{}_1\right\}$. Their commutators follow from the general formulas given in Appendix \p{alg1}-\p{alg4} with $\alpha=-1$.
We will realize this group in the coset $SU(1,1|1)/U(1)$ parameterized as \begin{equation}\label{g3} g=e^{it{\hat P}}e^{\theta {\hat Q} +{\bar\theta} {\hat{\bar Q}}}e^{izK}e^{\psi S^1+{\bar\psi} {\overline S}{}_1}e^{iuD}, \end{equation} where \begin{equation}\label{def1} {\hat P}=P+2im V_3 +m^2 K,\quad {\hat Q}=Q^1+i m S^1, \quad {\hat{\bar Q}}={\overline Q}{}_1-i m {\overline S}{}_1, \end{equation} and all the coordinates $(u,z, \psi,{\bar\psi})$ are $N=2$ superfields which depend on $(t, \theta,{\bar\theta})$. One should stress that the $U(1)$ generator is anti-hermitian \p{conjug}, so the operator ${\hat P}$ \p{def1} is hermitian. The only difference with the presentation in \cite{IKL1} is given by the $m$-dependent terms in \p{g3},\p{def1}. In what follows we shall need the explicit structure of several Cartan forms in the expansion of $g^{-1}dg$ over the generators, \begin{eqnarray}\label{CF2} && \omega_P=e^{-u}\left( dt +i\theta d{\bar\theta}+i {\bar\theta} d\theta\right) \equiv e^{-u} d\tilde{t},\quad \omega_{Q^1}=e^{-\frac{1}{2}u}\left( d\theta -\psi d\tilde{t}\right), \nonumber \\ && \omega_D= du-2zd\tilde{t}-2id{\bar\theta}\psi -2id\theta{\bar\psi}, \nonumber \\ &&\omega_{S^1}=e^{\frac{1}{2}u}\left( d\psi -zd\theta -i \psi{\bar\psi} d\theta +(z+i m)\psi d\tilde{t}+im d\theta\right). \end{eqnarray} The constraints we impose on the Cartan forms are the same as in \cite{IKL1}: \begin{equation}\label{con3} \omega_D=0, \qquad \omega_{S^1} =i g \omega_{Q^1}, \end{equation} where, as before, the arbitrary parameter $g$ has the dimension of the mass.
As a result we get two sets of equations which follow from \p{con3}: \begin{equation}\label{ih2} z=\frac{1}{2}\dot{u}, \quad \psi=-\frac{i}{2}{\overline D}{} u,\; {\bar\psi}=-\frac{i}{2} Du, \end{equation} and \begin{equation}\label{n2cal} \left[ D, {\overline D}{}\right] X = 2m X +\frac{2g}{X}, \qquad X \equiv e^{\frac{1}{2}u}, \end{equation} where the semi-covariant (fully invariant only under Poincar\'{e} supersymmetry) spinor derivatives are defined by \begin{equation} D=\frac{\partial}{\partial\theta}+i{\bar\theta}\partial_t,\; {\overline D}{}=\frac{\partial}{\partial{\bar\theta}}+i\theta\partial_t, \qquad \left\{D,{\overline D}{} \right\}=2i\partial_t . \end{equation} The equations \p{ih2} express the unessential Goldstone superfields $(z,\psi,{\bar\psi})$ in terms of the super-dilaton $u$, while the equation \p{n2cal} is the dynamical one. Clearly, it can be obtained from the following superfield action: \begin{equation}\label{L3} S=\int dt d^2\theta \left[ D X {\overline D}{} X +m X^2 + 2 g \log(X) \right]. \end{equation} It is worth noting that the action \p{L3}, despite the presence of the harmonic oscillator potential $m X^2$, has to be invariant under the full $SU(1,1|1)$ superconformal group. In order to clarify this point, let us write the transformations as follows: \begin{equation}\label{realiz1} \delta t=E-\frac{1}{2}{\bar\theta}\; {\overline D}{} E -\frac{1}{2} \theta D E, \quad \delta\theta=-\frac{i}{2}{\overline D}{} E,\; \delta {\bar\theta} =-\frac{i}{2} D E,\qquad \delta u = \dot{E}, \end{equation} where $E$ is a superfunction collecting all parameters of the $SU(1,1|1)$ group. In the standard realization (with $m=0$) this function obeys the following constraint \cite{IKL1}: \begin{equation} \frac{\partial}{\partial t} \left[ D,{\overline D}{}\right] E =0, \end{equation} which leaves in $E$ just the parameters of the $SU(1,1|1)$ group.
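For completeness, one can check that the variation of \p{L3} indeed reproduces \p{n2cal}. Varying with respect to $X$ and discarding the total spinor derivatives $D\left(\delta X {\overline D}{} X\right)$ and ${\overline D}{}\left( DX \delta X\right)$, which reduce to total time derivatives under the Berezin integral, one finds
\begin{equation}
\delta S=\int dt d^2\theta \left[ D\delta X {\overline D}{} X + DX {\overline D}{}\delta X +2m X \delta X +\frac{2g}{X}\delta X\right]
= \int dt d^2\theta \; \delta X\left[ -\left[ D,{\overline D}{}\right] X +2m X +\frac{2g}{X}\right],
\end{equation}
so that $\delta S=0$ gives just the equation \p{n2cal}. Let us now return to the constraint on the superfunction $E$.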
In the present case this constraint is modified to become \begin{equation}\label{eqq} \frac{\partial}{\partial t} \left[ D,{\overline D}{}\right] E =4m \dot{E}. \end{equation} One may easily check that the solution of the modified equation \p{eqq} also contains the set of parameters corresponding to the transformations of the $SU(1,1|1)$ group. Nevertheless, the realization of this group on the superspace $(t,\theta,{\bar\theta})$ and the superfield $u$ \p{realiz1} is now different. The action \p{L3} is invariant with respect to these transformations. In full analogy with the bosonic case, the term $g \log(X)$ is invariant by itself, while the kinetic term is invariant only together with the harmonic potential. Thus, we have explicitly demonstrated that $N=2$ superconformal mechanics, when equipped with the harmonic oscillator potential, admits the same invariance with respect to the $N=2$ superconformal symmetry $SU(1,1|1)$, but with a different realization. To conclude, it makes sense to note that in principle one could ignore the coset construction and simply ask what invariance the superfield action \p{L3} possesses. Looking for the answer, one may write the general transformations in the form \p{realiz1} and then immediately obtain the constraint \p{eqq} on the superfunction $E$, which selects just the $SU(1,1|1)$ group. \setcounter{equation}0 \section{N=4 superconformal mechanics} The extension of our previous consideration to the case of $N=4$ superconformal mechanics is almost straightforward. We will start with the $su(1,1|2)$ superalgebra which is isomorphic to $D(2,1;-1)$ (see Appendix).
Next, similarly to \cite{IKL1}, we will realize $SU(1,1|2)$ in the coset $SU(1,1|2)/SU(2)$ parameterized as \begin{equation}\label{g4} g=e^{it{\hat P}}e^{\theta_i {\hat Q}{}^i +{\bar\theta}{}^i {\hat{\bar Q}}{}_i}e^{izK}e^{\xi_i S^i+{\bar\xi}{}^i {\overline S}{}_i}e^{iuD}, \end{equation} where \begin{equation}\label{def41} {\hat P}=P+2im V_3 +m^2 K,\quad {\hat Q}{}^1=Q^1+i m S^1,\; {\hat Q}{}^2=Q^2-i m S^2, \quad {\hat{\bar Q}}_1={\overline Q}{}_1-i m {\overline S}{}_1,\; {\hat{\bar Q}}_2={\overline Q}{}_2+i m {\overline S}{}_2 \end{equation} and all the coordinates $(u,z, \xi_i,{\bar\xi}{}^i)$ are now $N=4$ superfields which depend on $(t, \theta_i,{\bar\theta}{}^i)$. With our choice \p{def41} the generators $\left\{{\hat P},{\hat Q}{}^i,{\hat{\bar Q}}_i\right\}$ span the $N=4, d=1$ super Poincar\'{e} algebra. In order to construct the equations of motion, we have to impose covariant constraints on the Cartan forms. We will choose the same constraints as in \cite{IKL1}, namely \begin{equation}\label{conN4} \omega_D=0, \qquad \omega_{S^i} =i g \omega_{Q^i}.
\end{equation} Using the explicit expressions for the Cartan forms \begin{eqnarray} \omega_D &=& du -2z d{\tilde t}-2i d{\bar\theta}{}^i\xi_i -2id\theta_i {\bar\xi}{}^i,\qquad \omega_{Q^i}=e^{-\frac{u}{2}}d\theta_i + d{\tilde t}\left(\ldots\right),\nonumber \\ \omega_{S^1} &=& e^{\frac{u}{2}} \left[ d\xi_1-i\xi_i {\bar\xi}{}^i d\theta_1 -2i\xi_1 \xi_2d{\bar\theta}{}^2+ \left(i m-z\right)d\theta_1\right]+ d{\tilde t}\left(\ldots\right),\nonumber\\ \omega_{S^2} &=& e^{\frac{u}{2}}\left[ d\xi_2-i\xi_i {\bar\xi}{}^i d\theta_2 +2i\xi_1 \xi_2d{\bar\theta}{}^1- \left(i m+z\right)d\theta_2\right]+ d{\tilde t}\left(\ldots\right), \end{eqnarray} where the covariant differential of $t$ is defined as \begin{equation} d{\tilde t} = dt -i \left( d {\bar\theta}{}^i \theta_i+d\theta_i {\bar\theta}{}^i\right), \end{equation} we get the following set of equations from \p{conN4}: \begin{eqnarray} && z=\frac{1}{2}{\dot u}, \qquad \xi_i=-\frac{i}{2} {\overline D}{}_i u, \; {\bar\xi}{}^i =-\frac{i}{2} D^i u, \label{ih4} \\ && D^i D_i e^u=0, \; {\overline D}{}_i {\overline D}{}{}^i e^u=0, \qquad \left[D^i ,{\overline D}{}_i\right] e^u =-8g, \label{eq41} \\ && D^1 {\overline D}{}_2 u= D^2 {\overline D}{}_1 u=0, \qquad \left( D^1 {\overline D}{}_1 -D^2 {\overline D}{}_2 \right) u = 4m \label{eq42}. \end{eqnarray} Here, we introduced the spinor covariant derivatives as \begin{equation} D^i=\frac{\partial}{\partial \theta_i}+i{\bar\theta}{}^i\partial_t,\; {\overline D}{}^i=\frac{\partial}{\partial {\bar\theta}{}^i}+i\theta_i\partial_t,\qquad \left\{ D^i,{\overline D}{}_j\right\}=2i\delta^i_j\partial_t.
\end{equation} The meaning of the equations \p{ih4}-\p{eq42} is clear: \begin{itemize} \item the equations \p{ih4} express the unessential superfields $\left\{z,\xi_i,{\bar\xi}^i\right\}$ in terms of the superdilaton $u$; \item the constraints \p{eq41} are off-shell irreducibility conditions: they reduce the component content of the $N=4$ superfield $u$ to one physical and three auxiliary bosonic fields and four fermionic fields \cite{IKL1}; \item the equations \p{eq42} are dynamical: they serve to eliminate the triplet of auxiliary fields and give rise to the equations of motion for the physical fields. \end{itemize} The component action has a very simple form \begin{equation}\label{ac4} S=\int dt\left[ \frac{{\dot y}^2}{2} +\frac{i}{2}\left( {\bar\psi}_i{\dot\psi}{}^i -{\dot{\bar\psi}}_i\psi^i\right)-\frac{m^2 y^2}{2}-\frac{g^2}{2y^2} +m{\bar\psi}_i \psi^i+ \frac{g}{y^2}\left( {\bar\psi}_1\psi^1-{\bar\psi}_2\psi^2\right)+3\frac{{\bar\psi}_1\psi^1{\bar\psi}_2\psi^2}{y^2}\right] \end{equation} where \begin{equation} y=e^{\frac{u}{2}}|_{\theta={\bar\theta}=0}, \quad \psi^1=D^1 e^{\frac{u}{2}}|_{\theta={\bar\theta}=0},\quad \psi^2={\overline D}{}_2 e^{\frac{u}{2}}|_{\theta={\bar\theta}=0}. \end{equation} By construction the action \p{ac4} is invariant with respect to the $su(1,1|2)$ superalgebra realized in the modified coset \p{g4}. \setcounter{equation}0 \section{Potentials in N=4 superconformal mechanics} One of the most restrictive features of $N=4$ supersymmetric mechanics based on the $(1,4,3)$ supermultiplet is a specific generation of potential terms from constants in the defining constraints.
It has been shown many years ago \cite{IKL1} that the constraints defining the irreducible $N=4, d=1$ supermultiplet with $(1,4,3)$ component content are uniquely fixed to be \begin{equation}\label{la1} D^i D_i X = g f,\; {\overline D}{}_i {\overline D}{}{}^i X = g {\bar f}, \qquad \left[ D^i, {\overline D}{}_i\right] X=2 g c, \end{equation} where the set of constants $f, {\bar f}, c$ are related by \begin{equation}\label{la2} c^2+f{\bar f}=1. \end{equation} Clearly, one may always choose \begin{equation} c=0,\; f=-{\bar f}=i \end{equation} to have \begin{equation}\label{la3} D^i D_i X = i g ,\; {\overline D}{}_i {\overline D}{}{}^i X = -i g , \qquad \left[ D^i, {\overline D}{}_i\right] X=0. \end{equation} Now, the most general action of one-particle $N=4$ supersymmetric mechanics reads \begin{equation}\label{la4} S=-\int dt d^4\theta {\cal F}(X), \end{equation} with ${\cal F}(X)$ being an arbitrary function of the superfield $X$. It is not hard to get the bosonic part of the component action (with the auxiliary fields eliminated by their equations of motion) \begin{equation}\label{la5} S_B\sim \int dt \left[ F'' {\dot x}{}^2 + g^2 F''\right], \qquad x\equiv X|_{\theta={\bar\theta}=0}, \quad F(x)\equiv {\cal F}(X)|_{\theta={\bar\theta}=0}. \end{equation} Finally, one could bring the kinetic term to the flat one \begin{equation}\label{la6} S_B \sim \int dt \left[ {\dot y}{}^2+g^2 (y')^2\right], \qquad F''=(y')^2, \end{equation} where $y'$ is considered as a function of $y$. The additional requirement of conformal invariance, i.e. $y'\sim 1/y$, completely fixes everything, unambiguously restoring ${\cal F}(X) \sim X \log X$. Thus, everything is strictly fixed by $N=4$ superconformal invariance ($SU(1,1|2)$ in the case at hand). Another possibility to get a potential term in $N=4$ supersymmetric mechanics has been proposed in \cite{DI}. The main idea was to couple the $(1,4,3)$ supermultiplet to the fermionic $(0,4,4)$ one.
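Before turning to this coupling, let us make the conformal fixing above explicit. For ${\cal F}(X)= X\log X$ one has
\begin{equation}
F''=\frac{1}{x} \quad \Rightarrow \quad y'=x^{-\frac{1}{2}},\quad y=2\sqrt{x} \quad \Rightarrow \quad g^2\left( y'\right)^2=\frac{4g^2}{y^2},
\end{equation}
so in the flat coordinate $y$ the bosonic action \p{la6} acquires just the conformal inverse-square potential. We now return to the coupling idea of \cite{DI}.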
The price one has to pay for this is a doubled number of physical fermions in the resulting system. Here we will use the same idea of coupling, but with a different action. In our action the fermions appear only through their time derivatives, which can be replaced, without breaking supersymmetry, by auxiliary fermions. Moreover, the bosonic fields in the action have a kinetic term which is linear in the time derivatives, and therefore these bosonic fields acquire the interpretation of spin degrees of freedom. Our starting point is the $(1,4,3)$ supermultiplet $X$ with $g=0$ \p{la3} and the fermionic $(0,4,4)$ supermultiplet $\Psi{}^a,{\overline \Psi}{}_a$ defined by the constraints\footnote{If we combine the spinor derivatives $D^i,{\overline D}{}^i$ in the quartet of spinor derivatives $\nabla^{ia}=\left\{ D^i, {\overline D}{}^i\right\}$, then the constraints \p{la7} acquire the familiar form $\nabla^{i(a}\Psi^{b)}=0$.} \begin{equation}\label{la7} D^i \Psi{}^1=0, \; D^i \Psi{}^2+{\overline D}{}^i \Psi{}^1=0, \; {\overline D}{}_i \Psi{}^2=0. \end{equation} We introduce the coupling of these supermultiplets by considering the following action: \begin{equation}\label{la8} S=S_1 +S_2 =-\frac{1}{32}\int dt d^4\theta {\cal F}(X)-\frac{1}{32}\int dt d^4\theta X \Psi^a {\overline \Psi}{}_a . \end{equation} After integration over the thetas, the component action which follows from \p{la8} reads \begin{eqnarray}\label{la9} S&=&\int dt\left[\frac{1}{8}F'' {\dot x}{}^2-\frac{1}{16} F'' A^{ij}A_{ij}+\frac{i}{8} F''\left(\dot\eta{}^i\bar\eta_i- \eta^i\dot{\bar\eta}_i\right)+\frac{1}{8}F'''\eta^i\bar\eta{}^jA_{ij}-\frac{1}{32}F^{(4)}\eta^i\eta_i\bar\eta_j\bar\eta{}^j\right]+ \nonumber\\ &&\int dt\left[ -x \left({\dot \psi}{}^1{\dot{\bar\psi}}{}^2-{\dot \psi}{}^2{\dot{\bar\psi}}{}^1\right)+\frac{i}{4} x \left( {\dot v}_i {\bar v}{}^i- v_i\dot{\bar v}{}^i\right)+\frac{1}{4}A_{ij}v^i{\bar v}{}^j+\right.\nonumber\\ && \left.
\frac{1}{2}\eta_i\left({\bar v}{}^i \dot{\bar\psi}{}^2+v^i \dot\psi{}^2\right)+\frac{1}{2}\bar\eta{}^i\left(v_i \dot\psi{}^1+{\bar v}_i\dot{\bar\psi}{}^1\right) \right], \end{eqnarray} where \begin{eqnarray} && x\equiv X|,\; A_{(ij)}\equiv \frac{1}{2}\left[ D_i,{\overline D}{}_j\right] X|,\qquad \eta^i\equiv -iD^i X|,\; \bar\eta{}_i\equiv -i{\overline D}{}_i X|, \nonumber\\ && \psi^a\equiv \Psi{}^a|,\qquad v^i\equiv -D{}^i{\overline \Psi}{}{}^2|, \; {\bar v}_i\equiv {\overline D}{}_i \Psi{}^1|, \end{eqnarray} and, as usual, $|$ in the r.h.s. denotes the $\theta={\bar\theta}=0$ limit. In the component action \p{la9} the fermionic fields $\psi^a$ and ${\bar\psi}{}^a$ enter only through the time derivatives. Let us replace these time derivatives by new fermionic fields $\xi^a$ and ${\bar\xi}{}^a$ as \begin{equation}\label{xi} \xi^a= {\dot\psi}{}^a, \qquad {\bar\xi}^a=\dot{\bar\psi}{}^a. \end{equation} This is nothing but the reduction from the $(0,4,4)$ supermultiplet to the auxiliary $(4,4,0)$ one \cite{GR,root,SM}\footnote{The superfield version of such a reduction reads $V^i=\nabla^{ia} \Psi_a$, where the $N=4$ superfields $V^i$ start with $v^i$ and, by construction, obey the constraints $\nabla^{a(i}V^{j)}=0$.}. This reduction is compatible with $N=4$ supersymmetry. Indeed, the components of the $\Psi^a$ have the following transformation properties under $N=4$ Poincar\'{e} supersymmetry \begin{equation}\label{ad1} \delta \psi^1=-\bar\epsilon{}^i {\bar v}_i,\; \delta\psi^2=\epsilon_i{\bar v}{}^i,\quad \delta v^i=-2i\epsilon^i\dot{\bar\psi}{}^1+2i\bar\epsilon{}^i\dot{\bar\psi}{}^2,\; \delta {\bar v}_i=-2i\epsilon_i\dot\psi{}^1+2i\bar\epsilon_i\dot\psi{}^2. 
\end{equation} {}From \p{ad1} we learn the transformation properties of the new fermions $\xi^a,{\bar\xi}{}^a$ \begin{equation}\label{ad2} \delta \xi^1=-\bar\epsilon{}^i \dot{\bar v}_i,\; \delta\xi^2=\epsilon_i\dot{\bar v}{}^i,\quad \delta v^i=-2i\epsilon^i{\bar\xi}{}^1+2i\bar\epsilon{}^i{\bar\xi}{}^2,\; \delta {\bar v}_i=-2i\epsilon_i\xi{}^1+2i\bar\epsilon_i\xi{}^2. \end{equation} Now one may easily check that the action \begin{eqnarray}\label{ad3} S&=&\int dt\left[\frac{1}{8}F'' {\dot x}{}^2-\frac{1}{16} F'' A^{ij}A_{ij}+\frac{i}{8} F''\left(\dot\eta{}^i\bar\eta_i- \eta^i\dot{\bar\eta}_i\right)+\frac{1}{8}F'''\eta^i\bar\eta{}^jA_{ij}-\frac{1}{32}F^{(4)}\eta^i\eta_i\bar\eta_j\bar\eta{}^j\right]+ \nonumber\\ &&\int dt\left[ -x \left({ \xi}{}^1{{\bar\xi}}{}^2-{ \xi}{}^2{{\bar\xi}}{}^1\right)+\frac{i}{4} x \left( {\dot v}_i {\bar v}{}^i- v_i\dot{\bar v}{}^i\right)+\frac{1}{4}A_{ij}v^i{\bar v}{}^j+\right.\nonumber\\ && \left. \frac{1}{2}\eta_i\left({\bar v}{}^i {\bar\xi}{}^2+v^i \xi{}^2\right)+\frac{1}{2}\bar\eta{}^i\left(v_i \xi{}^1+{\bar v}_i{\bar\xi}{}^1\right) \right], \end{eqnarray} is invariant under \p{ad2}, provided the components of the $X$ supermultiplet transform in a standard way as\footnote{We defined symmetrization over indices as $a_{(ij)}\equiv \frac{1}{2}\left( a_{ij}+a_{ji}\right)$.} \begin{equation}\label{ad4} \delta x=-i\epsilon_i\eta^i-i\bar\epsilon{}^i\bar\eta_i,\quad \delta\eta{}^i=-\bar\epsilon{}^i{\dot x}-i\bar\epsilon{}^j A^i_j,\; \delta \bar\eta_i=-\epsilon_i{\dot x}+i\epsilon_j A_i^j,\quad \delta A_{ij} = -\epsilon_{(i}\dot\eta_{j)} + \bar\epsilon_{(i}\dot{\bar\eta}{}_{j)}. \end{equation} In the action \p{ad3} the fields $\xi^a,{\bar\xi}{}^a$ and $A_{ij}$ are auxiliary ones. 
Eliminating them by their equations of motion, \begin{equation}\label{elim} \xi^1=\frac{1}{2x}\eta_i{\bar v}{}^i, \quad \xi^2=-\frac{1}{2x}\bar\eta{}^i{\bar v}_i, \qquad A_{ij}=\frac{F'''}{F''}\eta_{(i}\bar\eta_{j)}+\frac{2}{F''}v_{(i}{\bar v}_{j)}, \end{equation} we get the following action: \begin{eqnarray}\label{la10} S&=&\int dt\left[\frac{1}{8}F'' {\dot x}{}^2+\frac{i}{8} F''\left(\dot\eta{}^i\bar\eta_i- \eta^i\dot{\bar\eta}_i\right)+\frac{1}{32}\left( \frac{3}{2}\frac{(F''')^2}{F''}-F^{(4)}\right)\eta^i\eta_i\bar\eta_j\bar\eta{}^j+ \frac{i}{4} \left( {\dot w}_i {\bar w}{}^i- w_i\dot{\bar w}{}^i\right)+\right. \nonumber\\ && \left. \frac{1}{8x}\left(\frac{F'''}{ F''}+\frac{2}{x}\right) \eta_i\bar\eta_j\left( w^i {\bar w}{}^j+ w^j{\bar w}{}^i\right)-\frac{1}{8F'' x^2}\left(w^i{\bar w}_i\right)^2 \right], \end{eqnarray} where, in order to bring the kinetic term for $v^i$ to the standard form, we introduce the new fields $w^i$ as \begin{equation} w^i=\frac{1}{\sqrt{x}}v^i. \end{equation} Thus, we see that of our fermionic superfields $\Psi{}^a$ only the bosonic components $w^i,{\bar w}_i$ survive, and they enter the Lagrangian only through first time derivatives. After quantization these variables become purely internal spin degrees of freedom. Moreover, from the equations of motion for $w^i,{\bar w}_i$ (equivalently, from the Noether charge of the phase rotations $w^i\rightarrow e^{i\phi}w^i$, which leave the action \p{la10} invariant) one may conclude that \begin{equation} w^i{\bar w}_i=g=\mbox{ constant}, \end{equation} and therefore the last term in \p{la10} is just a bosonic potential for $x$: \begin{equation}\label{la11} V_B=\frac{g^2}{8F'' x^2}. \end{equation} Thus, we conclude that by coupling our $(1,4,3)$ superfield $X$ to the fermionic $(0,4,4)$ auxiliary supermultiplet $\Psi^a$ one may indeed generate a bosonic potential for the physical bosonic field $x$, together with terms describing the spin interaction of the fermionic components $\eta^i,\bar\eta_i$. It is quite interesting to understand whether the action \p{la8} could possess any type of $N=4$ superconformal symmetry.
The key point is to achieve the invariance of the second term $S_2$ in the action \p{la8}, because the first one, $S_1$, can always be chosen to be superconformally invariant. Indeed, the superfield $X$ obeys the constraints \p{la3} with $g=0$. The invariance of these constraints under the $D(2,1;\alpha)$ group forces $X$ to transform as \cite{IKLe1} \begin{equation}\label{la13} \delta X = 2i\alpha \left(\epsilon_i {\bar\theta}{}^i+\bar\epsilon{}^i\theta_i\right)X \end{equation} while the superspace measure transforms as \begin{equation}\label{la14} \delta dt d^4\theta = 2i \left(\epsilon_i {\bar\theta}{}^i+\bar\epsilon{}^i\theta_i\right) dt d^4\theta. \end{equation} Clearly, the superconformally invariant action for the supermultiplet $X$ has the following form: \begin{equation}\label{ad5} S_1^{Conf}=-\frac{1}{32}\int dt d^4\theta \left(X\right)^{-\frac{1}{\alpha}}, \qquad \alpha \neq -1 \end{equation} or \cite{IKL1} \begin{equation}\label{ad6} S_1^{Conf}=-\frac{1}{32}\int dt d^4\theta X \log X, \qquad \alpha = -1. \end{equation} The invariance of the second term $S_2$ needs to be considered more carefully.
First of all, one may check that the action $S_2$ in the form \p{ad3} is invariant under the $D(2,1;\alpha)$ group for arbitrary $\alpha$, provided the components transform under conformal supersymmetry as \begin{eqnarray} && \delta v^i=-2it\left(\varepsilon^i{\bar\xi}{}^1-\bar\varepsilon{}^i{\bar\xi}{}^2\right),\qquad \delta{\bar\xi}{}^1=-\alpha\bar\varepsilon_i v^i +t\bar\varepsilon_i{\dot v}{}^i,\; \delta{\bar\xi}{}^2=-\alpha\varepsilon_i v^i +t\varepsilon_i{\dot v}{}^i,\label{ad7a}\\ && \delta x =-t\left(\varepsilon_i \eta^i+\bar\varepsilon{}^i\bar\eta{}_i\right),\qquad \delta \eta^i=-2\alpha \bar\varepsilon{}^ix -\bar\varepsilon{}^i t {\dot x}-i t \bar\varepsilon{}^j A^i_j, \nonumber \\ && \delta A_{ij}=-2(1+2\alpha )\left( \varepsilon_{(i}\eta_{j)}-\bar\varepsilon{}_{(i}\bar\eta{}_{j)}\right)- 2t\left( \varepsilon{}_{(i}\dot\eta{}_{j)}-\bar\varepsilon{}_{(i}\dot{\bar\eta}{}_{j)}\right).\label{ad7b} \end{eqnarray} Thus, the action \p{ad3} with the superpotential $F$ properly chosen as in \p{ad5}, \p{ad6} is invariant with respect to the full $N=4$ superconformal group $D(2,1;\alpha)$. The crucial point in proving the superconformal invariance of our action was its form \p{ad3}, obtained {\it after} the reduction \p{xi}. If we instead checked the invariance of the action \p{la8} and limited ourselves to considering {\it local} transformations of the fermionic superfields $\Psi^a$, we would get \begin{equation}\label{la12} \delta\left( \Psi^a{\overline \Psi}{}_a\right)=2i(1+\alpha) \left(\epsilon_i {\bar\theta}{}^i+\bar\epsilon{}^i\theta_i\right) \left( \Psi^a{\overline \Psi}{}_a\right). \end{equation} Therefore the full action \p{la8} would be invariant only for $\alpha=-1$, which corresponds just to the $SU(1,1|2)$ group! Clearly, with this value of $\alpha$ the first term is also fixed to be ${\cal F}=X\log X$. As we have already shown, this conclusion is not correct. The subtle point is the {\it locality} of the transformation properties of $\Psi^a$.
Indeed, from the explicit form of $\delta \xi^a$ \p{ad7a} it follows that these variations can be explicitly integrated only for $\alpha=-1$. For any other value of the parameter $\alpha$ the integrated variation $\delta \xi^a$, which is just $\delta \psi^a$, contains a non-local term. Thus, similarly to the preceding Sections, the action does not seem to be conformally invariant at first sight, but it acquires this invariance upon a modification of the transformation properties of the involved fields. To conclude, let us make several comments. Curiously, in contrast with the standard action \p{la5}, fixing the bosonic potential in the action \p{la8} to be $1/y^2$ in flat coordinates does not completely fix the prepotential $F$. Indeed, rewriting \p{la8} in flat coordinates $y(x)$ with $F''=(y')^2$ we get the condition \begin{equation} x\frac{d y}{dx} = a y \quad \Rightarrow \quad y(x) = x^a. \end{equation} Thus, any power-law superpotential ${\cal F}\sim X^a$ will give rise to an $N=4$ supersymmetric mechanics with an inverse square potential term in the bosonic sector. Among the most interesting examples of such superpotentials are the superconformally invariant ones \p{ad5} and \p{ad6}.
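The quoted solution of the condition follows by separation of variables:
\[
\frac{dy}{y}=a\,\frac{dx}{x}\quad\Rightarrow\quad \ln y = a\,\ln x +{\rm const}\quad\Rightarrow\quad y=C\,x^{a},
\]
with the integration constant $C$ absorbed into a rescaling of $y$.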
Thus, the actions of $D(2,1;\alpha)$ superconformally invariant mechanics read \begin{eqnarray}\label{AC1} S_{\alpha}&=& \int dt \left[ (1+\alpha) \frac{{\dot y}{}^2}{2}+(1+\alpha)\frac{i}{8}\left( \dot{\tilde\eta}{}^i\tilde{\bar\eta}{}_i- \tilde\eta{}^i\dot{\bar{\tilde\eta}}_i\right)+\frac{i}{4}\left( {\dot w}_i {\bar w}{}^i-w_i\dot{\bar w}{}^i\right)-\frac{\alpha^2}{8(1+\alpha)y^2}\left(w^i{\bar w}_i\right)^2-\right.\nonumber\\ && \left.\frac{\alpha}{8y^2}\tilde\eta_i\bar{\tilde\eta}_j\left( w^i {\bar w}{}^j+ w^j{\bar w}{}^i\right)+ \frac{(1+\alpha)(1+2\alpha)}{64 y^2} \tilde\eta{}^2 \bar{\tilde\eta}{}^2 \right], \qquad \alpha\neq -1,0 \end{eqnarray} where \begin{equation} y=x^{-\frac{1}{2\alpha}},\qquad \tilde\eta{}^i=x^{-\frac{1}{2\alpha}-1}\frac{\eta^i}{\alpha}, \end{equation} and \begin{eqnarray}\label{AC2} S_{-1}&=& \int dt \left[ \frac{{\dot y}{}^2}{2}+\frac{i}{8}\left( \dot{\tilde\eta}{}^i\tilde{\bar\eta}{}_i- \tilde\eta{}^i\dot{\bar{\tilde\eta}}_i\right)+\frac{i}{4}\left( {\dot w}_i {\bar w}{}^i-w_i\dot{\bar w}{}^i\right)-\frac{1}{8y^2}\left(w^i{\bar w}_i\right)^2-\right.\nonumber\\ && \left.\frac{1}{8y^2}\tilde\eta_i\bar{\tilde\eta}_j\left( w^i {\bar w}{}^j+ w^j{\bar w}{}^i\right)- \frac{1}{64 y^2} \tilde\eta{}^2 \bar{\tilde\eta}{}^2 \right], \qquad \alpha= -1,\quad y=\sqrt{x}, \; \tilde\eta{}^i=\frac{\eta^i}{\sqrt{x}}. \end{eqnarray} Now it is clear that the simplest case of $N=4$ superconformally invariant mechanics corresponds to the $\alpha=-1/2$ case, i.e. to the $OSp(4|2)$ superconformal group. Indeed, it follows from \p{AC1} that with $\alpha=-1/2$ the four-fermionic interaction disappears from the Lagrangian. This means that the corresponding supercharges contain the fermions only linearly, similarly to the $N=2$ supersymmetric case. Finally, one should note that our consideration in this Section is very close to the one presented in the recent paper \cite{IFL}.
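For completeness, the vanishing of the four-fermionic term at $\alpha=-1/2$ can be seen directly from the coefficient of the $\tilde\eta{}^2 \bar{\tilde\eta}{}^2$ term in \p{AC1}:
\[
\left.\frac{(1+\alpha)(1+2\alpha)}{64\,y^2}\right|_{\alpha=-1/2}=\frac{\tfrac{1}{2}\cdot 0}{64\,y^2}=0 .
\]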
Of course, here we considered only the one-particle case and used the standard $N=4, d=1$ superspace, in contrast with the harmonic superspace approach \cite{HSS,DI,HSS1} advocated in \cite{IFL}. Our action \p{la8} appears to be more economical. In any case, the final component action \p{la10} has to coincide with the one which could be obtained from the harmonic superspace action presented in \cite{IFL} upon gauge fixing, integration over the thetas and harmonics, and elimination of the auxiliary components. \section{Conclusion} In this paper we demonstrated that (super)conformal mechanics with an additional harmonic oscillator term in the bosonic sector possesses the same superconformal symmetry as the standard one. The main difference between the systems with and without the oscillator term is the modification of the transformation laws, in such a way that the kinetic term is invariant under the (super)conformal group only together with the oscillator potential. The treatment of the bosonic case has natural and straightforward extensions to $N=2$ and $N=4$ superconformal symmetry. Our approach could probably also be extended to the case of $N$-extended supersymmetric mechanics with the $Osp(1,1|N/2)$ superconformal group \cite{IKL1}. Another interesting question for further investigation is the generalization of this approach to $n$-particle superconformal mechanics \cite{Cal}. We also analyzed in this paper the generation of bosonic potentials in $N=4$ supersymmetric mechanics through its coupling to an auxiliary fermionic supermultiplet. In contrast with \cite{DI}, the coupling we introduced does not bring in new physical fermionic degrees of freedom: all our additional fermions are purely auxiliary. The additional bosonic components have a first-order kinetic term and therefore serve as spin degrees of freedom. The new coupling we introduced in this paper is invariant under the full $D(2,1;\alpha)$ superconformal group.
This invariance is not evident, because the starting action possesses only $SU(1,1|2)$ invariance if we restrict ourselves to the {\it local} transformation properties of the involved superfields. The invariance under the $D(2,1;\alpha)$ superconformal group is achieved through {\it non-local} transformations, which become local in terms of the new variables. Thus, similarly to the situation with oscillator-type potentials, the key ingredient in the construction of the most general $N=4$ superconformal mechanics is the modification of the (super)conformal transformations. Our approach in this paper is very close to the one recently proposed in \cite{IFL}. It would be interesting to compare our action with the one-particle action (with all fermionic terms included) presented in \cite{IFL}. They have to coincide, because the main ingredient, the action \p{ad3} describing the coupling of the basic $(1,4,3)$ supermultiplet $X$ to the auxiliary bosonic $(4,4,0)$ one, is unique and is completely fixed by $N=4$ Poincar\'{e} supersymmetry (up to an overall constant). The full component action \p{AC1} we constructed reveals the main peculiarity of the special case of the $OSp(4|2)$ invariant action, which is the main subject considered in \cite{IFL}. In this particular case the Lagrangian does not contain four-fermionic interactions and, therefore, the corresponding supercharges are linear in the fermionic components, similarly to the case of $N=2$ supersymmetry. Finally, the way to deal with spin degrees of freedom proposed in the present work could be relevant for a proper supersymmetric generalization of the system with a Yang monopole recently analyzed in \cite{toppp}. \section*{Acknowledgments} We acknowledge discussions with A.~Shcherbakov. S.K. thanks the Laboratori Nazionali di Frascati for the warm hospitality extended to him during the course of this work.
This work was partially supported by INTAS under contract 05-7928, RFBR grants 08-02-90490-Ukr, 06-02-16684 and DFG grant 436 Rus~113/669/03. \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} \section*{Appendix. $N{=}4$, $d{=}1$ Superconformal algebra} The most general $N{=}4, d{=}1$ superconformal algebra is the superalgebra $D(2,1;\alpha)$. We use the standard definition of this superalgebra \cite{FRS} with the notations of refs. \cite{{IKLe},{IKLe1}}. It contains nine bosonic generators which form a direct sum of $sl(2)$ with generators $P,D,K$ and two $su(2)$ subalgebras with generators $V, {\overline V}, V_3\, \; \mbox{ and } \; T, {\overline T}, T_3$, respectively: \begin{eqnarray}\label{alg1} && i\left[ D,P\right] =P,\; i\left[ D,K\right]=-K ,\; i\left[ P,K\right]=-2D , \quad i\left[ V_3,V\right]=-V,\; i\left[ V_3,{\overline V} \right]={\overline V}, \nonumber\\ && i\left[ V,{\overline V}\right]=2V_3,\quad i\left[ T_3,T\right]=-T,\; i\left[ T_3,{\overline T} \right]={\overline T},\; i\left[ T,{\overline T}\right]=2T_3.
\end{eqnarray} The eight fermionic generators $Q^i,{\overline Q}{}_i,S^i,{\overline S}{}_i$ are in the fundamental representations of all bosonic subalgebras (in our notation only one $su(2)$ is manifest): \begin{eqnarray}\label{alg2} &&i\left[D ,Q^i \right] = \frac{1}{2}Q^i,\; i\left[D ,S^i \right] = -\frac{1}{2}S^i, \quad i\left[P ,S^i \right] =-Q^i,\; i\left[K ,Q^i \right] =S^i, \nonumber \\ && i\left[V_3 ,Q^1 \right] =\frac{1}{2}Q^1,\; i\left[V_3 ,Q^2 \right] =-\frac{1}{2}Q^2,\quad i\left[V ,Q^1 \right] =Q^2, \; i\left[V ,{\overline Q}{}_2 \right] =-{\overline Q}{}_1, \nonumber \\ &&i\left[V_3 ,S^1 \right] =\frac{1}{2}S^1,\; i\left[V_3 ,S^2 \right] =-\frac{1}{2}S^2, \quad i\left[V ,S^1 \right] =S^2, \; i\left[V ,{\overline S}{}_2 \right] =-{\overline S}{}_1, \nonumber\\ && i\left[T_3 ,Q^i\right] =\frac{1}{2}Q^i, \; i\left[T_3 ,S^i\right] =\frac{1}{2}S^i, \quad i\left[T ,Q^i\right] ={\overline Q}{}^i, \; i\left[T ,S^i\right] ={\overline S}{}^i. \end{eqnarray} The fermionic generators $Q^i,{\overline Q}{}_k$ together with $P$ form the $N=4, d=1$ super Poincar\'e subalgebra, while $S^i,{\overline S}{}_k $ generate superconformal translations: \begin{equation}\label{allg3} \left\{Q^i ,{\overline Q}{}_j \right\} = -2\delta^i_j P , \quad \left\{S^i ,{\overline S}{}_j \right\} =-2\delta^i_j K . \end{equation} The non-trivial dependence of the superalgebra $D(2,1;\alpha)$ on the parameter $\alpha$ manifests itself only in the cross-anticommutators of the Poincar\'e and conformal supercharges \begin{eqnarray}\label{alg4} && \left\{ Q^i,S^j \right\} =-2(1+\alpha )\epsilon^{ij} {\overline T} , \; \left\{Q^1 ,{\overline S}{}_2 \right\} =2\alpha {\overline V} ,\;\left\{Q^1 ,{\overline S}{}_1 \right\} =-2D-2\alpha V_3+2(1+\alpha)T_3 , \nonumber \\ && \left\{Q^2 ,{\overline S}{}_1 \right\} =-2\alpha V, \;\left\{Q^2 ,{\overline S}{}_2 \right\} =-2D +2\alpha V_3+2(1+\alpha)T_3. 
\end{eqnarray} The generators $P,D,K$ are chosen to be hermitian, and the remaining ones obey the following conjugation rules: \begin{equation}\label{conjug} \left( T \right)^\dagger = {\overline T}, \; \left( T_3\right)^\dagger =-T_3 , \; \left( V \right)^\dagger = {\overline V}, \; \left( V_3\right)^\dagger =-V_3 , \; \overline{\left( Q^i \right)}={\overline Q}{}_i,\; \overline{\left( S^i \right)}={\overline S}{}_i. \end{equation} The parameter $\alpha $ is an arbitrary real number. At $\alpha = 0$ and $\alpha = -1$ one of the $su(2)$ algebras decouples and the superalgebra $su(1,1\vert 2)\oplus su(2)$ is recovered.
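As a simple consistency check of the relations \p{alg1}--\p{alg4}, note that the Jacobi identity $\left[P,\left\{S^1,{\overline S}{}_1\right\}\right]=\left\{\left[P,S^1\right],{\overline S}{}_1\right\}+\left\{S^1,\left[P,{\overline S}{}_1\right]\right\}$ requires, in view of $i\left[P,K\right]=-2D$, $i\left[P,S^i\right]=-Q^i$ and \p{allg3}, that
\[
\left\{Q^1,{\overline S}{}_1\right\}+\left\{S^1,{\overline Q}{}_1\right\}=-4D .
\]
Indeed, adding the anticommutator $\left\{Q^1,{\overline S}{}_1\right\}$ from \p{alg4} to its hermitian conjugate, computed with the conjugation rules \p{conjug}, the $\alpha$-dependent terms $\mp 2\alpha V_3$ and $\pm 2(1+\alpha)T_3$ cancel, leaving just $-4D$.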
\section{Introduction}\label{S1} Although the primary aim of the $Kepler$ mission is to detect transiting planets by obtaining very high precision photometric measurements, it provides further benefits, especially in terms of the clear and reliable determination of very small amplitude light variations in eclipsing and intrinsic variable stars. About 150\,000 targets have been observed in the mission and, apart from the exoplanets, numerous variable stars have been discovered. The unprecedented precision of $Kepler$ photometry clearly reveals low amplitude (mmag) light variations, which have been used in analyses of stellar flares, spot activity and differential rotation \citep{Balona_2015MNRAS, Balona_2016MNRAS, Reinhold_Reiners_difrot_2013A&A, Reinhold_2013b_A&A}. Among these variable stars, 2876 eclipsing binary stars have been discovered \citep{Prsa_et_al_2011, Slawson_et_al_2011}. Careful light curve modeling of the binaries with cool components ($T_{eff} < 6500$ K) revealed rotational modulation of the light curves and flares in the model residuals. KIC\,09641031 \citep{Yol16}, KIC\,09761199 \citep{Yol17}, KIC\,2557430 \citep{Kam17}, GJ\,1243, GJ\,1245A and B \citep{Hawley_et_al_2014ApJ}, and KIC\,2300039 and KIC\,4671547 \citep{Balona_2015MNRAS} are such stars. The analysis of the patterns of magnetic activity exhibited by these stars reveals some clues about their evolutionary stages. Among the several indicators of the evolutionary stage found in these analyses, two are the energy spectra defined by \citet{Gershberg_1972Ap&SS} and the flare frequencies described by \citet{Ishida_1991Ap&SS}. Both have been computed, especially from the 1970s to the 1980s, in order to determine the magnetic activity levels of the stars from which flares are detected. In the 1990s, \citet{Leto_1997A&A} examined the flare frequency variation of EV Lac, a well-known UV Ceti type star.
A few studies in which the activity levels of three magnetically active stars discovered in the $Kepler$ mission are discussed on the basis of their flare frequencies have recently been published in the literature. \citet{Yol16} detected 240 flares from KIC\,09641031, and \citet{Yol17} detected 94 flares from KIC\,09761199. In addition, \citet{Kam17} detected 69 flares from KIC\,2557430. \citet{Yol16} derived the One Phase Exponential Association (hereafter OPEA) model, and the flare frequency $N_{1}$ was found to be 0.41632 $h^{-1}$ for KIC\,09641031. \citet{Yol17} computed $N_{1}$ as 0.01351 $h^{-1}$ over 69 flares for KIC\,09761199. However, an interesting situation occurs in the case of KIC\,2557430. \citet{Kam17} find that some of the flares detected from KIC\,2557430 come from a third body, though it is unclear whether this is a component of the system or an undetected background light source. Depending on the OPEA model derived from 69 flares, \citet{Kam17} reveal that 40 flares (called Group 1) come from the secondary component, while 29 flares (called Group 2) come from the third body. They computed the flare frequency $N_{1}$ as 0.02726 $h^{-1}$ for Group 1 and 0.01977 $h^{-1}$ for Group 2. As discussed by \citet{Yol16} and \citet{Ger05}, the flare frequency is one of the parameters indicating the nature of the flare mechanism in progress in the stellar atmosphere. Apart from the classical parameters described by \citet{Ger05}, \citet{Dal_Evren_2010AJ, Dal_Evren_2011AJ} have also described some new parameters derived from the OPEA models in order to characterize the flare process running on the stellar surface. The continuous photometry of variable single stars discovered in the scope of $Kepler$ has made it possible to trace photometric period variations as a proxy of differential rotation via the Fourier transform \citep[see, e.g.][]{Reinhold_et_al_difrot_2013A&A, Reinhold_Reiners_difrot_2013A&A}.
However, the Fourier transform may not work well in the case of eclipsing binaries, where the amplitude of the rotational modulation of star spots is usually embedded in the relatively large amplitude light variations caused by the eclipses and the departure from spherical symmetry of the binary components. Furthermore, insufficient representation by light curve models, especially around the mid-eclipse phases, may require discarding the data around those phases and causes regular gaps in the light curve, which would lead to unwanted alias periods and harmonics. In this case, alternative methods can be adopted to trace the photometric period variation, such as the $O-C$ diagram based on the minimum times of rotationally modulated light curves \citep[see, e.g.][]{V2075Cyg_Orkun_2010AN}. In the case of eclipsing binary stars, additional intrinsic variations may not be noticed at first look, due to the reasons explained above. KIC\,9451096 is such an eclipsing binary in the $Kepler$ eclipsing binary catalog\footnote{http://keplerebs.villanova.edu/} \citep{Prsa_et_al_2011, Slawson_et_al_2011}, with a short period and a confirmed third body \citep{Borkovits_2016MNRAS_To_P}. Beyond the properties provided by the catalog, such as the morphology and eclipse depths, \citet{Armstrong_et_al_2014} provided physical information estimated from the spectral energy distribution based on photometric measurements. They estimated the effective temperatures of the components of KIC\,9451096 as 7166 K and 5729 K for the primary and the secondary component, respectively. In this study, we carry out a photometric and spectroscopic analysis of KIC\,9451096, based on $Kepler$ photometry and optical spectroscopic observations with intermediate resolution, described in Section~\ref{S2}. Section~\ref{S3} comprises the spectroscopic and photometric modeling of the system, and the analysis of the out--of--eclipse variations. In the final section, we summarize and discuss our findings.
\section{Observations and data reductions}\label{S2} \subsection{$Kepler$ photometry}\label{S2.1} Photometric data obtained by the $Kepler$ spacecraft cover a broad wavelength range between 4100\,\AA~and 9100\,\AA, which has the advantage of collecting many more photons in a single exposure and reaching sub-millimag precision, but also the disadvantage of having no ``true'' photometric filter, hence no photometric color information. There are two types of photometric data with different exposure times: short cadence data (with an exposure time of 58.89 seconds) and long cadence data (with an exposure time of 29.4 minutes). In this study we use the long cadence data of KIC\,9451096 obtained from the $Kepler$ eclipsing binary catalog. The catalog provides detrended and normalized intensities, obtained by application of the procedures described by \citet{Slawson_et_al_2011} and \citet{Prsa_et_al_2011}. The whole data set covers a time span of $\sim$4 years with 65\,307 data points in total. The MAST archive reports a 0.9\% contamination level in the measurements, practically indicating no additional light contribution to the measured fluxes of KIC\,9451096. \subsection{Spectroscopy}\label{S2.2} We obtained optical spectra of KIC\,9451096 with the 1.5 m Russian--Turkish telescope equipped with the Turkish Faint Object Spectrograph Camera (TFOSC) at the Tubitak National Observatory\footnote{http://www.tug.tubitak.gov.tr/rtt150\textunderscore tfosc.php}. TFOSC enables one to obtain intermediate resolution optical spectra in \'echelle mode. In our case, the instrumental setup provides an actual resolution of R = $\lambda/\Delta\lambda$ $\sim$ 2500 around 6500\,\AA, and the observed spectra cover a usable wavelength range between 3900--9100\,\AA~in 11 \'echelle orders. A back-illuminated 2048 $\times$ 2048 pixel CCD camera, with a pixel size of 15 $\times$ 15 $\mu m^{2}$, was used to record the spectra. We obtained ten optical spectra of KIC\,9451096 between the 2014 and 2016 observing seasons.
In order to obtain sufficient signal, we used an exposure time of 3600 s for each observation. The estimated signal--to--noise ratio (SNR) of the observed spectra is mostly between 80--100, except for a few cases where the SNR is around 50. The SNR estimation is based on photon statistics. Together with the target star, we also obtained high SNR optical spectra of HD\,225239 (G2V, $v_{r} = 4.80$ \mbox{km\,s$^{-1}$}) and $\iota$\,Psc (HD\,222368, F7V, $v_{r} = 5.656$ \mbox{km\,s$^{-1}$}), and adopted them as radial velocity and spectroscopic comparison templates. We reduce all observations by using standard IRAF\footnote{The Image Reduction and Analysis Facility is hosted by the National Optical Astronomy Observatories in Tucson, Arizona at URL iraf.noao.edu.} packages and tasks. The typical reduction procedure starts with obtaining a master bias frame from several nightly taken bias frames and subtracting this master bias frame from all object, calibration lamp (Fe-Ar spectra in our case) and halogen lamp frames. The bias corrected halogen frames are then combined to form an average halogen frame, and this average frame is normalized to unity to produce the normalized master flat frame. After that, all target and calibration lamp frames are divided by the normalized flat field frame. Next, cosmic ray removal and scattered light corrections are applied to the bias and flat corrected frames. At the end of these steps, reduced frames are obtained and used for the extraction of the spectra. In the final steps, the Fe-Ar frames are used for the wavelength calibration of the extracted spectra, and the wavelength calibrated spectra are normalized to unity by using cubic spline functions. \section{Analysis}\label{S3} \subsection{Radial velocities and spectroscopic orbit}\label{S3.1} The first step of our analysis is to determine the radial velocities of the components and the spectroscopic orbit of the system.
We cross-correlate each observed spectrum of KIC\,9451096 with the spectra of the template stars HD\,225239 and $\iota$\,Psc, as described by \citet{fxcor_Tonry_Davis_1979}. In practice we use the $fxcor$ task in the IRAF environment. We achieve better cross-correlation signals (especially for the weak secondary component) when we use HD\,225239 as the template; thus we determine all radial velocities with respect to the HD\,225239 spectrum. We obtain acceptable cross-correlation signals for both components in \'echelle orders 5 and 6, which cover the wavelength range between 4900--5700\,\AA\@. Figure~\ref{F1} shows the cross-correlation functions of two spectra obtained around the orbital quadratures. \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.80,clip=true]{f1.pdf}} \caption{Cross-correlation functions of two spectra obtained in orbital quadratures. The letter $\phi$ denotes corresponding orbital phase. P and S indicate the primary component and the secondary component, respectively.} \label{F1} \end{figure} We list the observational log and the measured radial velocities of the components in Table~\ref{T1}. Note that we use the ephemeris and period given by \citet{Borkovits_2016MNRAS_To_P} and listed in Table~\ref{T2} to calculate the orbital phases and for further analysis.
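For reference, the orbital phases quoted in Table~\ref{T1} are the fractional part of the elapsed number of cycles; e.g. for the first spectrum,
\[
\phi=\left\{\frac{{\rm HJD}-T_0}{P_{\rm orb}}\right\}
=\left\{\frac{56842.5435-54954.72942}{1.25039069}\right\}
=\left\{1509.7794\right\}=0.7794 .
\]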
\begin{table} \setlength{\tabcolsep}{3pt} \small \caption{Log of spectroscopic observations together with measured radial velocities and their corresponding standard errors ($\sigma$) in \mbox{km\,s$^{-1}$}.}\label{T1} \begin{center} \begin{tabular}{cccrrrr} \hline\noalign{\smallskip} HJD & Orbital & Exposure & \multicolumn{2}{c}{Primary} & \multicolumn{2}{c}{Secondary} \\ (2400000+) & Phase & time (s) & V$_{r}$ & $\sigma$ & V$_{r}$ & $\sigma$ \\ \hline\noalign{\smallskip} 56842.5435 & 0.7794 & 3600 & 91.4 & 8.2 & -152.5 & 36.9 \\ 56844.4052 & 0.2682 & 3600 & -79.9 & 6.3 & 151.9 & 39.1 \\ 56844.4479 & 0.3024 & 3600 & -74.4 & 6.6 & 155.0 & 37.2 \\ 56889.4315 & 0.2781 & 3600 & -77.1 & 5.7 & 148.1 & 40.0 \\ 56890.2958 & 0.9693 & 3600 & 14.5 & 5.0 & --- & --- \\ 57591.4532 & 0.7199 & 3600 & 88.5 & 7.2 & -153.3 & 32.0 \\ 57601.4386 & 0.7058 & 3600 & 88.7 & 5.4 & -149.8 & 32.1 \\ 57616.4778 & 0.7333 & 3600 & 86.0 & 4.3 & -145.2 & 38.7 \\ 57617.5188 & 0.5659 & 3600 & 31.0 & 5.8 & --- & --- \\ 57672.3009 & 0.3779 & 3600 & -54.8 & 5.1 & 111.1 & 47.9 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} We achieve a reasonable solution for the spectroscopic orbit under the assumption of a non-eccentric orbit, where the eccentricity is zero and the longitude of periastron is undefined. We check this assumption by inspecting the $Kepler$ light curve of the system, where we observe the deeper and shallower eclipses at orbital phases 0.0 and 0.5, respectively, indicating a circular orbit (see Section~\ref{S3.3}, Figure~\ref{F4}). In order to reach the final spectroscopic orbital solution, we prepare a simple script written in Python, which applies Markov chain Monte Carlo sampling to the measured radial velocities, taking their measurement errors into account. We list the final spectroscopic orbital elements in Table~\ref{T2} and plot the measured radial velocities, their observational errors, the theoretical spectroscopic orbit and the residuals from the solution in Figure~\ref{F2}.
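As an illustration of how the orbital elements in Table~\ref{T2} follow from these velocities, the sketch below fits the circular-orbit radial velocity curve of each component by unweighted linear least squares. This is not the authors' MCMC script (which additionally propagates the measurement errors); the function name \texttt{fit\_circular} and the unweighted scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' MCMC script): for e = 0 the radial velocity
# curve v(phi) = gamma + A*sin(2*pi*phi) + B*cos(2*pi*phi) is linear in
# (gamma, A, B), so unweighted least squares already recovers the systemic
# velocity gamma and the semi-amplitude K = sqrt(A^2 + B^2) of each component.
# The (phase, RV in km/s) pairs below are taken from Table 1.
import math

P_DAY = 1.25039069                                   # orbital period, Table 2
data1 = [(0.7794, 91.4), (0.2682, -79.9), (0.3024, -74.4), (0.2781, -77.1),
         (0.9693, 14.5), (0.7199, 88.5), (0.7058, 88.7), (0.7333, 86.0),
         (0.5659, 31.0), (0.3779, -54.8)]            # primary
data2 = [(0.7794, -152.5), (0.2682, 151.9), (0.3024, 155.0), (0.2781, 148.1),
         (0.7199, -153.3), (0.7058, -149.8), (0.7333, -145.2),
         (0.3779, 111.1)]                            # secondary

def fit_circular(data):
    """Least-squares fit of v = gamma + A sin(2 pi phi) + B cos(2 pi phi)."""
    rows = [(1.0, math.sin(2 * math.pi * p), math.cos(2 * math.pi * p))
            for p, _ in data]
    v = [rv for _, rv in data]
    # normal equations (X^T X) x = X^T v as an augmented 3x4 matrix
    a = [[sum(r[i] * r[j] for r in rows) for j in range(3)] +
         [sum(r[i] * vi for r, vi in zip(rows, v))] for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    gamma, A, B = (a[i][3] / a[i][i] for i in range(3))
    return gamma, math.hypot(A, B)

g1, K1 = fit_circular(data1)
g2, K2 = fit_circular(data2)
q = K1 / K2                                          # mass ratio M2/M1
P_s = P_DAY * 86400.0
a_sini_rsun = P_s * (K1 + K2) / (2 * math.pi) / 6.957e5          # K in km/s
m_sin3i_msun = (P_s * ((K1 + K2) * 1e3) ** 3
                / (2 * math.pi * 6.674e-11) / 1.989e30)          # SI inside
print(f"K1 = {K1:.1f} km/s, K2 = {K2:.1f} km/s, q = {q:.2f}")
print(f"a sin i = {a_sini_rsun:.2f} Rsun, M sin^3 i = {m_sin3i_msun:.2f} Msun")
```

Running this on the Table~\ref{T1} velocities recovers semi-amplitudes, $a\sin i$ and $M\sin^{3} i$ within a few percent of the values in Table~\ref{T2}; the scatter about the fit reflects the quoted rms residuals.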
\begin{table} \caption{Spectroscopic orbital elements of KIC\,9451096. $M{_1}$ and $M{_2}$ denote the masses of the primary and the secondary component, respectively, while $M$ shows the total mass of the system.}\label{T2} \begin{center} \begin{tabular}{cc} \hline\noalign{\smallskip} Parameter & Value \\ \hline\noalign{\smallskip} $P_{\rm orb}$ (day) & 1.25039069 (fixed) \\ $T_{\rm 0}$ (HJD 2400000+) & 54954.72942 (fixed) \\ $\gamma$ (\mbox{km\,s$^{-1}$}) & 2.8$\pm$0.5 \\ $K_{1}$ (\mbox{km\,s$^{-1}$}) & 84.1$\pm$2.3 \\ $K_{2}$ (\mbox{km\,s$^{-1}$}) & 153.2$\pm$14.6 \\ $e$ & 0 (fixed) \\ $a\sin i$ (\mbox{R$_{\odot}$}) & 5.92$\pm$0.35 \\ $M\sin^{3} i$ (\mbox{M$_{\odot}$}) & 1.79$\pm$0.25 \\ Mass ratio ($q = M{_2}/M{_1}$) & 0.55$\pm$0.05 \\ rms$_{1}$ (\mbox{km\,s$^{-1}$}) & 3.7 \\ rms$_{2}$ (\mbox{km\,s$^{-1}$}) & 4.9 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.55,clip=true]{f2.pdf}} \caption{\textbf{a)} Observed radial velocities of the primary and the secondary (blue and red filled circles, respectively), and their corresponding theoretical representations (blue and red curve). \textbf{b)} Residuals from theoretical solution.} \label{F2} \end{figure} \subsection{Spectral type}\label{S3.2} We rely on our intermediate resolution TFOSC optical spectra to determine the spectral type of the components. Most of our spectra correspond to phases around the orbital quadratures, where we observe the signals of the two components separately. However, there are two spectra obtained at phases close to the eclipses, where the two components cannot be resolved separately. One of these spectra corresponds to $\sim$0.56 orbital phase (see Table~\ref{T1}), where we cannot observe the radial velocity signal of the secondary component in the cross-correlation.
Even at the orbital quadratures, the cross-correlation signal of the secondary component is considerably weaker than that of the primary, indicating a very small light contribution from the secondary component to the total light of the system. Our preliminary light curve analysis shows that the contribution of the secondary component to the total light does not exceed $\sim$10\%. In this case, the signal from the secondary component becomes almost negligible at the resolution of our observed spectrum at $\sim$0.56 orbital phase; therefore we assume that we observe only the spectrum of the primary component and adopt this spectrum as the reference spectrum for the primary component. We confirm this assumption by calculating the composite spectrum of the binary via the final parameters of the components (see Section~\ref{S3.3}), where we observe that the contribution of the secondary component affects the theoretical composite spectrum by less than 2\% for the wavelength range of 4900--5700\,\AA\@. We refrain from performing a detailed analysis with spectral disentangling. Future studies could take advantage of this technique and derive the atmospheric parameters of the secondary. We first compare the reference spectrum with the template spectra of HD\,225239 and $\iota$\,Psc. We observe that the $\iota$\,Psc spectrum provides a closer match to the reference spectrum, but it also indicates an earlier spectral type and slightly lower metal abundances for the primary component. At that point, we switch to the spectrum synthesis method. We use the latest version of the python framework $iSpec$ \citep{iSpec_Cuaresma_2014A&A}, which enables practical and quick calculation of a synthetic spectrum with a given set of atmospheric parameters via different radiative transfer codes.
Among these codes we adopt the SPECTRUM\footnote{http://www.appstate.edu/$\sim$grayro/spectrum/spectrum.html} code \citep{spectrum_gray_1994}, together with ATLAS-9 \citep{ATLAS9_castelli_2004} model atmospheres and an up-to-date line list from the third version of the Vienna atomic line database ($VALD3$) \citep{VALD3_Ryabchikova_2015}. Considering the spectral type of $\iota$\,Psc, we synthesize spectra for effective temperatures between 6000 K and 7000 K in steps of 250 K, and metallicity values ([Fe/H]) between $-$1.0 and 0.0 in steps of 0.5. For all synthetic spectra we fix the gravity (log $g$) to 4.15, which we precisely calculate in the light curve modeling (see Section~\ref{S3.3}). Since we do not have a high resolution spectrum, we fix the microturbulence velocity to 2 \mbox{km\,s$^{-1}$}. We convolve all calculated spectra with a proper Gaussian line spread function in order to degrade their resolution to that of the TFOSC spectra. The instrumental broadening in the TFOSC spectra is 2.2\,\AA, corresponding to 119 \mbox{km\,s$^{-1}$} for wavelengths around 5500\,\AA\@. The estimated projected rotational velocities of the components are 62 \mbox{km\,s$^{-1}$}\@ and 36 \mbox{km\,s$^{-1}$}\@ for the primary and the secondary component, respectively (see Section~\ref{S3.3}). Since instrumental broadening is the dominant broadening source in the observed spectra, we do not consider rotational broadening and other line broadening mechanisms. Among the calculated spectra we find that the model with an effective temperature of 6500 K and a [Fe/H] value of $-$0.5 provides the closest match to the reference spectrum. The final effective temperature indicates an F5 spectral type \citep{Gray_2005}. Considering the effective temperature and metallicity steps in the model atmospheres, and the resolution of the TFOSC spectra, the final values and their estimated uncertainties are $T_{eff}$ = 6500$\pm$200 K and [Fe/H] = $-$0.5$\pm$0.5 dex, respectively.
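The quoted instrumental broadening is simply the resolution element: at the actual resolution $R\sim2500$,
\[
\Delta\lambda=\frac{\lambda}{R}=\frac{5500\,\mbox{\AA}}{2500}=2.2\,\mbox{\AA},\qquad
\Delta v = c\,\frac{\Delta\lambda}{\lambda}=\frac{c}{R}\simeq\frac{3\times10^{5}\ \mbox{km\,s$^{-1}$}}{2500}\simeq120\ \mbox{km\,s$^{-1}$},
\]
in line with the value of 119 \mbox{km\,s$^{-1}$} quoted above.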
Note that even if we consider the neglected contribution of the secondary component to the reference spectrum, its effect would fall well within the estimated uncertainties above. The final $T_{eff}$ value is $\sim$670 K lower than the 7166 K value estimated in \citet{Armstrong_et_al_2014}. We show portions of the reference spectrum and of the model spectrum, calculated with the final parameters above, in Figure~\ref{F3}. \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.80,clip=true]{f3.pdf}} \caption{Representation of the observed (black), best matched (red) synthetic spectrum and residuals (blue) for three regions. Note that we shift the residuals upwards by 0.3 for the sake of simplicity. Panels $a$, $b$ and $c$ show the regions around H$_{\beta}$, Mg I triplet and metallic absorption lines around 5500\,\AA, respectively.} \label{F3} \end{figure} \subsection{Light curve modeling and physical properties}\label{S3.3} A global visual inspection of the KIC\,9451096 $Kepler$ photometry reflects the properties of a typical close eclipsing binary. We start the light curve modeling by phasing the whole long cadence data set with respect to the ephemeris and period given by \citet{Borkovits_2016MNRAS_To_P}, and re--binning the phased data with a phase step of 0.002 via the freely available fortran code $lcbin$\footnote{http://www.astro.keele.ac.uk/$\sim$jkt/codes.html$\#$lcbin} written by John Southworth. We plot the binned and phased light curves of the system in Figure~\ref{F4}, panels $a$ and $aa$. The light curve indicates a detached configuration for the system. The mid-eclipse points fall at phases 0.0 and 0.5, indicating a circular orbit. There is no conspicuous asymmetry in the light curve. \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.60,clip=true]{f4.pdf}} \caption{\textbf{a)} Phase binned light curve of KIC\,9451096 (black filled circles) together with best-fit model (red curves). \textbf{b)} Close up view of the light curve at light maxima.
\textbf{c)} Residuals from the best--fit model. Panels at right ($aa$, $bb$ and $cc$) are the same as the left panels but for the phased long cadence data.} \label{F4} \end{figure} We model the light curve with the 2015 version of the Wilson--Devinney code \citep{WD_MAIN_1971ApJ, WD2015_2014ApJ}. In the modeling, we first fix the two most critical parameters of light curve modeling, i.e., the mass ratio ($q$) and the effective temperature of the primary component ($T_{1}$). Since we have reliably derived these parameters in the previous sections as $q$ = 0.55 and $T_{1}$ = 6500 K, we adopt them as fixed parameters. The calculated atmospheric properties of the primary component reveal that both stars have convective envelopes; therefore we set the albedo ($A_{1}$, $A_{2}$) and gravity darkening ($g_{1}$, $g_{2}$) coefficients of the components to 0.5 and 0.32, respectively, which are typical values for stars with convective outer envelopes. We also take into account the slight metal deficiency of the system, and thus adopt the internal stellar atmosphere formulation of the Wilson--Devinney code according to the determined [Fe/H] value of $-$0.5. We assume that the rotation of the components is synchronous with the orbital motion, and thus fix the rotation parameter of each component ($F_{1}$, $F_{2}$) to 1.0. We adopt the square root law \citep{Sqrt_LD_Law_Klinglesmith_1970AJ} for the limb darkening of each component, which is more appropriate for stars cooler than 9000 K. We take the limb darkening coefficients for the $Kepler$ passband ($x_{1}$, $x_{2}$, $y_{1}$, $y_{2}$) and the bolometric coefficients ($x_{1bol}$, $x_{2bol}$, $y_{1bol}$, $y_{2bol}$) from \citet{van_Hamme_LD_1993AJ}. In the modeling, we adjust the inclination of the orbit ($i$), the temperature of the secondary component ($T_{2}$), the dimensionless omega potentials of the components ($\Omega_{1}$, $\Omega_{2}$) and the luminosity of the primary component ($L_{1}$).
We also include the phase shift parameter as adjustable in the modeling, since we expect a shift in the ephemeris due to the light-time effect of the third body \citep{Borkovits_2016MNRAS_To_P}. The model quickly converged to a steady solution in a few iterations. We list the model output in Table~\ref{T3}, plot the best-fit model in Figure~\ref{F4}, panels $a$ and $b$, and the residuals from the model in panel $c$. In Figure~\ref{F4}, panel $b$, one can easily see the model inconsistency around orbital phase 0.25. The inconsistency indicates an additional light variation, known as the \emph{O'Connell effect}, i.e., a difference between the light levels of subsequent maxima in an orbital cycle. Possible sources of the difference are Doppler beaming, a hot spot, or a cool spot on one of the components of the system. KIC\,9451096 is a detached eclipsing binary, so we can safely exclude the possibility of mass transfer between the components, and hence a hot spot. Doppler beaming has been detected observationally among some $Kepler$ binaries \citep[see, e.g.][]{Doppler_Beaming_VanKerkwijk2010ApJ}; it becomes important for systems with a very low mass ratio, especially for systems with a compact component such as a white dwarf or a hot sub-dwarf. In addition, if the effect were present, it would change the light levels of both maxima. However, we observe an inconsistency only at phase 0.25, while the model represents the light level at phase 0.75 fairly well, so Doppler beaming should have a negligible effect in the case of KIC\,9451096, if any. The remaining possibility is cool spots, located preferably on the cooler component. We do not model this inconsistency directly, since the binned curve only shows the cumulative effect of hundreds of light curves; instead, we subtract the best-fit model from the whole long cadence data set and inspect the residuals in order to investigate further light variations. We focus on this in Section~\ref{S3.4}.
\begin{table}[!htb] \caption{Light curve modeling results of KIC\,9451096. $\langle r_{1}\rangle$ and $\langle r_{2}\rangle$ denote the mean fractional radii of the primary and the secondary components, respectively. Internal errors of the adjusted parameters are given in parentheses for the last digits. Asterisks denote fixed values for the corresponding parameters. Note that we adopt the uncertainty of $T_{1}$ for $T_{2}$ as well, since the internal error of $T_{2}$ is unrealistically small ($\sim$1 K).}\label{T3} \begin{center} \begin{tabular}{cc} \hline\noalign{\smallskip} Parameter & Value \\ \hline\noalign{\smallskip} $q$ & 0.55* \\ $T_{1}(K)$ & 6500* \\ $g_{1}$, $g_{2}$ & 0.32*, 0.32*\\ $A_{1}$, $A_{2}$ & 0.5*, 0.5*\\ $F_{1}$ = $F_{2}$ & 1.0* \\ phase shift & 0.00108(2) \\ $i~(^{\circ})$ & 79.07(4)\\ $T_{2}(K)$ & 5044(200)\\ $\Omega_{1}$ & 4.4942(49)\\ $\Omega_{2}$ & 4.8885(125) \\ $L_{1}$/($L_{1}$+$L_{2})$ & 0.897(1) \\ $x_{1bol}$, $x_{2bol}$ & 0.136*, 0.293*\\ $y_{1bol}$, $y_{2bol}$ & 0.583*, 0.401*\\ $x_{1}$, $x_{2}$ & 0.106*, 0.482* \\ $y_{1}$, $y_{2}$ & 0.670*, 0.313* \\ $\langle r_{1}\rangle, \langle r_{2}\rangle$ & 0.2557(3), 0.1506(5) \\ Model rms & 3.0 $\times$ 10$^{-4}$ \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} We complete the light curve modeling by calculating the absolute parameters of the system, combining the spectroscopic orbital solution and the light curve model results. In Table~\ref{T4}, we give the physical properties of each component. Our analysis reveals that the system is composed of an F5V primary and a K2V secondary component. \begin{table} \caption{Absolute physical properties of KIC\,9451096.
Errors of the parameters are given in parentheses for the last digits.}\label{T4} \begin{center} \begin{tabular}{ccc} \hline\noalign{\smallskip} Parameter & Primary & Secondary \\ \hline\noalign{\smallskip} Spectral Type & F5V & K2V \\ \multicolumn{1}{c}{[Fe/H]} & \multicolumn{2}{c}{$-0.5\pm0.5$} \\ Mass (\mbox{M$_{\odot}$}) & 1.18(26) & 0.65(9) \\ Radius (\mbox{R$_{\odot}$}) & 1.53(10) & 0.90(6) \\ Log $L/L_{\odot}$ & 0.574(76) & $-$0.327(88) \\ log $g$ (cgs) & 4.14(4) & 4.34(1) \\ $M_{bol}$ (mag) & 3.31(19) & 5.57(22) \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \subsection{The out-of-eclipse variations}\label{S3.4} In this section, we subtract the best-fit light curve model from the whole long cadence data set and obtain residuals. We first divide the whole long cadence data set into subsets, each covering a single orbital cycle, resulting in 1026 individual light curves. We then apply the differential corrections routine of the Wilson-Devinney code with all parameters fixed except the ephemeris reference time. In this way, we find a precise ephemeris reference time for each individual subset, thereby eliminating any shift in the ephemeris time due to the third body reported by \citet{Borkovits_2016MNRAS_To_P}, and obtain precise residuals. In Figure~\ref{F5}, we plot three different parts of the residuals. Note that we remove the data points that correspond to the eclipse phases because of the insufficient representation of the model at those phases. This mainly arises from the inadequacy of the radiative physics used in the light curve modeling at such high photometric precision and can clearly be seen in Figure~\ref{F4}, panel $c$. \begin{figure*}[!htb] \centering {\includegraphics[angle=0,clip=true,scale=0.75]{f5.pdf}} \caption{\textbf{a)} Residuals from the whole long cadence data set.
The remaining panels show different time ranges of the residuals, where we observe different light curve shapes, and flares.} \label{F5} \end{figure*} Inspecting the residual brightness, we immediately see a variation pattern that changes its shape from time to time. Furthermore, we observe sudden increases and gradual decreases in the residual brightness, which occur occasionally over the four-year time span and have short time scales of a few hours. These patterns are traces of magnetic spot activity, most likely originating from the K2V secondary component. Observational confirmation of this possibility can be sought by inspecting magnetic activity sensitive spectral lines, such as the H$_{\alpha}$ and \ion{Ca}{2} H \& K lines. We inspect these lines in our TFOSC spectra and do not notice any emission features that could be considered a sign of activity. However, one should keep in mind that the contribution of the secondary component to the total light does not exceed 10\% at optical wavelengths and decreases steeply towards the ultraviolet region of the spectrum. Furthermore, the variation patterns observed in Figure~\ref{F5} exhibit very small amplitudes. Therefore, the existence of magnetic spot activity can neither be confirmed nor excluded via spectral line inspection in the case of KIC\,9451096. Nevertheless, the variation patterns and flares observed in the residuals indicate weak magnetic spot activity on the secondary component, which can still be detected thanks to the very high precision of the $Kepler$ photometry. We analyze the rotational modulation and flares of the secondary component via the residuals, assuming that the source of all variation patterns is the secondary component alone.
\subsubsection{Photometric period and differential rotation}\label{S3.4.1} Conventional periodogram methods for determining the rotational period do not work well in our case, because the observed variation patterns exhibit quick changes in amplitude and mean brightness level on short time scales of a few days, comparable to the orbital period. Moreover, since we remove the data points at the eclipse phases, regular gaps appear in the data, repeating every $\sim$0.625 day (i.e., half of the orbital period); these gaps cause an alias period and its harmonics, and disturb the real periods. Furthermore, one can clearly see that the rotational modulation of the residuals has an asymmetric shape. For an individual light curve with an asymmetric shape, it is not possible to find a single period that represents the whole light curve perfectly, and additional periods (i.e., harmonics) are required for a full representation. Therefore, we apply an alternative method based on tracing the time of minimum light observed in each orbital cycle, which was previously applied to the RS CVn system HD\,208472 \citep{V2075Cyg_Orkun_2010AN}. For each orbital cycle, we find the time of the deepest minimum in the cycle by fitting a second or third order polynomial to the data points around the expected minimum time; the order of the polynomial depends on the light curve shape. After obtaining all minimum times, we construct an $O-C$ diagram by adopting the first minimum time observed in the residuals as the initial ephemeris reference time and the orbital period as the initial period, and obtain $O-C I$ values. We then apply a linear fit to the $O-C I$ values and calculate the average ephemeris reference time and period given in Equation~\ref{Eq1}, together with the statistical uncertainties given in parentheses for the last digits. \begin{equation}\label{Eq1} T_{0} {\rm (BJD)} = 2,454,954.02(24) + 1\fd24544(36) \ \times \ E .
\end{equation} In the equation, $T_{0} {\rm (BJD)}$ and $E$ denote the ephemeris reference time and the integer cycle number, respectively. We plot the $O-C I$ values and the linear fit in Figure~\ref{F6}, panel $a$. After obtaining the average ephemeris and period, we subtract the linear fit from the $O-C I$ data and obtain the $O-C II$ data, which in principle show the real period variation over the given time range. Figure~\ref{F6}, panel $b$, shows the $O-C II$ data. We divide the $O-C II$ data into 30 subsets by grouping data points that follow a common linear trend. The linear trend of a subset gives the difference between the best-fit photometric period of that subset and the grand average photometric period given in Equation~\ref{Eq1}; therefore, we can calculate a mean photometric period for each subset. We plot the calculated mean photometric periods versus time in Figure~\ref{F6}, panel $c$, together with their statistical uncertainties. We list the photometric periods of the 30 subsets in Table~\ref{T5}, and tabulate the $O-C$ analysis results in Table~\ref{T_ap}.
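The two-stage procedure above (a linear fit to the $O-C I$ values, then residual $O-C II$ values) can be sketched numerically. The script below uses synthetic, noise-free minimum times and \texttt{numpy.polyfit} for the linear fit; all numerical values are illustrative, not the measured ones.

```python
import numpy as np

# Synthetic example: minimum times generated with a "true" period that
# differs slightly from the initial trial period (values are illustrative).
P_trial = 1.2425           # initial trial period (days)
P_true = 1.24544           # period actually present in the data (days)
T0 = 0.0                   # initial ephemeris reference time
E = np.arange(0, 1000, 7)  # cycle numbers of the measured minima
T_obs = T0 + P_true * E    # observed minimum times (noise-free here)

# O-C I: observed minus computed times with the trial ephemeris
oc1 = T_obs - (T0 + P_trial * E)

# A linear fit to O-C I yields corrections to both T0 and the period:
slope, intercept = np.polyfit(E, oc1, 1)
P_avg = P_trial + slope    # average photometric period
T0_avg = T0 + intercept    # average ephemeris reference time

# O-C II: residuals after removing the linear trend (zero here, since
# the synthetic data contain a single constant period)
oc2 = oc1 - (intercept + slope * E)
```

With real minimum times, the $O-C II$ residuals are non-zero and their piecewise linear trends yield the per-subset periods.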
\begin{table} \caption{Photometric periods found from $O-C$ analysis.}\label{T5} \begin{center} \begin{tabular}{cccc} \hline\noalign{\smallskip} Subset & BJD & P & $\sigma$(P) \\ & (24 00000+) & (day) &(day) \\ \hline\noalign{\smallskip} 1 & 54994.8107 & 1.2456 & 0.0004 \\ 2 & 55048.8731 & 1.2326 & 0.0008 \\ 3 & 55094.1598 & 1.2441 & 0.0004 \\ 4 & 55139.0644 & 1.2260 & 0.0019 \\ 5 & 55169.9192 & 1.2459 & 0.0008 \\ 6 & 55208.0721 & 1.2489 & 0.0006 \\ 7 & 55250.0831 & 1.2584 & 0.0011 \\ 8 & 55314.8252 & 1.2484 & 0.0004 \\ 9 & 55366.4562 & 1.2355 & 0.0006 \\ 10 & 55425.0957 & 1.2470 & 0.0006 \\ 11 & 55478.0779 & 1.2517 & 0.0010 \\ 12 & 55507.4240 & 1.2437 & 0.0006 \\ 13 & 55539.3828 & 1.2216 & 0.0025 \\ 14 & 55629.1787 & 1.2430 & 0.0004 \\ 15 & 55702.5236 & 1.2447 & 0.0004 \\ 16 & 55740.2684 & 1.2522 & 0.0007 \\ 17 & 55793.0150 & 1.2485 & 0.0004 \\ 18 & 55840.9410 & 1.2223 & 0.0022 \\ 19 & 55868.2947 & 1.2534 & 0.0005 \\ 20 & 55894.6874 & 1.2712 & 0.0022 \\ 21 & 55924.7567 & 1.2494 & 0.0006 \\ 22 & 55960.4676 & 1.2391 & 0.0011 \\ 23 & 55996.8636 & 1.2507 & 0.0005 \\ 24 & 56026.2172 & 1.2474 & 0.0009 \\ 25 & 56073.0738 & 1.2528 & 0.0005 \\ 26 & 56136.3924 & 1.2449 & 0.0005 \\ 27 & 56258.6328 & 1.2509 & 0.0004 \\ 28 & 56333.3104 & 1.2323 & 0.0019 \\ 29 & 56359.5423 & 1.2565 & 0.0008 \\ 30 & 56400.8932 & 1.2504 & 0.0004 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.59,clip=true]{f6a.pdf}} {\includegraphics[angle=0,scale=0.59,clip=true]{f6b.pdf}} \caption{\textbf{a)} $O-C I$ diagram of observed minimum times (blue filled circles) and linear fit (red line). \textbf{b)} $O-C II$ diagram obtained via residuals from the linear fit in panel $a$. Each color denotes a subset where data points appear on a linear trend. Linear fit to each subset is shown by black dashed line. 
\textbf{c)} Calculated mean photometric period for each subset (blue filled circles) and their statistical uncertainties. Note that the horizontal axis values are converted from $E$ numbers to barycentric Julian date. The orbital period and the grand average photometric period obtained from the linear fit to the $O-C I$ data are shown in blue as a dashed line and a dot-dashed line, respectively.} \label{F6} \end{figure} The average period given in Equation~\ref{Eq1} represents the average rotation period of the magnetic activity features on the surface of the secondary component, which are typically cool and dark regions, i.e., star spots, and is slightly ($\sim$0.5\%) shorter than the orbital period. This is clearly seen in Figure~\ref{F6}, panel $c$, where the mean photometric periods of the subsets are mostly shorter than the orbital period. Assuming solar type differential rotation, this means that the orbital period is slightly longer than the equatorial rotation period of the secondary component. Under the same assumption, the differential rotation coefficient can be estimated from $(P_{max} - P_{min})/P_{equ} = kf$, where $P_{max}$, $P_{min}$, $k$ and $f$ denote the observed maximum and minimum periods, the differential rotation coefficient, and a constant that depends on the range of spot forming latitudes, respectively \citep{Hall_Busby_1990_difrot}. Considering the small amplitude of the rotational modulation of the residuals, we assume that the secondary component is not largely spotted and that the total latitudinal range of the spot distribution is 45 degrees, which constrains the $f$ constant to values between 0.5 and 0.7 \citep{Hall_Busby_1990_difrot}. Using the maximum and minimum photometric periods from the $O-C$ analysis, and assuming that the shortest period corresponds to the equatorial rotation period of the star, we find $k = 0.081\pm0.011$ and $k = 0.058\pm0.006$ for $f = 0.5$ and $f = 0.7$, respectively.
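The $k$ estimates follow directly from the $(P_{max} - P_{min})/P_{equ} = kf$ relation; a minimal numerical check using the extreme photometric periods of Table~\ref{T5}:

```python
# Extreme photometric periods from the O-C subset analysis (days)
P_max, P_min = 1.2712, 1.2216
P_equ = P_min  # assume the shortest period is the equatorial rotation period

amplitude = (P_max - P_min) / P_equ  # equals k * f
k_f05 = amplitude / 0.5              # latitude-range factor f = 0.5
k_f07 = amplitude / 0.7              # latitude-range factor f = 0.7
k_mean = 0.5 * (k_f05 + k_f07)
```

The two boundary values reproduce $k = 0.081$ and $k = 0.058$, and their mean, $\approx 0.07$, is consistent with the adopted $k = 0.069\pm0.008$.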
Since these $k$ values are calculated for the boundary values of $f$, the real differential rotation coefficient must lie between them; the average $k$ is 0.069$\pm$0.008. \subsubsection{Flares}\label{S3.4.2} We detect 13 flares in the residuals from the long cadence data. In flare analysis, it is critical to determine the quiescent level, i.e., the brightness level in the absence of a flare. In our case, we determine the quiescent level by applying a Fourier analysis to the single orbital cycle in which the flare occurs. The Fourier analysis represents the rotational modulation of the residuals in that cycle, and we then remove the Fourier representation from the data. The remaining residuals show only the quiescent level and the flare itself. We show such a flare light curve in Figure~\ref{F7}. \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.59,clip=true]{f7.pdf}} \caption{An example of a flare light curve. The filled black circles represent the observations, while the red line represents the quiescent level derived from the out-of-flare data.} \label{F7} \end{figure} The energy ($E$) is a very important parameter of a flare. However, the energy has the luminosity $L$ of the star as a factor in the equation $E = P \times L$ described by \citet{Gershberg_1972Ap&SS}. Because of the disadvantages described in \citet{Dal_Evren_2010AJ}, we use the flare equivalent duration instead of the flare energy, which is more appropriate. We compute the equivalent durations of the flares via the equation $P = \int[(I_{flare}-I_{0})/I_{0}]dt$ \citep{Gershberg_1972Ap&SS}, where $P$ is the flare equivalent duration in seconds, $I_{0}$ is the quiescent level intensity, and $I_{flare}$ is the intensity observed at the moment of the flare. Using the quiescent level, we determine the times of flare beginning, flare maximum and flare end, together with the flare rise duration, flare decay duration and flare amplitude.
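The equivalent-duration integral above can be evaluated by simple trapezoidal integration of the flare-only residuals. The sketch below uses a synthetic flare profile sampled at roughly the long-cadence rate; the shape and amplitude are illustrative, not a measured event.

```python
import numpy as np

dt = 1765.0                      # ~long-cadence sampling step (seconds)
t = np.arange(0.0, 20 * dt, dt)  # time stamps around the flare (seconds)
I0 = 1.0                         # quiescent-level intensity

# Synthetic flare: one-cadence linear rise, then exponential decay
excess = np.where(t <= dt, t / dt, np.exp(-(t - dt) / 5000.0))
I_flare = I0 + 0.002 * excess

# Trapezoidal evaluation of P = integral[(I_flare - I0) / I0] dt (seconds)
y = (I_flare - I0) / I0
P = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
```

For real flares, the same integral is evaluated between the flare beginning and end times determined from the quiescent level, and yields equivalent durations of a few to a dozen seconds for the events analyzed here.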
We list all computed values in Table~\ref{T6} for each of the 13 flares. \begin{table} \caption{The parameters calculated for each flare. Note that the BJD column denotes the mid-flare time. Tr, Td and Amp denote the flare rise duration, flare decay duration and flare amplitude, respectively.}\label{T6} \begin{center} \begin{tabular}{ccccc} \hline\noalign{\smallskip} BJD & P & Tr & Td & Amp \\ (24 00000+) & (s) & (s) & (s) & (mag)\\ \hline\noalign{\smallskip} 55021.2171 & 11.4 & 1763 & 15889 & -0.001516 \\ 55043.1016 & 5.6 & 1763 & 5296 & -0.002483 \\ 55310.6569 & 7.6 & 1763 & 8830 & -0.002047 \\ 55326.5140 & 2.7 & 1771 & 1763 & -0.001618 \\ 55412.0302 & 5.9 & 1763 & 7068 & -0.001648 \\ 55416.9343 & 12.1 & 1771 & 14118 & -0.002853 \\ 55824.2162 & 4.3 & 1763 & 5296 & -0.001578 \\ 55931.1213 & 4.5 & 3534 & 3534 & -0.001453 \\ 55971.7021 & 4.9 & 1763 & 5296 & -0.002152 \\ 56142.9809 & 6.0 & 3534 & 7059 & -0.001983 \\ 56284.8887 & 3.4 & 1771 & 3525 & -0.001806 \\ 56286.5642 & 4.4 & 1771 & 3525 & -0.001568 \\ 56375.4705 & 2.2 & 1763 & 1763 & -0.001429 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \citet{Dal_Evren_2010AJ, Dal_Evren_2011AJ} suggest that the best function to represent the relation between flare equivalent duration and flare total duration is the one phase exponential association (OPEA) function, where the flare equivalent duration is considered on a logarithmic scale. The OPEA function is defined as $y = y_{0}+(Plateau-y_{0})\times(1-e^{-kx})$, where $y$ is the flare equivalent duration on a logarithmic scale, $x$ is the flare total duration, and $y_{0}$ is the flare equivalent duration on the logarithmic scale for the least total duration, according to the definition of \citet{Dal_Evren_2010AJ}. It should be noted that $y_{0}$ does not depend only on the flare mechanism, but also on the sensitivity of the optical system used in the mission.
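The OPEA function can be fitted with any standard nonlinear least squares routine. The sketch below does so with \texttt{scipy.optimize.curve\_fit} on synthetic, noise-free points; the parameter values are illustrative, not the ones fitted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def opea(x, y0, plateau, k):
    """One-phase exponential association: y = y0 + (plateau - y0)*(1 - exp(-k*x))."""
    return y0 + (plateau - y0) * (1.0 - np.exp(-k * x))

# Synthetic (flare total duration [s], log equivalent duration) points
x = np.array([1800.0, 3600.0, 5400.0, 7200.0, 10800.0, 14400.0, 18000.0])
y = opea(x, -0.016, 1.24, 1.14e-4)  # noise-free, for clarity

(y0_fit, plateau_fit, k_fit), _ = curve_fit(opea, x, y, p0=(0.0, 1.0, 1.0e-4))

# Time needed to climb half of the rise toward the plateau:
half_time = np.log(2.0) / k_fit
```

Note that the half-time equals $\ln 2 / k$; for a fitted $k = 1.1438\times10^{-4}$ this evaluates to $\approx 6060$ s.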
The most important parameter of the model is the $Plateau$ value, which defines the upper limit of the flare equivalent duration on a logarithmic scale and is defined as the saturation level of a star \citep{Dal_Evren_2011AJ}. Using the least squares method, the OPEA model leads to the results in Table~\ref{T7}. We plot the resulting model in Figure~\ref{F8} together with its 95\% statistical sensitivity limits. \begin{table} \caption{Parameters derived from the OPEA model by using the least squares method.}\label{T7} \begin{center} \begin{tabular}{cc} \hline\noalign{\smallskip} Parameter & Value \\ \hline\noalign{\smallskip} $Y_{0}$ & $-$0.015961$\pm$0.13891 \\ Plateau & 1.2394$\pm$0.14441 \\ K & 0.00011438$\pm$0.000036715 \\ Half-time & 6060 \\ $R^{2}$ & 0.94535 \\ P value & $\sim$0.10 \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \begin{figure}[!htb] \centering {\includegraphics[angle=0,scale=0.59,clip=true]{f8.pdf}} \caption{The OPEA model obtained over 13 flares. The blue filled circles show each flare, while the continuous red line shows the OPEA model and the dotted red lines show the sensitivity range of the model.} \label{F8} \end{figure} We tested the derived model using the method proposed by \citet{D'Agostino_1986book} to understand whether any other function could model the distribution of the flare equivalent durations in this plane. In this method, the probability value (P value) is found to be $\sim$0.10, which means that there is no other function to model the distribution \citep{Graphpad_motulsky2007, Spanier_1987}. \citet{Ishida_1991Ap&SS} defined a frequency for stellar flare activity as $N_{1} = \Sigma n_{f}/\Sigma T_{t}$, where $\Sigma n_{f}$ is the total number of flares detected in the observations, while $\Sigma T_{t}$ is the total observing duration from the beginning of the observing season to the end.
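This frequency follows directly from the detected flare count and the time span of the data; a trivial numerical check using the values from the text:

```python
n_flares = 13              # flares detected in the residuals
T_total_days = 1470.2786   # span between first and last long cadence points
T_total_hours = T_total_days * 24.0

N1 = n_flares / T_total_hours  # flare frequency in h^-1
```

which evaluates to $N_{1} \approx 3.684\times10^{-4}$ $h^{-1}$.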
In the case of KIC\,9451096, we find the $N_{1}$ frequency to be 0.000368411 $h^{-1}$, adopting the total long cadence observing duration as 1470.2786 days from the times of the first and last long cadence data points. \section{Summary and discussion}\label{S4} The photometric and spectroscopic analysis of KIC\,9451096 reveals that the system is composed of an F5V primary and a K2V secondary star on a circular orbit in a detached binary configuration. Medium resolution TFOSC spectra suggest that the metal abundance of the system is about one third of the solar value. The light curve model represents the observations reasonably well; moreover, we are able to catch the signals of an additional light variation, which is very weak compared to the variations due to binarity and eclipses, but still observable at the very high precision of the $Kepler$ photometry. We observe occasional flares and a rotational modulation in the residuals of the light curve from the eclipsing binary model. Considering the physical and atmospheric properties of the components, we attribute these variations to the secondary component, which is a perfect candidate for magnetic star spot activity owing to the deep convective envelope implied by its spectral type and its very fast rotation caused by the short orbital period. We inspect the rotational modulation of the residuals to trace the photometric period of the secondary component, and analyze its flare characteristics. The photometric period analysis via $O-C$ diagrams shows that the average photometric period is shorter than the orbital period by $\sim$0.5\%. Under any type of differential rotation (either solar-like or anti-solar-like), this means that the orbital period does not correspond to the equatorial rotation period of the star. Following the method proposed by \citet{Hall_Busby_1990_difrot}, we find an average differential rotation coefficient of $k = 0.069\pm0.008$, suggesting $\sim$3 times weaker differential rotation compared to the solar value of 0.19.
We note that the type of differential rotation cannot be determined from photometry alone, and we implicitly assume solar type differential rotation in the case of KIC\,9451096. However, the $k = 0.069$ value, which is extracted from very high precision continuous photometry over a restricted time range (four years in our case), defines a lower limit for the strength of the differential rotation on the star. A quick comparison of $k$ values with other stars can be made using the 17 stars listed in \citet{Hall_Busby_1990_difrot}, where the $k$ values are usually a few percent or less, except for BY Dra with $k$ = 0.17. A more reliable way of detecting differential rotation, including its magnitude and type, is Doppler imaging, which is based on high resolution time series spectroscopy. Considering other stars whose $k$ values were determined by Doppler imaging, we mostly see weak differential rotation with $k$ values of a few percent, both among solar type differential rotators (HD\,208472, $k$ = 0.015 \citep{DI_V2075Cyg_Ozdarcan2016}; XX\,Tri, $k$ = 0.016 \citep{DI_XXTri_Kunstler_2015A&A}; $\zeta$ And, $k$ = 0.055 \citep{DI_Zeta_And_Kovari2012}; KU\,Peg, $k$ = 0.04 \citep{DI_KU_Peg_Kovari2016}) and anti-solar type differential rotators (UZ\,Lib, $k$ = $-$0.004 \citep{DI_UZ_Lib_Vida2007AN}; $\sigma$ Gem, $k$ = $-$0.04 \citep{DI_Sigma_Gem_Kovari2015}; HU\,Vir, $k$ = $-$0.029 \citep{DI_HU_Vir_Harutyunyan2016}). Owing to the binary nature of KIC\,9451096, a considerable effect of tidal forces on the redistribution of angular momentum in the convective envelopes of the components can be expected, which would alter the magnitude of the differential rotation \citep{Scharlemann_tidal_difrot_1982ApJ}. Based on observational findings, \citet{Collier_Cameron_difrot_tidal_2007AN} suggests the suppression of differential rotation by tidal locking, which is possibly in progress for KIC\,9451096.
We detect 13 flares in the residuals from the long cadence data, which we attribute to the secondary component with a corresponding $B-V$ value of 0$^m$.92 \citep{Gray_2005}. We apply the OPEA model to analyze the flare characteristics and find that the calculated flare parameters and the resulting OPEA model parameters are in agreement with the parameters derived for stars analogous to the secondary component, except for the half-time value. A possible source of the disagreement in the half-time value is that there are not enough sample flares at the beginning of the OPEA model. We find an $N_{1}$ value of 0.000368411 $h^{-1}$ for KIC\,9451096. $N_{1}$ was found to be 0.41632 $h^{-1}$ for KIC\,09641031 \citep{Yol16}, 0.01351 $h^{-1}$ for KIC\,09761199 \citep{Yol17}, and 0.02726 $h^{-1}$ for Group 1 and 0.01977 $h^{-1}$ for Group 2 of KIC\,2557430 \citep{Kam17}. Among these systems, KIC\,9451096 has the lowest $N_{1}$ value, which indicates that the magnetic activity level of the secondary component of KIC\,9451096 is the lowest, according to \citet{Dal_Evren_2011AJ}. \section*{Acknowledgments} We thank T\"UB\.ITAK for partial support in using the RTT150 (Russian-Turkish 1.5-m telescope in Antalya) through project number 14BRTT150-667. This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission Directorate.
\section{Introduction} \label{sec:introduction} Convolutional neural networks (CNNs) are one of the most popular deep learning structures. These models consist of many layers with different functionalities, performing tasks that are usually difficult for traditional algorithms \cite{CNN1} \cite{CNN3}. CNNs have been around for many years and have been studied deeply for over 25 years since LeNet5 was proposed \cite{LENET5}. However, they became popular when inventions in computer architecture paved the way for the programmable and massively parallel computation needed by these structures. General-purpose graphics processing units (GP-GPUs) allow CNN computation to be carried out quickly and easily thanks to recent progress in programming models and architecture advancements \cite{GPU, LUlaw}. However, the push from CNN designers to create deeper and larger models on the one hand, and the high power consumption of GPUs on the other, motivated architects to increase the computation capacity of current platforms with new special-purpose accelerators \cite{Diannao,Eyeriss}. CNN computations are inherently parallel, and they require a large amount of memory due to their large working set size. One solution to the memory capacity problem is to employ emerging memory technologies such as STT-RAM or eDRAM. Non-volatile memories provide higher capacity (4X or more) at almost the same area \cite{STT2,STT3}. The other appealing feature of NVMs is the Multi-Level Cell (MLC), where more than one bit can be stored in a single cell. More precisely, by using a more sophisticated circuit, the resistance spectrum of these resistive memories can be partitioned into more than two regions, and then more than one bit can be stored in a single cell \cite{MLC_1,MLC_2}. This feature is not free, however, and imposes some major challenges.
The reliability of MLC STT-RAM is lower than that of SLC; its error rate can be as high as $1.5 \times 10 ^{-2}$ to $2 \times 10 ^{-2}$ \cite{staterestrict}. The lifetime of SLC STT-RAM devices fabricated so far is less than $4 \times 10 ^{15}$ cycles, which is very close to that of conventional memories. For MLC STT-RAM, however, the larger write current exponentially degrades the lifetime \cite{endurance}. So, to benefit from the larger capacity of MLC STT-RAM, the major weaknesses associated with MLC NVMs, such as low reliability, high dynamic power consumption and shorter lifetime, must be addressed comprehensively. Fortunately, CNN models are naturally robust to some level of inaccuracy. In other words, the prediction accuracy will not drop significantly if the weights change slightly, either intentionally by the designer to reduce the space, or because the memory technology substrate is not highly reliable \cite{Stochastic,DeepBurning}. However, naively replacing the memory system with a less reliable one may impact the accuracy, as we show in later sections. Thus, we seek a larger MLC STT-RAM memory to replace the traditional SRAM memories while maintaining the prediction accuracy. In this paper, we propose two simple yet effective schemes to efficiently tolerate soft errors and, at the same time, reduce the energy dissipation. The first scheme, called \emph{Sign-Bit Protection}, utilizes an unused bit in the half-precision floating-point representation to duplicate the sign bit. Based on our experiments, sign-bit errors are the main contributor to accuracy loss, and thus the sign bit must be protected separately. We show that protecting the sign bit can be done for free, because we can duplicate it in an unused bit. The key observation behind this scheme is that the weights are normalized between -1 and 1 after each convolutional layer, so the second bit in the half-precision floating-point representation remains unused.
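The unused-bit observation can be checked numerically: in IEEE 754 half precision (1 sign, 5 exponent, 10 mantissa bits), any value with magnitude below 1 has a biased exponent of at most 01110, so the bit right after the sign is always 0. A sketch follows; the duplication step mirrors the Sign-Bit Protection idea, and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# Normalized weights strictly inside (-1, 1); the small margin avoids
# values that would round up to +/-1.0 in half precision.
w = rng.uniform(-0.999, 0.999, size=10_000).astype(np.float16)

bits = w.view(np.uint16)   # raw binary16 patterns
bit14 = (bits >> 14) & 1   # the bit right after the sign bit

assert not bit14.any()     # bit 14 is free for every |w| < 1

# Sign-Bit Protection: copy the sign (bit 15) into the free bit 14, so
# the (sign, copy) pair is always "00" or "11".
protected = bits | ((bits & 0x8000) >> 1)
```

After duplication, the two most significant bits always form one of the robust "00"/"11" patterns, which is the property the scheme exploits.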
This duplication allows us to change the cell mode from the vulnerable MLC mode to the safe and reliability-friendly SLC mode. Additionally, we propose a data reformation scheme where, by manipulating the data, we increase the error resiliency of the system. The key observation behind the second scheme is that some bit patterns are power-friendly and at the same time more robust to soft errors (e.g., in a 2-bit MLC STT-RAM, "00" is easier to program and also has higher soft error resiliency \cite{staterestrict}), while other patterns are not. We manipulate the content of the data block with simple operations such as rotation and rounding to increase the number of reliability- and power-friendly patterns and minimize the power-hungry and vulnerable patterns. Combining these two schemes guarantees a prediction accuracy as good as that of the error-free baseline while increasing the energy efficiency. Our experimental results, taken from the TensorFlow \cite{TensorFlow} platform and SCALE-Sim models \cite{SCALESim}, show that our scheme can provide 89\% and 97\% top-5 prediction accuracy for ImageNet and VGG16 while reducing the read and write energy dissipation by 9\% and 6\%, respectively. Our scheme needs 2 bits per 16 bits (12.5\%) and 2 bits per 64 bits (3.125\%) of storage overhead for the most energy-efficient and the energy-balanced systems, respectively, while providing the same level of accuracy as the baseline. \section{Background} In this section, we provide the required background on CNN accelerators and MLC STT-RAM. \label{sec:Background} \subsection{CNN Accelerator} Deep neural networks (DNNs) have become a very reliable solution for addressing many energy-constrained problems over the last few years \cite{Energy,footprint}. Since CNN operations require a large amount of data and computation, using proper hardware for this purpose is inevitable.
Accelerators are energy-efficient devices that can carry out simple computations in a very effective manner. A typical accelerator-based architecture is shown in Fig. \ref{fig:base}. In this system, a general-purpose architecture is connected to many Processing Elements (PEs) through a shared medium. A DMA engine handles data transfers between the main memory, the CPU, and the accelerators. On the accelerator side, there are three large buffers holding inputs, weights, and outputs. The buffers are responsible for keeping the PEs busy, aiming for higher throughput. PEs are processing components that can compute simple functions such as add-multiply-sum operations. Eyeriss \cite{Eyeriss} categorizes systolic arrays into five classes: Output Stationary (OS), Weight Stationary (WS), Input Stationary (IS), Row Stationary (RS), and No Local Reuse (NLR). The difference lies in how tasks are assigned to the computing nodes and how weights and inputs are distributed across the PEs. In each class, the weight matrices are mapped differently to a given MAC unit and are not replaced until the computation is completed. These design choices have their own advantages and disadvantages; however, without loss of generality, we assume that our baseline is a weight-stationary system. The insight is that by keeping the weights longer on the PEs, rather than in an NVM buffer, the system becomes more reliable. \begin{figure}[t] \centering \includegraphics[width=3.4in]{Figures3/Systolic.pdf} \caption{Accelerator-based architecture for CNN operation} \label{fig:sub-first} \end{figure} \begin{figure*} \centering \includegraphics[width=5in]{Figures3/MLC-STT.pdf} \label{fig:sub-second} \caption{Multi-level cell STT-RAM. (a) Series MLC MTJ structure; (b) 2-step write operation; (c) 2-step read operation.} \label{fig:base} \end{figure*} \subsection{MLC STT-RAM Basics} MLC STT-RAM relies on a magnetic tunneling junction (MTJ) to store each bit.
Basically, an STT-RAM cell has two layers: a free layer and a reference layer. The relative magnetic orientation of these layers determines the value stored in the cell. The magnetization direction of the reference layer is fixed, while that of the free layer can change. To store a logic "0" in a cell, a current is applied from the reference layer to the free layer so that the magnetization direction of the free layer becomes the same as that of the fixed layer. This configuration, called parallel, has the lowest resistance and represents logic "0". The magnetization directions of the two layers can also be arranged so that the cell resistance becomes very high, representing logic "1". This design is called Single-Level Cell (SLC) and stores one bit per cell. The structure can be extended so that 2 or even more bits are stored in each cell. For this purpose, two MTJs are stacked to create four distinct resistance levels, leading to a 2-bit cell configuration. One MTJ is sized larger, namely the \emph{hard bit}, and the smaller MTJ is called the \emph{soft bit}. Fig. \ref{fig:base} shows the structure of the hard and soft bits. Programming a 2-bit MLC STT-RAM takes two steps: first the soft bit is programmed, and then the hard bit is realized. More specifically, in the first step we can program the MTJ either to "00" or "11". Then, in the next step, by applying another pulse, we can reach "01" or "10". Fig. \ref{fig:base}(b) shows the programming process of serial MLC STT-RAM; the solid lines indicate the first step and the dashed lines the second step. For example, by applying a high current pulse to the STT cell, we can go from "00" to "11". Then, by applying another pulse, we can adjust the least significant bit. The read operation in a 2-bit MLC STT is performed as follows. First, a small current is applied to the cell and the resistance is compared to a reference value.
Based on the result of this comparison, another pulse is applied and the result is compared against a second reference value. This approach is very similar to a binary search, where based on each comparison result we narrow down the search space. As an example, suppose the value "10" has been stored in the cell. To read the content, we first apply a small current and compare the observed resistance (or voltage) to $V_{ref0}$. If the system is not faulty, the result of this comparison leads us to apply the second pulse and compare the result to $V_{ref2}$. This comparison tells us that the voltage is lower than $V_{ref2}$ and higher than $V_{ref0}$, and the stored value is realized. The only consideration here is that the read current must be small enough not to disturb the value of the cell. \subsection{Reliability of MLC STT-RAM Cells} Process variations and thermal fluctuations in the MTJ switching process are the two main sources of unreliability and inefficiency in MLC STT-RAM cells. Process variations cause deviations of the electrical and magnetic characteristics of MTJs from their nominal values, which leads to read and write errors in STT-RAM \cite{staterestrict}. Furthermore, thermal fluctuations perturb the resistance switching process of MTJs, causing uncertainty in the switching time. Write errors happen when the programming current is removed before the MTJ switching process is completed \cite{statisticalSTT}. In SLC cells, raising the amplitude of the programming current reduces the MTJ switching time and improves write reliability \cite{ISCAS}. In MLC cells, however, since the resistance difference between hard and soft bits is small, raising the amplitude of the programming current in a soft transition may flip the resistance state of the large MTJ and overwrite the value held by the cell. Sensing errors and read disturbance are the two main sources of read-operation failures in MLC STT-RAMs \cite{staterestrict}.
A sensing error happens when the MTJ resistance state cannot be verified before the end of the sensing period due to a small or false sense margin. In MLC STT-RAM the sense margin between adjacent states is smaller, and therefore distinguishing the resistance states is harder than in SLC. Read disturbance occurs when the read current changes the resistance state of the MTJ; this is exacerbated by thermal fluctuations. Since the probability of read disturbance in MLC STT-RAM cells is very low, it is ignored in most analyses \cite{staterestrict}. \section{Related Work} \label{sec:Related} This section presents an overview of recent work on designing energy-efficient on-chip memory for CNN accelerators. Most of the computations in CNNs/DNNs are matrix/vector multiplications and additions. There exists a considerable body of literature on performing these computations efficiently in hardware via GPUs \cite{GPU}, FPGA devices \cite{FPGA}, and custom ASIC devices \cite{Diannao,Eyeriss,SCALEDEEP}. There have been studies investigating memory-footprint reduction through pruning between layers in neural networks \cite{DeepCompression,NIPS1992_647}. Some works reduce the precision of the network's parameters to lower the number of required bits \cite{ReducedPrecisionSF}; however, reduced-precision policies can degrade CNN accuracy. The authors in \cite{error_resilience} take advantage of the error resiliency of machine-learning applications to design energy-efficient accelerators. They employ a hybrid SLC/MLC memory to address the reliability issues of the MLC system: some cells are written in MLC mode and the rest in SLC mode, selectively, with the aim of increasing overall reliability. The clear weakness of such a design is that the effective capacity of the memory system is reduced and the full potential of the MLC design is not unleashed. In our architecture, we do not sacrifice capacity at all, and all cells operate in MLC mode.
NVM-based neural-network accelerators have been the subject of research in \cite{PIM, PipeLayer,CELIA,DSE,MAXNVM}. These NVM-based accelerators usually focus on the fully connected layers to evaluate their ideas, because a fully connected layer has many more weights than a convolutional layer, and managing this amount of data matters more than managing convolutional-layer weights. It must be noted that in some works NVM is used as logic, not necessarily as a memory component. Other authors have focused on energy-efficient STT-RAM, employing techniques at both the circuit and architecture levels \cite{STTCache,ReliableSTT,Cacherevive}. These works address the high write energy of STT-RAM while preserving the accuracy of read and write operations. The authors in \cite{Stochastic} proposed embedding STT-RAM devices into neural networks as neurons, claiming that magnetic tunnel junctions can be interpreted as stochastic memristive synapses for neuromorphic systems. Another method, proposed by \cite{spintronic}, is a quality-configurable single-level-cell STT-RAM memory array that stores data with different accuracy levels based on application requirements. All of the mentioned techniques are designed for special purposes and cannot be used in general neural-network accelerators. The authors in \cite{CNNBuffer} applied a precision-tunable MLC STT-RAM buffer to energy-efficient general-purpose neural-network accelerators. That work leverages the error-resilience feature of neural networks to tolerate the reliability issues of MLC STT-RAMs and uses a 16-bit fixed-point number system for representing data and weights. Our work builds on top of this work and further improves the reliability. \section{Motivation} \label{sec:Motivation} There are two motivations behind this work.
The first is that the limited range of the weights leads to a situation where not all numbers covered by the IEEE half-precision floating-point representation are used. This leaves unused bits in the representation that can serve as backup for other cells. By carefully deciding which bits to back up, we can improve the reliability of the system. The second observation is that the MLC STT programming process is asymmetric: the bit patterns "11" and "00" require less power to program, while the patterns "10" and "01" are energy-hungry. This feature can be exploited to enhance the energy efficiency of the system by increasing the number of "11" and "00" patterns through data manipulation. In the following subsections, we first investigate these two observations in more detail and then propose two schemes to exploit them. \subsection{Limited Range of Weights} Many previous works have noticed that weights in CNNs span a short range \cite{WeightNormalization}; this has been the insight behind many pruning and quantization schemes \cite{DeepCompression,NIPS1992_647,Systematic}. According to these works, weights are limited to between -1 and 1 \cite{WeightNormalization}, because a weight normalization is performed after every convolutional layer. With this observation in mind, we show that the second bit of these numbers is never used. For better understanding, we illustrate this phenomenon with an example. Fig. \ref{fig:Float} shows four special numbers, "-1.0", "+1.0", "+1.99", and "+2.0", in full-precision floating-point representation. The first two rows of the figure represent "-1.0" and "1.0", the largest-magnitude numbers required by a CNN. The first bit indicates the sign: negative in the first row and positive in the second row. Then, in the exponent region, all bits are set except the second bit. These two cases show the largest-magnitude numbers that can be obtained while the second bit is unused.
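This bit-level behavior can be checked directly with a few lines of Python (our own illustrative snippet, not part of the proposed design; `float_bits` is a helper name we introduce). Following the figure's full-precision example, it prints the 32-bit IEEE 754 patterns of the four numbers: for any magnitude below 2, the second bit of the pattern, i.e., the most significant exponent bit, stays 0.

```python
import struct

def float_bits(x):
    """32-bit IEEE 754 pattern of x as a string of '0'/'1' characters."""
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

for x in (-1.0, 1.0, 1.99, 2.0):
    b = float_bits(x)
    # b[0] is the sign bit, b[1] is the MSB of the exponent ("the second bit")
    print(f"{x:+5.2f}  sign={b[0]}  second bit={b[1]}  {b}")
```

Running this shows the second bit is "0" for -1.0, 1.0, and 1.99, and becomes "1" only at 2.0, confirming that weights in [-1, 1] leave it free.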
Fortunately, if we do not use the second bit, we can still cover any number between -1 and 1, because the largest-magnitude numbers are already covered. To reach any number between -1 and 1, we need to either reduce the exponent or increase the mantissa, and in both cases the second bit remains untouched. If the same exponent as in the two previous rows is used (i.e., "01111111") and a non-zero value is picked for the mantissa, the number would have a magnitude greater than 1, as is the case in the third row of the example. So we can conclude that for any number between -1 and 1 the second bit is not used, and it is a good candidate to be borrowed to host the sign bit. The very first number that utilizes the second bit is "+2.0", shown in the last row, which is not used in CNNs anyway. More clearly, when the second bit is one, the exponent value is $2^7 = 128$, from which the bias of 127 is subtracted; with all mantissa bits zero, the mantissa value is 1.0, hence $1.0 \times 2^{128-127} = 2$. Note that while the second bit is unused, the first bit is used frequently, because the weights and parameters of CNNs contain roughly equal numbers of negative and positive values. Therefore, the patterns "10" and "00" occur often in the first two bits. If the number is positive, we have "00", which is reliable to store. However, if the number is negative, we get the pattern "10", which is highly vulnerable to error and also requires a large amount of power during programming. Later, we show how the second bit can be utilized to store the weights safely. \begin{figure*} \centering \includegraphics[width=5in]{Figures3/example_32.pdf} \caption{IEEE standard 754 floating-point representation. The first bit is the sign bit, the next 8 bits are the exponent, and the rest are the mantissa.
Four numbers are shown: -1, 1, 1.99, and 2, in full-precision floating-point representation.} \label{fig:Float} \end{figure*} \subsection{MLC STT Programming Asymmetry} As mentioned in the previous section, when an MLC STT-RAM is programmed, the patterns "00" and "11" need one iteration to finish, while the patterns "10" and "01" require two iterations (Fig. \ref{fig:base}). Basically, in the first iteration the cell is programmed to "00" or "11", and then either the process stops or another step is taken to put the cell into "01" or "10". In this sense, MLC STT-RAM programming is content-dependent: at 2-bit granularity, the patterns "11" and "00" consume less power, while the patterns "10" and "01" are slow and consume high power. Therefore, power consumption can be reduced by any scheme that manipulates the data block so that it contains fewer "01" and "10" patterns. Interestingly, the patterns "11" and "00" are also more resilient to soft errors, because these two states are the base states and thus the cell has higher stability. In other words, the patterns "11" and "00" are both power- and reliability-friendly. In the next section, we show how, by employing simple operations, the number of vulnerable bits in CNNs can be reduced to better tolerate soft errors and, at the same time, reduce power consumption. \section{The Proposed Scheme} \label{sec:proposed} \subsection{Schemes} We rely on the fact that the second most significant bit is always unused and can be utilized to save the sign (MSB) bit. Also, for MLC STT-RAM, programming the patterns "00" and "11" takes one iteration, while reaching "01" or "10" requires a second iteration. Our goal is twofold: first, protecting the sign bit, and second, reforming the bit stream in such a way that the number of "10" and "01" patterns is reduced. In this regard, we introduce three reformations: 1) No Change, 2) Rotate Right by One, and 3) Rounding to Nearest.
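Before walking through each reformation, the per-weight selection can be sketched as follows. This is our own illustrative Python, not the authors' implementation: `rotate_right` moves the last bit of the 16-bit stream to the front, `round_last4` remaps the last four bits using the 4-bit rounding map of this section (values 0-3 → "0000", 4-7 → "0011", 8-11 → "1100", 12-15 → "1111"), and the scheme leaving the fewest soft ("01"/"10") cells wins, with No Change preferred on ties.

```python
# Sketch: pick the reformation (NoChange/Rotate/Round) that minimizes the
# number of soft ("01"/"10") 2-bit cells in a 16-bit weight bit string.
ROUND4 = ["0000"] * 4 + ["0011"] * 4 + ["1100"] * 4 + ["1111"] * 4

def counts(bits):
    """Occurrences of each 2-bit pattern in the bit string."""
    pairs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    return {p: pairs.count(p) for p in ("00", "01", "10", "11")}

def soft_bits(bits):
    """Number of two-step (power-hungry, error-prone) cells."""
    c = counts(bits)
    return c["01"] + c["10"]

def rotate_right(bits):
    """Rotate the whole bit stream right by one position."""
    return bits[-1] + bits[:-1]

def round_last4(bits):
    """Round the last four bits to the nearest MLC-friendly pattern."""
    return bits[:-4] + ROUND4[int(bits[-4:], 2)]

def best_scheme(bits):
    candidates = [("NoChange", bits),
                  ("Rotate", rotate_right(bits)),
                  ("Round", round_last4(bits))]
    # min() keeps the first candidate on ties, preferring NoChange
    return min(candidates, key=lambda kv: soft_bits(kv[1]))

w = "0001110001010011"  # the weight 0.004222 from the example table
print(counts(w), best_scheme(w)[0])
```

On the first example weight this reports pattern counts of 3/3/0/2 and selects No Change, matching the example table later in this section; for the third example weight (0.0004982), rounding its last four bits "0101" yields "0011" and wins.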
The first operation is called No Change because the weight is written as is. The second operation, Rotate Right by One, rotates the weight by one bit to the right. This helps when patterns such as "10XXX01" appear in the bit stream: after rotating by one bit, error-resilient patterns are placed next to each other ("110XXX0"). Finally, the third operation rounds the weight to the nearest MLC-friendly value. To determine how many bits can safely be rounded, we conducted an experiment in which one million random numbers between -1 and 1 were generated; we flip one bit at a time and measure the error as the Error Sum of Squares (SSE). Fig. \ref{fig:SSE} shows the SSE when different bit positions of the half-precision floating-point representation are flipped. As can be seen from the figure, the last 4 bits have a small impact and the SSE is very low. We limit the region to which this scheme is applied, because expanding it further (from the last 4 bits to the last 8 bits) would drastically drop the classification accuracy. Based on our experiments, to maintain accuracy it is best to round only the last 4 bits. So, we take the last four bits and round them to an MLC-friendly value. Since there are four MLC-friendly values ("0000", "0011", "1100", and "1111") in a 4-bit stream, we divide the 16 possible values uniformly into four classes and remap each class to one MLC-friendly value. The rounding process is shown in Tab. \ref{tab:rounding}: the first four values are mapped to "0000", and so on. \begin{figure*} \centering \includegraphics[width=5in]{Figures3/STT-Precision.pdf} \caption{SSE when different bit positions of the half-precision floating-point representation are flipped.} \label{fig:SSE} \end{figure*} \begin{table}[] \caption{\label{tab:rounding} Rounding the bit patterns to MLC-friendly values.} \begin{tabular}{|c|c|c|c||c|} \hline \multicolumn{4}{|c||}{Values} & Rounded \\ \hline 0000 & 0001 & 0010 & 0011 & 0000 \\ \hline\hline 0100 & 0101 & 0110 & 0111 & 0011 \\ \hline\hline 1000 & 1001 & 1010 & 1011 & 1100 \\ \hline\hline 1100 & 1101 & 1110 & 1111 & 1111 \\ \hline\hline \end{tabular} \end{table} For better understanding, we explain each scheme with the examples shown in Tab. \ref{tab:Example} and Fig. \ref{fig:Hybrid}. We take three weights and try to reduce the number of soft bits in each. The first example in Tab. \ref{tab:Example} is "0.004222". Looking at its binary representation in half-precision floating-point, the patterns "00", "01", "10", and "11" occur 3, 3, 0, and 2 times, respectively. We then compare the total count of "11" and "00" patterns against that of "10" and "01" after applying each of the three reformation schemes. As the last column of the table shows, it is better to write this value unchanged, since neither of the other schemes helps. The next example in Tab. \ref{tab:Example} is 0.020614, whose binary representation contains 2, 4, 1, and 1 occurrences of "00", "01", "10", and "11", respectively. If we rotate the bit stream by one, as shown in the third row of the table, the number of soft bits is reduced from 5 to 3; hence, in this situation, storing the weight in rotated format is the best option. Finally, the last example in Tab. \ref{tab:Example} is 0.0004982. As the table shows, the number of "00" and "11" patterns is 4 for both the No Change and Rotate modes. Since CNNs are robust to inaccuracies, we round the last four bits to the nearest MLC-friendly value based on the mapping in Tab. \ref{tab:rounding}.
For this particular example, since the last four bits are "0101", we round them to "0011". Doing so reduces the number of soft bits to 2. \begin{table*}[] \centering \caption{\label{tab:Example} Examples of selection among the three schemes (NoChange, Rotate, and Round).} \begin{tabular}{|l|c||c|c|c|c|c|c|c|} \hline \textbf{Weight} & \textbf{Binary} & \multicolumn{2}{c|}{\textbf{Operation}} & \textbf{00} & \textbf{01} & \textbf{10} & \textbf{11} & \textbf{Best} \\ \hline \hline \multirow{3}{*}{0.004222} & \multirow{3}{*}{00 01 11 00 01 01 00 11} & NoChange & 00 01 11 00 01 01 00 11 & 3 & 3 & 0 & 2 & \checkmark \\ \cline{3-9} & & Rotate & 00 10 11 10 00 10 10 01 & 2 & 1 & 4 & 1 & \\ \cline{3-9} & & Round & 00 01 11 00 01 01 00 00 & 4 & 3 & 0 & 1 & \\ \hline \hline \multirow{3}{*}{0.020614} & \multirow{3}{*}{00 10 01 01 01 00 01 11} & NoChange & 00 10 01 01 01 00 01 11 & 2 & 4 & 1 & 1 & \\ \cline{3-9} & & Rotate & 00 11 00 10 10 10 00 11 & 3 & 0 & 3 & 2 & \checkmark \\ \cline{3-9} & & Round & 00 10 01 01 01 00 00 11 & 3 & 3 & 1 & 1 & \\ \hline \hline \multirow{3}{*}{0.0004982} & \multirow{3}{*}{00 01 00 00 00 01 01 01} & NoChange & 00 01 00 00 00 01 01 01 & 4 & 4 & 0 & 0 & \\ \cline{3-9} & & Rotate & 00 10 10 00 00 00 10 10 & 4 & 0 & 4 & 0 & \\ \cline{3-9} & & Round & 00 01 00 00 00 01 00 11 & 5 & 2 & 0 & 1 & \checkmark \\ \hline \hline \end{tabular} \end{table*} \subsection{Overhead Analysis and Metadata} We have to maintain metadata recording the mode of each weight (NoChange, Rotate, or Round). The metadata itself has to be stored in our STT-RAM memory, which is unreliable, and losing metadata may cause severe damage, because a rotation can change the absolute value of the floating-point representation significantly. To overcome this difficulty, rather than defining four schemes to fully utilize 2-bit metadata, we propose three schemes and store the metadata in a tri-level MLC, not a 2-bit MLC.
As shown by many previous works, tri-level MLC is very reliable (close to SLC) \cite{staterestrict}. In fact, tri-level STT provides a better error rate by sacrificing information density: three states are realized instead of four. Using tri-level STT guarantees that our metadata is safe and will not cause any malfunction. From the storage-overhead point of view, we need to store 2 bits per 16-bit weight, leading to an overhead of 12.5\%. To reduce this overhead, we propose a grouping-based approach in which weights are wrapped together and the best scheme is selected for each block of weights. For example, we can apply our scheme at a granularity of four, where the three proposed schemes are examined for four weights together. Grouping weights may slightly reduce the chance of finding the best scheme for each individual weight, but it reduces the storage overhead significantly. \begin{figure*}[t] \centering \includegraphics[width=5in]{Figures3/Hybrid.pdf} \caption{Final state of the bit streams for the three example weights} \label{fig:Hybrid} \end{figure*} \noindent{\textbf{Putting it all together }} Note that rotation and rounding are kept local to each weight: we apply the rotate scheme to each weight and count the soft and hard bits, and likewise count the soft bits for the unchanged and rounded versions. Then, for each of NoChange, Rotate, and Round, we sum the numbers of soft bits across the weights of a block and choose the best scheme as the final mode for the block. Fig. \ref{fig:Hybrid} shows the final bit-stream format for the three examples of Tab. \ref{tab:Example}. For the first case, only the sign bit is protected, by duplicating it into the second bit. For the second case, in addition to protecting the sign bit, we rotate the bit stream by one to the right and then write the weight into the buffer.
Finally, in the case of rounding, the last two cells are managed to store the nearest MLC-friendly number. Tab. \ref{tab:Overhead} shows the storage overhead for different granularities. As can be seen, the overhead can be reduced to less than 1\%. \begin{table*}[] \centering \caption{\label{tab:Overhead} Storage overhead for different granularities} \begin{tabular}{|l||l|} \hline Granularity & Overhead \\ \hline 1 & 2 bits / 16 bits = 12.5\% \\ \hline 2 & 2 bits / 32 bits = 6.25\% \\ \hline 4 & 2 bits / 64 bits = 3.125\% \\ \hline 8 & 2 bits / 128 bits = 1.5625\% \\ \hline 16 & 2 bits / 256 bits = 0.78125\% \\ \hline \end{tabular} \end{table*} \section{Methodology} \label{sec:Methodology} \noindent \textbf{Classification accuracy } We use Google TensorFlow \cite{TensorFlow} to evaluate two state-of-the-art models, VGG16 and Inception V3, on the ImageNet dataset, using transfer learning to train the models. The networks are trained for 30 epochs with a batch size of 100. \noindent\textbf{Error model } To model the errors induced by the MLC STT-RAM substrate, we use a previously proposed model \cite{compression}, in which read and write error rates are separated and faults are injected accordingly. To inject the errors, we assume all "00" and "11" cells are immune to soft errors because these two states are highly stable, while for "01" and "10" we use a uniform fault injector to flip a bit. The fault-injection probabilities are taken from Ref. \cite{staterestrict} and lie in the range of $1.5 \times 10^{-2}$ to $2 \times 10^{-2}$. To incorporate the error model, we read all pre-trained weights and inject faults into the entire set; we then store these weights to be used during inference. We do not retrain the model, because faults are introduced by the memory substrate at inference time and, given the complexity of error detection, will go undetected.
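A minimal sketch of this fault-injection model (our own illustrative code, not the cited simulator): the weight is viewed as a string of 2-bit cells, stable "00"/"11" cells are left intact, and each vulnerable "01"/"10" cell has its soft bit flipped with the reported probability. Flipping only the soft (least significant) bit of a cell is a simplifying assumption on our part.

```python
import random

P_ERROR = 0.015  # lower end of the reported range, 1.5e-2 .. 2e-2

def inject_faults(bits, p=P_ERROR, rng=random):
    """Flip the soft bit of each vulnerable ("01"/"10") cell with prob. p."""
    out = []
    for i in range(0, len(bits), 2):
        cell = bits[i:i + 2]
        if cell in ("01", "10") and rng.random() < p:
            # flip the least significant (soft) bit of the cell
            cell = cell[0] + ("1" if cell[1] == "0" else "0")
        out.append(cell)
    return "".join(out)

random.seed(0)
print(inject_faults("0001110001010011"))
```

Applying this to every stored weight before inference reproduces the evaluation setting: "00"/"11" cells never change, while "01"/"10" cells are occasionally corrupted.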
It is therefore not feasible to fine-tune the network after faults happen, so, to be fair, we do not retrain the network. Finally, we report the classification accuracy to compare the different systems. \noindent\textbf{Energy model} We use NVSim \cite{NVSIM} to evaluate the energy consumption of the proposed system, together with the per-bit energy costs reported in Tab. \ref{tab:Cost} for the soft- and hard-bit read and write operations. \noindent\textbf{Bandwidth model} SCALE-Sim \cite{SCALESim} is used to calculate the bandwidth of our systolic array. This simulator faithfully models a systolic array in which all buffers are double-buffered. \begin{table*}[t] \centering \caption{\label{tab:Cost} Soft- and hard-bit costs of reading and writing.} \begin{tabular}{|l||c|c|l|} \hline \multicolumn{1}{|c||}{} & SLC STT-RAM & MLC STT-RAM & \multicolumn{1}{c|}{Hybrid} \\ \hline \hline Read latency (cycle) & 13 & 19 & Soft: 14, Hard: 20 \\ \hline Write latency (cycle) & 49 & 90 & Soft: 50, Hard: 95 \\ \hline Read energy (nJ) & 0.415 & 0.424 & Soft: 0.427, Hard: 0.579 \\ \hline Write energy (nJ) & 0.876 & 1.859 & Soft: 1.084, Hard: 2.653 \\ \hline \end{tabular} \end{table*} \section{Evaluated Results} \label{sec:Results} In this section, we study the impact of our schemes on two models, VGG16 and Inception V3, and show results for five granularities: 1, 2, 4, 8, and 16 words. \noindent{\textbf{Bit count comparison }} The number of occurrences of each bit pattern has a direct relation to power consumption and performance. In this experiment, we count how often the different patterns occur. Fig. \ref{fig:BC_XX_Gran} shows the bit counts for six systems, the baseline plus the proposed scheme with five different granularities, reported separately for VGG16 and Inception V3. As can be seen from the figure, Granularity\_1 shows the highest number of "00" and "11" patterns; as the granularity increases, the number of "11" and "00" patterns decreases.
However, the drop is not very significant: we lose only 5\% of these patterns when increasing the granularity from 1 to 16. Note that as the granularity increases, the storage overhead goes down, as shown in Tab. \ref{tab:Overhead}. Fig. \ref{fig:BC_XX_Gran} also shows that in VGG16 the "01" pattern count increases with granularity, while Inception V3 exhibits the opposite trend: the "01" pattern stays the same, but "10" increases. \begin{figure*} \centering \includegraphics[width=5in]{Figures3/BC_XX_Gran.pdf} \caption{Bit counts for six systems: the baseline and the proposed scheme with five different granularities.} \label{fig:BC_XX_Gran} \end{figure*} \noindent{\textbf{Energy consumption }} Fig. \ref{fig:mot} (left) shows the energy consumption of read and write operations for the different granularities. All granularities consume less energy than the baseline. For example, for Granularity\_1 of VGG16 the read energy is reduced by 8\%, and for the largest granularity by 7\%. For Inception V3, the read-energy reduction is almost 8\%, while the write energy is reduced by 5\%. Similar to the bit-count results, the gain degrades as the granularity increases, because fewer blocks benefit from any scheme other than NoChange and the system approaches the baseline. \noindent\textbf{Classification accuracy } Fig. \ref{fig:mot} (right) compares the accuracy of four systems: 1) the unprotected baseline, 2) baseline+Rounding, 3) baseline+Rotate, and 4) baseline+Rounding+Rotate (hybrid). The accuracy of both models in the error-free scenario is also shown with dotted lines. When the system is unprotected (first bar), the classification accuracy drops significantly, from 0.97 and 0.88 to 0.69 and 0.74, respectively. We now add our schemes to the system one by one to observe their impact.
First, we include rounding (second bar): adding rounding to the baseline increases the accuracy by 12\% and 11\%, respectively. Next, the rotate scheme is added to the baseline, boosting the accuracy to 0.84 and 0.89; on its own, this scheme is slightly better than the rounding scheme. Finally, the hybrid scheme is applied, where hybrid refers to a system that picks the best of NoChange, Rotate, and Rounding. For the hybrid system, the classification accuracy reaches the level of the error-free scenario: our hybrid scheme provides accuracy as good as the error-free baseline, whereas systems 2 and 3 do slightly worse. This figure shows that the storage overhead can be reduced further by applying only one scheme, but at the cost of lower accuracy. \begin{figure*}[t] \centering \includegraphics[width=5in]{Figures3/Energy_Gran.pdf} \caption{Energy consumption for different granularities.} \end{figure*} \begin{figure*} \centering \includegraphics[height=1.4in]{Figures3/Accuracy.pdf} \caption{Accuracy for four different systems: 1) unprotected baseline, 2) baseline+Rounding, 3) baseline+Rotate, and 4) baseline+Rounding+Rotate (hybrid).} \label{fig:mot} \end{figure*} \noindent \textbf{Bandwidth} Fig. \ref{fig:BW} shows the bandwidth of the memory subsystem for off-chip and on-chip traffic. Since each network has many layers and a systolic array has three separate buffers, we report the maximum off- and on-chip bandwidth for the top-3 layers in terms of bandwidth, to account for the worst case. The size of the on-chip memory is varied from 256 KB to 2048 KB (a ratio of 4). The 256 KB system is an SRAM-based design; the rest represent systems equipped with MLC STT-RAM. For both off-chip and on-chip traffic, the required bandwidth is reduced significantly.
For instance, in the \emph{Conv11} layer of VGG16, the bandwidth is reduced from 25.5 bytes per cycle to roughly 17.1 bytes per cycle. Inception V3 benefits even more from larger MLC STT-RAM buffers: its required maximum bandwidth drops to 10 bytes per cycle with an STT-RAM size of 2048 KB. For VGG16's on-chip traffic, the MLC STT-RAM is quite useful: the on-chip traffic is reduced by 24\% for \emph{Conv12}. For Inception V3, the on-chip bandwidth stays the same for two layers and is slightly reduced for one layer. Although the on-chip traffic is larger than the off-chip traffic, the advantages of the MLC STT-RAM are still apparent. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{Figures3/BW.pdf} \end{center} \caption{Maximum on-chip and off-chip bandwidth for different sizes of MLC STT-RAM.} \label{fig:BW} \end{figure*} \section{Conclusion} \label{sec:conclusion} CNNs are becoming more and more popular due to their high accuracy and robustness. Newer models need larger memories to store their weights on-chip. To avoid costly off-chip transactions, one solution is to increase memory capacity by employing an emerging memory technology such as MLC STT-RAM. To address the reliability and high dynamic power consumption problems associated with MLC STT-RAM, we propose a simple yet effective scheme that concurrently increases reliability and reduces power consumption. Our hybrid scheme leverages the fact that read and write operations are content-dependent, and thus data manipulation can affect access cost. In this regard, we devise rounding and rotating mechanisms that change the data block in such a way that the number of error-resilient patterns increases while the number of high-power patterns decreases. We choose the best option among the unchanged, rotated, and rounded formats to achieve the highest level of reliability and accuracy.
To overcome the difficulty of metadata management, we propose a grouping mechanism that combines several blocks together to further reduce the storage overhead. Our experimental results show that we can maintain the same level of accuracy as the baseline while reducing the read and write energy consumption by 9\% and 6\%, respectively. \bibliographystyle{ieeetr}
\section{Introduction} \label{sec:intro} Model-free reinforcement learning (RL) has been successfully applied to a range of challenging problems \cite{kober2012reinforcement,deisenroth2013survey}, and has recently been extended to handle large neural network policies and value functions \cite{mnih2015human,lillicrap2015continuous,wang2015dueling,heess2015learning,hausknecht2015deep,SchulmanLAJM15}. This makes it possible to train policies for complex tasks with minimal feature and policy engineering, using the raw state representation directly as input to the neural network. However, the sample complexity of model-free algorithms, particularly when using very high-dimensional function approximators, tends to be high~\cite{SchulmanLAJM15}, which means that the benefit of reduced manual engineering and greater generality is not felt in real-world domains where experience must be collected on real physical systems, such as robots and autonomous vehicles. In such domains, the methods of choice have been efficient model-free algorithms that use more suitable, task-specific representations \cite{peters2010relative,deisenroth2013survey}, as well as model-based algorithms that learn a model of the system with supervised learning and optimize a policy under this model~\cite{deisenroth2011pilco,levine2015end}. Using task-specific representations dramatically improves efficiency, but limits the range of tasks that can be learned and requires greater domain knowledge. Using model-based RL also improves efficiency, but limits the policy to only be as good as the learned model. For many real-world tasks, it may be easier to represent a good policy than to learn a good model. For example, a simple robotic grasping behavior might only require closing the fingers at the right moment, while the corresponding dynamics model requires learning the complexities of rigid and deformable bodies undergoing frictional contact. 
It is therefore desirable to bring the generality of model-free deep reinforcement learning into real-world domains by reducing their sample complexity. In this paper, we propose two complementary techniques for improving the efficiency of deep reinforcement learning in continuous control domains: we derive a variant of Q-learning that can be used in continuous domains, and we propose a method for combining this continuous Q-learning algorithm with learned models so as to accelerate learning while preserving the benefits of model-free RL. Model-free reinforcement learning in domains with continuous actions is typically handled with policy search methods \cite{peters2006policy,peters2010relative}. Integrating value function estimation into these techniques results in actor-critic algorithms \cite{hafner2011reinforcement,lillicrap2015continuous,SchulmanLAJM15}, which combine the benefits of policy search and value function estimation, but at the cost of training two separate function approximators. Our proposed Q-learning algorithm for continuous domains, which we call normalized advantage functions (NAF), avoids the need for a second actor or policy function, resulting in a simpler algorithm. The simpler optimization objective and the choice of value function parameterization result in an algorithm that is substantially more sample-efficient when used with large neural network function approximators on a range of continuous control domains. Beyond deriving an improved model-free deep reinforcement learning algorithm, we also seek to incorporate elements of model-based RL to accelerate learning, without giving up the strengths of model-free methods. One approach is for off-policy algorithms such as Q-learning to incorporate off-policy experience produced by a model-based planner. However, while this solution is a natural one, our empirical evaluation shows that it is ineffective at accelerating learning. 
As we discuss in our evaluation, this is due in part to the nature of value function estimation algorithms, which must experience both good and bad state transitions to accurately model the value function landscape. We propose an alternative approach to incorporating learned models into our continuous-action Q-learning algorithm based on \emph{imagination rollouts}: on-policy samples generated under the learned model, analogous to the Dyna-Q method \cite{sutton1990integrated}. We show that this is extremely effective when the learned dynamics model perfectly matches the true one, but degrades dramatically with imperfect learned models. However, we demonstrate that iteratively fitting local linear models to the latest batch of on-policy or off-policy rollouts provides sufficient \emph{local} accuracy to achieve substantial improvement using short imagination rollouts in the vicinity of the real-world samples. Our paper provides three main contributions: first, we derive and evaluate a Q-function representation that allows for effective Q-learning in continuous domains. Second, we evaluate several na\"{i}ve options for incorporating learned models into model-free Q-learning, and we show that they are minimally effective on our continuous control tasks. Third, we propose to combine locally linear models with local on-policy imagination rollouts to accelerate model-free continuous Q-learning, and show that this produces a large improvement in sample complexity. We evaluate our method on a series of simulated robotic tasks and compare to prior methods. \section{Related Work} \label{sec:related} Deep reinforcement learning has received considerable attention in recent years due to its potential to automate the design of representations in RL. 
Deep reinforcement learning and related methods have been applied to learn policies to play Atari games \cite{mnih2015human,schaul2015prioritized} and perform a wide variety of simulated and real-world robotic control tasks \cite{hafner2011reinforcement,lillicrap2015continuous,levine2013guided,deimportance}. While the majority of deep reinforcement learning methods in domains with discrete actions, such as Atari games, are based around value function estimation and Q-learning \cite{mnih2015human}, continuous domains typically require explicit representation of the policy, for example in the context of a policy gradient algorithm \cite{SchulmanLAJM15}. If we wish to incorporate the benefits of value function estimation into continuous deep reinforcement learning, we must typically use two networks: one to represent the policy, and one to represent the value function \cite{SchulmanLAJM15,lillicrap2015continuous}. In this paper, we instead describe how the simplicity and elegance of Q-learning can be ported into continuous domains, by learning a single network that outputs both the value function and policy. Our Q-function representation is related to dueling networks \cite{wang2015dueling}, though our approach applies to continuous action domains. Our empirical evaluation demonstrates that our continuous Q-learning algorithm achieves faster and more effective learning on a set of benchmark tasks compared to continuous actor-critic methods, and we believe that the simplicity of this approach will make it easier to adopt in practice. Our Q-learning method is also related to the work of \citet{rawlik2013stochastic}, but the form of our Q-function update is more standard.
As in standard RL, model-based deep reinforcement learning methods have generally been more efficient \cite{li2004iterative,watter2015embed,wahlstrom2015pixels,levine2013guided}, while model-free algorithms tend to be more generally applicable but substantially slower \cite{SchulmanLAJM15,lillicrap2015continuous}. Combining model-based and model-free learning has been explored in several ways in the literature. The method closest to our imagination rollouts approach is Dyna-Q \cite{sutton1990integrated}, which uses simulated experience in a learned model to supplement real-world on-policy rollouts. As we show in our evaluation, using Dyna-Q style methods to accelerate model-free RL is very effective when the learned model perfectly matches the true model, but degrades rapidly as the model becomes worse. We demonstrate that using iteratively refitted local linear models achieves substantially better results with imagination rollouts than more complex neural network models. We hypothesize that this is likely due to the fact that the more expressive models themselves require substantially more data, and that otherwise efficient algorithms like Dyna-Q are vulnerable to poor model approximations. \section{Background} \label{sec:bg} In reinforcement learning, the goal is to learn a policy to control a system with states $\bm{x} \in \mathcal{X}$ and actions $\bm{u} \in \mathcal{U}$ in environment $E$, so as to maximize the expected sum of returns according to a reward function $r(\bm{x},\bm{u})$. The dynamical system is defined by an initial state distribution $p(\bm{x}_1)$ and a dynamics distribution $p(\bm{x}_{t+1}|\bm{x}_t, \bm{u}_t)$. At each time step $t \in [1,T]$, the agent chooses an action $\bm{u}_t$ according to its current policy $\pi(\bm{u}_t | \bm{x}_t)$, and observes a reward $r(\bm{x}_t, \bm{u}_t)$.
The agent then experiences a transition to a new state sampled from the dynamics distribution, and we can express the resulting state visitation frequency of the policy $\pi$ as $\rho^\pi(\bm{x}_t)$. Defining $R_t=\sum_{i=t}^T\gamma^{(i-t)}r(\bm{x}_i,\bm{u}_i)$, the goal is to maximize the expected sum of returns, given by $R=\mathbb{E}_{r_{i \geq 1}, \bm{x}_{i \geq 1} \sim E, \bm{u}_{i \geq 1}\sim \pi}[R_1]$, where $\gamma$ is a discount factor that prioritizes earlier rewards over later ones. With $\gamma < 1$, we can also set $T = \infty$, though we use a finite horizon for all of the tasks in our experiments. The expected return $R$ can be optimized using a variety of model-free and model-based algorithms. In this section, we review several of these methods that we build on in our work. \paragraph{Model-Free Reinforcement Learning.} When the system dynamics $p(\bm{x}_{t+1}|\bm{x}_t,\bm{u}_t)$ are not known, as is often the case with physical systems such as robots, policy gradient methods~\cite{peters2006policy} and value function or Q-function learning with function approximation~\cite{sutton1999policy} are often preferred. Policy gradient methods provide a simple, direct approach to RL, which can succeed on high-dimensional problems, but potentially require a large number of samples~\cite{SchulmanLAJM15,schulman2015high}. Off-policy algorithms that use value or Q-function approximation can in principle achieve better data efficiency~\cite{lillicrap2015continuous}. However, adapting such methods to continuous tasks typically requires optimizing two function approximators on different objectives. We instead build on standard Q-learning, which has a single objective. We summarize Q-learning in this section.
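As a concrete illustration of the return defined above, $R_t$ can be computed for an entire trajectory with a single backward pass, using the recursion $R_t = r_t + \gamma R_{t+1}$. This is a minimal Python sketch (the function name is ours, not from the paper):

```python
def discounted_returns(rewards, gamma):
    """Compute R_t = sum_{i=t}^T gamma^(i-t) * r_i for every t,
    via the backward recursion R_t = r_t + gamma * R_{t+1}."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
```

For example, `discounted_returns([1.0, 1.0, 1.0], 0.5)` yields `[1.75, 1.5, 1.0]`: each entry is the discounted sum of the rewards from that step onward.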
The Q function $Q^\pi(\bm{x}_t, \bm{u}_t)$ corresponding to a policy $\pi$ is defined as the expected return from $\bm{x}_t$ after taking action $\bm{u}_t$ and following the policy $\pi$ thereafter: \begin{equation} \label{eq:q} \begin{split} Q^\pi(\bm{x}_t, \bm{u}_t) = \mathbb{E}_{r_{i \geq t}, \bm{x}_{i > t} \sim E, \bm{u}_{i > t}\sim \pi}[R_t|\bm{x}_t,\bm{u}_t] \end{split} \end{equation} Q-learning learns a greedy deterministic policy \mbox{$\bm{\mu}(\bm{x}_t) = \arg\max_{\bm{u}} Q(\bm{x}_t,\bm{u})$}, which corresponds to \mbox{$\pi(\bm{u}_t|\bm{x}_t) = \delta(\bm{u}_t=\bm{\mu}(\bm{x}_t))$}. Let $\theta^Q$ parametrize the action-value function and let $\beta$ be an arbitrary exploration policy; the learning objective is to minimize the Bellman error, where the target $y_t$ is held fixed: \begin{equation} \label{eq:qlearn} \begin{split} L(\theta^Q) &= \mathbb{E}_{\bm{x}_t\sim\rho^\beta,\bm{u}_t\sim\beta,r_t\sim E}[(Q(\bm{x}_t,\bm{u}_t|\theta^Q)-y_t)^2] \\ y_t &= r(\bm{x}_t,\bm{u}_t) + \gamma Q(\bm{x}_{t+1},\bm{\mu}(\bm{x}_{t+1})) \end{split} \end{equation} For continuous action problems, Q-learning becomes difficult, because it requires maximizing a complex, nonlinear function at each update. For this reason, continuous domains are often tackled using actor-critic methods~\cite{konda1999actor,hafner2011reinforcement,silver2014deterministic,lillicrap2015continuous}, where a separate parameterized ``actor'' policy $\pi$ is learned in addition to the Q-function or value function ``critic,'' such as the Deep Deterministic Policy Gradient (DDPG) algorithm~\cite{lillicrap2015continuous}.
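The fixed-target objective in Equation~(\ref{eq:qlearn}) can be sketched in a few lines. This is a hypothetical NumPy fragment, where `next_q` stands for $Q(\bm{x}_{t+1},\bm{\mu}(\bm{x}_{t+1}))$ as evaluated by the (target) network; since the target is held fixed, no gradient would flow through it:

```python
import numpy as np

def bellman_targets(rewards, next_q, gamma):
    """y_t = r(x_t, u_t) + gamma * Q(x_{t+1}, mu(x_{t+1}));
    the target is treated as a constant during the update."""
    return rewards + gamma * next_q

def bellman_loss(q_values, targets):
    """Mean squared Bellman error L(theta^Q) over a batch of transitions."""
    return float(np.mean((q_values - targets) ** 2))
```

In an actual implementation, `q_values` would be the network outputs $Q(\bm{x}_t,\bm{u}_t|\theta^Q)$ on a sampled minibatch, and the loss would be minimized by gradient descent on $\theta^Q$.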
In order to describe our method in the following sections, it will be useful to also define the value function $V^\pi(\bm{x}_t)$ and advantage function $A^\pi(\bm{x}_t, \bm{u}_t)$ of a given policy $\pi$: \begin{equation} \label{eq:va} \begin{split} & V^\pi(\bm{x}_t) = \mathbb{E}_{r_{i \geq t}, \bm{x}_{i > t} \sim E, \bm{u}_{i \geq t}\sim \pi}[R_t|\bm{x}_t] \\ & A^\pi(\bm{x}_t, \bm{u}_t) = Q^\pi(\bm{x}_t, \bm{u}_t) - V^\pi(\bm{x}_t). \end{split} \end{equation} \paragraph{Model-Based Reinforcement Learning.} If we know the dynamics $p(\bm{x}_{t+1}|\bm{x}_t,\bm{u}_t)$, or if we can approximate them with some learned model $\hat{p}(\bm{x}_{t+1}|\bm{x}_t,\bm{u}_t)$, we can use model-based RL and optimal control. While a wide range of model-based RL and control methods have been proposed in the literature \cite{deisenroth2013survey,kober2012reinforcement}, two are particularly relevant for this work: iterative LQG (iLQG) \cite{li2004iterative} and Dyna-Q \cite{sutton1990integrated}. The iLQG algorithm optimizes trajectories by iteratively constructing locally optimal linear feedback controllers under a local linearization of the dynamics \mbox{$\hat{p}(\bm{x}_{t+1}|\bm{x}_t,\bm{u}_t)=\mathcal{N}(\bm{f}_{\bm{x} t}\bm{x}_t + \bm{f}_{\bm{u} t}\bm{u}_t, \bm{F}_t)$} and a quadratic expansion of the rewards $r(\bm{x}_t,\bm{u}_t)$~\cite{tassa2012synthesis}. Under linear dynamics and quadratic rewards, the action-value function $Q(\bm{x}_t, \bm{u}_t)$ and value function $V(\bm{x}_t)$ are locally quadratic and can be computed by dynamic programming.
The optimal policy can be derived analytically from the quadratic $Q(\bm{x}_t, \bm{u}_t)$ and $V(\bm{x}_t)$ functions, and corresponds to a linear feedback controller \mbox{$\bm{g}(\bm{x}_t) = \hat{\bm{u}}_t + \bm{k}_t + \bm{K}_t(\bm{x}_t - \hat{\bm{x}}_t)$}, where $\bm{k}_t$ is an open-loop term, $\bm{K}_t$ is the closed-loop feedback matrix, and $\hat{\bm{x}}_t$ and $\hat{\bm{u}}_t$ are the states and actions of the nominal trajectory, which is the average trajectory of the controller. Employing the maximum entropy objective~\cite{levine2013guided}, we can also construct a linear-Gaussian controller, where $c$ is a scalar to adjust for arbitrary scaling of the reward magnitudes: \begin{equation} \label{eq:ilqg_controller} \begin{split} \pi^{iLQG}_t(\bm{u}_t|\bm{x}_t) = \mathcal{N}(\hat{\bm{u}}_t + \bm{k}_t + \bm{K}_t(\bm{x}_t-\hat{\bm{x}}_t), -cQ_{\bm{u},\bm{u} t}^{-1}) \end{split} \end{equation} When the dynamics are not known, a particularly effective way to use iLQG is to combine it with learned time-varying linear models $\hat{p}(\bm{x}_{t+1}|\bm{x}_t,\bm{u}_t)$. In this variant of the algorithm, trajectories are sampled from the controller in Equation~(\ref{eq:ilqg_controller}) and used to fit time-varying linear dynamics with linear regression. These dynamics are then used with iLQG to obtain a new controller, typically using a KL-divergence constraint to enforce a trust region, so that the new controller does not deviate too much from the region in which the samples were generated \cite{levine2014learning}. Besides enabling iLQG and other planning-based algorithms, a learned model of the dynamics can allow a model-free algorithm to generate synthetic experience by performing rollouts in the learned model. A particularly relevant method of this type is Dyna-Q \cite{sutton1990integrated}, which performs real-world rollouts using the policy $\pi$, and then generates synthetic rollouts using a model learned from these samples.
The synthetic rollouts originate at states visited by the real-world rollouts, and serve as supplementary data for a variety of possible reinforcement learning algorithms. However, most prior Dyna-Q methods have focused on relatively small, discrete domains. In Section~\ref{sec:fictional}, we describe how our method can be extended into a variant of Dyna-Q to achieve substantially faster learning on a range of continuous control tasks with complex neural network policies, and in Section~\ref{sec:experiments}, we empirically analyze the sensitivity of this method to imperfect learned dynamics models. \section{Continuous Q-Learning with Normalized Advantage Functions} \label{sec:normq} We first propose a simple method to enable Q-learning in continuous action spaces with deep neural networks, which we refer to as normalized advantage functions (NAF). The idea behind normalized advantage functions is to represent the Q-function $Q(\bm{x}_t, \bm{u}_t)$ in Q-learning in such a way that its maximum, $\arg\max_{\bm{u}} Q(\bm{x}_t, \bm{u}_t)$, can be determined easily and analytically during the Q-learning update. 
While a number of representations are possible that allow for analytic maximization, the one we use in our implementation is based on a neural network that separately outputs a value function term $V(\bm{x})$ and an advantage term $A(\bm{x},\bm{u})$, which is parameterized as a quadratic function of nonlinear features of the state: \begin{equation} \label{eq:normq} \begin{split} Q(\bm{x},\bm{u}|\theta^Q) &= A(\bm{x},\bm{u}|\theta^A) + V(\bm{x}|\theta^V) \\ A(\bm{x},\bm{u}|\theta^A) &= -\frac{1}{2}(\bm{u}-\bm{\mu}(\bm{x}|\theta^\mu))^T \bm{P}(\bm{x}|\theta^P) (\bm{u}-\bm{\mu}(\bm{x}|\theta^\mu)) \end{split} \end{equation} $\bm{P}(\bm{x}|\theta^P)$ is a state-dependent, positive-definite square matrix, which is parametrized by $\bm{P}(\bm{x}|\theta^P) = \bm{L}(\bm{x}|\theta^P)\bm{L}(\bm{x}|\theta^P)^T$, where $\bm{L}(\bm{x}|\theta^P)$ is a lower-triangular matrix whose entries come from a linear output layer of a neural network, with the diagonal terms exponentiated. While this representation is more restrictive than a general neural network function, since the Q-function is quadratic in $\bm{u}$, the action that maximizes the Q-function is always given by $\bm{\mu}(\bm{x}|\theta^\mu)$. We use this representation with a deep Q-learning algorithm analogous to \citet{mnih2015human}, using target networks and a replay buffer as described by \citet{lillicrap2015continuous}. NAF, given by Algorithm~\ref{alg:nafq}, is considerably simpler than DDPG. \begin{algorithm} \footnotesize \caption{Continuous Q-Learning with NAF} \label{alg:nafq} \begin{algorithmic} \STATE Randomly initialize normalized Q network $Q(\bm{x},\bm{u}|\theta^Q)$. \STATE Initialize target network $Q'$ with weight $\theta^{Q'}\leftarrow \theta^Q$. \STATE Initialize replay buffer $R\leftarrow\emptyset$.
\FOR{episode=$1,M$} \STATE Initialize a random process $\mathcal{N}$ for action exploration \STATE Receive initial observation state $\bm{x}_1\sim p(\bm{x}_1)$ \FOR{t=$1,T$} \STATE Select action $\bm{u}_t=\mu(\bm{x}_t|\theta^\mu)+\mathcal{N}_t$ \STATE Execute $\bm{u}_t$ and observe $r_t$ and $\bm{x}_{t+1}$ \STATE Store transition ($\bm{x}_t,\bm{u}_t,r_t,\bm{x}_{t+1}$) in $R$ \FOR{iteration=$1, I$} \STATE Sample a random minibatch of $m$ transitions from $R$ \STATE Set $y_i = r_i + \gamma V'(\bm{x}_{i+1}|\theta^{Q'})$ \STATE Update $\theta^Q$ by minimizing the loss: $L=\frac{1}{m}\sum_i (y_i - Q(\bm{x}_i, \bm{u}_i|\theta^Q))^2$ \STATE Update the target network: $\theta^{Q'}\leftarrow \tau\theta^Q + (1-\tau)\theta^{Q'}$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Decomposing $Q$ into an advantage term $A$ and a state-value term $V$ was suggested by~\citet{baird1993advantage,harmon1996multi}, and was recently explored by~\citet{wang2015dueling} for discrete action problems. Normalized action-value functions have also been proposed by~\citet{rawlik2013stochastic} in the context of an alternative temporal difference learning algorithm. However, our method is the first to combine such representations with deep neural networks into an algorithm that can be used to learn policies for a range of challenging continuous control tasks. In general, $A$ does not need to be quadratic, and exploring other parametric forms such as multimodal distributions is an interesting avenue for future work. The appendix provides details on adaptive exploration rule derivation with experimental results, and a variational interpretation of Q-learning which gives an intuitive explanation of the behavior of NAF that conforms with empirical results.
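The quadratic advantage parameterization of Equation~(\ref{eq:normq}) can be illustrated directly. In this hypothetical NumPy sketch, `l_flat` plays the role of the linear network output that fills the lower triangle of $\bm{L}(\bm{x}|\theta^P)$ (function names are ours):

```python
import numpy as np

def build_precision(l_flat, dim):
    """P(x) = L L^T, with L lower-triangular and its diagonal entries
    exponentiated, so P is positive definite for any raw output l_flat."""
    L = np.zeros((dim, dim))
    L[np.tril_indices(dim)] = l_flat
    L[np.diag_indices(dim)] = np.exp(np.diag(L))
    return L @ L.T

def naf_q(u, mu, P, v):
    """Q(x,u) = V(x) - 0.5 (u - mu)^T P(x) (u - mu); since P is positive
    definite, the maximizing action is always u = mu, in closed form."""
    d = u - mu
    return v - 0.5 * d @ P @ d
```

Because the advantage term is a negative-definite quadratic in $\bm{u}$, the greedy action needed for the Q-learning target is available analytically, with no inner maximization over actions.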
\section{Accelerating Learning with Imagination Rollouts} \label{sec:fictional} While NAF provides some advantages over actor-critic model-free RL methods in continuous domains, we can improve its data efficiency substantially under some additional assumptions by exploiting learned models. We will show that incorporating a particular type of learned model into Q-learning with NAFs significantly improves sample efficiency, while still allowing the final policy to be finetuned with model-free learning to achieve good performance without the limitations of imperfect models. \subsection{Model-Guided Exploration} \label{sec:ilqg_guide} One natural approach to incorporating a learned model into an off-policy algorithm such as Q-learning is to use the learned model to generate good exploratory behaviors using planning or trajectory optimization. To evaluate this idea, we utilize the iLQG algorithm to generate good trajectories under the model, and then mix these trajectories together with on-policy experience by appending them to the replay buffer. Interestingly, we show in our evaluation that, even when planning under the true model, the improvement obtained from this approach is often quite small, and varies significantly across domains and choices of exploration noise. The intuition behind this result is that off-policy iLQG exploration is too different from the learned policy, and Q-learning must consider alternatives in order to ascertain the optimality of a given action. That is, it is not enough to simply show the algorithm \emph{good} actions, it must also experience bad actions to understand which actions are better and which are worse. \subsection{Imagination Rollouts} As discussed in the previous section, incorporating off-policy exploration from good, narrow distributions, such as those induced by iLQG, often does not result in significant improvement for Q-learning.
These results suggest that Q-learning, which learns a policy based on minimizing temporal differences, inherently requires noisy on-policy actions to succeed. In real-world domains such as robots and autonomous vehicles, this can be undesirable for two reasons: first, it suggests that large amounts of on-policy experience are required in addition to good off-policy samples, and second, it implies that the policy must be allowed to make ``its own mistakes'' during training, which might involve taking undesirable or dangerous actions that can damage real-world hardware. One way to avoid these problems while still allowing for a large amount of on-policy exploration is to generate synthetic on-policy trajectories under a learned model. Adding these synthetic samples, which we refer to as \textit{imagination rollouts}, to the replay buffer effectively augments the amount of experience available for Q-learning. The particular approach we use is to perform rollouts in the real world using a mixture of planned iLQG trajectories and on-policy trajectories, with various mixing coefficients evaluated in our experiments, and then generate additional synthetic on-policy rollouts using the learned model from each state visited along the real-world rollouts. This approach can be viewed as a variant of the Dyna-Q algorithm~\cite{sutton1990integrated}. However, while Dyna-Q has primarily been used with small and discrete systems, we show that using iteratively refitted linear models allows us to extend the approach to deep reinforcement learning on a range of continuous control domains. In some scenarios, we can even generate all or most of the real rollouts using off-policy iLQG controllers, which is desirable in safety-critical domains where poorly trained policies might take dangerous actions. The algorithm is given as Algorithm~\ref{alg:dynaq}, and is an extension of Algorithm~\ref{alg:nafq} that incorporates model-based RL.
\begin{algorithm} \footnotesize \caption{Imagination Rollouts with Fitted Dynamics and Optional iLQG Exploration} \label{alg:dynaq} \begin{algorithmic} \STATE Randomly initialize normalized Q network $Q(\bm{x},\bm{u}|\theta^Q)$. \STATE Initialize target network $Q'$ with weight $\theta^{Q'}\leftarrow \theta^Q$. \STATE Initialize replay buffer $R\leftarrow\emptyset$ and fictional buffer $R_f\leftarrow\emptyset$. \STATE Initialize additional buffers $B\leftarrow\emptyset,B_{old}\leftarrow\emptyset$ with size $nT$. \STATE Initialize fitted dynamics model $\mathcal{M}\leftarrow\emptyset$. \FOR{$episode=1, M$} \STATE Initialize a random process $\mathcal{N}$ for action exploration \STATE Receive initial observation state $\bm{x}_1$ \STATE Select $\mu'(\bm{x}, t)$ from \{$\mu(\bm{x}|\theta^\mu), \pi^{iLQG}_t(\bm{u}_t|\bm{x}_t)$\} with probabilities \{$p,1-p$\} \FOR{$t=1,T$} \STATE Select action $\bm{u}_t=\mu'(\bm{x}_t,t)+\mathcal{N}_t$ \STATE Execute $\bm{u}_t$ and observe $r_t$ and $\bm{x}_{t+1}$ \STATE Store transition ($\bm{x}_t,\bm{u}_t,r_t,\bm{x}_{t+1},t$) in $R$ and $B$ \IF{$\mod(episode\cdot T+t,m)=0$ and $\mathcal{M}\neq\emptyset$} \STATE Sample $m$ ($\bm{x}_i,\bm{u}_i,r_i,\bm{x}_{i+1},i$) from $B_{old}$ \STATE Use $\mathcal{M}$ to simulate $l$ steps from each sample \STATE Store all fictional transitions in $R_f$ \ENDIF \STATE Sample a random minibatch of $m$ transitions $I\cdot l$ times from $R_f$ and $I$ times from $R$, and update $\theta^Q,\theta^{Q'}$ as in Algorithm~\ref{alg:nafq} per minibatch. \ENDFOR \IF{$B$ is full} \STATE $\mathcal{M}\leftarrow$ FitLocalLinearDynamics($B$) (see Section~\ref{sec:fitting}) \STATE $\pi^{iLQG}\leftarrow$ iLQG\_OneStep($B,\mathcal{M}$) (see appendix) \STATE $B_{old}\leftarrow B, B\leftarrow\emptyset$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} Imagination rollouts can suffer from severe bias when the learned model is inaccurate.
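The two model-based ingredients of Algorithm~\ref{alg:dynaq}, fitting linear-Gaussian dynamics from recent transitions (Section~\ref{sec:fitting}) and simulating $l$ steps from previously visited states, can be sketched as follows. This is a simplified, hypothetical version: it fits a single time-invariant linear model by conditioning a joint Gaussian over $[\bm{x}_t;\bm{u}_t;\bm{x}_{t+1}]$, and rolls out the deterministic mean dynamics (all names are ours):

```python
import numpy as np

def fit_linear_dynamics(xu, x_next):
    """Fit x_{t+1} = F [x_t; u_t] + f by fitting a joint Gaussian to the
    stacked samples [x_t; u_t; x_{t+1}] and conditioning on [x_t; u_t]."""
    data = np.hstack([xu, x_next])
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    dz = xu.shape[1]
    # Conditional mean of a Gaussian: F = Sigma_yz Sigma_zz^{-1}, f = mu_y - F mu_z
    F = cov[dz:, :dz] @ np.linalg.pinv(cov[:dz, :dz])
    f = mean[dz:] - F @ mean[:dz]
    return F, f

def imagination_rollouts(F, f, policy, start_states, l):
    """Generate l-step synthetic on-policy transitions under the fitted
    model, starting from states visited by real-world rollouts."""
    transitions = []
    for x in start_states:
        x = np.asarray(x, dtype=float)
        for _ in range(l):
            u = policy(x)
            x_next = F @ np.concatenate([x, u]) + f
            transitions.append((x, u, x_next))
            x = x_next
    return transitions
```

In the full algorithm, the fit is time-varying (one linear-Gaussian model per time step) and the resulting synthetic transitions are appended to the fictional buffer $R_f$ and consumed by the same Q-learning update as real experience.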
For example, we found it very difficult to train nonlinear neural network models for the dynamics that would actually improve the efficiency of Q-learning when used for imagination rollouts. As discussed in the following section, we found that using iteratively refitted time-varying linear dynamics produced substantially better results. In either case, we would still like to preserve the generality and optimality of model-free RL while deriving the benefits of model-based learning. To that end, we observe that most of the benefit of model-based learning is derived in the early stages of the learning process, when the policy induced by the neural network Q-function is poor. As the Q-function becomes more accurate, on-policy behavior tends to outperform model-based controllers. We therefore propose to switch off imagination rollouts after a given number of iterations.\footnote{In future work, it would be interesting to select this iteration adaptively based on the expected relative performance of the Q-function policy and model-based planning.} In this framework, the imagination rollouts can be thought of as an inexpensive way to pretrain the Q-function, such that fine-tuning using real-world experience can quickly converge to an optimal solution. \subsection{Fitting the Dynamics Model} \label{sec:fitting} In order to obtain good imagination rollouts and improve the efficiency of Q-learning, we needed to use an effective and data-efficient model learning algorithm. While prior methods propose a variety of model classes, including neural networks \cite{heess2015learning}, Gaussian processes \cite{deisenroth2011pilco}, and locally-weighted regression \cite{atkeson1997locally}, we found that we could obtain good results by using iteratively refitted time-varying linear models, as proposed by \citet{levine2014learning}.
In this approach, instead of learning a good global model for all states and actions, we aim only to obtain a good local model around the latest set of samples. This approach requires a few additional assumptions: namely, it requires the initial state to be either deterministic or low-variance Gaussian, and it requires the states and actions to all be continuous. To handle domains with more varied initial states, we can use a mixture of Gaussian initial states with separate time-varying linear models for each one. The model itself is given by $p_t(\bm{x}_{t+1} | \bm{x}_t, \bm{u}_t) = \mathcal{N}(\mathbf{F}_t [\bm{x}_t ; \bm{u}_t] + \mathbf{f}_t, \mathbf{N}_t)$. Every $n$ episodes, we refit the parameters $\mathbf{F}_t$, $\mathbf{f}_t$, and $\mathbf{N}_t$ by fitting a Gaussian distribution at each time step to the vectors $[\bm{x}_t^i; \bm{u}_t^i; \bm{x}_{t+1}^i]$, where $i$ indicates the sample index, and conditioning this Gaussian on $[\bm{x}_t;\bm{u}_t]$ to obtain the parameters of the linear-Gaussian dynamics at that step. We use $n = 5$ in our experiments. Although this approach introduces additional assumptions beyond the standard model-free RL setting, we show in our evaluation that it produces impressive gains in sample efficiency on tasks where it can be applied. \section{Experiments} \label{sec:experiments} We evaluated our approach on a set of simulated robotic tasks using the MuJoCo simulator \cite{todorov2012mujoco}. The tasks were based on the benchmarks described by \citet{lillicrap2015continuous}. Although we attempted to replicate the tasks in previous work as closely as possible, discrepancies in the simulator parameters and the contact model produced results that deviate slightly from those reported in prior work. In all experiments, the input to the policy consisted of the state of the system, defined in terms of joint angles and root link positions. Angles were often converted to sine and cosine encoding. \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/domains-2.png} \caption{Example task domains.} \label{fig:domains} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/phiexplore31_reacher.png} \caption{NAF and DDPG on multi-target reacher. } \label{fig:normq_reacher} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[width=0.9\linewidth]{figs/phiexplore17_peg.png} \caption{NAF and DDPG on peg insertion.} \label{fig:normq_peg} \end{subfigure} \caption{(a) Task domains: top row from left (manipulation tasks: peg, gripper, mobile gripper), bottom row from left (locomotion tasks: cheetah, swimmer6, ant). (b,c) NAF vs DDPG results on three-joint reacher and peg insertion. On reacher, the DDPG policy continuously fluctuates the tip around the target, while NAF stabilizes well at the target.\vspace{-0.1 in}} \label{fig:naf} \end{figure*} For both our method and the prior DDPG~\cite{lillicrap2015continuous} algorithm in the comparisons, we used neural networks with two layers of 200 rectified linear units (ReLU) to produce each of the output parameters -- the Q-function and policy in DDPG, and the value function $V$, the advantage matrix $\bm{L}$, and the mean $\bm{\mu}$ for NAF. Since Q-learning was done with a replay buffer, we applied the Q-learning update 5 times per step of experience to accelerate learning ($I=5$). To ensure a fair comparison, DDPG also updates both the Q-function and policy parameters 5 times per step. \subsection{Normalized Advantage Functions} \label{sec:exp_naf} In this section, we compare NAF and DDPG on 10 representative domains from~\citet{lillicrap2015continuous}, with three additional domains: a four-legged 3D ant, a six-joint 2D swimmer, and a 2D peg (see the appendix for the descriptions of task domains).
We found the most sensitive hyperparameters to be presence or absence of batch normalization, base learning rate for ADAM~\cite{kingma2014adam} $\in\{1e^{-4},1e^{-3},1e^{-2}\}$, and exploration noise scale $\in\{0.1,0.3,1.0\}$. We report the best performance for each domain. We were unable to achieve good results with the method of \citet{rawlik2013stochastic} on our domains, likely due to the complexity of high-dimensional neural network function approximators. Figure~\ref{fig:normq_reacher}, Figure~\ref{fig:normq_peg}, and additional figures in the appendix show the performances on the three-joint reacher, peg insertion, and a gripper with mobile base. While the numerical gap in reacher may be small, qualitatively there is also a very noticeable difference between NAF and DDPG. DDPG converges to a solution where the deterministic policy causes the tip to fluctuate continuously around the target, and does not reach it precisely. NAF, on the other hand, learns a smooth policy that makes the tip slow down and stabilize at the target. This difference is more noticeable in peg insertion and moving gripper, as shown by the much faster convergence rate to the optimal solution. Precision is very important in many real-world robotic tasks, and these results suggest that NAF may be preferred in such domains. On locomotion tasks, the performance of the two methods is relatively similar. On the six-joint swimmer task and four-legged ant, NAF slightly outperforms DDPG in terms of the convergence speed; however, DDPG is faster on cheetah and finds a better policy on walker2d. The loss in performance of NAF can potentially be explained by the downside of the mode-seeking behavior as analyzed in the appendix, where it is hard to explore other modes once the quadratic advantage function finds a good one. Choosing a parametric form that is more expressive than a quadratic could be used to address this limitation in future work.
The results on all of the domains are summarized in Table~\ref{tab:normq}. Overall, NAF outperformed DDPG on the majority of tasks, particularly manipulation tasks that require precision and suffer less from the lack of multimodal Q-functions. This makes this approach particularly promising for efficient learning of real-world robotic tasks. \begin{table}[ht] \centering \footnotesize \begin{tabular}{|c |c |c c| c c | } \hline Domains & - & DDPG & episodes & NAF & episodes \\ \hline Cartpole & -2.1 & -0.601 & 420 & -0.604 & \textbf{190} \\ Reacher & -2.3 & -0.509 & 1370 & \textbf{-0.331} & \textbf{1260} \\ Peg & -11 & -0.950 & 690 & \textbf{-0.438} & \textbf{130} \\ Gripper & -29 & 1.03 & 2420 & \textbf{1.81} & \textbf{1920} \\ GripperM & -90 & -20.2 & 1350 & \textbf{-12.4} & \textbf{730} \\ Canada2d & -12 & -4.64 & 1040 & \textbf{-4.21} & 900 \\ Cheetah & -0.3 & \textbf{8.23} & \textbf{1590} & 7.91 & 2390 \\ Swimmer6 & -325 & -174 & 220 & \textbf{-172} & \textbf{190} \\ Ant & -4.8 & -2.54 & 2450 & -2.58 & \textbf{1350} \\ Walker2d & 0.3 & \textbf{2.96} & \textbf{850} & 1.85 & 1530 \\ \hline \end{tabular} \caption{\footnotesize Best test rewards of DDPG and NAF policies, and the number of episodes required to reach within 5\% of the best value. ``-" denotes scores by a random agent.\vspace{-0.1 in}\label{tab:normq}} \end{table} \subsection{Evaluating Best-Case Model-Based Improvement with True Models} \label{sec:exp_model_rl} In order to determine how best to incorporate model-based components to accelerate model-free Q-learning, we tested several approaches using the ground-truth dynamics, to control for challenges due to model fitting. We evaluated both of the methods discussed in Section~\ref{sec:fictional}: the use of model-based planning to generate good off-policy rollouts in the real world, and the use of the model to generate on-policy synthetic rollouts. \begin{figure*}[t!]
\begin{subfigure}[t]{0.33\textwidth} \centering\captionsetup{width=.8\linewidth}% \includegraphics[width=0.99\linewidth]{figs/fixedreach_rollout10_true.png} \caption{NAF on single-target reacher, true dynamics. \vspace{-0.1 in}} \label{fig:normq_fixedreacher_true} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering\captionsetup{width=.8\linewidth}% \includegraphics[width=0.99\linewidth]{figs/nn03_reacher.png} \caption{NAF on single-target reacher, fitted dynamics. \vspace{-0.1 in}} \label{fig:normq_fixedreacher_fitted} \end{subfigure} \begin{subfigure}[t]{0.33\textwidth} \centering\captionsetup{width=.8\linewidth}% \includegraphics[width=0.99\linewidth]{figs/grip03_rollout5_fitted_wait20000.png} \caption{NAF on single-target gripper, fitted dynamics. \vspace{-0.1 in}} \label{fig:normq_fixedgrip_fitted} \end{subfigure} \caption{Results on NAF with iLQG-guided exploration and imagination rollouts (a) using true dynamics (b,c) using fitted dynamics. ``ImR" denotes using the imagination rollout with $l=10$ steps on the reacher and $l=5$ steps on the gripper. ``iLQG-$x$" indicates mixing $x$ fraction of iLQG episodes. Fitted dynamics uses time-varying linear models with sample size $n=5$, except ``-NN" which fits a neural network to global dynamics.\vspace{-0.1 in}} \label{fig:imr} \end{figure*} Figure~\ref{fig:normq_fixedreacher_true} shows the effect of mixing off-policy iLQG experience and imagination rollouts on the three-joint reacher. It is noticeable that mixing the good off-policy experience does not significantly improve data efficiency, while imagination rollouts always improve data efficiency or final performance significantly. In the context of Q-learning, this result is not entirely surprising: Q-learning must experience both good and bad actions in order to determine which actions are preferred, while the good model-based rollouts are so far removed from the policy in the early stages of learning that they provide little useful information.
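The imagination rollout mechanism evaluated here, branching short on-policy synthetic rollouts from previously visited real states and mixing them into the replay buffer, can be sketched as follows (the `model` and `policy` callables are placeholders for a true or learned dynamics model and the current policy; this is an illustrative sketch, not the actual implementation):

```python
import random

def imagination_rollouts(replay, model, policy, l=10, n_branch=4):
    """Generate short synthetic on-policy rollouts branched from real
    visited states and append them to the replay buffer. Assumes
    model(x, u) -> (x_next, reward) and policy(x) -> u."""
    synthetic = []
    for _ in range(n_branch):
        x = random.choice(replay)[0]   # branch from a real visited state
        for _ in range(l):             # l-step imagined trajectory
            u = policy(x)
            x_next, r = model(x, u)
            synthetic.append((x, u, r, x_next))
            x = x_next
    replay.extend(synthetic)           # Q-learning then samples from both
    return synthetic
```

Because the imagined transitions are generated under the current policy, they supply the mix of good and bad nearby actions that Q-learning needs, which the far-off-policy iLQG rollouts alone do not provide.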
Figure~\ref{fig:normq_fixedreacher_true} also evaluates two different variants of the imagination rollouts approach, where the rollouts in the real world are performed either using the learned policy, or using model-based planning with iLQG. In the case of this task, the iLQG rollouts achieve slightly better results, since the on-policy imagination rollouts sampled around these off-policy states provide Q-learning with additional information about alternative actions not taken by the iLQG planner. In general, we did not find that off-policy rollouts were consistently better than on-policy rollouts across all tasks, but they did consistently produce good results. Performing off-policy rollouts with iLQG may be desirable in real-world domains, where a partially learned policy might take undesirable or dangerous actions. Further details of these experiments are provided in the appendix. \subsection{Guided Imagination Rollouts with Fitted Dynamics} In this section, we evaluated the performance of imagination rollouts with learned dynamics. As seen in Figure~\ref{fig:normq_fixedreacher_fitted}, we found that fitting time-varying linear models following the imagination rollout algorithm is substantially better than fitting neural network dynamics models for the tasks we considered. There is a fundamental tension between efficient learning and expressive models like neural nets. We cannot hope to learn useful neural network models with a small number of samples for complex tasks, which makes it difficult to acquire a good model with fewer samples than are necessary to acquire a good policy. Although the model is trained with supervised learning, which is typically more sample-efficient, it often needs to represent a more complex function (e.g., rigid-body physics). However, such expressive models may become more important as we move to more complex tasks that demand higher model accuracy.
Figure~\ref{fig:normq_fixedreacher_fitted} presents results that compare fitted neural network models with the true dynamics when combined with imagination rollouts. These results indicate that the learned neural network models negate the benefits of imagination rollouts on our domains. To evaluate imagination rollouts with fitted time-varying linear dynamics, we chose single-target variants of two of the manipulation tasks: the reacher and the gripper task. The results are shown in Figures~\ref{fig:normq_fixedreacher_fitted} and \ref{fig:normq_fixedgrip_fitted}. We found that imagination rollouts of length 5 to 10 were sufficient for these tasks to achieve significant improvement over the fully model-free variant of NAF. Adding imagination rollouts in these domains improved data efficiency by a factor of 2 to 5. In order to retain the benefit of model-free learning and allow the policy to continue improving once it exceeds the quality possible under the learned model, we switch off the imagination rollouts after 130 episodes (20,000 steps) on the gripper domain. This produces a small transient drop in the performance of the policy, but the results quickly improve again. Switching off the imagination rollouts also ensures that Q-learning does not diverge after it reaches good values, as was often observed in the gripper domain. This suggests that imagination rollouts, in contrast to the off-policy exploration discussed in the previous section, are an effective method for bootstrapping model-free deep RL. It should be noted that, although time-varying linear models combined with imagination rollouts provide a substantial boost in sample efficiency, this improvement comes at some cost in generality, since effective fitting of time-varying linear models requires relatively small initial state distributions. With more complex initial state distributions, we might cluster the trajectories and fit multiple models to account for different modes.
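The time-varying linear model class used here can be sketched with ordinary least squares: at each time step $t$, a separate linear model $x_{t+1} \approx F_t [x_t; u_t] + c_t$ is fit across the $n$ sampled trajectories (a simplified sketch of the idea; the actual procedure refits the models iteratively as new samples arrive):

```python
import numpy as np

def fit_timevarying_linear(X, U):
    """Fit x_{t+1} ~ F_t [x_t; u_t] + c_t independently at each time step
    by least squares over n trajectories. X: (n, T+1, dx) states,
    U: (n, T, du) actions. Returns a list of (F_t, c_t) pairs."""
    n = X.shape[0]
    models = []
    for t in range(U.shape[1]):
        Z = np.hstack([X[:, t], U[:, t], np.ones((n, 1))])  # regressors
        W, *_ = np.linalg.lstsq(Z, X[:, t + 1], rcond=None)
        models.append((W[:-1].T, W[-1]))                    # (F_t, c_t)
    return models
```

Each $F_t$ has only $(d_x + d_u) \cdot d_x$ parameters, so with small state and action dimensions a handful of trajectories (the figure caption above uses sample size $n=5$) already determines it. This is why this model class is so much more sample-efficient than a neural network, at the cost of assuming that trajectories stay close together, hence the small initial state distributions noted in the text.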
Extending the benefits of time-varying linear models to less restrictive settings is a promising direction, building on prior work~\cite{levine2015end,fu2015one}. That said, our results show that imagination rollouts are a very promising approach to accelerating model-free learning when combined with the right kind of dynamics model. \section{Discussion} In this paper, we explored several methods for improving the sample efficiency of model-free deep reinforcement learning. We first propose a method for applying standard Q-learning methods to high-dimensional, continuous domains, using the normalized advantage function (NAF) representation. This allows us to simplify the more standard actor-critic style algorithms, while preserving the benefits of nonlinear value function approximation, and allows us to employ a simple and effective adaptive exploration method. We show that, in comparison to recently proposed deep actor-critic algorithms, our method tends to learn faster and acquires more accurate policies. We further explore how model-free RL can be accelerated by incorporating learned models, without sacrificing the optimality of the policy in the face of imperfect model learning. We show that, although Q-learning can incorporate off-policy experience, learning primarily from off-policy exploration (via model-based planning) only rarely improves the overall sample efficiency of the algorithm. We postulate that this is caused by the need to observe both successful and unsuccessful actions, in order to obtain an accurate estimate of the Q-function. We demonstrate that an alternative method based on synthetic on-policy rollouts achieves substantially improved sample complexity, but only when the model learning algorithm is chosen carefully.
We demonstrate that training neural network models does not provide substantive improvement in our domains, but simple iteratively refitted time-varying linear models do provide substantial improvement on domains where they can be applied. \newpage \section*{Acknowledgement} We thank Nicholas Heess for helpful discussion and Tom Erez, Yuval Tassa, Vincent Vanhoucke, and the Google Brain and DeepMind teams for their support.
\section{Introduction} The Internet of Things (IoT) is an emerging technology which envisions connecting billions of ``things" to the Internet. This enables numerous applications in diverse areas such as smart home, e-Health, and smart city. Low-power and Lossy Networks (LLNs) play an indispensable role in realizing IoT. These networks are typically composed of constrained devices with limited power, memory, and processing. In addition, LLNs suffer from high packet loss rates and low throughput. These characteristics and limitations make designing routing protocols for LLNs challenging. ROLL, a working group of the Internet Engineering Task Force (IETF), evaluated the common standard routing protocols of the Internet and concluded that these protocols are not suitable for LLNs because of their heavy overhead. Consequently, the ROLL group designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL)~\cite{rfc6550} to meet the low overhead requirement of LLNs. Because of limited resources, nodes in LLNs are unable to run and benefit from complex security solutions such as those that use asymmetric cryptography. This makes RPL vulnerable to a range of attacks~\cite{RaoofML19,DAOinsider,perazzo2017dio,pu2019energy,mayzaud2014study,dvir2011vera,weekly2012evaluating,wallgren2013routing,DBLP:journals/twc/KhabbazianMB09}. An internal attacker may alter, inject, replay, and generate data or control messages to impact the normal operation of RPL networks. For example, in the version number attack~\cite{mayzaud2014study}, a malicious insider node initiates an unnecessary global network repair process by increasing the version number. Another example, albeit with lower impact, is the DAO insider attack~\cite{DAOinsider} in which the attacker repeatedly sends DAO control messages to the root, causing wasteful transmissions by the nodes on the path from the attacker to the root.
In this work, we present the DAO induction attack, a novel attack in which a malicious insider node induces nodes to transmit unnecessary DAO control messages. Similar to the version number attack, the DAO induction attack can cause a large number of transmissions in the network. Unlike the version number attack, the DAO induction attack may not be detectable by the root of the network, as will be explained later. The main contributions of our work are as follows: \begin{itemize} \item We introduce the DAO induction attack, a novel attack against the RPL protocol. \item We evaluate the impact of the DAO induction attack on power consumption, communication overhead, latency, and packet loss ratio. \item We propose a lightweight solution to detect the DAO induction attack. Our solution is fully compatible with the RPL protocol, and imposes nearly no overhead on IoT devices. \end{itemize} The rest of the paper is organized as follows. Section~\ref{protocol} provides an overview of the RPL routing protocol. The adversary model is given in Section~\ref{sec:Adv}. Section~\ref{DTSNattack} describes the DAO induction attack. Section~\ref{evaluation} evaluates the attack's impact on the network performance. Section~\ref{mitigation} briefly overviews the existing mitigation techniques, and proposes a new lightweight solution to detect the DAO induction attack. Finally, Section~\ref{conclusion} concludes the paper. \section{Overview of the RPL Protocol} \label{protocol} RPL is a distance-vector routing protocol, which can operate on various link layer standards including Bluetooth and IEEE 802.15.4~\cite{IEEE80215}. RPL builds one or more Destination Oriented Directed Acyclic Graphs (DODAGs), loop-free topologies as shown in Fig.~\ref{RPLinstance}. Each DODAG has a single root as a destination node with no outgoing edges. The root acts as the data sink of the DODAG. Each DODAG is specified by an instance ID, a DODAG ID, and a version number.
RPL uses the DODAG to support three different traffic patterns: multipoint-to-point (MP2P) from end nodes to the root, point-to-multipoint (P2MP) from the root to end nodes, and point-to-point (P2P) traffic. The DODAG structure is built step by step. To this end, nodes periodically transmit a control message called Destination Information Object (DIO). DIO messages contain important information including an objective function to calculate rank, a number that determines a node's position with respect to the root. Ranks monotonically increase in the downward direction (i.e., towards leaf nodes), and are used to avoid loops. After receiving a DIO message from a lower-rank node, the receiving node adds the address of the DIO sender to its candidate parent set, and calculates its rank with respect to the new candidate parent. The candidate parent that results in the best rank is selected as the node's preferred parent. At the end of this procedure, each node has upward paths towards the root (through its parents). DIO messages are sent periodically by nodes according to a trickle timer. If a new node wants to join the network, it should receive a DIO message to obtain DODAG information. If the new node does not receive a DIO message, it can send a DODAG Information Solicitation (DIS) message requesting DODAG information. When an existing node in the network receives a DIS message, it replies by transmitting a DIO message. To support downward routes (i.e., routes from the root), RPL uses another type of control message called Destination Advertisement Object (DAO). A node that wants to be reachable by the root advertises its address in a DAO message, and sends it to one of its DAO parents. The course of action taken by the node's DAO parent depends on the RPL mode of operation. \begin{figure} \centering \includegraphics[width=0.25\textwidth]{rpl2.pdf} \caption{An example of an RPL DODAG. Solid lines show each node's preferred parent, and dashed lines show a node's other DAO parents.
For instance, $\{N_3, N_6,N_8\}$ is the parent set of $N_5$, and $N_3$ is the preferred parent of $N_5$.} \label{RPLinstance} \end{figure} RPL supports two modes of operation: storing (table-driven) and non-storing (source routing). In the storing mode, all non-leaf nodes maintain a routing table for destinations, while in the non-storing mode only the root maintains a routing table. In both modes, a node that receives a DAO message forwards it to one of its DAO parents; this ensures that the DAO message is ultimately received by the root. In storing mode, a node updates its routing table before forwarding the DAO message. This update is not required in the non-storing mode as non-root nodes do not maintain any routing table. In the non-storing mode, P2P packets travel up from the source all the way to the root and then travel down to the destination. In the storing mode, however, a P2P packet can start traveling down towards the destination as soon as it reaches a common ancestor of the source and the destination. \begin{comment} In the case of receiving DAOs, it is important whether the DAO is "new". It is noteworthy that in Non-storing mode every DAO message a node receives is ``new". However, in Storing mode a DAO message must satisfy one of the following conditions to be considered ``new" \begin{enumerate} \item it has a newer Path Sequence Number, \item it has additional Path Control bits, or \item it is a No-Path DAO message that removes the last Downward route. \end{enumerate} A node may suppress sending a DAO message if it receives a DAO message that is not ``new". A node must increment the DAO sequence field if it wants to send a new DAO. Receiving a DAO can trigger sending a unicast DAO to a DAO parent. A node should not send this DAO immediately. It should wait to receive all possible DAO. 
\end{comment} \section{Adversary Model} \label{sec:Adv} We assume that RPL is either in no-secure mode, or uses a shared secret key (at the link layer or by itself) to secure its messages. In either case, nodes cannot authenticate the root's messages, as every node uses the same secret key. We assume that the adversary controls a single insider node (e.g., a compromised node), hence knows the network's secret key. We refer to the node controlled by the adversary as the malicious node. The malicious node can be any node in the network except the root. In this work, we limit the malicious node's misbehaviour to 1) running the DAO induction attack (explained next), and 2) selectively dropping DAO packets to avoid detection by the root. Attacks combining the DAO induction attack with other existing ones can be more powerful, and lie outside the scope of this work. \section{The DAO Induction Attack} \label{DTSNattack} In RPL, each node maintains a DAO Trigger Sequence Number (DTSN), and reports it in its DIO messages. If a node receives a DIO message from one of its DAO parents, and realizes that the parent has incremented its DTSN, the node must schedule a DAO transmission. In the non-storing mode, the node must in addition increment its own DTSN. Therefore, in this mode, a DTSN increment by a node will cause all its descendants to increment their DTSNs in turn, triggering DAO transmissions from the entire sub-DODAG. In the DAO induction attack, a malicious insider node repeatedly increments its DTSN to trigger DAO transmissions. This can cause many transmissions, particularly in the non-storing mode (a common mode of operation, as many IoT devices are too constrained to operate in the storing mode~\cite{rfc6550}), as all descendants of the malicious node transmit each time the malicious node increments its DTSN. To avoid detection by the root, the attacker can simply refrain from forwarding the DAO messages of its descendants to the root.
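The cascading effect of a single DTSN increment in the non-storing mode can be illustrated with a short Python sketch (a toy model of the behaviour described above, not RPL code; it ignores the attacker's selective dropping and counts every per-hop forwarding of each triggered DAO):

```python
def dao_transmissions_nonstoring(parent, attacker):
    """Count DAO transmissions triggered by one DTSN increment of
    `attacker`: in non-storing mode every node in the attacker's
    sub-DODAG sends a DAO, and each DAO is forwarded once per hop
    on its way up to the root. `parent` maps child -> parent."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    stack, affected = [attacker], []
    while stack:                         # attacker plus all descendants
        node = stack.pop()
        affected.append(node)
        stack.extend(children.get(node, []))
    def depth(node):                     # hops from node up to the root
        d = 0
        while node in parent:
            node, d = parent[node], d + 1
        return d
    return sum(depth(node) for node in affected)

# toy DODAG: root <- A <- {B, C}, C <- D; attacker A is a root neighbour
parent = {"A": "root", "B": "A", "C": "A", "D": "C"}
print(dao_transmissions_nonstoring(parent, "A"))  # 1 + 2 + 2 + 3 = 8
```

Repeating the increment $m$ times multiplies this count by $m$, which is why the evaluation below places the malicious node among the root's neighbours, where its sub-DODAG, and hence the triggered overhead, is largest.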
The DTSN counter is an 8-bit unsigned integer, so it has a limited range. This limitation, however, does not restrict the number of times a malicious node can update its DTSN in a DAO induction attack. This is because in RPL, sequence counters operate in a `lollipop' fashion~\cite{perlman1983fault}, where an increment of a sequence number with the maximum value wraps the number back to zero. Therefore, the number of times an attacker can increment the DTSN is practically unlimited. Similar to the version number and DAO insider attacks, the DAO induction attack can be mitigated by enabling security mechanisms at the link layer or at RPL itself. However, these mechanisms are ineffective when the attacker is an insider or a compromised node. \section{Experimental Analysis}\label{evaluation} To evaluate the impact of the DAO induction attack on the network's performance, we performed a diverse set of simulations using the Contiki operating system~\cite{dunkels2004contiki}, a lightweight and open-source operating system designed for IoT. \subsection{Simulation settings} \begin{table}[t] \vspace{-0.2cm} \caption{Simulation parameter settings} \label{tab:example} \small \centering \begin{tabular}{?c??c?} \Xhline{0.8pt} \textbf{Simulation parameters} & \textbf{Value} \\ \Xhline{0.8pt} Simulation time & $1800s$\\ \Xhline{0.8pt} Radio medium & Unit Disk Graph Medium \\ \Xhline{0.8pt} Topology dimension & $150m \times 150m$ \\ \Xhline{0.8pt} Number of nodes & $20$, $30$, $40$, and $50$ \\ \Xhline{0.8pt} Modes of operation & Storing and Non-storing \\ \Xhline{0.8pt} Transmission range & $40m$ \\ \Xhline{0.8pt} Interference range & $80m$\\ \Xhline{0.8pt} Traffic rate per node & 1 packet per minute \\ \Xhline{0.8pt} Node type & Tmote Sky\\ \Xhline{0.8pt} Number of simulations & 10 per topology \\ \Xhline{0.8pt} Link layer protocol & IEEE 802.15.4\\ \Xhline{0.8pt} MAC protocol & CSMA-CA\\ \Xhline{0.8pt} \end{tabular} \end{table} We used the Tmote Sky mote, an MSP430-based board with a radio chip compatible with the IEEE 802.15.4 link layer protocol. We employed this mote for all the nodes, including the malicious node.
To implement the DAO induction attack, we modified the RPL protocol stack of the Contiki OS on the malicious node. Similar to other nodes, the malicious node joins the network and actively participates in the creation and maintenance of the DODAG. The main difference between the malicious node and the others is that it is programmed to periodically increment its DTSN and send it in a DIO message to its neighbours. To evaluate the maximum impact of the DAO induction attack in the non-storing mode, we selected the malicious node randomly from the neighbours of the root. Note that these nodes have the maximum number of descendants among all non-root nodes. We considered a sample scenario in which nodes are distributed randomly in a $150m \times 150m$ square area. Each node is static and transmits one data packet of $50$ bytes to the root every $60$ seconds. To simulate link failure, we used the Unit Disk Graph Model (UDGM). We evaluated the impact of the DAO induction attack on the following metrics. \begin{itemize} \item \textit{DAO overhead:} the total number of DAO transmissions, including transmissions of original DAO messages as well as transmissions for forwarding DAO messages towards the root. \item \textit{Average power consumption:} the average power consumed by each node in the network. \item \textit{Packet loss ratio:} the packet loss ratio averaged over all the nodes in the network. The packet loss ratio of a node is one minus the ratio of the number of packets received by the DODAG root from that node over the total number of packets sent by the node. \item \textit{Average latency:} the average end-to-end latency of all packets successfully received by the root.
\end{itemize} \subsection{Impact of the DAO induction attack} \begin{figure}[t] \centering \setlength\abovecaptionskip{-0.3\baselineskip} \includegraphics[width=0.43\textwidth]{dao.pdf} \caption{The impact of the DAO induction attack on the number of DAO transmissions in the storing and non-storing modes.} \label{dao} \end{figure} \begin{figure}[t] \centering \setlength\abovecaptionskip{-0.3\baselineskip} \includegraphics[width=0.43\textwidth]{power.pdf} \caption{The impact of the DAO induction attack on the average power consumption in the storing and non-storing modes.} \label{power} \end{figure} \begin{figure}[t] \centering \setlength\abovecaptionskip{-0.3\baselineskip} \vspace{0.1cm} \includegraphics[width=0.43\textwidth]{latency.pdf} \caption{The impact of the DAO induction attack on the average end-to-end latency in the storing and non-storing modes.} \label{latency} \end{figure} \begin{figure}[t] \centering \setlength\abovecaptionskip{-0.3\baselineskip} \includegraphics[width=0.43\textwidth]{lost.pdf} \caption{The impact of the DAO induction attack on the packet loss ratio in the storing and non-storing modes.} \label{PLR} \end{figure} Fig.~\ref{dao} shows the total number of DAO transmissions (i.e., the DAO overhead) for both RPL modes of operation. As shown, the DAO induction attack significantly increases the DAO overhead in both the storing and non-storing modes. This overhead is higher in larger networks: when there is no attack, the DAO overhead increases slowly with the number of nodes; under the DAO induction attack, however, it grows at a significantly higher rate. Note that the impact of the DAO induction attack is higher in the non-storing mode than in the storing mode. This is expected because, in the non-storing mode, a DTSN increment triggers all the nodes in the attacker's sub-DODAG to transmit DAO messages. Fig.~\ref{power} shows the average power consumption of nodes when the network is under the DAO induction attack.
To calculate the average power consumption, we used the collect-view feature available in Contiki. As shown, the increase in power consumption caused by the DAO induction attack is more noticeable in the non-storing mode than in the storing mode. This is expected because, in the non-storing mode, the attack engages more nodes and generates more overhead. Fig.~\ref{latency} shows the impact of the DAO induction attack on the average end-to-end latency. As shown in the figure, the DAO induction attack significantly increases the average end-to-end delay in the network. This increase is considerably higher in the non-storing mode than in the storing mode. Again, the underlying reason is that the DAO induction attack engages more nodes and creates more overhead in the non-storing mode. Finally, the impact of the DAO induction attack on the packet loss ratio is shown in Fig.~\ref{PLR}. This impact is insignificant in small networks, particularly in the storing mode. The impact is, however, considerable in networks with about 40 or more nodes. As in the previous cases, the DAO induction attack is more severe in the non-storing mode than in the storing mode. \section{Mitigation}\label{mitigation} In the following, we first review the existing mitigation solutions, categorizing them into two classes: proactive and reactive. We then present our solution to detect the DAO induction attack. This solution, unlike the existing ones, imposes nearly no overhead on IoT devices, which is important as these devices typically have limited resources. \subsection{Proactive solutions} Proactive solutions aim to eliminate the possibility of the attack completely. Recall that the impact of the DAO induction attack is significantly more severe in the non-storing mode than in the storing mode. Therefore, it is more important to mitigate this attack in the non-storing mode.
In the non-storing mode, the DAO induction attack is similar to the version number attack, as both the version number and the DTSN must first be incremented by the root. Hence, both attacks can be prevented if the root's messages can be authenticated. Authentication can be achieved using digital signatures or hash chains, as described below. \subsubsection{Digital signatures} The conventional way to provide authentication is by digital signatures. The use of digital signatures in IoT networks is challenging. The first challenge is to securely distribute the root's public key. Currently, manual installation is the only feasible method to distribute security keys among constrained devices~\cite{raza2016s3k}. Another major challenge is that existing digital signature methods are computationally heavy~\cite{mossinger2016towards}. \subsubsection{Hash chains} As used in VeRA~\cite{dvir2011vera}, hash chains can be used for authentication. Similar to digital signatures, hash-chain-based solutions impose communication and computation overheads even in normal conditions when the network is under no attack. More importantly, for these solutions to work, the root of the hash chain must be securely distributed. In the absence of computationally heavy asymmetric cryptography operations -- as constrained nodes have difficulty performing these operations -- the daunting manual installation seems to be the only feasible option. \subsection{Reactive solutions} Reactive solutions, unlike proactive ones, do not eliminate the possibility of the attack. Instead, they aim to detect and mitigate the attack upon detection. A reactive security solution consists of two phases: detection and reaction. The aim of the first phase is to detect the onset of the attack by monitoring the network. When an attack is detected, the solution moves to the reaction phase, where the attacking node is isolated or removed. Monitoring of the network can be performed by either internal IoT nodes or external monitoring nodes.
Each of these approaches has its own issues. The former approach imposes overhead on IoT devices, which is not desirable if they are power constrained (e.g., when they run on batteries). The latter approach can be costly, particularly when multiple external monitoring nodes are needed. Our proposed monitoring solution, presented next, uses the root node to detect the DAO induction attack, and imposes nearly no overhead on IoT devices. In addition, simulation results show that our solution has a high detection rate. \subsection{Our proposed detection solution}\label{proposal} As mentioned earlier, the existing solutions all impose overhead even in normal conditions when the network is under no attack. Our detection solution, however, imposes nearly no overhead. Our solution requires IoT nodes to follow two simple rules, both supported by the RPL protocol. First, each node should select up to two non-preferred parent nodes, whenever such nodes exist. Second, each node should schedule its DAO transmission to be forwarded to its preferred DAO parent when it hears a DTSN increment by a non-preferred parent. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{rplleak.pdf} \caption{An example of a DODAG under the DAO induction attack by node $A$. Node ``P'' is the preferred parent of node ``L'', and node ``DP'' is its non-preferred DAO parent.} \label{leakage} \end{figure} Let us use an example to explain why following these rules helps the root to detect the DAO induction attack. Consider the sample network shown in Fig.~\ref{leakage}. Node $A$ is the attacker, and the network operates in the non-storing mode. Notice that node $L$ has two DAO parents: the preferred parent $P$, and the non-preferred parent $DP$. When the attacker $A$ increments its DTSN, all its descendants, including node $DP$, increment their DTSNs in turn, and report this change through DIO messages.
When node $L$ hears $DP$'s DIO message, it schedules a DAO transmission through its preferred parent $P$ instead of $DP$. This DAO message cannot be dropped by the attacker, as it does not pass through the attacker. The root will then receive the DAO message and detect the DAO induction attack, because it did not initiate the DTSN increment\footnote{Note that a single bit in the DAO message can indicate that the message was generated as the result of a DTSN increment.}. Note that when the DTSN increment is legitimate (i.e., it is initiated by the root), all the nodes in the network will schedule a DAO transmission anyway. Therefore, following the aforementioned rules does not impose any extra overhead on IoT devices. To detect the DAO induction attack, the network should have a node (like $L$) with two DAO parents: one in the attacker's sub-DODAG, and the preferred one outside the attacker's sub-DODAG. An interesting question is how often such a node exists. To answer this question, we ran simulations using the Contiki operating system. We changed the RPL settings of Contiki to allow nodes to select more than one DAO parent whenever possible. Let $u_1, u_2, \ldots, u_n$ be the set of non-root IoT nodes, and let $d_i$, $1\leq i \leq n$, be a binary variable that is equal to 1 iff a DAO induction attack by node $u_i$ is detectable. The detection rate is calculated as the weighted average of the $d_i$, where the weight of $d_i$ is the number of descendants of $u_i$ (i.e., the number of nodes that are affected by a DAO induction attack launched by $u_i$). \begin{figure}[t] \centering \setlength\abovecaptionskip{-0.3\baselineskip} \includegraphics[width=0.43\textwidth]{detection.pdf} \caption{The detection rate of the DAO induction attack when RPL nodes have more than one DAO parent. Each RPL node is able to choose $k$ extra DAO parents. } \label{detection} \end{figure} For a given network size, the detection rate is calculated for ten different networks of that size. Fig.~\ref{detection}
shows the average of these ten detection rates for network sizes of 20, 30, 40, and 50. In the figure, $k$ indicates the number of non-preferred DAO parents that each RPL node can have. For instance, for $k=1$, each node selects exactly one non-preferred DAO parent whenever possible. As shown, the detection rate of our solution is close to $100\%$ if nodes select two non-preferred DAO parents whenever possible. \section{Conclusion}\label{conclusion} In this paper, we introduced the DAO induction attack, a novel security attack against the RPL protocol in which a malicious insider node increments its DTSN periodically to flood the network with control messages. Through various simulations, we showed that the attack adversely impacts network performance and power consumption, particularly when the network operates in the non-storing mode. To mitigate the attack, we proposed a lightweight detection solution that imposes nearly no overhead on IoT devices. Simulation results show that our solution can detect the DAO induction attack with high probability. \bibliographystyle{ieeetr}
\section{Introduction} \input{intro} \section{ATLAS detector} \input{detector} \section{Lepton reconstruction} \input{e_reco} \input{mu_reco} \section{Event selection} \input{e_selection} \input{mu_selection} \input{exp_bg} \input{simulation} \input{data_driven} \section{Systematic uncertainties} \input{systematics} \input{results} \section{Limit-setting procedure} \input{simple_limits} \input{limits_with_interf} \section{Conclusions} \input{conclusion} \section{Acknowledgements} \input{acknowl} \bibliographystyle{JHEP} \section{Limits on spin-1 Kaluza-Klein $S^1/Z_2$ bosons } \label{sec_KK} The model proposed in ref.~\cite{Rizzo:1999en} assumes a single extra spatial dimension of size of order 1~TeV$^{-1}$, compactified onto an $S^1/Z_2$ orbifold. In the minimal model considered here, all of the SM fermions are located at the same orbifold point. The model is completely specified by a single parameter, the compactification scale, which drives the masses of the KK modes. As for the case of \zpssm, this type of model can be classified as sequential to the Standard Model, since the KK couplings are kept SM-like, although enhanced by a factor of $\sqrt{2}$. However, contrary to any of the \zp\ models, the interference with \dy\ is very strong and is a potentially distinctive feature of this type of model~\cite{Rizzo:2009pu,Bella:2010sc}. Because of the strong destructive interference effects mentioned above, it is not possible to set limits on \xbr\ as done for the preceding models. Instead, a coupling strength $g$ is introduced that multiplies the fermion couplings $g_{\lambda_{f}}^{X}$, where $X$ stands for the new massive \zkk\ resonance and $\lambda_{f}$ denotes the fermion helicity, $\lambda_{f}=\mathrm{L},\mathrm{R}$, as done in ref.~\cite{Bella:2010sc}.
The resulting differential cross section, after the $g_{\lambda_{f}}^{X} \longrightarrow g\times g_{\lambda_{f}}^{X}$ transformation, is \begin{equation*} \frac{d\sigma}{ds} \propto \left|{\dy} + \frac{ g_{\lambda_{q}}^{X} g_{\lambda_{\ell}}^{X} }{ s-m_{X}^2 + i\Gamma_{X} m_{X} }\right|^2 \longrightarrow \left|{\dy} + g^2\frac{ g_{\lambda_{q}}^{X} g_{\lambda_{\ell}}^{X} }{ s-m_{X}^2 + ig^2\Gamma_{X} m_{X} }\right|^2. \end{equation*} Flat priors in $g^4$ and $g^2$ are used in the limit-setting procedure, as opposed to the flat prior in \xbr\ used earlier. A flat prior in $g^4$ can be assumed when the pure \zkk\ cross-section term dominates; if the interference term between \zkk\ and \dy\ dominates, a flat prior in $g^2$ is better motivated. Two-dimensional templates are produced in order to scan the $g$ parameter in the region 0--2.2 and the \zkk\ pole masses (\mkk) between 130~GeV and 6~TeV. The strong interference with \dy\ implies a greater sensitivity to shape distortions, especially at the high end of the mass window, and therefore two additional systematic uncertainties, which are found to be negligible in the non-interfering channels, have to be taken into account here. The first is an uncertainty on the muon momentum resolution, which reaches 20\%--30\% above 2.5~TeV. The second is an uncertainty on the extrapolation of the \ttbar\ and diboson backgrounds, due to the choice of fit function and the variation of the fit range; in the dimuon channel, this uncertainty ranges from 2\% to 6\% of the full background in the 2--3~TeV mass range. These two uncertainties do not affect the dielectron channel, due to its better resolution and to the dominance of the QCD and \wpjet\ background uncertainties over the \ttbar\ and diboson background uncertainties. The observed and expected limits on $g^4$ and $g^2$ are translated into limits on $g$, which are shown as a function of \mkk\ in figure~\ref{fig:zkk_interf_lim_comb} for the combination of the dielectron and dimuon channels.
The fast broadening of the expected one- and two-sigma bands above 2~TeV is due to the destructive interference becoming the dominant feature of the signal shape. Lower limits on \mkk\ are derived from the \zkk\ hypothesis ($g = 1$); they are displayed in table~\ref{tab:limits_ZZ}. Contrary to the non-interfering case, high-mass candidates in data lead to observed limits that are better than expected. The obtained mass limits are higher than the indirect limits from electroweak precision measurements~\cite{Rizzo:1999en,GG}. \begin{figure}[!tb] \centering \includegraphics[width=0.49\columnwidth]{g4vsMassLimit_KK_cb.eps} \includegraphics[width=0.49\columnwidth]{g2vsMassLimit_KK_cb.eps} \caption{ Expected and observed 95\% CL limits on $g$ as a function of \mkk, for the combination of dielectron and dimuon channels, using a flat prior on $g^4$ (left) and on $g^2$ (right). } \label{fig:zkk_interf_lim_comb} \end{figure} \begin{table}[!hbt] \caption{ The observed and expected 95\% CL lower limits on the mass of the \zkk\ (i.e. $g=1$) for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits_ZZ} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\zkk\to \ee$ & $\zkk\to \mumu$ & $\zkk\to \ll$ \\ \hline & \multicolumn{3}{c}{$g^4$ prior} \\ Observed Limit [TeV] & 3.35 & 3.55 & 4.16 \\ Expected Limit [TeV] & 3.11 & 3.38 & 4.07 \\ \hline & \multicolumn{3}{c}{$g^2$ prior} \\ Observed Limit [TeV] & 4.03 & 3.93 & 4.71 \\ Expected Limit [TeV] & 3.52 & 3.79 & 4.53\\ \hline \hline \end{tabular} \end{center} \end{table} \section{Limits on Minimal \zp\ bosons } Limits are also set in the framework of \MM~\cite{Villadoro}. In this framework, the coupling of the new boson \zpMM\ to fermions is determined by its coupling to the B--L current, \gbl, and its coupling to the weak hypercharge Y, \gy.
It is convenient to refer to the ratios $\gbltilde \equiv \gbl / g_Z$ and $\gytilde \equiv \gy / g_Z$, where $g_Z$ is the coupling of the SM $Z$ boson defined by $g_Z = 2 M_Z / v$ ($v = 246$~GeV is the Higgs vacuum expectation value in the SM). \gammap~and~$\theta$ are chosen as independent parameters with the following definitions: $\gbltilde=\gammap \cos\theta$, $\gytilde=\gammap \sin\theta$. The \gammap\ parameter measures the strength of the \zpMM\ boson coupling relative to the SM $Z$ boson coupling, while $\theta$ determines the mixing between the generators of the B--L and the weak hypercharge $Y$ gauge groups. Specific values of \gammap\ and $\theta$ correspond to \zp\ bosons in various models such as the \zpBL\ boson and \zpthreeR\ boson. Signal templates are built which take into account both the interference and the dependence of the \zpMM\ boson width on \gammap\ and $\theta$. The coupling to hypothetical right-handed neutrinos and to $W$~boson pairs is neglected. As for the KK model, the two-dimensional signal templates are made by reweighting the simulated \dy\ samples with the ratio of differential cross sections $\delta\sigma(\zpMM + \dy)/\delta\sigma(\dy)$. For a given value of $\theta$ and for each of the tested pole masses, dilepton invariant mass templates are created with varying values of \gammap\ between 0.01 and 2. The templates at these chosen values of \gammap\ are interpolated to all values of \gammap\ by using a smooth interpolating function in each dilepton mass bin. The likelihood fit across all dilepton mass bins finds the most probable value of \gammap\ for each $\theta$ and \zpMM\ boson mass \Mmin. Systematic uncertainties are applied as in the case of \xbr\ limits. Limits are set on the relative coupling strength \gammap\ as a function of the \zpMM\ boson mass, as shown in figure~\ref{fig:minimal_limits}. 
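The parameterization above can be written out numerically. The sketch below uses only the tree-level relations quoted in the text ($g_Z = 2 M_Z / v$, $\tilde{g}_{BL} = \gamma' \cos\theta$, $\tilde{g}_Y = \gamma' \sin\theta$); the numerical value of $M_Z$ and the function name are illustrative assumptions.

```python
import math

# Couplings of the Minimal Z' from (gamma', theta), following the conventions
# in the text: g_Z = 2 M_Z / v, g_BL~ = gamma' cos(theta), g_Y~ = gamma' sin(theta).
M_Z = 91.1876   # GeV (assumed value, not taken from the text)
v = 246.0       # GeV, SM Higgs vacuum expectation value (quoted in the text)

g_Z = 2 * M_Z / v

def minimal_zprime_couplings(gamma_p: float, theta: float):
    """Return the absolute B-L and hypercharge couplings (g_BL, g_Y)."""
    g_bl = gamma_p * math.cos(theta) * g_Z
    g_y = gamma_p * math.sin(theta) * g_Z
    return g_bl, g_y

# theta = 0 corresponds to a pure B-L coupling:
g_bl, g_y = minimal_zprime_couplings(0.2, 0.0)
assert abs(g_Z - 0.7413) < 1e-3
assert g_bl == 0.2 * g_Z and g_y == 0.0
```

Scanning $\theta$ at fixed $\gamma'$ thus traces a circle in the $(\tilde{g}_{BL}, \tilde{g}_Y)$ plane, which is why the limits are quoted as bands over $\theta \in [0,\pi]$.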
The two $\theta$ values yielding the minimum and maximum cross sections are used to define a band of limits in the (\gammap, \Mmin) plane. Table~\ref{tab:minimal_limits_mass} shows the range of the lower limits on the \zpMM\ boson mass for representative values of \gammap. The range of the upper limits on \gammap\ for representative values of the \zpMM\ boson mass is shown in table~\ref{tab:minimal_limits_coupling}. \begin{table}[!htbp] \caption{Range of the observed and expected 95\% CL lower limits on the \zpMM\ boson mass for $\theta \in [0,\pi]$ and representative values of the relative coupling strength \gammap. Both lepton channels are combined.} \begin{center} \begin{tabular}{l|cc} \hline \hline \gammap & 0.1 & 0.2 \\ \hline Observed range [TeV] & 0.67--1.43 & 1.11--2.10 \\ Expected range [TeV] & 0.58--1.47 & 1.17--2.07 \\ \hline \hline \end{tabular} \label{tab:minimal_limits_mass} \end{center} \end{table} \begin{table}[!htbp] \caption{Range of the observed and expected 95\% CL upper limits on the relative coupling strength \gammap\ for $\theta \in [0,\pi]$ and representative values of the \zpMM\ boson mass. Both lepton channels are combined.} \begin{center} \begin{tabular}{l|cc} \hline \hline \zpMM\ mass [TeV] & 1 & 2 \\ \hline Observed limit & 0.08--0.16 & 0.16--1.10 \\ Expected limit & 0.07--0.15 & 0.17--1.01 \\ \hline \hline \end{tabular} \label{tab:minimal_limits_coupling} \end{center} \end{table} \begin{figure}[!tb] \centering \includegraphics[width=0.7\columnwidth]{ExpectedObservedLimits_comb_0514.eps} \caption{Expected (hatched area and dotted lines) and observed (filled area and solid lines) upper limits on \gammap\ within the \MM\ parameterization. The limits are shown for different test masses and are obtained by combining the dielectron and dimuon channels. The gray band envelops all limit curves, which depend on the choice of $\theta$. The lower boundary corresponds to $\tan{\theta}= 1.43$ and the upper boundary to $\tan{\theta} = -1.19$.
The limit curves for two representative values of $\theta$ are shown: $\tan{\theta}=0$ and $\tan{\theta}=-2$ which correspond to the \zpBL\ model and the \zpthreeR\ model at specific values of \gammap\ respectively. } \label{fig:minimal_limits} \end{figure} \section{Data-SM expectation comparison} Figure~\ref{fig:mll} shows the invariant mass (\mll) distribution for the dielectron (top) and dimuon (bottom) final states after final selection. The bin width of the histograms is constant in $\log \mll$, chosen such that a possible signal peak spans multiple bins and the templates are smooth. Figure~\ref{fig:mll} also displays the expected \zpssm\ signal for two mass hypotheses. Tables~\ref{tab:backgroundTableEE} and~\ref{tab:backgroundTableMuon} show the number of data events and the estimated backgrounds in bins of reconstructed dielectron and dimuon invariant mass above 110~GeV. The number of observed events in the normalization region, from 70 to 110 GeV, is 1,236,646 in the dielectron channel and 985,180 in the dimuon channel. The dilepton invariant mass distributions are well described by the Standard Model. \begin{figure}[tbp] \includegraphics[width=0.9\textwidth]{hisMee.eps} \includegraphics[width=0.9\textwidth]{mass_log_new_2masses.eps} \caption{Dielectron (top) and dimuon (bottom) invariant mass (\mll) distributions after final selection, compared with the stacked sum of all expected backgrounds, with two example \zpssm\ signals overlaid. The bin width is constant in $\log \mll$.} \label{fig:mll} \end{figure} \begin{table}[tbp] \caption{Expected and observed number of events in the dielectron channel. 
The errors quoted include both statistical and systematic uncertainties.} \label{tab:backgroundTableEE} \begin{center} \begin{tabular}{lccccc} \hline\hline \mepem [GeV] & 110--200 & 200--400 & 400--800 & 800--1200 & 1200--3000\\ \hline \zgstar & $ 26700 \pm 1100 $ & $ 2960 \pm 120 $ & $ 265 \pm 13 $ & $ 12.1 \pm 0.9 $ & $ 1.47 \pm 0.18 $ \\ \ttbar & $ 1300 \pm 120 $ & $ 410 \pm 40 $ & $ 26.5 \pm 2.8 $ & $ 0.41 \pm 0.17 $ & $ 0.034 \pm 0.034 $ \\ Diboson & $ 415 \pm 21 $ & $ 146 \pm 8 $ & $ 16.2 \pm 0.9 $ & $ 0.88 \pm 0.05 $ & $ 0.101 \pm 0.011 $ \\ QCD and \wpjet & $ 1900 \pm 600 $ & $ 510 \pm 200 $ & $ 50 \pm 31 $ & $ 2.0 \pm 1.8 $ & $ 0.26 \pm 0.31 $ \\ \hline Total & $ 30300 \pm 1300 $ & $ 4030 \pm 240 $ & $ 357 \pm 34 $ & $ 15.4 \pm 2.0 $ & $ 1.86 \pm 0.35 $ \\ \hline Data & $ 29816$ & $ 4026$ & $ 358$ & $ 17$ & $ 3$\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \caption{Expected and observed number of events in the dimuon channel. The errors quoted include both statistical and systematic uncertainties.} \label{tab:backgroundTableMuon} \begin{center} \begin{tabular}{lccccc} \hline\hline \mmumu\ [GeV] & 110--200 & 200--400 & 400--800 & 800--1200 & 1200--3000\\ \hline \zgstar & $ 21200 \pm 1200 $ & $ 2090 \pm 230 $ & $ 173 \pm 15 $ & $ 7.7 \pm 0.8 $ & $ 0.98 \pm 0.16 $\\ \ttbar & $ 900 \pm 100 $ & $ 270 \pm 50 $ & $ 18 \pm 11 $ & $ 0.32 \pm 0.07 $ & $ 0.019 \pm 0.007 $\\ Diboson & $ 289 \pm 32 $ & $ 97 \pm 24 $ & $ 11.8 \pm 2.7 $ & $ 0.59 \pm 0.26 $ & $ 0.087 \pm 0.016 $\\ \hline Total & $ 22400 \pm 1200 $ & $ 2460 \pm 240 $ & $ 203 \pm 19 $ & $ 8.7 \pm 0.9 $ & $ 1.09 \pm 0.16 $\\ \hline Data & $ 21945$ & $ 2294$ & $ 197$ & $ 10$ & $ 2 $\\ \hline\hline \end{tabular} \end{center} \end{table} The data are compared to the Monte Carlo simulation in the search region 0.13~TeV$<\mll <3.0$~TeV. 
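Using the numbers in the tables above, a naive estimate of the per-bin agreement can be sketched with a Poisson tail probability. This illustration ignores the quoted systematic uncertainties, which the actual analysis folds in, so the resulting p-value is only indicative.

```python
import math

def poisson_tail(n_obs: int, mu: float) -> float:
    """P(X >= n_obs) for X ~ Poisson(mu): a naive local p-value,
    neglecting the uncertainty on the expectation mu."""
    p_below = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))
    return 1.0 - p_below

# Highest dielectron mass bin (1200-3000 GeV): 3 events observed
# against a total expectation of 1.86.
p = poisson_tail(3, 1.86)
assert 0.0 < p < 0.5   # a mild excess, far from significant
```

The modest p-value obtained this way is consistent with the statement that the dilepton invariant mass distributions are well described by the Standard Model.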
The agreement is first studied by computing the significance of the difference in each mass bin, with statistical and systematic uncertainties taken into account. The largest positive local significance is about $2\sigma$ in the dielectron channel and about $1\sigma$ in the dimuon channel, and the largest negative local significance is $-2\sigma$ in both channels. The comparison is then performed by means of templates~\cite{Aad:2011xp, CDF:Zpmumu2fb}. The templates provide the expected yield of events ($\bar{n}$) in each \mll\ bin. When neglecting interference, $\bar{n}$ is given by $\bar{n} = n_X(\lambda , {\pmb\nu}) + n_{\dy} ({\pmb\nu}) + \nobg ({\pmb\nu})$, where $n_{X}$ represents the number of events produced by the decay of a new resonance $X$ ($X=\zp ,\zstar , \gstar , \ts, \rhot/\omegat , \Ronetwo$, where \rhot/\omegat\ and \Ronetwo\ are techni-mesons, see below); $n_{\dy}$ and \nobg\ are the numbers of \dy\ (Drell--Yan) and other background events, respectively. The symbol $\lambda$ represents the parameter of interest of the model, and ${\pmb\nu}$ is the set of Gaussian-distributed nuisance parameters incorporating the systematic uncertainties. When including the effects of interference, $\bar{n} = n_{X+\dy}(\lambda , {\pmb\nu}) + \nobg ({\pmb\nu})$, where $n_{X+\dy}$ is the number of signal plus \zgstar\ events and $X$ can be \zkk\ or a Minimal \zp\ boson. Signal templates provide the expected line-shape of the dilepton resonances. The significance of a signal is summarized by a \pval, the probability of observing a signal-like excess at least as extreme as the one observed in data, assuming the null hypothesis. The outcome of the search is ranked using a log-likelihood ratio (LLR), with the likelihood function defined as the product of the Poisson probabilities over all mass bins in the search region, using a \zpssm\ template.
Explicitly: \begin{equation*} {\rm LLR} = -2\ {\rm ln}\ \frac{\mathcal{L} ({\rm data}\ |\ \hat{n}_{\zp}, \hat{M}_{\zp}, \hat{\pmb\nu} ) }{\mathcal{L} ({\rm data}\ |\ (\hat{n}_{\zp} = 0), \hat{\hat{\pmb\nu}} ) } \end{equation*} where $\hat{n}_{\zp}$, $\hat{M}_{\zp}$, $\hat{\pmb\nu}$ and $\hat{\hat{\pmb\nu}}$ are respectively the best-fit values for the \zp~normalization, \zp~mass and nuisance parameters, which maximize the likelihood~$\mathcal{L}$ given the data, assuming in the numerator that a \zp\ signal is present and in the denominator that no signal is present. The LLR is scanned as a function of \zp\ cross section and \mzp\ over the full considered mass range. The observed \pval\ is 36\% for the dielectron sample and 68\% for the dimuon sample. For the combination of both channels, the observed \pval\ is 40\%. \section{Limits on spin-1 SSM and \esix\ \zp\ bosons } Due to mixing between the $U(1)_\chi$ and $U(1)_\psi$ groups, in the \esix\ models the lightest new boson is a linear combination of the \zpchi\ and \zppsi\ bosons, depending on the mixing angle \te6. For six specific values of this mixing angle, the resulting \zp\ boson is named $Z'_{\psi}$, \zpN, \zpeta, \zpI, \zpsq , and $\zp _{\chi}$. The corresponding mixing angle values are displayed in table~\ref{tab:e6angleDef}. Like the SSM, these models prescribe the couplings of the \zp\ boson to the SM fermions. The intrinsic width of the \zp\ boson in the \esix\ models is predicted to be between 0.5\% and 1.3\%~\cite{Dittmar:2003ir,Accomando:2010fz} of its mass, while in the SSM the intrinsic width is predicted to be about $3$\%. \begin{table}[tbp] \caption{ Mixing angle values for the \esix\ models considered.
} \label{tab:e6angleDef} \begin{center} \begin{tabular}{l|cccccc} \hline \hline Model & \zppsi & \zpN & \zpeta & \zpI & \zpsq & \zpchi \\ \hline $\sin\te6$ & 0 & $-1/4$ & $\sqrt{3/8}$ & $\sqrt{5/8}$ & $3\sqrt{6}/8$ & 1 \\ $\cos\te6$ & 1 & $\sqrt{15}/4$ & $\sqrt{5/8}$ & $-\sqrt{3/8}$ & $-\sqrt{10}/8$ & 0 \\ \hline \hline \end{tabular} \end{center} \end{table} Figure~\ref{fig:Zplimit_res} shows the 95\% CL observed and expected exclusion limits on $\xbr (\zp \to \ee)$ and $\xbr (\zp \to \mumu)$ obtained with \zpssm\ templates. It also shows the theoretical cross section times branching fraction for the \zpssm\ and for the lowest and highest \xbr\ of \esix-motivated \zp\ models. The combination of the dielectron and dimuon channels is shown in figure~\ref{fig:combinedlimit_res}. The rise of the \xbr\ limit at high invariant mass is due mainly to the fast fall of the parton luminosity at high momentum transfer which enhances the low-mass tail, causing a distortion in the resonance peak shape. \begin{figure}[!t] \centering \includegraphics[width=0.49\columnwidth]{Logmasslimit_comb_zprimexsec_ee} \includegraphics[width=0.49\columnwidth]{Logmasslimit_comb_zprimexsec_mm} \caption{Expected and observed 95\% CL limits on \xbr\ and expected \xbr\ for \zpssm\ production and the two \esix-motivated \zp\ models with lowest and highest \xbr\ for the dielectron (left), and the dimuon (right) selections. The dashed lines around the \zpssm\ theory curve represent the theoretical uncertainty, which is similar for the other theory curves. } \label{fig:Zplimit_res} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.7\columnwidth]{Logmasslimit_comb_zprimexsec_vsMass_combo.eps} \caption{Expected and observed 95\% CL limits on \xbr\ and expected \xbr\ for \zpssm\ production and the two \esix-motivated \zp\ models with lowest and highest \xbr\ for the combination of the dielectron and dimuon channels. 
The dashed lines around the \zpssm\ theory curve represent the theoretical uncertainty, which is similar for the other theory curves. } \label{fig:combinedlimit_res} \end{figure} The 95\% CL \xbr\ limit is used to set mass limits for each of the models considered. The limits obtained for the \zpssm\ are displayed in table~\ref{tab:limits}. The combined observed (expected) mass limit for the \zpssm\ is 2.22\ (2.25)~TeV. The combined mass limits on \esix-motivated \zp\ are given in table~\ref{e6massLimits}. \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of the \zpssm\ boson for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\zpssm \to \ee$ & $\zpssm \to \mumu$ & $\zpssm \to \ll$ \\ \hline Observed limit [TeV] & 2.08 & 1.99 & 2.22 \\ Expected limit [TeV] & 2.13 & 2.00 & 2.25 \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the masses of \esix-motivated \zp\ bosons. Both lepton channels are combined. } \label{e6massLimits} \begin{center} \begin{tabular}{l|cccccc} \hline \hline Model & \zppsi & \zpN & \zpeta & \zpI & \zpsq & \zpchi \\ \hline Observed limit [TeV]&1.79 &1.79 &1.87 &1.86 &1.91 &1.97 \\ Expected limit [TeV]&1.87&1.87&1.92&1.91&1.95&2.00 \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Limits on spin-1 \zstar\ bosons } A model with quark-lepton universality is adopted~\cite{wzstar_refmod,wzstar_refmod2} to fix the coupling strength of the \zstar\ boson to fermions. The gauge coupling is chosen to be the same as in the SM SU(2) group, and the scale of the new physics is proportional to the mass of the new heavy bosons. The parameters of the model are fixed by requiring that the total and partial decay widths of \wstar, the charged partner of \zstar, be the same as those of the \wpssm\ boson with the same mass. 
The width of the \zstar\ is then 3.4\% of its mass. As a result of the tensor form of the coupling, the \zstar\ does not interfere with \dy, and the angular distribution of its decay to dileptons is different from that of a \zp\ boson. Figure~\ref{fig:combinedlimit_Zs} shows the 95\% CL observed and expected exclusion limits on $\xbr (\zstar \to \ll)$ as well as the cross section times branching fraction expected from theory. The corresponding 95\% CL limits on the mass of the \zstar\ boson are shown in table~\ref{tab:limits_Zs}. \begin{figure}[tbp] \centering \includegraphics[width=0.7\textwidth]{ZStar_limit_comb.eps} \caption{Expected and observed 95\% CL limits on \xbr\ and expected \xbr\ for \zstar\ boson production for the combination of dielectron and dimuon channels. The dashed lines around the \zstar\ theory curve represent the theoretical uncertainty. } \label{fig:combinedlimit_Zs} \end{figure} \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of the \zstar\ boson for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits_Zs} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\zstar \to \ee$ & $\zstar \to \mumu$ & $\zstar \to \ll$ \\ \hline Observed limit [TeV] & 2.10 & 1.97 & 2.20 \\ Expected limit [TeV] & 2.13 & 1.99 & 2.22 \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Limits on spin-2 Randall-Sundrum gravitons} The phenomenology of the RS model used in this work can be described in terms of the mass of the graviton and the ratio \kovermb . The expected intrinsic width of the \gstar\ is proportional to $(\kovermb)^2$, and is 1.4\% for $\kovermb =0.1$. Limits at the 95\% CL on $\xbr (\gstar \to \ell^+\ell^-)$ are computed assuming two values of \kovermb: 0.1 and 0.2. These limits are then compared to the theoretical cross section times branching fraction assuming eight different values of \kovermb\ between 0.01 and 0.2. 
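The quoted scaling of the graviton width with \kovermb\ can be written out numerically. The sketch below encodes only the stated proportionality, normalised to the 1.4\% value given in the text; the function name is an illustrative assumption.

```python
# Relative intrinsic width of the RS graviton: proportional to (k/Mbar)^2
# and equal to 1.4% at k/Mbar = 0.1, as stated in the text.

def gstar_width_fraction(k_over_mbar: float) -> float:
    return 0.014 * (k_over_mbar / 0.1) ** 2

assert gstar_width_fraction(0.1) == 0.014
assert abs(gstar_width_fraction(0.2) - 0.056) < 1e-9
# At k/Mbar = 0.01 the resonance is extremely narrow:
assert abs(gstar_width_fraction(0.01) - 0.00014) < 1e-9
```

This quadratic growth is why a single set of \xbr\ limits computed at $\kovermb=0.1$ can serve all narrower hypotheses, while the broader $\kovermb=0.2$ templates are needed above it.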
The \xbr\ limits obtained with $\kovermb=0.1$ are used for \kovermb\ hypotheses below or equal to 0.1, while those with $\kovermb=0.2$ are used for \kovermb\ hypotheses larger than 0.1 and below or equal to 0.2. Limits at the 95\% CL on the graviton mass are derived from this comparison for each \kovermb\ hypothesis and are shown in table~\ref{tab:limits_Gs} for $\kovermb=0.1$, and in table~\ref{tab:combinedLimitsG} and figure~\ref{fig:gstar_2Dlim_comb} for the combined dilepton channel for all values of \kovermb. \begin{table}[tbp] \caption{The observed and expected 95\% CL lower limits on the mass of the \gstar\ with a coupling of \kovermb $=0.1$ for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits_Gs} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\gstar \to \ee$ & $\gstar \to \mumu$ & $\gstar \to \ll$ \\ \hline Observed limit [TeV] & 2.03 & 1.92 & 2.16 \\ Expected limit [TeV] & 2.04 & 1.93 & 2.17 \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=0.7\textwidth]{Graviton2DLimitsComb.eps} \caption{Exclusion regions in the plane of \kovermb\ versus graviton mass for the combination of dielectron and dimuon channels. The region above the curve is excluded at 95\% CL. } \label{fig:gstar_2Dlim_comb} \end{figure} \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of the \gstar\ with varying coupling \kovermb. Both lepton channels are combined. 
} \label{tab:combinedLimitsG} \begin{center} \begin{tabular}{l|cccccccc} \hline \hline \kovermb & 0.01 & 0.03 & 0.05 & 0.1 & 0.12 & 0.14 & 0.17 & 0.2\\ \hline Observed limit [TeV] & 0.92 & 1.49 & 1.72 & \LimitCombinedG & 2.23 & 2.32 & 2.42 & 2.51 \\ Expected limit [TeV] & 1.02 & 1.53 & 1.81 & \LimitCombinedExpectedG & 2.25 & 2.33 & 2.44 & 2.53 \\ \hline \hline \end{tabular} \end{center} \end{table} \clearpage \section{Limits on Torsion models} The Torsion heavy state (TS) can be treated as a fundamental propagating field characterized by its mass, \mts , and the couplings between TS and fermions. These couplings are assumed to be universal at the Planck scale and remain so at the TeV scale for all fermions except the top quark~\cite{Belyaev:2007fn}. Therefore the phenomenology of Torsion decays to dilepton states can be described in terms of two parameters: the TS mass and one coupling (\etats ). Since \etats\ can {\em a priori} take any value between 0 and 1, the intrinsic width could be very large. The interference effects with \dy\ are negligible. Limits are computed on $\xbr (\ts\to\ll)$ for five values of \etats\ in the range 0.1--0.5. Limits on \xbr\ are then translated into limits on \mts\ in the same way as above for the RS graviton, by comparing them to the theoretical \xbr\ as a function of \mts\ for each value of \etats . Additionally, the \xbr\ limits obtained for $\etats=0.1$ are used to set mass limits for $\etats=0.05$, which is conservative because the TS width is smaller for $\etats=0.05$. The resulting exclusion region in the (\mts, \etats) plane is displayed in figure~\ref{fig:TS_2D_comb} and table~\ref{tab:combinedLimitsTS} for the combined dielectron and dimuon channels. The limits on \mts\ obtained in each channel for $\etats=0.2$ are shown in table~\ref{tab:limits_Ts}. 
\begin{figure}[tbp] \centering \includegraphics[width=0.7\columnwidth]{torsion_2d_plot_comb.eps} \caption{Exclusion regions in the plane of \etats\ versus Torsion mass for the combination of dielectron and dimuon channels. The region above the curve is excluded at 95\% CL. } \label{fig:TS_2D_comb} \end{figure} \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of Torsion heavy states with varying coupling \etats. Both lepton channels are combined. } \label{tab:combinedLimitsTS} \begin{center} \begin{tabular}{l|cccccc} \hline \hline \etats & 0.05 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline Observed limit [TeV] & 1.52 &1.94 &2.29 &2.50 &2.69 &2.91\\ Expected limit [TeV] & 1.58 &1.96&2.31&2.55&2.77&3.02\\ \hline \hline \end{tabular} \end{center} \end{table} \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of Torsion heavy states with a coupling of $\etats=0.2$ for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits_Ts} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\ts \to \ee$ & $\ts \to \mumu$ & $\ts \to \ll$ \\ \hline Observed limit [TeV] & 2.15 & 2.07 & 2.29 \\ Expected limit [TeV] & 2.20 & 2.08 & 2.31 \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Limits on Technicolor} \subsection*{LSTC model} The Low-scale Technicolor (LSTC) model~\cite{TC2,TC6,Eichten:2012br} postulates the existence of vector (\rhot , \omegat ) and axial (\at ) techni-mesons, in addition to light techni-pions (\pit ). Due to techni-isospin symmetry, \rhot\ and \omegat\ are nearly degenerate in mass. Therefore this analysis searches for a combination of \rhot\ and \omegat , with \omegat\ being the dominant component since its branching fraction to dileptons is approximately one order of magnitude larger than that of the \rhot . 
In this work, the LSTC parameters are chosen to be the same as in ref.~\cite{Eichten:2012br} (in particular, the LSTC parameter $\sin\chi = 1/3$) and the mass of the \at\ state is assumed to be 10\% higher than that of \rhot. Limits are computed on $\xbr$ for the decay of the techni-mesons to dilepton final states. When building the signal templates, it is assumed that the mass splitting is $M_{\rhot}-M_{\pit}=M_W$. Negative interference contributions are neglected. The intrinsic widths of the \rhot , \omegat\ and \at\ resonances are much smaller than the experimental resolution. The resulting limits on the \rhot/\omegat\ mass are displayed in table~\ref{tab:limits_LSTC}. \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the mass of the \rhot/\omegat\ in the $M_{\rhot}- M_{\pit}=M_W$ hypothesis for the \ee\ and \mumu\ channels separately and for their combination. } \label{tab:limits_LSTC} \begin{center} \begin{tabular}{l|ccc} \hline \hline & $\rhot/\omegat \to \ee$ & $\rhot/\omegat \to \mumu$ & $\rhot/\omegat \to \ll$ \\ \hline Observed limit [TeV] & 0.85 & 0.70 & 0.85 \\ Expected limit [TeV] & 0.85 & 0.71 & 0.89 \\ \hline \hline \end{tabular} \end{center} \end{table} The \xbr\ limits are then translated into exclusion regions in the $(M_{\rhot /\omegat} , M_{\pit})$ plane, shown in figure~\ref{fig:LSTC_2D}. The notation $\rhot /\omegat$ indicates the combination of the two resonances. The mass splitting between $\rhot$ and $\pit$ determines whether decay modes such as $\rhot\to W\pit$ or multi-$\pit$ are allowed kinematically. Therefore the choice of the value of the mass of \pit\ has an impact on the ratio between the \at\ and \rhot\ cross sections. Another foundational assumption of the LSTC model is that the walking TC gauge coupling causes an enhancement of $M_{\pit}$ relative to $M_{\rhot}$ and the other vector meson masses. 
This tends to close off the $\rhot \to \pit \pit$ decay channel and, even more strongly, closes off the \omegat\ and $\at \to 3 \pit$ channels~\cite{Lane:1989ej}. If $M_{\omegat}> 3 M_{\pit}$, the $\omegat \to \pit \pit \pit$ channel opens up and quickly becomes the dominant decay mode of \omegat. Therefore the dilepton branching fractions become substantially smaller and there is no sensitivity in the $M_{\pit}<M_{\rhot/\omegat}/3$ region in the dilepton channel. \begin{figure}[tbp] \centering \includegraphics[width=0.7\columnwidth]{RhovsPi_ExclusionContour.eps} \caption{ The 95\% CL excluded region (in red) in the plane \pit\ mass as a function of the \rhot/\omegat\ mass, assuming $M_{\at}=1.1\times M_{\rhot /\omegat}$, for the combination of dielectron and dimuon channels. The dotted line corresponds to $M_{\rhot /\omegat}-M_{\pit}=M_W$. The black dashed line shows the expected limit, with the green dashed lines showing the $\pm 1 \sigma$ bands. The blue hashed region in which $M_{\pit} > M_{\rhot /\omegat}$ is excluded by theory. This search is insensitive in the region below the purple dashed-dotted line ($M_{\pit}<M_{\rhot/\omegat}/3$). } \label{fig:LSTC_2D} \end{figure} \subsection*{MWT model} The Minimal Walking Technicolor (MWT)~\cite{TC3,TC4,TC5} model can be characterized by the following parameters: \begin{myitemize} \item bare axial and vector masses: $M_A$ and $M_V$; \item \gtilde, the strength of the spin-1 resonance interaction; \item $M_H$, the Higgs boson mass; \item $s$, the coupling of the Higgs boson to composite spin-1 states; \item $S$, the $S$-parameter obtained using the zeroth Weinberg Sum Rule~\cite{Appelquist:1998xf,Belyaev:2008yj}. \end{myitemize} This model predicts only two resonances, \Rone\ and \Rtwo. $M_{\Rone}$ is lower than $M_{\Rtwo}$ and generally very close to $M_A$. In contrast to LSTC, \Rone\ and \Rtwo\ are neither degenerate nor very narrow. 
In this work, three free parameters have been set to $M_H=200$~GeV, $s=0$, and $S=0.3$, following the recommendation from ref.~\cite{Andersen:2011nk}. The mass of the lightest resonance, $M_{\Rone}$, is then scanned in steps of 100~GeV for various values of \gtilde . For each choice of \gtilde\ and $M_{\Rone}$, the values of $M_{\Rtwo}$, $M_A$ and $M_V$ are uniquely determined. Limits on $\xbr (\Ronetwo\to \ll)$ are first set as a function of $M_{\Rone }$ assuming $\gtilde=2, 3, 4, 5, 6$, where the notation \Ronetwo\ indicates that both resonances are taken into account in the spectrum. They are then translated into a 95\% CL exclusion area in the $(M_A,\gtilde)$ plane, as shown in figure~\ref{fig:MWT_2D} and table~\ref{tab:combinedLimitsMA}. The limits from the Tevatron, as well as the theoretical limits, including the requirement to stay in the walking regime, are described in detail in ref.~\cite{Belyaev:2008yj}. Note that the edge of the excluded area varies only very weakly as a function of $s$ and $M_H$, so a Higgs boson mass of $\approx 125$~GeV would not change the results significantly. A theoretical re-interpretation of the CMS results from \wp\ boson searches~\cite{Chatrchyan:2011dx}, in terms of the parameters $M_A$ and \gtilde, is described in ref.~\cite{Andersen:2011nk}. \begin{table}[tbp] \caption{ The observed and expected 95\% CL lower limits on the $M_A$ parameter with varying coupling \gtilde. Both lepton channels are combined. 
} \label{tab:combinedLimitsMA} \begin{center} \begin{tabular}{l|ccccc} \hline \hline \gtilde & 6 & 5 & 4 & 3 & 2 \\ \hline Observed limit [GeV] & 359 & 485 & 768 &1175 &1566 \\ Expected limit [GeV] & 352 & 516 & 742 &1233&1605\\ \hline \hline \end{tabular} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics[width=0.6\columnwidth]{Ma_g_plane_area_range.eps} \caption{ Bounds in the ($M_A$, $\tilde{g}$) plane of the MWT parameter space: (i) The electroweak precision measurements exclude the dark area in the bottom left corner. (ii) The requirement to stay in the walking regime excludes the hatched area in the right corner. (iii) The red area (black dashed line) shows the observed (expected) exclusion at 95\% CL in the dilepton channel. The green dashed lines show the $\pm 1 \sigma$ bands of the expected exclusion limit. } \label{fig:MWT_2D} \end{figure} \section{Simulated samples} The \zp, \gstar , and LSTC signals, as well as the $Z/\gamma^*$ process, are generated with \pythia\ 6.421~\cite{Sjostrand:2006za} using MRST2007 LO**~\cite{mrst,Sherstnev:2008dm} parton distribution functions (PDFs). The Minimal \zp\ and \zkk\ signals are obtained by reweighting the large sample of $Z/\gamma^*$ events from \pythia\ with the appropriate ratio of differential cross sections~\cite{Salvioni:2009mt,Salvioni:2009jp,pythia8}. \zstar\ and Torsion signals are generated with \comphep~\cite{comphep}, while MadGraph~\cite{MadGraph4} is used for MWT signals; CTEQ6L1~\cite{Pumplin:2002vw} PDFs are used in both cases. The diboson processes are generated with \herwig~6.510~\cite{herwig} using MRST2007 LO** PDFs. The \ttbar\ background is generated with \mcatnlo~4.01~\cite{mcatnlo} using CTEQ66~\cite{Nadolsky:2008} PDFs. For \ttbar\ events, \jimmy~4.31~\cite{jimmy} is used to describe multiple parton interactions and \herwig\ to describe the remaining underlying event and parton showers. Final-state photon radiation is handled by \photos~\cite{fsr_ref}. 
The generated samples are processed through a full ATLAS detector simulation~\cite{atlas:sim} based on GEANT4~\cite{geant}. \section{Expected signals and backgrounds} \label{sec:expected_s_and_b} The $Z/\gamma^*$ cross section is calculated at next-to-next-to-leading order (NNLO) in QCD using PHOZPR~\cite{Hamberg:1990np} with MSTW2008 NNLO PDFs~\cite{mstw}. The ratio of this cross section to the leading-order cross section is used to determine a mass-dependent QCD K-factor, which is then applied to the results of the leading-order simulation. The same QCD K-factor is applied to the \zp , \zkk , Torsion, and LSTC signals. Its value is 0.91 at 2~TeV and slowly increases up to 1.15 at 250~GeV. A different K-factor is applied to the \gstar\ signal, with values that vary between 1.6 and 1.8 depending on the graviton mass and \kovermb~\cite{Matthews:2009np}, and with a value of 1.75 above 750~GeV, consistent with ref.~\cite{ATLAS_grav_diphoton}. Finally, no QCD K-factor is applied to the leading-order \zstar\ cross section since the \zstar\ model uses an effective Lagrangian with a different Lorentz structure. The \zpssm, \zp(\esix), Torsion states, and techni-mesons interfere minimally with the \dy\ process, and the \zstar\ and \gstar\ do not interfere at all. The effect of interference on the resonance line-shape is therefore neglected for all these states. On the other hand, the interference of the \zkk\ boson with \dy\ is very strong and cannot be neglected \cite{GG,Bella:2010sc}. The interference effect is also taken into account in the \MM\ framework. Higher-order electroweak corrections (beyond the photon radiation included in the simulation) are calculated using \horace~\cite{horace,CarloniCalame:2007cd}, yielding an electroweak K-factor ($K_{{\rm EW}}$) due to virtual heavy gauge boson loops. Its value at 2~TeV is 0.92 in the dielectron channel and 0.93 in the dimuon channel, and slowly increases up to 1.05 at 250~GeV. 
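The application of the mass-dependent QCD K-factor can be sketched as follows. Only the two values quoted above are used as anchor points, with log-linear interpolation in mass assumed purely for illustration; the analysis itself uses the full NNLO/LO cross-section ratio as a function of mass, and all names below are hypothetical.

```python
import numpy as np

# Anchor points quoted in the text; the true K-factor curve comes from the
# PHOZPR/MSTW NNLO-to-LO ratio, so this interpolation is an assumption.
_masses = np.array([250.0, 2000.0])   # dilepton mass [GeV]
_k_vals = np.array([1.15, 0.91])      # quoted QCD K-factors

def qcd_k_factor(m_ll):
    """Mass-dependent QCD K-factor, interpolated log-linearly in mass."""
    return np.interp(np.log(m_ll), np.log(_masses), _k_vals)

# Reweight leading-order event weights by their generated dilepton mass:
event_masses = np.array([300.0, 1000.0, 1800.0])
lo_weights = np.ones_like(event_masses)
nnlo_weights = lo_weights * qcd_k_factor(event_masses)
```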
The electroweak K-factor is applied only to the \dy\ background and not to the expected signals, with the exception of Technicolor and Kaluza-Klein states. In the case of Technicolor, $K_{{\rm EW}}$ is applied because production proceeds via the \dy\ process. Since interference is an important feature of the Kaluza-Klein boson model, the electroweak K-factor is applied to the full amplitude ($\cal{M}$) of the process, including the \zkk\ amplitude: $\left|\cal{M}_{\textit{\dy}}+\cal{M}_{\textit{\zkk}}\right|^2 \longrightarrow K_{{\rm EW}}\times\left|\cal{M}_{\textit{\dy}}+\cal{M}_{\textit{\zkk}}\right|^2$. This approximation is conservative. Although interference is taken into account for Minimal \zp\ bosons, for consistency with the treatment of the other \zp\ models the electroweak K-factor is applied only to the pure \zgstar\ part of the amplitude: $\left|\cal{M}_{\textit{\dy}}+\cal{M}_{\textit{\zp}}\right|^2 \longrightarrow\left|\cal{M}_{\textit{\dy}}+\cal{M}_{\textit{\zp}}\right|^2 + \left(K_{{\rm EW}}-1\right)\times\left|\cal{M}_{\textit{\dy}}\right|^2$. For the other backgrounds, the diboson cross sections are calculated to next-to-leading order (NLO) using {\sc mcfm}~\cite{Campbell:1999mcfm} with an uncertainty of 5\%, and the \ttbar\ cross section is predicted at approximate-NNLO, with an uncertainty of $+7.0/-9.6$\%~\cite{Moch:2008qy,Langenfeld:2009tc}. At very high masses, the statistical significance of the diboson and \ttbar\ simulated samples becomes insufficient. Therefore their invariant mass distribution is fitted to the functional form $y(x)= p_{1}\cdot x^{p_{2}+p_{3}\log{x}}$ which is then used to extrapolate the \ttbar\ background above 0.8~TeV and the diboson background above 1.5~TeV.
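Since the extrapolation function $y(x)= p_{1}\cdot x^{p_{2}+p_{3}\log{x}}$ is a quadratic in $\log x$--$\log y$ space, fitting it reduces to an ordinary polynomial fit. The sketch below illustrates this with synthetic placeholder points, not the analysis samples:

```python
import numpy as np

# Synthetic stand-in for a smoothly falling mass spectrum (assumed values):
p_true = (5.0e3, -2.0, -0.1)                      # p1, p2, p3
x = np.linspace(200.0, 800.0, 50)                 # mass points [GeV]
y = p_true[0] * x**(p_true[1] + p_true[2] * np.log(x))

# log y = log p1 + p2*log x + p3*(log x)^2, so fit a quadratic in log space:
c2, c1, c0 = np.polyfit(np.log(x), np.log(y), 2)
p1, p2, p3 = np.exp(c0), c1, c2

# Extrapolate the fitted form above the fitted range, e.g. to 1.5 TeV:
y_extrap = p1 * 1500.0**(p2 + p3 * np.log(1500.0))
```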
\section{Introduction} \label{introduction} Understanding the mechanism by which protoplanetary discs are dispersed is important, in particular, because it constrains the timescale within which planets can form \citep{2001ApJ...553L.153H, 2003ApJ...598L..55R}. Based on the discovery of discs with inner holes \citep{2001ApJ...560..957D, 2002ApJ...568.1008C}, it is now generally thought that disc dispersal happens from the inside out \citep[e.g.][]{2013MNRAS.428.3327K}. Such discs with inner holes have thus been labelled ``transition discs''. Originally observed as a deficiency in the near infrared component of the disc spectral energy distribution (which can be explained by a dustless inner hole, still populated by gas) inner holes in the dust have subsequently been directly imaged, verifying their existence \citep[e.g.][]{2011ApJ...732...42A}. Inner holes in gas have also been observed for some transition discs \citep[e.g.][]{2014A&A...562A..26B, 2015A&A...579A.106V}. Multiple explanations for the appearance of inner holes have been proposed; however the most promising are either clearing by a planet (or planets) or photoevaporation \citep{2011ARA&A..49...67W}. An enduring puzzle for understanding the clearing of protoplanetary discs is the absence of a significant population of older T Tauri stars which have ceased accreting, lack signatures of an inner disc but retain residual gas and dust at radii beyond 10\,AU \citep{2012MNRAS.426L..96O}. Secular disc evolution models that include both accretion onto the star and photoevaporation tend to predict that, once photoevaporation halts the accretion on to the star, a few Jupiter masses of gas should be left at radii beyond 10\,AU and that this should survive for of order half a Myr thereafter before ultimate photoevaporation \citep{2010MNRAS.401.1415O, 2011MNRAS.412...13O, 2012MNRAS.422.1880O}. 
This prediction runs counter both to the aforementioned lack of non-accreting systems with large holes in the {\it dust} and also to the low upper limits on {\it gas} mass ($\sim 0.1$ Jupiter masses) detected in non-accreting (Weak Line) T Tauri stars \citep{2013ApJ...762..100C, 2015A&A...583A..66H}. Apparently then, once accretion ceases, the reservoir of gas at large radii must either be small or else rapidly cleared by an unidentified mechanism. Throughout this paper we will refer to the statistics of non-accreting transition discs as providing an observational benchmark for testing models of disc clearing. Predictions for the sequence of outer disc clearing by photoevaporation have been developed by more than a decade of radiation hydrodynamical modeling involving a range of high energy radiation sources from the central star, though full radiation hydrodynamical modeling is still not available in the case of the FUV (far-ultraviolet, i.e. non-ionising ultraviolet continuum) owing to the complexity of coupling the hydrodynamics to the detailed thermochemical models required in this regime \citep{2009ApJ...690.1539G, 2015ApJ...804...29G}. A number of authors \citep{1994ApJ...428..654H, 2001MNRAS.328..485C, 2006MNRAS.369..216A, 2006MNRAS.369..229A} have studied the effect of photoevaporation by Lyman continuum photons on discs, calculating mass loss profiles and integrated mass loss rates. In such models the properties of the mass flow at the base of the wind are set by imposing ionisation equilibrium, taking into account the role of the diffuse field of recombination photons, emitted from the static atmosphere of the inner disc, in irradiating the disc at larger radius. 
In polychromatic Monte Carlo radiative transfer models using the \textsc{mocassin} code, \cite{2009ApJ...699.1639E} found that X--rays {($100$\,eV $< h\nu < 1$\,keV)} are much more effective at penetrating large columns into the disc than the extreme ultraviolet (EUV, $10$\,eV $< h\nu < 100$\,eV) and hence will govern the mass-loss properties of discs unless there are geometrical effects which preferentially obscure the X-ray emission. {\cite{2010MNRAS.401.1415O, 2011MNRAS.412...13O, 2012MNRAS.422.1880O}} used \textsc{mocassin} to develop a temperature prescription as a function of the ionisation parameter for all gas optically thin to the soft ($<1$~keV) X--rays (defined as that within the column of $10^{22}$\,particles\,cm$^{-2}$ from the star). They applied this prescription to new models of disc photoevaporation for different star and disc masses. \cite{2012MNRAS.422.1880O} also unexpectedly found that for a particularly low mass disc, dispersal was very rapid (on timescales of order hundreds of years) by a mechanism that they termed ``thermal sweeping''. The key point here is that very rapid dispersal of gas in the outer disc, once the surface density has fallen below a given threshold, offers the prospect of being able to explain the lack of significant gas reservoirs around non-accreting stars. {\cite{2012MNRAS.422.1880O, 2013MNRAS.436.1430O}} proposed analytic expressions for the threshold for thermal sweeping which involved equating the radial scale length of X--ray heated gas ($\Delta$) with the vertical scale height ($H$). The resulting surface density thresholds were used both in these papers and by \cite{2015MNRAS.454.2173R} in order to explore how such sweeping affects the statistics of gas/dust detection around non-accreting T Tauri stars. 
Nevertheless, it needs to be stressed that these analytic expressions were based on a simple criterion for thermal sweeping ($\Delta/H=1$) that was inferred from only two, two-dimensional, radiation hydrodynamical simulations {(in the limit of low stellar mass and high X--ray luminosity)} and therefore one should be cautious about extrapolating these conditions to different physical regimes. Accordingly, in this paper, we perform a suite of radiation hydrodynamical simulations which explore the conditions required for rapid radiative disc dispersal, in particular testing the suggestion of Owen et al. (2012) that rapid clearing is triggered once $\Delta/H$ rises to a value of around unity. We find that although rapid clearing is indeed associated with large $\Delta/H$ values, stable mass loss can still ensue when $\Delta/H$ is greater than unity. Furthermore, we find that $\Delta/H$ is not always sensitive to the disc surface density. We explore the reason for this difference compared to the work by Owen et al. (2013), develop a new criterion for rapid disc dispersal and discuss the consequences of the new criterion. The structure of the paper is as follows. In Section 2 we review the rationale behind the surface density criteria previously proposed by Owen et al. (2012, 2013). Sections 3 and 4 contain the details and testing of our numerical implementation. In Section 5 we present our main simulation results, show that the previous thermal sweeping theories are inadequate and introduce and test a new criterion for rapid disc clearing. In Section 6 we discuss the consequences of our new thermal sweeping criterion on populations of viscous discs undergoing internal photoevaporation. Our summary and main conclusions are presented in Section 7. \section{The prior theory of thermal sweeping} Owen et al. 
(2012) proposed a criterion for thermal sweeping involving equality between the radial pressure scale length in the X--ray heated gas ($\Delta=\left(\frac{\textrm{d}\log P}{\textrm{d}R}\right)^{-1}$) and the local vertical pressure scale length ($H=\left(\frac{\textrm{d}\log P}{\textrm{d}z}\right)^{-1}\sim c_s/\Omega$). Assuming that X--rays penetrate through to the surface density peak close to the disc inner edge $\Sigma_{\textrm{max}}$ and that the X--ray heated column at the disc inner edge is $10^{22}$\,cm$^{-2}$, imposing pressure balance at the X--ray heated interface gives a critical surface density for thermal sweeping of \begin{equation} \Sigma_{\textrm{TS}} = 0.43\,\textrm{g}\,\textrm{cm}^{-2}\left(\frac{\mu}{2.35}\right)\left(\frac{T_{\textrm{X}}}{400\,\textrm{K}}\right)^{1/2}\left(\frac{T_{\textrm{D}}}{20\,\textrm{K}}\right)^{-1/2} \label{oldequn} \end{equation} where $\mu$, $T_{\textrm{X}}$ and $T_{\textrm{D}}$ are the mean molecular weight, the X--ray heated gas temperature and the dust temperature, respectively. \cite{2013MNRAS.436.1430O} attempted a more rigorous analysis of the criterion for the onset of thermal sweeping, specifically addressing two assumptions used in their original approach: \\ \noindent i) Relaxing the assumption that the column of X--ray heated gas to the star is always $10^{22}$\,cm$^{-2}$ (we refer to this as being ``column limited'') and allowing instead for the possibility that the density is sufficiently high that the X--rays cannot heat the gas above the dust temperature. We refer to this latter scenario as being ``density limited''. \\ \noindent ii) Relaxing the assumption that the dust to X--ray heated transition occurs at the peak surface density of the disc. Instead the transition from X--ray heated to dust heated gas is located self-consistently at some radius interior to that of peak surface density. \\ In recognition of the fact that the flow near the disc rim is nearly radial, \cite{2013MNRAS.436.1430O} solved for 1D steady state flows with mass loss rates set by conditions at the X--ray sonic surface. 
Such flows are highly subsonic in the vicinity of the disc rim and thus the structure in this region (which is important for assessing the onset of thermal sweeping in 2D) is close to one of hydrostatic equilibrium. This allowed \cite{2013MNRAS.436.1430O} to propose analytic criteria for the onset of thermal sweeping (i.e. assuming that this occurs when $\Delta = H$) in both the density limited and column limited regimes. They found that \citep[in contrast to the hypothesis in][]{2012MNRAS.422.1880O} the X--ray heated interface is generally set by the density limited criterion and that in this case the critical peak surface density {\it increases} with inner hole radius and X--ray luminosity. Motivated by these findings they developed an ``improved'' criterion for thermal sweeping which we give below (correcting typos in Owen et al. 2013): \begin{gather} \nonumber \Sigma_{\textrm{TS}} = 0.033\,\textrm{g\,cm}^{-2}\left(\frac{L_{\textrm{X}}}{10^{30}\,\textrm{erg\,s}^{-1}}\right)\left(\frac{T_{\textrm{1AU}}}{100\,\textrm{K}}\right)^{-1/2} \\ \nonumber \times \left(\frac{M_\ast}{M_{\odot}}\right)^{-1/2}\left(\frac{R_{\textrm{max}}}{\textrm{AU}}\right)^{-1/4} \\ \times \exp\left[\frac{1}{2}\left(\frac{R_{\textrm{max}}}{\textrm{AU}}\right)^{1/2} \left(\frac{T_{\textrm{1AU}}}{100\,\textrm{K}}\right)^{-1} \right] \label{newsig2} \end{gather} where $L_{\textrm{X}}$, $T_{\textrm{1AU}}$ and $R_{\textrm{max}}$ are the X--ray luminosity of the star, the dust temperature at 1\,AU and the radius of maximum surface density (which is assumed to be coincident with the inner hole radius). The exponential term in the above expression causes the critical surface density to increase with radius (see the blue line in Figure \ref{compare} of this paper); this would imply an important role for thermal sweeping at large radius even for models with relatively high surface density normalisation. 
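For orientation, the two thresholds can be evaluated directly. The short sketch below (with example parameters corresponding to a $0.1$\,M$_{\odot}$ star with $L_{\textrm{X}}=2\times10^{30}$\,erg\,s$^{-1}$, $T_{\textrm{1AU}}=50$\,K and a 25\,AU hole, an illustrative choice) shows how strongly the exponential term inflates the predicted threshold at tens of AU:

```python
import numpy as np

def sigma_ts_2012(mu=2.35, T_X=400.0, T_D=20.0):
    """Owen et al. (2012) threshold, equation (1) [g cm^-2]."""
    return 0.43 * (mu / 2.35) * (T_X / 400.0)**0.5 * (T_D / 20.0)**-0.5

def sigma_ts_2013(L_X, T_1AU, M_star, R_max):
    """Owen et al. (2013) threshold, equation (2) [g cm^-2].
    L_X in erg/s, T_1AU in K, M_star in M_sun, R_max in AU."""
    return (0.033 * (L_X / 1e30) * (T_1AU / 100.0)**-0.5
            * M_star**-0.5 * R_max**-0.25
            * np.exp(0.5 * R_max**0.5 * (T_1AU / 100.0)**-1))

sig_2012 = sigma_ts_2012()                       # 0.43 g cm^-2
sig_2013 = sigma_ts_2013(2e30, 50.0, 0.1, 25.0)  # ~20 g cm^-2
```

At a 25\,AU hole with $T_{\textrm{1AU}}=50$\,K the exponential factor alone contributes $e^{5}\approx 148$, which is why the 2013 criterion predicts rapid clearing even for discs of substantial surface density at large radius.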
When this criterion was combined with plausible models for disc secular evolution it was predicted that thermal sweeping should limit maximum hole sizes in X--ray luminous sources to around 25--40\,AU. \section{Numerical Method} \label{introBla} We perform radiation hydrodynamic (RHD) simulations in this paper using a modified version of the RHD code \textsc{torus} \citep{2000MNRAS.315..722H, 2012MNRAS.420..562H, 2015MNRAS.448.3156H, 2015MNRAS.453.2277H}. \textsc{torus} is primarily a Monte Carlo radiation transport code, though no Monte Carlo radiative transfer is used in this paper. Rather we use the same simplified EUV/X--ray heating prescription (based on the ionisation parameter in optically thin regions; see section~\ref{ionparam} below) as in Owen et al. (2012), in part to remain consistent with their work but also to reduce the computational expense. \subsection{Hydrodynamics and gravity} \textsc{torus} uses a flux conserving, finite difference hydrodynamics algorithm. It is total variation diminishing (TVD), includes a Rhie-Chow interpolation scheme to prevent odd--even decoupling \citep{1983AIAAJ..21.1525R} and, in this paper, we use the van Leer flux limiter \citep{vanleer}. The disc's self-gravity is negligible and so we simply assume a point source potential determined by the star. Testing of the hydrodynamics algorithm in \textsc{torus} is given in \cite{2012MNRAS.420..562H}. \subsection{Ionisation parameter heating} \label{ionparam} We use {an extension of} the scheme implemented by \cite{2012MNRAS.422.1880O}, where the temperature in any cell optically thin to the X--rays is prescribed as a function of the ionisation parameter \begin{equation} \xi = \frac{L_{\textrm{X}}}{n r^2} \label{ionparamEq} \end{equation} where $L_{\textrm{X}}$, $n$ and $r$ are the X--ray luminosity, the local number density and the distance from the star at which the ionisation parameter is being evaluated. 
The temperature function $f(\xi)$ was determined by comparison with the Monte Carlo photoionisation code \textsc{mocassin} \citep{2003MNRAS.340.1136E, 2008ApJS..175..534E} and is given by \begin{gather} T_{\textrm{hot}} = \frac{10^{a_0\log_{10}({\xi})+b_0\log_{10}({\xi})^{-2}}}{1 + c_0\log_{10}({\xi})^{-1} + d_0\log_{10}({\xi})^{-2} + e_0}\\ T_{\textrm{cold}} = \max(10^{f_0\log_{10}({\xi}) + g_0}, T_{\rm{dust}})\\ f(\xi) = \min(T_{\textrm{hot}}, T_{\textrm{cold}}) \label{fxi} \end{gather} where the numerical constants (subscript 0) are included in Table \ref{constants}. The resulting temperature--ionisation parameter relation is shown in Figure \ref{ionparamplot}. We impose a minimum temperature of 10\,K assuming that the ambient radiation field sets this floor value. This ionisation parameter heating is applied to all cells that are optically thin, defined as those for which the column number density to the star is less than $10^{22}$ particles cm$^{-2}$ \citep{2010MNRAS.401.1415O}. In optically thin cells we set the temperature equal to the maximum of the temperature prescribed by $f(\xi)$ and the local dust temperature. In cells optically thick to the X--rays the local dust temperature is applied (see section \ref{discConstruct}). 
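A literal transcription of equations 4--6 with the constants of Table~\ref{constants} might look as follows; this is an illustrative sketch rather than the production \textsc{torus} routine, and the optically thin/thick decision (the $10^{22}$\,cm$^{-2}$ column test) is left to the caller:

```python
import numpy as np

# Constants from Table 1 (equations 4-6):
A0, B0 = 8.9362527959248299e-3, -4.0392424905367275
C0, D0 = 12.870891083912458, 44.233310301789743
E0, F0, G0 = 4.3469496951396964, 3.15, 23.9

def xray_temperature(xi, T_dust):
    """Gas temperature [K] for an optically thin cell, given the
    ionisation parameter xi = L_X / (n r^2) and the local dust temperature."""
    lx = np.log10(xi)
    T_hot = 10.0**(A0 * lx + B0 * lx**-2) / (1.0 + C0 / lx + D0 / lx**2 + E0)
    T_cold = np.maximum(10.0**(F0 * lx + G0), T_dust)
    T = np.minimum(T_hot, T_cold)              # f(xi)
    # optically thin cells take max(f(xi), T_dust), with a 10 K ambient floor
    return np.maximum(np.maximum(T, T_dust), 10.0)
```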
\begin{table} \centering \caption{The constants used in the temperature-ionisation parameter heating function (equations 4--6).} \label{constants} \begin{tabular}{@{}l l@{}} \hline Constant & Value \\ \hline $a_0$ & $8.9362527959248299\times10^{-3}$\\ $b_0$ & -4.0392424905367275\\ $c_0$ & 12.870891083912458\\ $d_0$ & 44.233310301789743\\ $e_0$ & 4.3469496951396964\\ $f_0$ & 3.15\\ $g_0$ & 23.9\\ \hline \end{tabular} \end{table} { \subsubsection{Limitations of our ionisation parameter heating} The $T(\xi)$ function used here is extended from the version used by \cite{2012MNRAS.422.1880O} down to lower values of $\xi$ {using optically thin boxes in} \textsc{mocassin} {calculations, where the role of attenuation is considered unimportant}, until an imposed lower bound on the temperature of 10\,K. This is the version used by Owen et al. (2013). Although we sample the whole viable range of $\xi$, once X--ray heating becomes relatively weak (i.e. for low $\xi$) the effects of FUV heating and molecular cooling may also become important. Unfortunately FUV heating is not necessarily some simple function of the local properties, therefore in this work we only explore the effect of X-ray driven thermal sweeping {described using the $T(\xi)$ profile in Figure 1}. {In this paper we will show that the detailed form in the low temperature regime (and in particular the existence of an implied pressure maximum) plays a much more important role in determining the onset of thermal sweeping than has been believed hitherto\footnote{This finding is leading us to re-examine the detailed thermal structure of X-ray irradiated gas in the low X-ray flux, high density regime, which will be presented in future work. Here we study thermal sweeping using the previously adopted temperature-ionization parameter form of Owen et al. (2013). 
Thus, we strongly caution readers to be careful when considering the use of such a profile at low $\xi$.}} {In addition to missing lower temperature physics, this prescription assumes ionisation equilibrium which may not always apply during fast-acting thermal sweeping. } \begin{figure} \hspace{-20pt} \includegraphics[width=9.6cm]{./ionparam2.pdf} \caption{The temperature-ionisation parameter prescription used for the calculations in this paper. It is constructed using equations 4--6 and the constants in Table 1. The diagonal lines represent lines of constant pressure. } \label{ionparamplot} \end{figure} \subsection{Further implementation} We use a 2D cylindrical grid for all models in this paper. Since we assume reflective symmetry about the disc mid plane we only model half of the disc (though we have checked this with simulations that do not assume reflective symmetry, finding any differences are negligible). In this implementation of \textsc{torus} we use a fixed, uniformly spaced, grid to ensure robust results \citep[artificially induced instabilities can possibly arise on non--uniform or adaptive meshes,][]{2000ApJS..131..273F}. Our simulations are MPI parallelized and use domain decomposition. The radiation hydrodynamics uses operator splitting, i.e. we perform hydrodynamic and ionisation parameter heating steps sequentially. We used a variety of total grid sizes and cell numbers, so the resolution varies. However, we always ensured that the disc scale height at the radius of peak surface density is resolved by at least 5 cells. We checked for convergence in a test calculation using $128^2$, $256^2$ and $512^2$ cells, finding good agreement, with marginally easier rapid clearing in the lower resolution simulations. For reference, the cell sizes are given in Table \ref{models}. We use a von Neumann-Richtmyer artificial viscosity scheme. 
The models are initially allowed to evolve using hydrodynamics only, with the temperature set by the dust temperature only, until the disc settles into a steady state (typically up to 5 rotation periods at the inner disc rim). \subsection{Disc construction} \label{discConstruct} We construct the disc by defining the peak mid plane density $\rho_{\textrm{\textrm{max}}}$ at some radial distance $R_{\textrm{\textrm{max}}}$ (which can be translated into a surface density given the disc scale height). The mid plane density $\rho_{\textrm{mid}}$ is initially described by \begin{equation} \rho_{\textrm{mid}} = \rho_{\textrm{\textrm{max}}} (R/R_{\textrm{\textrm{max}}})^{-9/4}. \label{rhodist} \end{equation} The dust temperature distribution is either taken from the models of \cite{2001ApJ...553..321D}, or is vertically isothermal and described by \begin{equation} T_{d} = \max\left({T_{1\rm{\textrm{AU}}}\left(\frac{R}{\rm{\textrm{AU}}}\right)^{-1/2}, 10}\right). \label{dusttemp} \end{equation} Equation \ref{dusttemp} also reasonably describes the mid--plane temperature structure in the D'Alessio et al. (2001) models. We use two models in this paper, one with $T_{1\textrm{AU}}=50\,$K and one with $T_{1\textrm{AU}}=100$\,K. The vertical structure is initially constructed by imposing a profile corresponding to hydrostatic equilibrium for the case that the disc is vertically isothermal, i.e. \begin{equation} \rho(r,z) = \rho_{\textrm{mid}}\exp(-z^2/(2H^2)) \end{equation} where $H$ is the disc scale height $c_s/\Omega$. For the vertically isothermal models, this gives a surface density profile of the form \begin{equation} \Sigma(r) \propto R^{-1}. \label{Rmin1} \end{equation} The radial surface density profile for the models using the D'Alessio et al. (2001) temperature grid is similar, approximately of the form $\Sigma(R)\propto R^{-0.93}$. The models in this paper are 2D cylindrically symmetric. 
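The initial disc construction described above can be sketched in a few lines; the grid extents and the example parameters ($\rho_{\textrm{max}}=1.34\times10^{-14}$\,g\,cm$^{-3}$ at 11\,AU around a 0.1\,M$_{\odot}$ star with $T_{\textrm{1AU}}=50$\,K) are illustrative choices, not the exact \textsc{torus} set-up:

```python
import numpy as np

# cgs constants and the constant mean particle mass of 1.37 (section 3.4)
G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, AU = 1.989e33, 1.496e13
mu = 1.37

def build_disc(rho_max, R_max_au, T_1au, M_star):
    R = np.linspace(0.5 * R_max_au, 4.0 * R_max_au, 128) * AU
    z = np.linspace(0.0, 2.0 * R_max_au, 64) * AU
    RR, ZZ = np.meshgrid(R, z, indexing="ij")
    # mid-plane density (equation 7), zeroed inside the initial hole
    rho_mid = rho_max * (RR / (R_max_au * AU))**(-9.0 / 4.0)
    rho_mid[RR < R_max_au * AU] = 0.0
    # vertically isothermal dust temperature (equation 8)
    T_d = np.maximum(T_1au * (RR / AU)**-0.5, 10.0)
    # hydrostatic Gaussian vertical profile (equation 9), H = c_s / Omega
    c_s = np.sqrt(k_B * T_d / (mu * m_H))
    H = c_s / np.sqrt(G * M_star / RR**3)
    return rho_mid * np.exp(-ZZ**2 / (2.0 * H**2))

rho = build_disc(1.34e-14, 11.0, 50.0, 0.1 * M_sun)
```

The $\Sigma(R)\propto R^{-1}$ profile of equation (10) follows because $\Sigma\propto\rho_{\textrm{mid}}H$ with $H\propto R^{5/4}$ for $T\propto R^{-1/2}$.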
We initially impose a Keplerian velocity profile for the azimuthal velocity, while the velocity in other directions is initially zero. The radial transition from disc to inner hole is initially not continuous; however we begin the simulation run with hydrodynamics only (i.e. no radiation field) to allow the disc inner edge to relax. We set the $\alpha$-viscosity coefficient to a low value ($10^{-6}$) as we do not expect secular evolution of the disc due to redistribution of angular momentum on the timescale on which the steady state wind solution is established. As with the simulations of \cite{2012MNRAS.422.1880O} we assume a constant mean particle mass of 1.37 over the whole simulation grid. Once the disc is irradiated by X--rays, the properties of X--ray heated gas in the disc mid-plane and its interface with the dust heated disc can also be estimated semi--analytically using an approach which we discuss in the appendix. \\ \section{Code testing} \begin{figure*} \hspace{-20pt} \includegraphics[width=4.3cm]{./200B.pdf} \includegraphics[width=4.3cm]{./400B.pdf} \includegraphics[width=4.3cm]{./600B.pdf} \includegraphics[width=4.3cm]{./800B.pdf} \hspace{-20pt} \includegraphics[width=4.3cm]{./1000B.pdf} \includegraphics[width=4.3cm]{./1200B.pdf} \includegraphics[width=4.3cm]{./1400B.pdf} \includegraphics[width=4.3cm]{./1500B.pdf} \includegraphics[width=10cm]{./colourbar.pdf} \caption{The evolution of the density distribution of a disc in the column limited regime. The disc is stable until a plume of material moving vertically at the disc inner edge allows the X--rays to propagate further into the disc. } \label{xraywarmsnaps} \end{figure*} \textsc{torus} is an extensively tested code \citep[see e.g.][]{2009A&A...498..967P, 2012MNRAS.420..562H,2015MNRAS.453.1324B, 2015MNRAS.448.3156H}; however, for the applications in this paper some new features have been added such as the ionisation parameter heating function. 
We ran test calculations of stable discs to compare with expectations from Owen et al. (2012). We found mass loss rates to within 40 per cent of the relation B4 from their work \begin{equation} \dot{M} = 4.8\times10^{-9}\left(\frac{M_*}{M_{\odot}}\right)^{-0.148}\left(\frac{L_{\textrm{X}}}{10^{30}\,\textrm{erg\,s}^{-1}}\right)^{1.14}\,M_{\odot}\,\textrm{yr}^{-1} \end{equation} which was fitted to their simulation results. A deviation of 40 per cent is in line with the range of differences between the models and fit from Owen et al. (2012). We also checked that the specific angular momentum and Bernoulli constant \begin{equation} \frac{v^2}{2} + \Psi + \int\frac{\textrm{d}p}{\rho} \end{equation} were invariant along streamlines for a disc in a steady state, finding that these vary by less than 0.035 and 5 per cent respectively along 80\,AU of any given streamline. The small variation in the Bernoulli constant arises both from the necessity of fitting a barotropic equation of state along the streamline in order to evaluate the $\int \textrm{d}p/\rho$ term (resulting in interpolation error) and from small departures from a steady flow; these deviations are similar in magnitude to those found by Owen et al. (2010). \begin{table} \centering \caption{Parameters used in our initial thermal sweeping test calculation, which is similar to that presented in Owen et al. 
(2012).} \label{model1} \begin{tabular}{@{}l c l@{}} \hline Parameter & Value & Description \\ \hline $R_{\textrm{max}}$ & 5\,AU & Inner hole radius\\ $\rho_{\textrm{max}}$ & $1\times10^{-14}$\,g\,cm$^{-3}$ & Peak mid--plane density\\ $T_{\textrm{1AU}}$ & 50\,K & 1\,AU mid--plane dust temperature \\ $T_{\textrm{D}}(z>0)$ & D'Alessio & Vertical dust temperature profile\\ $M_*$ & 0.1\,M$_{\odot}$ & Stellar mass\\ $L_{\textrm{X}} $ & $2\times10^{30}$\,erg\,s$^{-1}$ & X--ray luminosity \\ $\Sigma_{\textrm{max}}$ & 0.258\,g\,cm$^{-2}$ & Peak surface density \\ \hline \end{tabular} \end{table} \begin{figure} \hspace{-5pt} \includegraphics[width=9.5cm]{./xrayWarm_Rinner.pdf} \caption{The evolution of the disc inner radius for our initial thermal sweeping test calculation, which has similar parameters to that presented in Owen et al. (2012). Note that once instability initiates, the disc inner radius increases nonlinearly with time. The black line shows a linear evolution of the disc inner edge.} \label{rinnerEvo} \end{figure} \begin{table*} \centering \caption{Summary of the parameters of the simulations in this paper. $R_{\textrm{max}}$ is the location of the peak mid--plane density, either in the long term for a stable disc or, for an unstable disc, just prior to rapid clearing. $\Sigma_{\textrm{max}}$ is the surface density at $R_{\textrm{max}}$. $T_{1\textrm{AU}}$ is the dust temperature at 1\,AU. $\rho_{\textrm{max}}$ is the mid--plane density at $R_{\textrm{max}}$. All models have $L_{\textrm{X}}=2\times10^{30}$\,erg\,s$^{-1}$.} \label{models} \begin{tabular}{@{}l l l l l l c l l l l@{}} \hline Model ID & Stellar mass & $R_{\textrm{max}}$ & $\Sigma_{\textrm{max}}$ & $T_{1\textrm{AU}}$ & $\rho_{\textrm{max}}$ & Column & Vertically & Stable? & Resolution\\ & M$_\odot$ & AU & g\,cm$^{-2}$ & K & g\,cm$^{-3}$ & limited? & isothermal?
& & AU \\ \hline A & 0.7 & 28.1 & 7.2 & 100 & $1.20\times10^{-13}$ & No & Yes & Yes & 0.4 \\ B & 0.7 & 28.5 & 0.72 & 100 & $1.20\times10^{-14}$ & No & Yes & Yes & 0.4\\ C & 0.7 & 29.1 & 0.34 & 100 & $5.50\times10^{-15}$ & No & Yes & Yes & 0.4\\ D & 0.7 & 29.1 & $7.\times10^{-2}$ & 100 & $1.26\times10^{-15}$ & No & Yes & Yes & 0.4\\ E & 0.7 & 35.5 & 0.136 & 100 & $6.77\times10^{-16}$& No & No & Yes & 0.4\\ F & 0.7 & 35.5 & $2.8\times10^{-2}$ & 100 & $3.40\times10^{-16}$ & No & No & No & 0.4\\ G & 0.7 & 20.8 & 0.20 & 100 & $4.62\times10^{-16}$ & No & No & No & 0.4\\ H & 0.7 & 26.0 & $5.2\times10^{-2}$ & 100 & $5.81\times10^{-16}$ & No & No & No & 0.4\\ I & 0.1 & 11.0 & 5.8 & 50 & $1.34\times10^{-14}$ & No & No & Yes & 0.4\\ J & 0.1 & 7.9 & 1.32 & 50& $2.94\times10^{-14}$ & Yes & Yes & Yes & 0.2\\ K & 0.1 & 7.7 & $0.174$ & 50 & $1.26\times10^{-14}$ & Yes & Yes & No & 0.1\\ L & 0.1 & 7.0 & 0.52 & 50 & $1.20\times10^{-14}$& Yes & No & No & 0.1\\ M & 0.1 & 8.6 & 0.166 & 50 & $7.41\times10^{-15}$ & Yes & Yes & No & 0.2\\ N & 0.1 & 7.6 & 0.28 & 50 & $8.22\times10^{-15}$& Yes & Yes & No & 0.2\\ O & 0.1 & 7.0 & $9\times10^{-2}$ & 50 & $4.49\times10^{-15}$ & Yes & No & No & 0.1 \\ P & 0.1 & 25 & 0.2 & 50 & $2.0\times10^{-15}$ & No & Yes & No & 0.4\\ Q & 0.1 & 25 & 2. & 50 & $2.0\times10^{-14}$ & No & Yes & Yes & 0.4\\ \hline \end{tabular} \end{table*} As a further test, we also first consider a thermal sweeping scenario very similar to that in the original calculation presented in Owen et al. (2012). The parameters of this model (which has the same D'Alessio dust temperature structure and has a very similar peak mid--plane density and inner hole radius to the original model) are given in Table \ref{model1}. Snapshots of the density evolution of this first model are given in Figure \ref{xraywarmsnaps}. The morphological evolution is the same as that observed in the original thermal sweeping models. 
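The mass loss relation used in the code tests above is straightforward to evaluate. The sketch below is an illustrative helper of our own (not code from \textsc{torus}), reproducing the scaling quoted in relation B4:

```python
def mdot_wind_b4(m_star_msun, l_x_erg_s):
    """Relation B4 of Owen et al. (2012): photoevaporative mass loss
    rate in solar masses per year, as quoted in the text."""
    return 4.8e-9 * m_star_msun**(-0.148) * (l_x_erg_s / 1e30)**1.14

# For M* = 0.1 Msun and L_X = 2e30 erg/s (the values used for the
# low-mass models in this paper) the relation gives ~1.5e-8 Msun/yr.
mdot = mdot_wind_b4(0.1, 2e30)
```

Mass loss rates measured from the simulations agree with this relation to within the 40 per cent quoted above.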
A billowy plume of material at the disc inner edge appears just prior to rapid disc clearing. Once the instability is fully initiated, over 20\,AU of the disc clears in about 700 years. We illustrate the accelerated clearing through Figure \ref{rinnerEvo}: for a surface density profile given by equation \ref{Rmin1}, constant mass loss (as in the case of normal X--ray photoevaporation) results in a {\it linear} increase of inner hole radius with time, as seen at times less than $300$ years. Subsequently the non-linear increase of disc radius with time indicates the transition to runaway clearing. \\ In summary, \textsc{torus} reproduces the behaviour expected from previously published simulations. It conserves physical constants accurately and, for both stable and unstable discs, is consistent with the results presented by Owen et al. (2012). \section{Results} \subsection{The suite of simulations} We ran a suite of 2D radiation hydrodynamic simulations of disc photoevaporation using the procedure discussed in section \ref{introBla}. This includes simulations in the column limited and density limited regimes. {Since there are a large number of possible free parameters (i.e. all of those associated with the stellar and disc properties) and it is the evolution of the disc properties that should tip a given disc into the thermal sweeping regime, we predominantly focus on modifying the disc parameters rather than the stellar ones.} We explore two different stellar masses (0.1 \& 0.7\,M$_\odot$) and a range of disc inner hole radii and masses. All models consider an X--ray luminosity of $2\times10^{30}$\,erg\,s$^{-1}$. A summary of the simulation parameters is given in Table \ref{models}. We run all models until it is clear whether normal clearing or radiative instability (i.e. nonlinear inner hole growth) is occurring, with a maximum simulation time of about 6000 years. \subsection{Testing the Owen et al.
(2013) criterion for the onset of thermal sweeping} In Figure \ref{sigma13} we show the ratio of the peak disc surface density in our models to the surface density at which thermal sweeping is predicted to initiate according to the Owen et al. (2013) approach (equation \ref{newsig2} in this paper). The points are colour coded blue and red for stable and unstable models respectively. An accurate criterion should separate the stable and unstable models about a ratio value of 1. The Owen et al. (2013) approach predicts that all models except model A should be unstable; however this is certainly not the case in the simulations. There are two possible reasons why this surface density threshold fails to distinguish stable and unstable models. The first is that the criterion on which this surface density is based (i.e. $\Delta/H = 1$; see Section 1) is incorrect. The other is that the error might be introduced in going from this requirement to a corresponding column density; the latter step depends on the vertical structure of the disc and is therefore not unique for given mid-plane properties. We can distinguish these possibilities by examining the $\Delta/H$ values corresponding to each model (Figure \ref{DeltaH}). We do not measure $\Delta/H$ directly from the simulations because there is no steady state for those simulations that turn out to be unstable. Instead we follow Owen et al (2013) in deriving analytic expressions for the predicted values of $\Delta/H$ as a function of conditions at the cavity rim (see Appendix). Figure \ref{DeltaH} again colour codes the simulation outcomes, with blue and red being stable and unstable respectively. Note that we place an upper limit on $\Delta/H$ in this plot, as the ratio can become very large. Analytically derived $\Delta/H$ values give rise to predictions about the stability of the models consistent with the surface density estimate, in that almost all models are expected to become unstable. 
We thus demonstrate that the surface density threshold proposed by Owen et al. (2013) fails because $\Delta/H=1$ is apparently not the fundamental criterion for instability. Since Figure \ref{DeltaH} suggests that, out of the models run, stable and unstable models are separated at about $\Delta/H \sim 5$, it is perhaps tempting to modify the criterion by just proposing a higher $\Delta/H$ threshold; we do not do this because we shall see that the value of $\Delta/H$ can be very insensitive to disc surface density. We illustrate this in Figure \ref{sigma_DeltaH}, where we take a set of models with stellar and disc parameters identical to model Q but simply change the surface density normalisation. The blue-black curve shows that it is possible to vary the disc surface density normalisation by two orders of magnitude while only affecting the value of $\Delta/H$ by less than a factor $2$. Thus a criterion based on the value of $\Delta/H$ is likely to be highly inaccurate in predicting the threshold surface density for the onset of thermal sweeping. \subsection{A new criterion for thermal sweeping} We have developed a new criterion for thermal sweeping which is consistent with all the simulations and which is based on the maximum pressure that can be attained by X--ray heated gas. Figure \ref{ionparamplot} depicts a set of isobars in the plane of ionisation parameter against temperature, with pressure rising towards the upper left of the plot. Evidently there is a maximum possible pressure $P_{\textrm{Xmax}}$ (at fixed X--ray flux) which is associated with the feature in the ionisation parameter versus temperature relation at $\xi \sim 1\times10^{-7}$ and a temperature of $\sim 100$\,K. The existence of this maximum pressure places an absolute upper limit on the extent to which the X--ray heated region can penetrate into the disc.
If the maximum pressure of X--ray heated gas is less than the maximum disc mid-plane pressure $P_{\textrm{Dmax}}$ at the inner rim then there is no means by which the disc can be engulfed by a front of runaway X--ray heating. We might therefore expect that $P_{\textrm{Xmax}} < P_{\textrm{Dmax}}$ is a {\it sufficient} condition for stability. \begin{figure} \hspace{-15pt} \includegraphics[width=8.8cm]{./Owen13.pdf} \caption{The ratio of the model peak surface density to the critical surface density for thermal sweeping according to the Owen et al. (2013) approach - equation \ref{newsig2} in this paper. Stable and unstable models should be separated by a ratio value of unity.} \label{sigma13} \end{figure} \begin{figure} \hspace{-15pt} \includegraphics[width=8.8cm]{./DeltaH.pdf} \caption{Analytic values of $\Delta/H$ for the simulations in this paper. Blue and red points are stable and unstable respectively. According to the existing theory, $\Delta/H > 1$ should result in an unstable disc, however these results do not reflect this.} \label{DeltaH} \end{figure} \begin{figure} \hspace{-15pt} \includegraphics[width=8.8cm]{./sigmaDeltaH.pdf} \caption{The variation in $\Delta/H$ (left axis, blue-black line) or the ratio of critical to peak mid--plane pressure (right axis, red-black line) as a function of peak surface density for a disc with a 25\,AU inner hole about a 0.1\,M$_{\odot}$ star with $L_X=2\times10^{30}$\,erg\,s$^{-1}$. Close to $\Delta/H=1$, the ratio is not very sensitive to changes in the disc peak surface density. Conversely, the pressure ratio scales linearly over all surface densities. } \label{sigma_DeltaH} \end{figure} We can also assess whether $P_{\textrm{Xmax}} < P_{\textrm{Dmax}}$ should be a {\it necessary} condition for stability, i.e. whether there are also stable solutions where $P_{\textrm{Xmax}} > P_{\textrm{Dmax}}$ but where the interface between X--ray heated and disc gas occurs at a pressure $P_i < P_{\textrm{Dmax}}$. 
We however argue that such an interface would be unstable since perturbations would drive the solution up the steep branch of the ionisation parameter--temperature plot at $\xi < 10^{-7}$. Pressure is a decreasing function of density along this branch and therefore under-dense regions can evolve up the branch towards the pressure maximum. The radial extent of such excursions is however limited if $P_{\textrm{Xmax}} < P_{\textrm{Dmax}}$. We therefore propose that this is both a necessary and sufficient condition for stability. We test this hypothesis {in} Figure \ref{ionparamcrit} where again stable and unstable models are colour coded and we plot the ratio of the maximum pressure in the dust heated disc to $P_{\textrm{Xmax}}$: \begin{equation} P_{\textrm{Xmax}} = P_{\textrm{TS}} = \frac{L_{\textrm{X}}}{\xi_{\textrm{crit}}R_{\textrm{max}}^2}k_B T_{\textrm{crit}} \label{pcrit} \end{equation} where $\xi_{\textrm{crit}}$ and $T_{\textrm{crit}}$ are the ionisation parameter and temperature corresponding to the maximum pressure attainable by X--ray heated gas. From the temperature--ionisation parameter relation, we find that $\xi_{\textrm{crit}}=1.2\times10^{-7}$ and $T_{\textrm{crit}}=113\,$K. We see that the ratio $P_{\textrm{Xmax}}/P_{\textrm{Dmax}}$ is indeed an excellent discriminant between stable and unstable models. Furthermore, in Figure \ref{sigma_DeltaH} the black-red line shows the variation of the pressure ratio for a disc with a 25\,AU hole (i.e. similar to model Q) at different surface density normalisations. Note that we have already argued that for such a disc $\Delta/H$ is not always sensitive to changes in the surface density, making it a poor criterion. Conversely, our new criterion scales linearly with the disc surface density. {It is important to note that under this new criterion thermal sweeping depends on the form of the low $\xi$ end of the $T(\xi)$ function{, and is thus sensitive to the assumptions made in obtaining it}.
If FUV heating dominates in these regions, then this region of $T(\xi)$ may not be accessible to the disc and the physics controlling thermal sweeping is likely to be qualitatively different. It will be important to assess the role of FUV heating and molecular cooling in future work. } For this critical pressure criterion, the corresponding critical peak mid--plane volume density for thermal sweeping is \begin{equation} n_{\textrm{TS}} = 4.2\times10^{10}\,\textrm{cm}^{-3}\left(\frac{R_{\textrm{max}}}{\textrm{AU}}\right)^{-3/2}\left(\frac{T_{\textrm{1AU}}}{100}\right)^{-1}\left(\frac{L_{\textrm{X}}}{10^{30}}\right). \label{nts} \end{equation} Although we go on to discuss critical surface densities, it is important to emphasise that thermal sweeping is actually determined by a criterion on the volume density, not the surface density. One could therefore conceive of two discs with identical surface densities, but different thermal structures such that the mid--plane density differs sufficiently that one disc is stable and the other unstable. Nevertheless, in practice a surface density criterion for thermal sweeping is more accessible and more useful than a volume density estimate. The D'Alessio models (in which the temperature rises above the mid-plane) have a higher surface density at fixed mid-plane density than a vertically isothermal model and thus assuming a vertically isothermal disc to calculate the critical surface density for thermal sweeping should provide a reasonable lower limit. 
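As an illustrative cross-check (not part of \textsc{torus}; the helper functions and cgs constant values below are our own), equations \ref{pcrit} and \ref{nts} can be evaluated directly in a few lines of Python:

```python
K_B = 1.381e-16           # Boltzmann constant, erg/K
AU = 1.496e13             # astronomical unit, cm
MU_MH = 1.37 * 1.673e-24  # mean particle mass (mu = 1.37), g

def p_xmax(l_x_erg_s, r_max_au, xi_crit=1.2e-7, t_crit=113.0):
    """Maximum pressure of X-ray heated gas (equation pcrit), erg/cm^3.
    The density follows from the definition of the ionisation parameter,
    xi = L_X / (n R^2), so n = L_X / (xi_crit R^2)."""
    n = l_x_erg_s / (xi_crit * (r_max_au * AU)**2)
    return n * K_B * t_crit

def n_ts(r_max_au, t_1au_k, l_x_erg_s):
    """Critical peak mid-plane number density (equation nts), cm^-3."""
    return 4.2e10 * r_max_au**(-1.5) * (100.0 / t_1au_k) * (l_x_erg_s / 1e30)

# Model K (R_max = 7.7 AU, T_1AU = 50 K, L_X = 2e30 erg/s): the critical
# mass density mu*m_H*n_TS ~ 1.8e-14 g/cm^3 exceeds that model's peak
# density of 1.26e-14 g/cm^3, consistent with model K being unstable.
rho_crit = MU_MH * n_ts(7.7, 50.0, 2e30)
```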
Hence we approximate \begin{equation} \Sigma_{\textrm{TS}} = 2\rho_{\textrm{TS}}\frac{c_s}{\Omega} \end{equation} which, using equation \ref{nts}, assuming $\mu = 1.37$ and inserting other constants, results in \begin{multline} \Sigma_{\textrm{TS}} = 0.075\textrm{\,g\,cm}^{-2}\left(\frac{L_{\textrm{X}}}{10^{30}}\right)\left(\frac{M_*}{M_{\odot}}\right)^{-1/2} \\ \times \left(\frac{T_{\textrm{1AU}}}{100}\right)^{-1/2}\left(\frac{R_{\textrm{max}}}{\textrm{AU}}\right)^{-1/4}. \label{myTSequn} \end{multline} Interestingly, this criterion is very similar to the expression derived using the Owen et al. (2013) approach (equation \ref{newsig2}) but without the exponential term. This difference can be readily understood in that we now just require for stability that the pressure in the dust heated disc exceeds the maximum pressure of X--ray heated gas; Owen et al. (2013) proposed a more stringent requirement for stability by additionally placing constraints on the scale length of X--ray heated gas, a condition that required that the interface was a sufficiently large number of pressure scale lengths from the disc pressure maximum. Our criterion is more readily satisfied and we therefore find a lower surface density threshold for thermal sweeping than Owen et al. (2013). \begin{figure} \hspace{-15pt} \includegraphics[width=8.8cm]{./IonParamCriterion.pdf} \caption{The ratio of the disc maximum mid--plane pressure to the critical pressure for rapid radiative disc dispersal (equation \ref{pcrit}).
There is a clear transition from instability to stability once the ratio exceeds unity.} \label{ionparamcrit} \end{figure} Although the disc temperature in our simulations scales as $R^{-1/2}$, we set the disc temperature to a floor value of $10$K at radii beyond \begin{equation} R_{\textrm{floor}} = 1\,\textrm{\textrm{AU}}\left(\frac{T_{\textrm{1AU}}}{10}\right)^2 \end{equation} and so beyond $R_{\textrm{floor}}$ the critical surface density for thermal sweeping is \begin{equation} \Sigma_{\textrm{TS}} = 0.24\textrm{\,g\,cm}^{-2}\left(\frac{L_{\textrm{X}}}{10^{30}}\right)\left(\frac{M_*}{M_{\odot}}\right)^{-1/2}\left(\frac{R_{\textrm{max}}}{\textrm{AU}}\right)^{-1/2}. \label{myTSequn2} \end{equation} We reiterate that these surface density estimates assume a vertically isothermal disc. \begin{figure} \hspace{-10pt} \includegraphics[width=9cm]{./comparison_0p1Msol_10Kfloor.pdf} \caption{A comparison of the critical surface density for thermal sweeping from Owen et al. (2012, 2013) and the new relation derived here. Note that these relations assume a vertically isothermal disc and will likely be a lower limit for warmer discs with lower mid--plane densities. This plot assumes $T_{\textrm{X}}=400$\,K, $T_{\textrm{\textrm{1AU}}}=50$\,K and $M_*= 0.1\,M_{\odot}$. } \label{compare} \end{figure} \begin{figure} \hspace{-10pt} \includegraphics[width=9cm]{./newApproach.pdf} \caption{The ratio of the model peak surface density to the critical surface density for thermal sweeping according to our new criterion - equation 22 in this paper. Stable and unstable models should be separated by a ratio value of unity. The new criterion is much more accurate than the old (see Figure \ref{sigma13}). 
The small discrepancies are consistent with the way that changes in the assumed vertical structure affect the mapping from mid-plane to vertically integrated quantities.} \label{sigma2} \end{figure} We compare this new composite relation (equations \ref{myTSequn}, \ref{myTSequn2}) alongside the Owen et al. (2012) and Owen et al. (2013) expressions in Figure \ref{compare}. In constructing Figure \ref{compare} we assume that $T_{\textrm{X}}=400$\,K (for the Owen et al. 2012 criterion), $T_{\textrm{1AU}}=50$\,K and $M_*= 0.1\,M_{\odot}$ (and that the disc is vertically isothermal). We see that, unlike the criteria previously proposed, our new critical surface density threshold declines (albeit mildly) with radius and thus sweeping at large radius is harder than for the previous prescriptions. On the other hand, it is important to note that the radial decrease of the disc surface density in our simulations (and also in observed discs - \citealt{2009ApJ...700.1502A}) is \textit{steeper} ($\Sigma \propto R^{-1}$) than the radial decrease in the critical surface density ($\Sigma \propto R^{-1/4}$ or $\Sigma \propto R^{-1/2}$). This means that a disc that becomes unstable to rapid radiative clearing at small radii should then clear out the whole disc. It also means that, for canonical disc surface density profiles, thermal sweeping will always eventually set in at some large radius in the disc. We reiterate that the actual criterion is on the peak mid--plane pressure and hence the volume density, not the surface density. We should therefore not expect the new surface density criterion to be completely accurate. In Figure \ref{sigma2} we show the ratio of the model peak surface density to the critical surface density for thermal sweeping given by our new criterion.
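As a numeric check of the composite relation (equations \ref{myTSequn} and \ref{myTSequn2}), the sketch below (our own helper functions, with parameter values taken from Table \ref{models}) evaluates both branches:

```python
def r_floor_au(t_1au_k):
    """Radius (AU) at which the T ~ R^-1/2 profile reaches the 10 K floor."""
    return (t_1au_k / 10.0)**2

def sigma_ts(l_x_erg_s, m_star_msun, t_1au_k, r_max_au):
    """Critical surface density (equation myTSequn), g/cm^2, valid inside
    the floor radius and assuming a vertically isothermal disc."""
    return (0.075 * (l_x_erg_s / 1e30) * m_star_msun**(-0.5)
            * (t_1au_k / 100.0)**(-0.5) * r_max_au**(-0.25))

def sigma_ts_floor(l_x_erg_s, m_star_msun, r_max_au):
    """Critical surface density beyond the floor radius (equation myTSequn2)."""
    return 0.24 * (l_x_erg_s / 1e30) * m_star_msun**(-0.5) * r_max_au**(-0.5)

# Model K (R_max = 7.7 AU): threshold ~0.40 g/cm^2, above its peak surface
# density of 0.174 g/cm^2 (unstable). Models P and Q sit at R_max = 25 AU,
# which equals r_floor for T_1AU = 50 K: threshold ~0.30 g/cm^2, between
# model P (0.2, unstable) and model Q (2.0, stable).
sig_k = sigma_ts(2e30, 0.1, 50.0, 7.7)
sig_pq = sigma_ts_floor(2e30, 0.1, 25.0)
```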
Compared with the old criterion (see Figure \ref{sigma13}) there is much better agreement: the new solution is accurate to within a factor of 2, even though the surface density is not the fundamental parameter. \section{Discussion} \subsection{The clearing radius for discs with holes opened by photoevaporation} Combining the theory of normal disc photoevaporation detailed by Owen et al. (2010, 2011, 2012) with the theory of viscous disc accretion presented by \cite{1998ApJ...495..385H} we can constrain the maximum possible inner hole radius for viscous discs with inner holes opened by photoevaporation \citep[c.f.][]{2006MNRAS.369..229A}. For normal photoevaporation, to zeroth order the photoevaporative mass loss rate \begin{equation} \dot{M}_w = 8\times10^{-9}\left(\frac{L_{\textrm{X}}}{10^{30}}\right)M_{\odot}\,\textrm{yr}^{-1} \label{mw} \end{equation} is approximately equal to the accretion rate at gap opening \citep{2006MNRAS.369..229A,2011MNRAS.412...13O} and we can ignore the effects of photoevaporation on the previous evolution of the disc. Using the self-similar disc evolution model for $\nu\propto R$ given by \cite{1974MNRAS.168..603L,1998ApJ...495..385H}, at the time of gap opening the surface density profile is \begin{equation} \Sigma_{GO} = \frac{M_d(0)}{2\pi RR_1}T_{GO}^{-3/2}\exp\left(-\frac{R}{R_1T_{GO}}\right). \label{Hartmann} \end{equation} Here $T$ denotes normalised time ($T=1+t/t_s$), where $t_s$ is the viscous time at the initial characteristic radius of the disc ($R_1$) and the subscript $GO$ denotes the normalised time at gap opening. By equating the photoevaporative mass loss rate (equation \ref{mw}) with the accretion rate in the viscous similarity solution we obtain: \begin{equation} T_{GO} = \left(\frac{M_d(0)}{2t_s\dot{M}_w}\right)^{2/3}. \label{Hartmann2} \end{equation} Once the gap is opened, the disc profile remains roughly constant, described by equation \ref{Hartmann}, during the time that photoevaporation erodes the inner hole.
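Equation \ref{Hartmann2} can be illustrated with representative numbers; the parameter choices in the sketch below are our own, for illustration only:

```python
def t_go(m_d0_msun, t_s_yr, mdot_w_msun_yr):
    """Normalised gap-opening time, T_GO = (M_d(0) / (2 t_s Mdot_w))**(2/3),
    in units where T = 1 + t/t_s."""
    return (m_d0_msun / (2.0 * t_s_yr * mdot_w_msun_yr))**(2.0 / 3.0)

# Illustrative choices: M_d(0) = 0.01 Msun (10 per cent of a 0.1 Msun star);
# Mdot(0) = M_d(0)/(2 t_s) = 1e-7 Msun/yr, giving t_s = 5e4 yr; and
# Mdot_w = 1.6e-8 Msun/yr from the wind rate scaling with L_X = 2e30 erg/s.
T_gap = t_go(0.01, 5e4, 1.6e-8)   # ~3.4, i.e. the gap opens at t ~ 2.4 t_s
```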
Thus equating equation \ref{Hartmann} to the thermal sweeping criterion (equation \ref{myTSequn}) we can solve for the radius at which thermal sweeping will initiate for a viscous accretion disc undergoing photoevaporation. In practice it turns out that thermal sweeping occurs in the region of the disc where the radial exponential fall-off (equation \ref{Hartmann}) is important. This means that the radius for thermal sweeping cannot be written in closed form and requires numerical solution. In Figure \ref{rinner_plot} we plot the full numerically evaluated solution. We assume that the initial disc mass $M_d(0)$ is 10 per cent of the stellar mass. We use the fit to the dependence of mean X--ray luminosity on stellar mass of \cite{2005ApJS..160..401P}, i.e. \begin{equation} \log_{10}(L_X) = 30.37+1.44\log_{10}(M_*/M_\odot). \end{equation} We also derive $T_{1\textrm{AU}}$ as a function of stellar mass by linear interpolation of the values used for the simulations in this paper (i.e. 50 and 100\,K for 0.1 and 0.7\,M$_{\odot}$ stars respectively). We assign values of $R_1$ in equation 24 by assuming a value of $\alpha$ and an initial mass accretion rate, since \begin{equation} \frac{M_{d}(0)}{2t_s} = \dot{M}(0) \end{equation} from \cite{1998ApJ...495..385H} gives \begin{multline} \left(\frac{R_1}{\textrm{AU}}\right) = 63.6\left(\frac{M_d(0)}{0.1M_{\odot}}\right)\left(\frac{\alpha}{10^{-2}}\right) \left(\frac{T_{1\textrm{AU}}}{100}\right) \\ \times \left(\frac{M_*}{M_{\odot}}\right)^{-1/2}\left(\frac{\dot{M}(0)}{10^{-7}M_{\odot}\textrm{yr}^{-1}}\right)^{-1}. \end{multline} Figure \ref{rinner_plot} shows the resulting numerical solution for a range of $\alpha$ and $\dot{M}(0)$ values. Lower viscosities and higher initial mass accretion rates are more conducive to thermal sweeping, though in general it only ever initiates at very large radii and should have little bearing on the overall evolution of such normal discs.
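The two scalings above can be evaluated directly; the sketch below (our own helper functions, with illustrative parameter values) shows the numbers for a 0.1\,M$_{\odot}$ star:

```python
import math

def log10_lx(m_star_msun):
    """Preibisch et al. (2005) fit: mean X-ray luminosity (log10 of erg/s)."""
    return 30.37 + 1.44 * math.log10(m_star_msun)

def r1_au(m_d0_msun, alpha, t_1au_k, m_star_msun, mdot0_msun_yr):
    """Initial characteristic disc radius R_1 (AU) from the scaling in the text."""
    return (63.6 * (m_d0_msun / 0.1) * (alpha / 1e-2) * (t_1au_k / 100.0)
            * m_star_msun**(-0.5) * (mdot0_msun_yr / 1e-7)**(-1.0))

# For a 0.1 Msun star the fit gives log10(L_X) = 28.93. With the
# illustrative values M_d(0) = 0.01 Msun, alpha = 1e-2, T_1AU = 50 K and
# Mdot(0) = 1e-7 Msun/yr, the characteristic radius R_1 is about 10 AU.
lx = log10_lx(0.1)
r1 = r1_au(0.01, 1e-2, 50.0, 0.1, 1e-7)
```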
\begin{figure} \hspace{-15pt} \includegraphics[width=9cm]{./newStuff.pdf} \caption{The radius beyond which rapid disc clearing would take place as a function of the mass of the central source, for discs undergoing normal internal photoevaporation and viscous accretion.} \label{rinner_plot} \end{figure} Note that we have ignored viscous spreading and the removal of mass due to photoevaporation prior to gap opening and have therefore slightly over-estimated the disc surface densities at gap opening. Nevertheless, the modest depletion of gas by photoevaporation prior to gap opening (Owen et al. 2011) will not dramatically reduce the very large clearing radii reported here. Although normal viscous accretion and internal photoevaporation is unlikely to lead to thermal sweeping, it could still arise if some other process, such as planet formation, can lower the peak surface density below the critical value. \begin{figure*} \includegraphics[width=16cm]{./hist_haworth16_4.pdf} \caption{Histograms showing the ratio of the non-accreting to accreting transition disc lifetimes (left panel) and inner hole radii at which thermal sweeping initiates (right panel), for a population of discs evolving under the combined action of viscosity, X--ray photoevaporation and the new thermal sweeping criterion given in equation \ref{myTSequn}.} \label{histograms} \end{figure*} \subsection{Population synthesis models} Since our new calculations suggest that Owen et al. (2013) over-estimated the surface density at which thermal sweeping sets in, it is important to quantify the effect a much less efficient thermal sweeping process would have on a population of evolving discs. Owen et al. (2012, 2013) suggested that thermal sweeping would destroy the outer disc almost immediately after photoevaporation had opened a gap in the inner disc and it had drained onto the central star.
Such rapid destruction was necessary to avoid producing a large number of non-accreting transition discs with large holes, and was consistent with the transition disc statistics. The large radii that we estimate for the onset of thermal sweeping in Figure \ref{rinner_plot} lead us to now expect that thermal sweeping will do little to help avoid the over-prediction of relic gas discs at large radii. We confirm this by applying our new thermal sweeping criterion to the synthetic disc population of Owen et al. (2011). This population evolved under the action of viscosity and X--ray photoevaporation starting from a single disc model \citep[a][zero time similarity solution]{1974MNRAS.168..603L}. It was designed to match the general observational properties of disc evolution (disc fraction and accretion rate evolution as a function of time). Variety in disc evolution came from the spread in X--ray luminosities alone, which in turn created a spread in photoevaporation rates. We post-process this simulation set, in which thermal sweeping was not originally included and each disc was entirely destroyed by standard photoevaporation. After the gap has opened and the inner disc has drained we assume thermal sweeping takes place once the peak surface density in the remaining outer disc drops below the threshold given in equation \ref{myTSequn}. We then record the inner hole radius where this occurred, the remaining disc mass and the lifetime over which the disc would have appeared as an accreting and non-accreting transition disc. Figure \ref{histograms} shows histograms of the ratio of the non-accreting transition disc lifetime to the accreting transition disc lifetime for individual discs (left panel) and the inner hole radius at which thermal sweeping initiates (right panel). The inner hole radii at which thermal sweeping begins are around $\sim$300\,AU, consistent with the general picture discussed above.
These clearing radii are significantly larger than the $\leq$40\,AU found by Owen et al. (2013). As shown in the left panel of Figure \ref{histograms}, this results in the majority of discs spending a large fraction of time as a non-accreting transition disc with a large hole. We find thermal sweeping only initiates once the hole radius becomes comparable with the outer radius of the disc and the surface density begins to drop exponentially rather than with a $R^{-1}$ power-law. The remaining disc mass at this point is small, $\sim$10$^{-5}$ -- 10$^{-4}$\,M$_{\odot}$. In fact we find that with this revised thermal sweeping criterion, thermal sweeping has little impact on the total evolution of the disc: even without it, the remaining disc would be quickly removed by ordinary photoevaporation. The small number of discs with a small rapid clearing radius have the very highest photoevaporation rates. For large hole radii ($> 20$~AU) the observed ratio of non-accreting transition discs (or those with only upper limits on accretion) to accreting transition discs is small, $\sim 20$ per cent \citep{2011ARA&A..49...67W}. Therefore, it appears X--ray driven thermal sweeping is unable to effectively destroy the final remnant disc as previously hypothesised. It is possible that other components of the radiation field not considered here, such as the FUV, play an important role in the final evolution of protoplanetary discs \citep[e.g.][]{2015ApJ...804...29G}. \section{Summary and conclusions} We have used radiation hydrodynamic simulations to investigate the final, rapid, radiative clearing of gas from protoplanetary discs. We draw the following main conclusions from this work. \\ \noindent 1) Rapid radiative clearing does not fundamentally occur when the ratio of vertical and radial pressure scale lengths $\Delta/H = 1$, as proposed by Owen et al. (2012, 2013).
Rather it hinges upon the requirement that the maximum pressure attainable by X--ray heated gas must be less than the pressure in the dust heated disc at its maximum (near the disc inner edge). \\ \noindent 2) We present an equation for the critical volume density (equation \ref{nts}) for rapid radiative clearing, as well as a lower limit critical surface density expression (equation \ref{myTSequn}), based on an assumed vertically isothermal temperature profile in the disc. Our new critical surface density estimate is both quantitatively and qualitatively different to the previous estimates of Owen et al. (2012, 2013) and, generally, will result in thermal sweeping happening less readily than previously expected (see Figure \ref{compare}). \\ \noindent 3) We use the previously established theory of disc photoevaporation to calculate the maximum possible inner hole radius as a function of the stellar mass, for viscous discs with gaps opened by photoevaporation. We find that thermal sweeping only happens at radii where it can have a significant impact on disc evolution for low $\alpha$ parameters and high initial accretion rates. Even in this regime, thermal sweeping only initiates beyond 100\,AU. It is still possible that some other mechanism could reduce the disc surface density sufficiently that thermal sweeping initiates at smaller radii.\\ \noindent 4) Since rapid radiative clearing happens less readily than previously believed, the time discs spend in the non--accreting phase will be longer than estimates such as those by \cite{2015MNRAS.454.2173R}. \\ \noindent 5) X--ray driven thermal sweeping does not appear to be the solution to the lack of non-accreting transition discs with large holes. Thus, further work is required to explain the apparent speed up of outer disc dispersal following the shut-off of accretion onto the central star and clearing of the inner disc. 
{In particular it is possible that FUV heating, which may dominate in components of the disc where X--ray heating is weak but is not included here, could play an important role in the final clearing of protoplanetary discs. } \section*{Acknowledgments} {We thank the referee, Barbara Ercolano, for her swift but insightful review of the paper, which also highlighted important avenues for future research.} We {also} thank Giovanni Rosotti and Stefano Facchini for useful discussions. TJH is funded by the STFC consolidated grant ST/K000985/1. Support for CJC and additional hardware costs are provided by the DISCSIM project, grant agreement 341137 funded by the European Research Council under ERC-2013-ADG. JEO acknowledges support by NASA through Hubble Fellowship grant HST-HF2-51346.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. This work was undertaken on the COSMOS Shared Memory system at DAMTP, University of Cambridge operated on behalf of the STFC DiRAC HPC Facility. This equipment is funded by BIS National E-infrastructure capital grant ST/J005673/1 and STFC grants ST/H008586/1, ST/K00333X/1. DiRAC is part of the National E-Infrastructure.
\section{Introduction}\label{sec1} Interferometric techniques have been used in optical astronomy for over a hundred years in view of their potential to achieve the highest angular spatial resolution \cite{1Michelson1921}. Different interferometric techniques are under continuous development for use in both ground-based and space-based observatories. Of particular interest is their use in the study of the properties of stars, for example in the measurement of their sizes, in the characterisation of multiple stellar systems, and in the precise measurement of their positions in the sky and their motions (astrometry) \cite{2Labeyrie1978},\cite{3Monnier2003},\cite{4Lawson2000},\cite{5glindemann2011},\cite{6ESO}. A list of optical and infrared astronomical interferometers can be found in \cite{7list}. The technique is not simple: it is limited by seeing conditions (Earth's atmospheric turbulence), and working with optical interferometers usually requires considerable technical and optical expertise. Teaching astronomical interferometry to undergraduate and graduate physics students is therefore not an easy task. It is usually presented briefly in Optics courses \cite{8Hecht1974},\cite{9Born_pablo},\cite{10Pedrotti2017}, and when dealing with Astronomy, most textbooks mainly focus on the classic Michelson interferometer \cite{11fundamAstron}. Due to these difficulties, few practical activities are carried out for training in these techniques at the university level, and those that exist belong mostly to astronomy, astrophysics, and space science courses. At the laboratory level, such practical training has been provided by a Michelson-type radio interferometer \cite{12Koda16}, by optical telescopes coupled to laser sources with polymeric optical fibers simulating stars \cite{13Illarramendi2014},\cite{14Arregui2017}, and by the set-up of interferometric experiments \cite{15Carbonell18}.
In this work, we take a further step in the development of practical work on interferometry in astronomy. By positioning different plates having several apertures, with various diameters and separations, at the entrance of a 28 cm telescope, we have built a simple interferometer and with it, we have observed three bright stars (Betelgeuse, Rigel and Sirius). We have analysed the stellar interferograms by using optical interferometry theory. It is shown that the atmospheric turbulence reduces the long-exposure fringe visibility by a factor that depends on the Fried parameter. By studying the decay of the visibility with baseline, we have estimated the Fried parameter ($r_0$) for each case. The star sizes could not be estimated due to the small values of the baselines provided by the experiment \cite{1Michelson1921},\cite{2Labeyrie1978},\cite{3Monnier2003},\cite{4Lawson2000},\cite{5glindemann2011}. \section{Theoretical background}\label{sec2} A simplified operation of the stellar interferometer can be carried out by using a single telescope whose aperture is covered by a lid with two circular pinholes of variable separation between them, called baseline $B$. The optical fundamentals of this simple interferometer are based on those of Young's double-slit experiment, where the beams emerging from each pinhole of diameter $D$ form interference fringes in the focal plane of the telescope or plane of observation. An illustration of the procedure of this interferometer is shown in Fig. \ref{fig1}. The quality of the interference fringes detected at the observation plane is measured by the fringe visibility or contrast $V$. This is calculated by the following expression \cite{8Hecht1974}: \begin{equation} V = \frac{I_{max}-I_{min}}{I_{max}+I_{min}} \label{eq:eq1} \end{equation} \, where $I_{max}$ and $I_{min}$ are the maximum and minimum intensities of the interference fringes, respectively.
The visibility is scaled from 0 to 1, where 0 means no fringes and 1 denotes fringes with perfect contrast. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{fig1.pdf} \caption{Simple scheme of a double pinhole stellar interferometer. The two pinholes ($Q_1$ and $Q_2$) of diameter $D$ are separated by a distance $B$, and are placed far away from the source.} \label{fig1} \end{figure} Taking the simplest model to describe the emission of a star, i.e. a circular, uniform, and spatially incoherent source emitting quasi-monochromatic light, the diffraction-limited interference pattern at the plane of observation can be expressed as follows \cite{9Born_pablo}: \begin{equation} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\left(1+V_{s} \cos \left(\frac{2 \pi}{\lambda} \alpha B\right)\right) \label{eq:eq2} \end{equation} \begin{equation*} \text{with } V_{s}=2\left\lvert{\frac{J_{1}\left(\frac{\pi \alpha^{\prime} B}{\lambda}\right)}{\frac{\pi \alpha^{\prime} B}{\lambda}}}\right\rvert \end{equation*} where $J_1$ is the first-order Bessel function of the first kind, $\alpha^{'}$ is the angular size of the source and $\alpha$ is the observation angle. The product $\alpha B$ is the optical-path difference for small values of $\alpha$, and $I_0$ is a constant. $V_s$ is the spatial fringe visibility for this simple model. $V_s$ does not depend on $\alpha$, and it is inversely proportional to the source size and to the baseline distance. In fact, the steady decrease of $V_s$ from a value of 1 when $(\pi \alpha^{'} B)/\lambda=0$ to a value of 0 when $(\pi \alpha^{'} B)/\lambda=1.22\pi$ allows the determination of the source size if $V_s$ is measured as a function of the baseline distance $B$. The easiest procedure to estimate the source diameter is to determine the lowest value of $B$ for which the interference fringes disappear.
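Eq. (\ref{eq:eq2}) is easy to explore numerically. The short Python sketch below is our own illustration, not part of the original analysis; it uses only the standard library, evaluating $J_1$ from its integral representation, and computes the spatial visibility $V_s$, the baseline at which the fringes of a uniform disc vanish, and the number of fringes visible inside the central Airy disc.

```python
import math

def j1(x, n=2000):
    # First-order Bessel function of the first kind, evaluated from the
    # integral representation J1(x) = (1/pi) * int_0^pi cos(t - x sin t) dt
    # with the trapezoidal rule (adequate for the arguments used here).
    h = math.pi / n
    f = lambda t: math.cos(t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, n):
        s += f(k * h)
    return s * h / math.pi

def jinc(u):
    # J1(u)/u with its limiting value 1/2 at u = 0.
    return 0.5 if u == 0.0 else j1(u) / u

def spatial_visibility(alpha_src, B, lam):
    # V_s = 2 |J1(x)/x| with x = pi * alpha' * B / lambda  (Eq. 2)
    return 2.0 * abs(jinc(math.pi * alpha_src * B / lam))

def n_visible_fringes(D, B):
    # Fringes inside the central Airy disc: its angular width 2*1.22*lam/D
    # divided by the fringe spacing lam/B.
    return round(2 * 1.22 * B / D)

MAS = math.pi / (180.0 * 3600.0 * 1000.0)  # one milliarcsecond in radians

alpha_bet = 54.04 * MAS          # Betelgeuse angular diameter
lam = 656.3e-9                   # H-alpha wavelength

B_zero = 1.22 * lam / alpha_bet  # baseline at which the fringes vanish
V_238 = spatial_visibility(alpha_bet, 0.238, lam)  # longest baseline used
```

For the 238 mm baseline the sketch gives $V_s \approx 0.99$, while the fringes of Betelgeuse would only vanish at a baseline of about 3 m, as found by Michelson and Pease; the fringe counts (7 for $D$ = 51 mm, $B$ = 147 mm; 19 for $D$ = 30 mm, $B$ = 238 mm) match the patterns discussed in Sec. \ref{sec4}.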
The reduction of $V_s$ as the source size or the baseline increases is a result of the spatial coherence of the light. The function $\frac{J_1(\frac{\pi \alpha D}{\lambda})}{\frac{\pi \alpha D}{\lambda}}$ represents the irradiance distribution of the diffraction-limited response to a point source illuminating one circular pinhole, which is the Airy pattern. The total number of visible fringes is limited by the diffraction effect, as well as by the value of the visibility function. If the baselines used in the measurements were short enough to provide very small values for the quotient $(\pi \alpha^{\prime} B)/\lambda$, the value of the fringe visibility would be very close to 1 and, therefore, Eq. (\ref{eq:eq2}) could be simplified to: \begin{equation} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\left(1+\cos \left(\frac{2 \pi}{\lambda} \alpha B\right)\right) \label{eq:eq3} \end{equation} \, which is the diffraction-limited irradiance distribution produced by a point source emitting quasi-monochromatic light of wavelength $\lambda$. If, in addition, a finite spectral bandwidth $\Delta\lambda$ of the light is taken into account, the resulting fringe pattern would be formed by adding up the interference patterns given by Eq. (\ref{eq:eq3}) at all wavelengths included in $\Delta\lambda$. This effect reduces the fringe visibility as the observation angle $\alpha$ or, equivalently, the time delay between beams $\tau$ is increased. The time delay for the interference fringes to vanish is called the coherence time $\tau_c$, which is defined as the reciprocal of the frequency bandwidth of the light.
Taking this definition into account, an approximate diffraction-limited irradiance distribution at the plane of observation could be written as: \begin{equation} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\left(1+V_{t}\cos \left(\frac{2 \pi}{\lambda} \alpha B\right)\right) \label{eq:eq4} \end{equation} \begin{equation*} \text{with } V_{t}= 1 - \alpha B \frac{\Delta \lambda}{\lambda^2} = 1 - m\frac{\Delta \lambda}{\lambda} \end{equation*} \, An exact expression for the temporal fringe visibility $V_t$ depends on the form of the spectral bandwidth \cite{9Born_pablo},\cite{10Pedrotti2017}. The parameter $m$ in Eq. (\ref{eq:eq4}) is an integer number that indicates the order of interference. One of the consequences of observing a source with a significant bandwidth is the dependence of the temporal visibility $V_t$ on the position considered on the observation plane. In particular, the value of $V_t$ decreases down to 0 as $\alpha$ increases up to $\alpha_e =\lambda^2 /(B \Delta \lambda)$ or as the interference order becomes $m = \lambda/\Delta\lambda$. This decrease in the visibility is more pronounced when the value of $\Delta \lambda$ is greater. The behavior of the temporal fringe visibility $V_t$ as a function of the wavelength distribution is a result of the temporal coherence of the light. If the values of $B$ and $\Delta \lambda$ used in the measurements resulted in very high values of $\alpha_e$, $V_t$ at the observation positions close to the optic axis would be nearly 1. In that case, Eq. (\ref{eq:eq4}) could also be simplified to Eq. (\ref{eq:eq3}). The preceding equations are only valid for an atmosphere without turbulence. Turbulence in Earth's atmosphere causes the fringes to undergo random changes due to inhomogeneities found by light in its path to the interferometer.
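The importance of the filter bandwidth can be quantified with the observing parameters used later in the paper (35 nm $H_{\alpha}$ filter, $B$ = 238 mm, $D$ = 30 mm). The sketch below is a back-of-the-envelope check of our own rather than part of the original analysis; it compares the washout angle $\alpha_e$ with the angular radius of the Airy disc.

```python
import math

lam, dlam = 656.3e-9, 35e-9      # H-alpha filter centre and bandwidth
B, D = 0.238, 0.030              # widest baseline, smallest pinhole used

m_max = lam / dlam               # interference order at which V_t reaches 0
alpha_e = lam ** 2 / (B * dlam)  # corresponding observation angle (rad)
alpha_airy = 1.22 * lam / D      # angular radius of the central Airy disc

# m_max is about 19 and alpha_e lies outside the Airy disc, so V_t is
# close to 1 over the region where the fringes are visible at all.
```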
For long-exposure times and under the assumption that the spatial and temporal fringe visibilities ($V_s$ and $V_t$, respectively) are unity, the diffraction-limited irradiance distribution at the plane of observation can be expressed as follows: \begin{equation} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\bigg(1+V_{a}\cos \left(\frac{2 \pi}{\lambda} \alpha B\right)\bigg) \label{eq:eq5} \end{equation} \begin{equation*} \text{with } V_{a}= \exp\left[-3.44\left(\frac{B}{r_0}\right)^{5/3}\right] \end{equation*} \, $V_a$ is the atmospheric fringe visibility of the time-averaged interference pattern produced by the light that has passed through the Earth's turbulent atmosphere \cite{17Fried65}. $r_0$, known as the Fried parameter, is an atmospheric coherence length that measures the seeing quality of the atmosphere \cite{18Roddier1981a}. The smaller $r_0$ is, the larger the effects of turbulence on the propagating wave are. $r_0$ varies with wavelength as $\lambda^{6/5}$, so it becomes smaller at shorter wavelengths, which implies a more severe turbulence effect on the wavefront. Typical values for $r_0$ under good seeing conditions are 15--20 cm at visible wavelengths. In our case, the turbulence-induced random phase fluctuations of the fields drive the visibility rapidly toward zero as the baseline $B$ is increased (see the expression of $V_a$ in Eq. (\ref{eq:eq5})), that is, atmospheric turbulence causes the light field to become spatially incoherent for baselines $B \geq r_0$. By analyzing the dependence of $V_a$ on baseline, we can estimate $r_0$, which allows us to characterize the atmospheric turbulence \cite{17Fried65}. \subsection{Four-pinhole interference} Another type of interferometer can be made by covering the telescope with a lid having more than two pinholes.
This arrangement could provide more baselines, with different lengths and orientations, thus allowing additional measurements to be conducted and more information to be obtained. The interference pattern produced by four pinholes would be generated by the superposition of all beams coming from each hole. The general expression for the irradiance distribution would be given in terms of six different visibilities describing the correlation of the optical fields in each of the combinations of pairs of holes that can be considered. We have worked out the diffraction-limited interference pattern produced by four pinholes placed as shown in Fig. \ref{fig2}. Since two of the baselines are equal in this case, only four visibilities are necessary. Following Eq. (\ref{eq:eq5}), the intensity is given by: \begin{equation} \label{eq:eq6} \begin{aligned} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\bigg(4+4 V_{1} \cos \bigg(\frac{\pi}{\lambda} \alpha\left(B_{2}-B_{1}\right)\bigg) \,\, + \\ \, 4 V_{2} \cos \bigg(\frac{\pi}{\lambda} \alpha\left(B_{2}+B_{1}\right)\bigg) +2 V_{3} \cos \left(\frac{2 \pi}{\lambda} \alpha B_{1}\right)+2 V_{4} \cos \left(\frac{2 \pi}{\lambda} \alpha B_{2}\right)\bigg) \end{aligned} \end{equation} \, For very short baseline distances and quasi-monochromatic light, the visibilities are determined by the Earth's turbulence and can be calculated using the expression of $V_a$ shown in Eq. (\ref{eq:eq5}) for each corresponding baseline. In an atmosphere without turbulence, Eq. (\ref{eq:eq6}) could be simplified to: \begin{equation} I(\alpha)=I_{0}\left(\frac{J_{1}\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^{2}\bigg(\cos \left(\frac{\pi}{\lambda} \alpha B_{1}\right)+\cos \left(\frac{\pi}{\lambda} \alpha B_{2}\right)\bigg)^{2} \label{eq:eq7} \end{equation} for the case of quasi-monochromatic light illuminating the four pinholes with short baselines.
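As a quick consistency check (ours, not the authors'), Eq. (\ref{eq:eq6}) with all four visibilities set to unity must reduce to the turbulence-free form of Eq. (\ref{eq:eq7}), up to a constant factor absorbed in $I_0$. The common Airy envelope cancels, so it suffices to compare the bracketed factors:

```python
import math

def bracket_eq6(a, b):
    # Bracketed factor of Eq. (6) with V1 = V2 = V3 = V4 = 1, writing
    # a = pi*alpha*B1/lambda and b = pi*alpha*B2/lambda.
    return (4 + 4 * math.cos(b - a) + 4 * math.cos(b + a)
            + 2 * math.cos(2 * a) + 2 * math.cos(2 * b))

def bracket_eq7(a, b):
    # Bracketed factor of Eq. (7): coherent superposition of the four beams.
    return (math.cos(a) + math.cos(b)) ** 2

# Sample phases with the baseline ratio B2/B1 = 238/133 used for Betelgeuse;
# Eq. (6) with unit visibilities equals 4x Eq. (7) everywhere.
for k in range(1, 60):
    a = 0.05 * k
    b = a * 238.0 / 133.0
    assert abs(bracket_eq6(a, b) - 4.0 * bracket_eq7(a, b)) < 1e-9
```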
If, in addition, the holes were equally spaced, $B_1=B_2/3=B$, by applying trigonometric relations Eq. (\ref{eq:eq7}) could be simplified to the more familiar expression describing the interference pattern generated by 4-slits or holes \cite{8Hecht1974},\cite{9Born_pablo},\cite{10Pedrotti2017}. \begin{equation} I(\alpha)=I_0\left(\frac{J_1\left(\frac{\pi}{\lambda} \alpha D\right)}{\frac{\pi}{\lambda} \alpha D}\right)^2 \frac{\sin ^2\left(4\left(\frac{\pi}{\lambda} \alpha \mathrm{B}\right)\right)}{\sin ^2\left(\frac{\pi}{\lambda} \alpha \mathrm{B}\right)} \label{eq:eq8} \end{equation} \section{Experimental background}\label{sec3} \subsection{Stellar observation}\label{subsec2} The interference fringes were obtained for the bright stars Betelgeuse, Rigel and Sirius. Betelgeuse is the biggest star of our night sky in terms of angular size. It was the first star to be resolved by Michelson and Pease using stellar interferometry \cite{1Michelson1921}. As a red giant, its large size comes along with a relatively low temperature. Rigel, a well-known blue supergiant star ($\beta$ Orionis), has a surface temperature that surpasses 10000 K. Despite its angular size being much smaller than Betelgeuse's, its high temperature makes it a bright object in the sky. Sirius is a much smaller star compared with Betelgeuse. As an A-type star, its temperature is very high. In addition, this star is much closer, at only 8.6 light-years. Table \ref{tab1:star_properties} shows their relevant properties. Sirius is a binary star, consisting of Sirius A (the brighter star) and Sirius B (the dimmer one). Since the ratio between the emitted brightnesses of these stars, known as the contrast factor $f$ of the binary star, is very small, Sirius could be approximated as a single star for this study.
\begin{table}[ht] \begin{center} \begin{minipage}{400pt} \caption{Properties of the stars studied: the radius (in solar radii, $R_{\odot}$), the distance to the star (in light-years), the angular size (mas = milliarcseconds), the brightness ratio $f$ between the components in the case of double stars, the apparent magnitude $m_V$, and the surface temperature $T$.} \label{tab1:star_properties} \begin{tabular}{lccccccc} \hline\hline \textbf{Star} & \multicolumn{1}{l}{\textbf{Radius ($\boldsymbol{\rm{R_\odot}}$)}} & \multicolumn{1}{l}{\textbf{Distance ($\boldsymbol{\rm{Ly}}$)}} & \multicolumn{1}{l}{\textbf{Size ($\rm{\textbf{mas}}$)}} & \textbf{f} & \multicolumn{1}{l}{\textbf{$\text{\textbf{m}}_\text{\textbf{V}}$}} & \multicolumn{1}{l}{\textbf{T($\rm{\textbf{K}}$)}} & \\ \hline \textbf{Betelgeuse} & 887 & 497.95 & 54.04 & - & 0.52 & 3600\\ \textbf{Rigel} & 79 & 863 & 2.6 & - & 0.15 & 12100\\ \textbf{Sirius A} & 2 & 8.6 & 6.04 & 0.0001 & -1.46 & 9940\\ \hline\hline \end{tabular} \end{minipage} \end{center} \end{table} \subsection{Experimental set-up}\label{subsubsec2} The observations were carried out with a Celestron C11 XLT telescope with a 280 mm aperture and a focal length of 2800 mm from the campus of a European university. Once the telescope was well focused, its aperture was blocked by a lid with two or four circular pinholes. Fig. \ref{fig2} illustrates the arrangement of the pinholes in the lids. Several lids with two pinholes were made with different combinations of pinhole diameters $D$ and pinhole separations $B$. The lids were made of robust cardboard and the pinholes were carefully shaped in order to minimize errors. A lid with four pinholes was used to detect interference fringes of Betelgeuse. All the chosen values for the pinhole diameters $D$ allowed us to observe the interference fringes clearly.
Table \ref{tab2:Exp_param} summarises the experimental parameters used in the capture of the interference fringes. \begin{figure}[ht] \centering \includegraphics[scale=0.75]{fig2.pdf} \caption{Diagrams of the lids. Left, lid with two holes. Right, lid with four holes. The diameters of the two (or four) holes in each lid are equal.} \label{fig2} \end{figure} \begin{table} \begin{center} \begin{minipage}{400pt} \caption{Values of the experimental parameters used in each observation night. The absolute error for the values of D and B is 1 mm.}\label{tab2:Exp_param} \begin{tabular}{cccc} \hline\hline \multicolumn{4}{c}{\textbf{BETELGEUSE}} \\ \hline \textbf{D (mm)} & \textbf{B (mm)} & \textbf{Filter} & \textbf{Date (yy-mm-dd)} \\ \hline 30 & 133 & H$\alpha$ 35 nm & 17-03-15 \\ 30 & 238 & H$\alpha$ 35 nm & 17-03-15 \\ 51 & 147 & H$\alpha$ 35 nm & 17-03-15 \\ \hline \multicolumn{4}{c}{\textbf{Four holes}} \\ \hline 30 & $B_1$=133 & H$\alpha$ 35 nm & 17-03-15 \\ & $B_2$=238 & & \\ \hline \hline \multicolumn{4}{c}{\textbf{RIGEL}} \\ \hline \multicolumn{1}{l}{\textbf{D (mm)}} & \multicolumn{1}{l}{\textbf{B (mm)}} & \multicolumn{1}{l}{\textbf{Filter}} & \multicolumn{1}{l}{\textbf{Date (yy-mm-dd)}} \\ \hline 51 & 147 & H$\alpha$ 35 nm & 17-03-15 \\ 51 & 222 & H$\alpha$ 35 nm & 17-03-15 \\ \hline \hline \multicolumn{4}{c}{\textbf{SIRIUS}} \\ \hline \multicolumn{1}{l}{\textbf{D (mm)}} & \multicolumn{1}{l}{\textbf{B (mm)}} & \multicolumn{1}{l}{\textbf{Filter}} & \multicolumn{1}{l}{\textbf{Date (yy-mm-dd)}} \\ \hline 22 & 220 & H$\alpha$ 7 nm & 17-03-10 \\ 22 & 158 & H$\alpha$ 7 nm & 17-03-10 \\ 61 & 178 & H$\alpha$ 7 nm & 17-03-10 \\ 30 & 238 & H$\alpha$ 35 nm & 17-03-23 \\ 30 & 133 & H$\alpha$ 35 nm & 17-03-23 \\ 50 & 222 & H$\alpha$ 35 nm & 17-03-23 \\ 65 & 162 & H$\alpha$ 35 nm & 17-03-23 \\ 66 & 162 & H$\alpha$ 35 nm & 17-03-29 \\ 65 & 205 & H$\alpha$ 35 nm & 17-03-29 \\ 90 & 187 & H$\alpha$ 35 nm & 17-03-29 \\ 66 & 162 & H$\beta$ 8.5 nm & 17-03-29 \\ 90 & 187 & H$\beta$ 8.5 
nm & 17-03-29 \\ \hline\hline \end{tabular} \end{minipage} \end{center} \end{table} The images of the fringe patterns were acquired with a camera attached to the focal plane of the telescope. We used a DMK41AU02 camera equipped with the Sony ICX205AL CCD chipset, with a pixel size of 4.65 $\mu m$ and a dynamic range of 36 dB. To improve the quality of the images, we have processed them by stacking video sequences for every observation with the same methodology described in previous papers \cite{19Rojas2017},\cite{20SanchezLavega2019}. Hence, we can obtain images with better quality and remove the random noise present in the individual frames due to the effects of atmospheric turbulence and telescope vibrations. To obtain resolved fringes at different wavelengths, we used $H_{\alpha}$ filters centered on a wavelength of 656.3 nm with bandwidths of 7 and 35 nm and an $H_{\beta}$ filter with an 8.5 nm bandwidth centered at 486.1 nm \cite{21baader7nm}. We used very long exposure times, ranging from 0.3 s to 2.4 s, so that suitable fringe images were detected. This fact adversely affected the measurements due to the additional effect of different turbulence scales in the atmosphere, which usually changes on timescales above a few milliseconds. The obtained long-exposure images were converted to digital values, according to the camera range. The brightness levels from the interference patterns were digitized into 512 levels. Those levels were treated with the free software ImageJ, which allowed us to obtain the profiles of the images \cite{22ImageJ}. The software also allows us to determine the values of the irradiance $I_{max}$ and $I_{min}$ to calculate the visibility of the fringe pattern. In order to show the photometric cuts as a function of the angular size in the focal plane, the distances in pixels were transformed to radians using the focal length of our telescope (2800 mm) and taking into account the magnifying effect induced by the Barlow lens.
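The pixel-to-angle conversion amounts to dividing the pixel pitch by the effective focal length. The sketch below uses the quoted pixel size (4.65 $\mu m$) and focal length (2800 mm); the $2\times$ Barlow factor is our own assumption for illustration, since the text does not state the magnification used.

```python
import math

PIXEL = 4.65e-6   # m, Sony ICX205AL pixel pitch
F_TEL = 2.8       # m, telescope focal length
BARLOW = 2.0      # assumed Barlow magnification (not given in the text)

def pixels_to_rad(n_pixels, barlow=BARLOW):
    # Small-angle conversion in the focal plane: alpha = x / f_eff.
    return n_pixels * PIXEL / (F_TEL * barlow)

def rad_to_arcsec(alpha):
    return alpha * 180.0 * 3600.0 / math.pi

# Angular scale of one pixel, and the fringe spacing lambda/B in pixels
# for lambda = 656.3 nm and B = 238 mm:
scale_arcsec = rad_to_arcsec(pixels_to_rad(1))
spacing_px = (656.3e-9 / 0.238) / pixels_to_rad(1)
```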
\section{Results and discussion}\label{sec4} In this Section, we will study the interference patterns obtained with the values of baseline ($B$) and spectral bandwidth ($\Delta \lambda$) displayed in Table \ref{tab2:Exp_param}. Because such small values of $B$ and such narrowband filters were used, atmospheric turbulence is the cause of the reduction in the visibility of the observed fringe patterns. As an example of the experimentally detected interferograms, Fig. \ref{fig3} shows the interference patterns obtained for different combinations of values of $D$ and $B$ with the $H_{\alpha}$ filter of 35 nm, together with their corresponding relative irradiance profiles. Figs. \ref{fig3}(a) and (b) correspond to the Betelgeuse star, Figs. \ref{fig3}(c) and (d) to Rigel, and Figs. \ref{fig3}(e) and (f) to Sirius. As can be seen, the fringe patterns obtained through the atmosphere and the telescope with the double-pinhole lid deviate little from the purely diffraction-limited interference patterns. We can notice both that the interference fringes are modulated by the larger Airy pattern generated by the circular apertures and that the spatial frequency of the fringes increases as $B$ is increased. We also notice that the fringe visibility is poor in almost all cases. Further interference fringes and relative irradiance profiles with low visibility can be seen in Fig. \ref{fig4}, which displays the interference fringes from Sirius obtained with the narrowest $H_{\alpha}$ filter and the $H_{\beta}$ filter. The interference patterns obtained with other experimental parameters (see Table \ref{tab2:Exp_param}) are very similar to those shown in Figs. \ref{fig3} and \ref{fig4}. Since the conditions to approximate $V_s$ and $V_t$ to 1 are satisfied in our experiments, the obtained visibilities are mainly due to the effect of the atmospheric turbulence.
On the one hand, the very short baselines used in the capture of the interference fringes give values for $\frac{\pi\alpha^{\prime} B}{\lambda}$ that are less than 0.4 rad, thus ensuring that $V_s$ is very close to 1. The fact that the visibility does not decrease implies that we cannot estimate the source size. For the case of Betelgeuse, which is the largest star analyzed in this work, interference fringes would disappear entirely with a baseline of about 3 meters \cite{1Michelson1921}. On the other hand, the order of interference at which the fringe visibility is zero using the broadest filter, namely the $H_{\alpha}$ filter with $\Delta \lambda$ = 35 nm, is 18, which is far away from the central maximum of interference. This implies that $V_t$ at the observation positions near the optic axis can also be approximated by 1. Therefore, we can conclude that the reduction in the visibility of our fringe patterns is due to the atmospheric turbulence. This fact allows us to use Eq. (\ref{eq:eq5}) to describe the irradiance distribution of the interference fringes. In order to do so, it is necessary to know the value of the Fried parameter corresponding to each measurement, which can be calculated from the analysis of the dependence of the visibility values on the baseline. Fig. \ref{fig5} shows the curves of the fringe visibility $V$ of the stars analyzed as a function of the two-pinhole baseline distance. The data plotted in each curve have been obtained on the same night. As expected, the values of the fringe visibility are quite low and they tend to diminish as the baseline distance increases. In the calculation of the visibilities, $I_{max}$ has been determined from the irradiance of the central maximum and $I_{min}$ from the average of the two adjacent minima (see Fig. \ref{fig3} and Fig. \ref{fig4}).
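The visibility extraction just described, $I_{max}$ from the central maximum and $I_{min}$ from the mean of the two adjacent minima after background correction, can be sketched as follows. This is our simplified stand-in for the ImageJ workflow, tested here on a synthetic photometric cut:

```python
import math

def visibility_from_profile(profile, background=0.0):
    # I_max: global maximum of the cut; I_min: mean of the first local
    # minima found on either side of it; both background-corrected.
    i0 = max(range(len(profile)), key=lambda i: profile[i])
    def first_min(step):
        i = i0
        while 0 < i + step < len(profile) - 1 and profile[i + step] <= profile[i]:
            i += step
        return profile[i]
    imax = profile[i0] - background
    imin = 0.5 * (first_min(-1) + first_min(+1)) - background
    return (imax - imin) / (imax + imin)

# Synthetic fringe cut with known visibility V = 0.4 plus a background level:
V_true, bg = 0.4, 10.0
profile = [bg + 100.0 * (1 + V_true * math.cos(0.3 * (i - 50)))
           for i in range(101)]
V_est = visibility_from_profile(profile, background=bg)
```

The small residual between `V_est` and the input visibility comes only from the discrete sampling of the minima.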
All irradiance measurements were corrected for the background irradiance and the errors in the visibility values were estimated from the standard deviations calculated from five independent measurements taken for each of the interference patterns obtained. It can be shown that the error in the visibility arising from the uncertainties of the baselines and pinhole diameters ($\pm 1$ mm) is negligible in comparison with the standard-deviation error. \begin{figure} \centering \includegraphics[scale=0.6]{fig3.pdf} \caption{Two-pinhole interference patterns produced with the $H_{\alpha}$ filter, $\Delta \lambda$ = 35 nm. On the left and on the center, the interference patterns obtained experimentally and the photometric cuts; on the right, the theoretical curves calculated from Eq. (\ref{eq:eq5}) with $\lambda$ = 656.3 nm, and the corresponding values $D$, $B$, and $r_0$. (a) Betelgeuse star, $D$ = 51 mm, $B$ = 147 mm; (b) Betelgeuse star, $D$ = 30 mm, $B$ = 238 mm; (c) Rigel star, $D$ = 51 mm, $B$ = 147 mm; (d) Rigel star, $D$ = 51 mm, $B$ = 222 mm; (e) Sirius star, $D$ = 90 mm, $B$ = 187 mm; (f) Sirius star, $D$ = 66 mm, $B$ = 205 mm.} \label{fig3} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.65]{fig4.pdf} \caption{Two-pinhole interference patterns produced by Sirius with the narrow $H_{\alpha}$ filter ($\Delta \lambda$ = 7 nm) and with the $H_{\beta}$ filter ($\Delta \lambda$ = 8.5 nm). On the left and on the center, the interference pattern obtained experimentally and the photometric cut. On the right, the theoretical curve calculated from Eq. (\ref{eq:eq5}) with the corresponding values $D$, $B$, and $r_0$. (a) $\lambda$ = 656.3 nm, $D$ = 22 mm, $B$ = 220 mm; (b) $\lambda$ = 486.1 nm, $D$ = 66 mm, $B$ = 162 mm.} \label{fig4} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.65]{fig5.pdf} \caption{Visibility measurements as a function of the two-pinhole baseline distance obtained for the three stars.
The dashed lines are the fittings of the data to the expression of $V_a$. Data obtained from (a) Betelgeuse and Rigel on March 15, 2017; (b) Sirius on March 10, 2017; (c) Sirius on March 23, 2017; (d) Sirius $H_{\alpha}$ and $H_{\beta}$ on March 29, 2017.} \label{fig5} \end{figure} By fitting the expression of $V_a$ in Eq. (\ref{eq:eq5}) to the measured visibility data plotted in Fig. \ref{fig5}, we have estimated the Fried parameter $r_0$ for each observation night at the wavelengths used, i.e. at 656.3 nm for the fringes obtained with the $H_{\alpha}$ filters and at 486.1 nm for those captured with the $H_{\beta}$ one. The values of $r_0$ estimated from the fittings are displayed in Table \ref{tab3:rzero_results}. The highest values of the Fried parameter are obtained for the night of March 15, 2017, on which the interference fringes of Betelgeuse and Rigel were measured with the $H_{\alpha}$ filter of 35 nm. It can be noticed that the $r_0$ values at 656.3 nm obtained from the interference fringes of Sirius are lower than those obtained from Betelgeuse and Rigel. This could be due to stronger atmospheric turbulence on the corresponding observation nights. It could also be due to the longer exposure times used to obtain the interference fringes from Sirius, since its emission in the spectral range of the $H_{\alpha}$ filter is lower than the emission of the other stars. On the other hand, the spectral dependence of the two $r_0$ values calculated from Fig. \ref{fig5} (d), at 656.3 nm and 486.1 nm, agrees very well with the theoretical spectral dependence of the Fried parameter, that is, $r_0 \propto \lambda^{6/5}$. By using the proportionality constant ($0.0133 \pm 0.0008\;cm/nm^{6/5}$) calculated from the value of $r_0$ at $\lambda$ = 656.3 nm, we get $r_0$ = 22 $\pm$ 2 cm at $\lambda$ = 486.1 nm, which agrees very well with the experimental value displayed in Table \ref{tab3:rzero_results}.
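The fit of $V_a$ to the visibility-baseline data can be linearised: $-\ln V = 3.44\,(B/r_0)^{5/3}$ is a straight line through the origin in the variable $B^{5/3}$, and its slope gives $r_0$ directly. The sketch below is our simplified stand-in for the fitting procedure (the data points are synthetic, generated from a known $r_0$), together with the $\lambda^{6/5}$ scaling just discussed.

```python
import math

def v_atm(B, r0):
    # Long-exposure atmospheric fringe visibility (Eq. 5).
    return math.exp(-3.44 * (B / r0) ** (5.0 / 3.0))

def fit_r0(baselines, visibilities):
    # Linearised least squares: y = -ln V = k * B^(5/3), slope through the
    # origin, then r0 = (3.44 / k)^(3/5).
    xs = [B ** (5.0 / 3.0) for B in baselines]
    ys = [-math.log(V) for V in visibilities]
    k = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return (3.44 / k) ** (3.0 / 5.0)

def scale_r0(r0, lam_from, lam_to):
    # Fried parameter scales with wavelength as lambda^(6/5).
    return r0 * (lam_to / lam_from) ** (6.0 / 5.0)

# Synthetic check: visibilities generated with r0 = 30 cm are fitted back.
Bs = [0.133, 0.147, 0.187, 0.222, 0.238]   # baselines used in the paper (m)
Vs = [v_atm(B, 0.30) for B in Bs]
r0_fit = fit_r0(Bs, Vs)

# Scaling the measured r0 = 32 cm at 656.3 nm down to H-beta (486.1 nm):
r0_hbeta = scale_r0(0.32, 656.3e-9, 486.1e-9)
```

With $r_0$ = 32 cm at 656.3 nm, the scaling gives about 22 cm at 486.1 nm, in line with Table \ref{tab3:rzero_results}.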
Using the same proportionality constant, we estimate that $r_0$ at 500 nm is 23 $\pm$ 2 cm. This value is slightly higher than the typical Fried parameter at a good observation site. This result, together with the larger values obtained for $r_0$ on the other observation nights, indicates that the atmospheric seeing was good in all cases. In all of them, the obtained values for $r_0$ are of the order of the size of the telescope and, of course, larger than the diameter of the holes. These values of $r_0$ are in agreement with the fact that the shape of the interference fringes through the atmosphere is dominated by the diffraction effect, as was shown in Figs. \ref{fig3} and \ref{fig4}. Nevertheless, it must be noted that better estimates of the Fried parameter could be obtained by building on this first approach and varying the values of $B$ more systematically. \begin{table} \centering \begin{minipage}{400pt} \caption{Fried parameters at the corresponding wavelength for each observation night obtained from the fittings of the experimental points of Fig. \ref{fig5}. Errors of the Fried parameters have been estimated from the fittings using Eq. (\ref{eq:eq5}).} \label{tab3:rzero_results} \begin{tabular}{ccccc} \hline\hline \textbf{Date (yy-mm-dd)} & \textbf{Star} & \textbf{$r_0$ (cm)} & \textbf{$\lambda$ (nm)} & \textbf{$\Delta\lambda$ (nm)}\\ \hline 17-03-15 & Betelgeuse & \;\,36\;$\pm$\;10 & 656.3 & 35 \\ 17-03-15 & Rigel & 33\;$\pm$\;2 & 656.3 & 35 \\ 17-03-10 & Sirius & 24\;$\pm$\;5 & 656.3 & 7 \\ 17-03-23 & Sirius & 25\;$\pm$\;2 & 656.3 & 35 \\ 17-03-29 & Sirius & 32\;$\pm$\;2 & 656.3 & 35\\ 17-03-29 & Sirius & 22\;$\pm$\;4 & 486.1 & 8.5\\ \hline\hline \end{tabular} \end{minipage} \end{table} In the third column of Fig. \ref{fig3}, we have included the theoretical photometric curves obtained from Eq. (\ref{eq:eq5}) using the corresponding values of $r_0$, $D$ and $B$ for each of the six measurements.
As can be seen, the experimental and theoretical photometric curves are in good agreement regarding the variation of the number of visible fringes with different combinations of $D$ and $B$ and also in the fringe visibilities. If the fringe visibility is high enough, the number of visible fringes can be easily calculated from the angular radius of the Airy disk ($1.22\lambda/D$) and the angular fringe spacing ($\lambda/B$), in the same way as for the diffraction-limited interference fringes. For instance, the change in the combination of values of $D$ and $B$ employed for the two fringe patterns produced by Betelgeuse (Figs. \ref{fig3}(a) and (b)) predicts a strong increase in the total number of visible fringes, from 7 to 19, which can be clearly seen in the experimental curves. In contrast, if the fringe visibility is low, the visible fringes are blurred and it is more difficult to analyze the effect of changing the values of $D$ and $B$ on the interference patterns, as happens in the Sirius interferograms (Figs. \ref{fig3}(e) and (f)). The theoretical profiles obtained from Eq. (\ref{eq:eq5}) using the corresponding wavelength and values of $D$, $B$ and $r_0$ have also been included in Fig. \ref{fig4}. If the fringes were only diffraction limited, we would observe 25 fringes in the interference pattern displayed in Fig. \ref{fig4} (a) and 5 fringes in that displayed in Fig. \ref{fig4} (b). In spite of the low visibility values of the two interference patterns, the theoretical predictions agree quite well with the experimental results. Finally, we have studied the interference pattern obtained for the case of four holes placed in the lid (see Fig. \ref{fig2}). It is well-known that, if the number of pinholes producing the interference is increased, the interference pattern changes with both the emergence of secondary maxima of irradiance and the narrowing and brightening of the fringes (principal maxima of irradiance). Fig.
\ref{fig6_general_scheme} shows the interference pattern produced by Betelgeuse using the lid with the four holes and the $H_{\alpha}$ filter of 35 nm in bandwidth. \begin{figure}[ht!] \centering \includegraphics[scale=0.7]{fig6.pdf} \caption{Four-pinhole interference pattern produced by Betelgeuse with the $H_{\alpha}$ filter of 35 nm. On the left and in the center, the interference pattern obtained experimentally and its photometric cut. On the right, the theoretical curve calculated from Eq. (\ref{eq:eq6}) with $\lambda$ = 656.3 nm, $D$ = 30 mm, $B_1$ = 133 mm, $B_2$ = 238 mm, $V_1$ = 0.87, $V_2$ = 0.32, $V_3$ = 0.18, $V_4$ = 0.52.} \label{fig6_general_scheme} \end{figure} We can clearly appreciate the effect of having more than two holes on the interference pattern. Since the baselines are very short and the light can be assumed to be quasi-monochromatic, the obtained interference pattern can be described by Eq. (\ref{eq:eq6}) using the parameters of the measurement and the four visibilities calculated from the expression of $V_a$ in Eq. (\ref{eq:eq5}). The experimental photometric cut and the theoretical result have been plotted in the second and third panels of Fig. \ref{fig6_general_scheme}, respectively. The four visibilities have been calculated by using the equation of $V_a$ with the four different baseline distances of the lid and with the value of the Fried parameter estimated previously from the Betelgeuse measurements carried out on the same night. As can be seen, a very good agreement is obtained between the theoretical and experimental curves, confirming the value of the Fried parameter. \section{Summary}\label{sec5} In this work, we show a simple experiment in which long-exposure interference patterns produced by three stars, Betelgeuse, Rigel and Sirius, can be detected using a bandpass filter and a digital camera coupled to a small telescope obscured by different lids with two or four holes.
From the analysis of the interference patterns, we can reach the following conclusions: \begin{itemize} \item It has been demonstrated that the obtained long-exposure fringe patterns produced by the stars can be very well described by diffraction-limited interference patterns produced by a quasi-monochromatic point source, but with a fringe visibility reduced by the atmospheric turbulence. \item In spite of the adverse effect of the atmospheric turbulence on the interference patterns, the interferograms are dominated by the phenomenon of diffraction and, consequently, it has been possible to verify the dependence of the interference patterns on different values of the two-pinhole baseline and the hole diameter. \item Through the analysis of the fringe visibility as a function of the two-pinhole baseline, we have been able to characterize the effect of the atmospheric turbulence on astronomical observations by calculating the Fried parameter for each observation night. \item A singular interference pattern produced by Betelgeuse using a lid with four pinholes has been observed and analyzed. The obtained interference pattern is satisfactorily described by an expression corresponding to the diffraction-limited and turbulence-affected interference pattern produced by a point source illuminating the four holes quasi-monochromatically. \end{itemize} The experiment is suitable for students and teachers in high schools and universities. Its simplicity and the interest of the results obtained make it ideal for implementation in postgraduate courses on astrophysics, astronomy or optics. In addition to showing the principle of operation of the Michelson stellar interferometer with the use of a telescope, a digital camera and several band-pass filters, the experiment allows one to highlight important concepts related to spatial interferometry, such as spatial and temporal coherence and astronomical seeing. \section*{Competing interests}\label{sec7} The authors declare no competing interests.
\section*{Availability of data and materials}\label{sec8} The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
\section{INTRODUCTION} Over the last decade there have been several studies investigating the fate of relativistic symmetries at the Planck scale (see, {\it e.g.}, Refs.\cite{amelino2013quantum,magueijo2003generalized,amelino2012fate}). Particular interest has been devoted to scenarios such that the (inverse of the) Planck scale would set the minimum allowed value for wavelengths, and notably it has emerged that such a feature could be implemented within a fully relativistic picture \cite{amelino2001testable,amelino2011principle}. This is the realm of the so-called doubly-special (or deformed-special) relativistic (DSR) theories \cite{amelinoDSRijmpd1135,amelino2001testable,amelino2010doubly,kowalski2003non} where the Planck scale plays the role of a second relativistic invariant, in addition to the speed-of-light scale. Evidently these DSR scenarios require the adoption of deformed Poincar\'e transformations connecting inertial frames \cite{amelinoDSRijmpd1135}, and the associated new invariant laws may indeed include an observer-independent minimum-wavelength law and/or an observer-independent modification of the dispersion/on-shellness relation (MDR). Another major implication of the DSR deformations of Poincar\'e transformations is that the laws of composition of momenta are no longer linear: in order to preserve their relativistic covariance they must be deformed to match the deformation of the Poincar\'e transformations.
The mathematical formalism of Hopf algebras has been found to be a natural possibility\footnote{Note however that examples of DSR-relativistic scenarios not involving Hopf algebras have been provided in Refs.\cite{freidelDESITTERSNYDER,balena}.} for formalizing DSR-relativistic scenarios \cite{agostini2004hopf}, since Hopf algebras can be deformations of standard Lie algebras in the form of non-linear deformations of both commutation relations among symmetry generators and the so-called coproducts (on which the laws of composition of momenta are based; see later). So far all of these studies (with the only exception of the exploratory analysis in Ref.\cite{gacmixingold}) focused on \textit{universal} deformations of the special-relativistic symmetries, {\it i.e.} deformations that affect identically all particle types. Here we explore the possibility that the description of Planck-scale physics might require an additional level of complexity, namely that different particles' kinematics might be dictated by different relativistic symmetries: we wonder whether, within the DSR framework, it is possible to consistently formulate relativistic theories that attribute different laws of kinematics to different particles, {\it i.e.} \textit{non-universal} deformations of relativistic symmetries. For the case of broken relativistic symmetries (preferred-frame scenarios) particle-dependent effects were first considered by Coleman and Glashow in Ref.\cite{coleman1997cosmic} and there were several developments of that research direction (see, {\it e.g.}, Ref.\cite{smeREVIEW}), but the possibility of particle-dependent properties within a relativistic picture has so far only been explored preliminarily by one of us in Ref.\cite{gacmixingold}, providing most results only at leading order in the deformation scale. We shall here report results valid to all orders in the deformation scale and consider a rather wide class of possible properties attributed to different types of particles.
We also intend to show that these scenarios can be built using rather mild modifications of standard Hopf-algebra techniques. As we shall discuss extensively in the following sections, the key building block for our scenarios is a novel mathematical tool which we call \textit{mixing coproduct}, a generalization of the standard Hopf-algebra notion of coproduct. \\ We shall here mostly postpone the analysis of the phenomenological implications of our \textit{non-universal} deformations of relativistic symmetries, but the careful reader will notice how that ultimate objective guides our technical efforts. In particular, while \textit{non-universal} deformations of relativistic symmetries do not necessarily require modifications of the on-shell relation, in all of our case studies there is at least one type of particle governed by modified on-shellness. Indeed, one of the main reasons of interest in DSR-relativistic theories has been their rich phenomenology when they involve modified on-shellness, a rare case of a Planck-scale effect testable with presently-available technologies \cite{grbgac1998,gacSMOLINprd2009,amelino2015icecube,jacob2007neutrinos}. Of particular interest for the analysis we here report is the fact that some recent tests of modifications of on-shellness for neutrinos have led to preliminarily encouraging results \cite{amelino2015icecube,Amelino-Camelia:2016ohi,amelino2016icecube,paperbyMA}, for values of the symmetry-deformation scale of about a tenth of the Planck scale, whereas for photons other analyses have led to apparently robust bounds on the symmetry-deformation scale going all the way up to the order of the Planck scale.
In spite of the very preliminary nature of the mentioned neutrino studies, one cannot avoid wondering what would have to be the relativistic picture if it were actually true that modifications of relativistic symmetries are stronger for neutrinos than for photons, leading us indeed to speculate about non-universality of the deformation scheme \cite{abdo2009limit,aharonian2008limits}. In addition to the present preliminary assessment of data on dispersion for photons and neutrinos, further motivation for non-universality is found upon contemplating macroscopic bodies (like soccer balls and planets) in a DSR picture: it is easy to see \cite{amelino2011relative} that a phenomenologically viable DSR picture should ensure that the deformation of relativistic properties fades away for macroscopic bodies, and there are known relativistic mechanisms to enforce this property (see, {\it e.g.}, \cite{amelino2011relative} and references therein); however it is still unclear what should be the DSR-relativistic kinematics applicable to processes involving a fundamental particle and a more macroscopic system. Assuming that indeed the DSR-deformation of relativistic properties is, for example, stronger for the electron than for the bucky ball, what are the conservation laws that one should apply for collisions between an electron and a bucky ball? Our paper is organized as follows. In Section II we briefly review a well-known example of \textit{universal} deformation of the Poincar\'e symmetries, based on the $\kappa$-Poincar\'e Hopf algebra. In Section III we begin by showing how it is possible to generalize such a setup in order to allow for models where different particles obey different symmetry laws. The first case we consider is that of having two (or more) deformed Poincar\'e algebras with different deformation scales, e.g. $\ell$ and $\ell'$.
This gives us the possibility to introduce the notion of \textit{mixing coproduct} for the formalization of the kinematics of such \textit{non-universal} deformed-symmetry models. In Section IV we show that a scenario previously advocated by Magueijo and Smolin \cite{magueijo2003generalized} can be equivalently reformulated in terms of a specific mixing coproduct. In Section V we add a further element of complexity and explore the possibility to define a mixing coproduct between different algebras, focusing on the case of mixing coproducts involving the $\kappa$-Poincar\'e algebra and the standard Poincar\'e Lie algebra. In our closing Section VI we offer a perspective on our results and some observations on possible future developments. \section{Universal coproduct: $\kappa$-Poincar\'{e} case study} In preparation for discussing DSR scenarios with nonuniversal relativistic properties, we find it useful to briefly review the most studied DSR scenario with universal relativistic properties, which is based on mathematical structures found in the $\kappa$-Poincar\'e Hopf algebra (specifically the so-called \textit{bi-cross-product basis} of the $\kappa$-Poincar\'e Hopf algebra). When written in the \textit{bi-cross-product basis}, the $\kappa$-Poincar\'e commutators are \begin{align}\label{bicross} \begin{split} &[P_\mu, P_\nu ]= 0, \quad [ R_i, P_0 ]= 0, \quad [ R_i,P_j ]= i\epsilon_{ijk}P_k, \\ &[ R_i, R_j ]= i\epsilon_{ijk} R_k, \quad [ R_i, N_j ]= -i\epsilon_{ijk} N_k, \\ &[N_{i},P_{0}]=iP_{i} \, , \quad [N_{i},N_{j}]= -i\epsilon_{ijk}R_{k} ,\\ &[N_{i},P_{j}] = i\delta_{ij}\left(\frac{1-e^{-2\ell P_{0}}}{2\ell}+\frac{\ell}{2}{\vec{P}}^{2}\right)-i\ell P_{i}P_{j} , \end{split} \end{align} and, as a consequence, the mass Casimir gets deformed into \begin{equation} \left(\frac{2}{\ell} \sinh\left(\frac{\ell P_0}{2}\right)\right)^2 - e^{\ell P_0}P_i P^i .
\end{equation} The modification of the commutators between boosts and translation generators also requires a deformation of the laws of composition of momenta. Indeed, one can easily check that under the action of $N_i$, given in terms of the commutators \eqref{bicross}, one would have \begin{equation} [N^i_{[p,k]}, p_\mu+k_\mu] = [N^i_p + N^i_k, p_\mu + k_\mu] \neq 0 , \end{equation} even if $p_\mu+ k_\mu = 0$. This means that the usual conservation laws would not be covariant. As one can check by using the relations in Eqs. \eqref{bicross}, the correct modification (in order to achieve covariance) is \begin{equation} p_\mu \oplus_{\ell} k_\mu =\begin{cases} p_{0}+k_{0}\\ p_{i}+e^{- \ell p_{0}}k_{i}\end{cases} , \end{equation} for momenta, while for the boosts one has \begin{equation} N^i_p \oplus N^i_k = N^i_p + e^{- \ell p_{0}} N^i_k + \ell\epsilon_{ijn} p^j R^n_k . \end{equation} These modifications are such that the condition of covariance of the composition law is now obeyed, \begin{equation} [N^i_p \oplus N^i_k, p_\mu \oplus k_\mu] = 0 , \end{equation} as the reader can easily verify. In the formalism of Hopf algebras these observations can be formalized by saying that the coalgebra, i.e. the set of relations that define the action of the generators on the product of fields, is non-primitive and, in particular, is modified as follows \begin{equation} \Delta P_i = P_i \otimes 1 + e^{-\ell P_0} \otimes P_i \, , \quad \Delta N_i = N_i \otimes 1 + e^{-\ell P_0} \otimes N_i + \ell \epsilon_{ijk} P_j \otimes R_k \, . \end{equation} The coproducts of $R^i$ and $P_0$ have not been written down explicitly because they remain primitive, i.e. they are dictated by the Leibniz rule. \\ To sum up, we have seen that there are two key ingredients needed by a deformed relativistic picture, i.e.
the deformed algebra closed by the generators of non-linearly deformed symmetry transformations has to be compatible with both the form of the mass Casimir (or, equivalently, the associated on-shell relation) and the conservation laws for the associated charges. We stress that by compatibility we mean that the boost generator must leave invariant the on-shell relation and must transform covariantly the composition law for momenta. \section{Mixing coproduct: $\kappa$-Poincar\'{e} case study} \label{sec:mixingl-l'} We are now ready to contemplate a further generalization of the notion of relativistic-symmetry deformation by allowing for \textit{non-universal} scenarios. In this Section we introduce the notion of \textit{mixing coproduct} and provide a suitable formalization of it. As already mentioned, this allows us to formulate a relativistic model where different particles (i.e. particles with different quantum numbers) do not follow the same relativistic laws. In particular, within the formalism of Hopf algebras, we can infer how to compose particles' four-momenta from the coalgebraic sector. Thus, the mixing coproduct is a mathematical object that gives us ``mixed'' composition laws where the momenta we compose represent the charges associated to translation generators belonging to different algebras (for instance a Lie algebra and a Hopf algebra, or two different Hopf algebras) or, in some cases, to different ``bases'' of the same Hopf algebra. We shall explain in this section what we mean exactly by these different cases. Let us start by considering three Hopf algebras $H$, $H'$ and $H''$ and two maps $\phi$ and $\phi'$ defined as \begin{equation} \phi:H \to H'', \qquad \phi':H'\to H'' \end{equation} with inverses given by $\phi^{-1}$ and $\phi'^{-1}$ respectively.
Then, it is possible to define a mixing coproduct \cite{gacmixingold} by composing these two maps as follows \begin{align}\label{mixing coproduct}\begin{split} &H'' \xrightarrow[]{\Delta''} H''\otimes H'' \xrightarrow[]{\phi^{-1} \otimes \phi'^{-1}} H\otimes H'.\\ \end{split}\end{align} Thanks to the mixing coproduct $(\phi^{-1} \otimes \phi'^{-1})\circ\Delta''$, we are here composing the momenta of two particles, whose symmetries are dictated by $H$ and $H'$ respectively, while the total momentum given by the sum follows yet another symmetry algebra, $H''$ (i.e. it is the charge associated to the generator of translations in $H''$). It is not difficult to realize that, introducing analogous maps, it would be possible to have also mixing coproducts with target space either $H \otimes H''$ or $H' \otimes H''$. \\ In order to gain some familiarity with this novel object and the related formalism, we shall discuss a couple of relevant examples. \\ As a first example we consider the case in which $H,H',H''$ are three $\kappa$-Poincar\'{e} algebras $\kappa\mathcal{P}$, $\kappa'\mathcal{P}$ and $\kappa''\mathcal{P}$ which differ in the magnitude of the deformation parameter, $\ell$, $\ell'$, and $\ell''$ respectively. By means of the maps $\phi$ and $\phi'$, which in this case are simply morphisms, we can define two different ways to compose momenta in a mixed way $$\oplus_{\ell\ell'}:M\times M' \to M'',$$ $$\oplus_{\ell'\ell}:M'\times M \to M'',$$ where here $M$, $M'$ and $M''$ stand for the three momentum spaces. This can be done by writing down explicitly the actions of the morphisms $\phi$ and $\phi'$ on the algebras $\kappa\mathcal{P}$ and $\kappa'\mathcal{P}$ respectively. \\ It is known that a given Hopf algebra can be written, as far as explicit formulas are concerned, in some rather different ways, depending on the conventions adopted \cite{kowalski2003non}.
In fact, for Hopf algebras one must allow both linear and non-linear maps between the generators, mapping one ``basis'' into another \cite{kowalski2003non}. If we express the three Hopf algebras all in the bicrossproduct basis, then the simplest way to define the morphisms $\phi$ and $\phi'$ is given by the following expressions \begin{equation}\label{isol-l'} \begin{alignedat}{3} &\phi(P_{\mu})={\ell'' \over \ell }P_{\mu}'', \qquad &&\phi'(P_{\mu}')={\ell'' \over \ell'}P_{\mu}'', \\ &\phi(R_{i})=R_{i}'', \qquad &&\phi'(R_{i}')=R_{i}'', \\ &\phi(N_{i})=N_{i}'', \qquad &&\phi'(N_{i}')=N_{i}''. \end{alignedat} \end{equation} Here $G \, \in \, \kappa\mathcal{P}$, $G' \, \in \, \kappa'\mathcal{P}$, and $ G'' \, \in \, \kappa''\mathcal{P}$ are used to denote the symmetry generators (in the bicrossproduct basis) of the three $\kappa$-Poincar\'{e} algebras characterized by different deformation parameters $\ell$, $\ell'$, and $\ell''$. In particular, it is possible to prove that these maps define isomorphisms between the Hopf algebras. For the sake of brevity, let us focus only on the morphism $\phi$.
The reader can straightforwardly verify that \begin{align*} &\phi([P_{\mu},P_{\nu}])=0=[\phi(P_{\mu}),\phi(P_{\nu})],\\ &\phi([R_i,R_j])=i\epsilon_{ijk}R_{k}''=[\phi(R_i),\phi(R_j)],\\ &\phi([N_i,N_j])=-i\epsilon_{ijk}R_{k}''=[\phi(N_i),\phi(N_j)],\\ &\phi([R_i,N_j])=i\epsilon_{ijk}N_k''=[\phi(R_i),\phi(N_j)],\\ &\phi([R_i,P_0])=0=[\phi(R_i),\phi(P_0)].\end{align*} It is also rather simple to verify the following equalities \begin{align*} &\phi([R_i,P_j])=i\epsilon_{ijk}{\ell'' \over \ell }P_k''={\ell'' \over \ell }[R_i'',P_j'']=[\phi(R_i),\phi(P_j)],\\ &\phi([N_i,P_0])=i{\ell'' \over \ell }P_i''={\ell'' \over \ell }[N_i'',P_0'']=[\phi(N_i),\phi(P_0)] \end{align*} and finally \begin{align*} \phi([N_i,P_j]) &=\phi\Bigl[i\delta_{ij}\Bigl({1-e^{-2 \ell P_{0}} \over 2\ell}+{\ell \over 2}|\vec{P}|^2\Bigr)-i\ell P_{i}P_{j}\Bigr]\\ &=i\delta_{ij}\Bigl({1-e^{ -2 \ell { \ell'' \over \ell }P_{0}''} \over 2 \ell}+{\ell \over 2}{\ell''^2 \over \ell^2}|\vec{P}''|^2\Bigr)-i \ell {\ell''^2 \over \ell^2}P_{i}''P_{j}''\\&={\ell'' \over \ell} \Bigl[i\delta_{ij}\Bigl({1-e^{-2\ell''P_{0}''} \over 2\ell''}+{\ell'' \over 2}|\vec{P}''|^2\Bigr)-i\ell''P_{i}''P_{j}''\Bigr]\\&=[\phi(N_i),\phi(P_j)]. \end{align*} These observations establish the isomorphism at the level of the algebra sector (i.e. commutators). Then, we need to look also at the coalgebra.
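Before turning to the coalgebra, the nontrivial algebra-sector check can also be spot-checked numerically. The sketch below (ours, not part of the paper) restricts to 1+1 dimensions, drops the overall factor $i$, and verifies at random momenta that applying $\phi$ inside $[N,P_1]$ reproduces $(\ell''/\ell)\,[N'',P_1'']$:

```python
import math
import random

def comm_NP1(ell, p0, p1):
    """[N, P_1] in the 1+1-dimensional bicrossproduct basis (factor i dropped):
    (1 - e^(-2*ell*p0)) / (2*ell) - (ell/2) * p1**2."""
    return (1.0 - math.exp(-2.0 * ell * p0)) / (2.0 * ell) - 0.5 * ell * p1 ** 2

random.seed(0)
ell, ell_pp = 0.7, 0.3  # the two deformation scales, ell and ell''
for _ in range(1000):
    p0, p1 = random.uniform(-1, 1), random.uniform(-1, 1)
    # left-hand side: phi applied inside the commutator, P_mu -> (ell''/ell) P_mu''
    lhs = comm_NP1(ell, (ell_pp / ell) * p0, (ell_pp / ell) * p1)
    # right-hand side: (ell''/ell) [N'', P_1''], computed with deformation scale ell''
    rhs = (ell_pp / ell) * comm_NP1(ell_pp, p0, p1)
    assert abs(lhs - rhs) < 1e-12
print("phi([N, P_1]) = [phi(N), phi(P_1)] verified numerically")
```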
For the coproducts we indeed find \begin{align*} & \phi \otimes \phi ( \Delta ( P_0 ) ) = { \ell'' \over \ell } P_0 '' \otimes \mathds{1} + \mathds{1} \otimes { \ell'' \over \ell} P_0'' = \Delta'' ( \phi ( P_0 )) , \\ &\phi\otimes\phi(\Delta(P_i))={\ell'' \over \ell} P_i'' \otimes\mathds{1}+e^{-\ell{\ell'' \over\ell }P_0''}\otimes{\ell'' \over \ell}P_i''=\Delta''(\phi(P_i)),\\ &\phi\otimes\phi(\Delta(R_i))=R_i''\otimes\mathds{1}+\mathds{1}\otimes R_i''=\Delta''(\phi(R_i)) \end{align*} and \begin{align*} \phi\otimes\phi(\Delta(N_i)) & =N_i''\otimes\mathds{1}+e^{-\ell{\ell'' \over \ell}P_0''}\otimes N_i''+\ell\epsilon_{ijk}{\ell'' \over \ell}P_j''\otimes R_k'' = \Delta''(\phi(N_i)). \end{align*} The last check we need concerns the compatibility of the map with the antipodes (then the compatibility with the counits follows straightforwardly), and also in this case it is easy to verify that \begin{align*} &\phi(S(P_0))=-{\ell'' \over \ell}P_0''=S''(\phi(P_0)),\\ &\phi(S(P_i))=-e^{\ell{\ell'' \over \ell}P_0''}{\ell'' \over \ell}P_i''=S''(\phi(P_i)),\\ &\phi(S(R_i))=-R_i''=S''(\phi(R_i))\\ \end{align*} and finally $$\phi(S(N_i))=-e^{ \ell {\ell'' \over \ell }P_0''}N_i''+ \ell \epsilon_{ijk}e^{ \ell {\ell'' \over\ell }P_0''}{\ell'' \over\ell }P_j''R_k''=S''(\phi(N_i)).$$ We therefore established that the morphisms $\phi$ and $\phi'$ are actually isomorphisms connecting $\kappa\mathcal{P}$ with $\kappa'' \mathcal{P}$ and $\kappa'\mathcal{P}$ with $\kappa'' \mathcal{P}$. Each of these isomorphisms also has an inverse map, which for $\phi$ is given by \begin{align*} &\phi^{-1}(P_{\mu}'')={ \ell \over \ell''}P_{\mu}, \\ &\phi^{-1}(R_{i}'')=R_{i}, \\ & \phi^{-1}(N_{i}'')=N_{i} . \end{align*} Of course, the inverse also constitutes an isomorphism between Hopf algebras. Indeed, inverting the isomorphism simply amounts to exchanging the roles of the deformation scales, in this example $\ell$ and $\ell''$.
\\ Using the two morphisms $\phi$ and $\phi'$ we can construct, as anticipated in \eqref{mixing coproduct}, the mixing coproducts involving these three $\kappa$-Poincar\'{e} algebras as follows \begin{align}\label{isol-l'2}\begin{split} &p \oplus_{\ell\ell'}'' q=\begin{cases}{\ell'' \over\ell }p_{0}+{\ell'' \over \ell'}q_{0}\\{\ell'' \over\ell }p_{i}+{\ell'' \over \ell'}e^{- \ell p_{0}}q_{i}\end{cases},\\ &q \oplus_{\ell'\ell}'' p=\begin{cases}{\ell'' \over \ell'}q_{0}+{\ell'' \over\ell }p_{0}\\ {\ell'' \over \ell'}q_{i}+{\ell'' \over\ell }e^{-\ell'q_{0}}p_{i}\end{cases} .\\ \end{split}\end{align} These relations give us two possible composition laws between momenta that belong to different momentum spaces. In fact, here $p\in M$, $q\in M'$ while, by definition, $p \oplus_{\ell\ell'}'' q , \, q \oplus_{\ell' \ell}'' p \, \in M''$. Consequently, this simple framework provides us with a first example of a deformed relativistic theory in which we are able to compose particles' momenta that live on different momentum spaces or, in other words, represent the charges associated to the symmetry transformations of different Hopf algebras. These two sums of momenta \eqref{isol-l'2} differ only in the order of the addenda. It should be noticed that these laws are not well defined when one of the three deformation parameters vanishes, i.e. we have to impose that $\ell \neq 0 \,$ , $\ell' \neq 0 \,$ , $\ell'' \neq 0$. This means that the above-introduced morphisms cannot be used to compose a particle with symmetries described by the $\kappa$-Poincar\'{e} group with another whose momenta follow the standard Poincar\'{e} symmetries. As we shall see later, within our framework, this can be done with another class of maps. However, let us point out that this is consistent with the fact that we have proven they are isomorphisms between Hopf algebras and, therefore, they cannot relate the Poincar\'{e} Lie algebra to a Hopf algebra, since there is no isomorphism connecting them.
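As a concrete illustration (ours, not from the paper), the mixed composition law \eqref{isol-l'2} can be implemented in a few lines in 1+1 dimensions; a quick numerical check confirms that the condition of vanishing total momentum does not depend on the target scale $\ell''$, which only appears as an overall factor:

```python
import math

def mix_compose(p, q, ell, ell_p, ell_pp):
    """The mixed composition of Eq. (isol-l'2) in 1+1 dimensions:
    p lives in M (scale ell), q in M' (scale ell'), the result in M'' (scale ell'')."""
    p0, p1 = p
    q0, q1 = q
    return ((ell_pp / ell) * p0 + (ell_pp / ell_p) * q0,
            (ell_pp / ell) * p1 + (ell_pp / ell_p) * math.exp(-ell * p0) * q1)

ell, ell_p = 0.5, 0.2
p = (0.4, 0.3)
# the unique q with vanishing composition: q0 = -(ell'/ell) p0, q1 = -(ell'/ell) e^{ell p0} p1
q = (-(ell_p / ell) * p[0], -(ell_p / ell) * math.exp(ell * p[0]) * p[1])

# the zero-total-momentum condition is independent of the target scale ell''
for ell_pp in (1.0, 0.05, 3.7):
    tot = mix_compose(p, q, ell, ell_p, ell_pp)
    assert abs(tot[0]) < 1e-12 and abs(tot[1]) < 1e-12
print("the condition of vanishing total momentum holds for every choice of ell''")
```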
Finally, let us notice that, according to \eqref{isol-l'2}, given two momenta $p$ and $q$ of two particles in $\kappa\mathcal{P}$ and in $\kappa'\mathcal{P}$ respectively, the system of equations represented by the condition $p\oplus q=0$ (i.e. the condition of momentum conservation) depends neither on the specific choice of the target momentum space $M''$ nor on the order of the addenda. A second interesting example that allows us to study the properties as well as the meaning of mixing coproducts was first studied in Ref.\cite{barcaroli2014relative}. In this case, one considers the map \begin{equation} \psi:\kappa{\mathcal{P}}\to\kappa{\mathcal{P}} , \end{equation} which is explicitly given by the following formulas \begin{align}\begin{split}\label{mappaLeo} &P_0=\psi(K_0)=K_0,\\ &P_i=\psi(K_i)=e^{{ \ell \over 2}K_0}K_i,\\ &R_{i}=\psi(M_{i})=M_i,\\ &N_{i}=\psi(B_i)=e^{{ \ell \over 2}K_0}(B_i-{ \ell \over 2}\epsilon_{ijk}K_jM_k), \end{split}\end{align} where $(P_\mu , R_i, N_j)$ are the generators of the $\kappa$-Poincar\'{e} algebra in the bicrossproduct basis while $(K_\mu, M_i, B_i)$ are the generators of the $\kappa$-Poincar\'{e} algebra in the so-called classical basis \cite{kosinski1994classical}. In the classical basis we have \begin{align}\label{class} \begin{split} &[B_i, K_0] = iK_i \, , \quad [B_i, K_j] = i\delta_{ij}K_0 \, , \quad [B_i, B_j] = -i \epsilon_{ijk}M_k , \\ &[M_i, K_0] = 0 \, , \quad [M_i, K_j] = i\epsilon_{ijk}K_k \, , \quad [M_i, M_j] = i \epsilon_{ijk}M_k , \\ &[K_\mu, K_\nu] = 0 \, , \quad [M_i, B_j] = i\epsilon_{ijk}B_k , \end{split} \end{align} while the coproducts are rather complicated and lengthy, so we do not report them here; they can be found in \cite{Borowiec} or in references therein.
\\ Thanks to this map $\psi$ and following steps similar to those we did above, one can write for instance the composition law \begin{align}\begin{split}\label{mixingLeo} &p \oplus_{B-S}^B k=\begin{cases}p_{0}+k_{0}\\p_{i}+e^{- \ell p_{0}}e^{{\ell \over 2}k_0}k_{i}\end{cases}\\ \end{split}\end{align} that mixes particles $p$ and $k$ obeying the same symmetry group but expressed in two different bases (bicrossproduct the former and classical the latter), while giving back a momentum in the $\kappa$-Poincar\'{e} bicrossproduct basis. While in the first part of this section we ``mixed'' Hopf algebras with different deformation parameters but described in the same basis, here we are ``mixing'' two copies of the same Hopf algebra (same deformation parameter) but adopting two different bases. From a phenomenological point of view, one of the main points of interest resides in the fact that in the bicrossproduct basis one has on-shellness of the type \begin{eqnarray} {4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}p_0\Bigr) = e^{\ell p_0}p_i p^i \end{eqnarray} whereas for the classical basis the on-shellness is undeformed \begin{eqnarray} k^2_0 = k_i k^i . \end{eqnarray} \section{Mixing coproduct for the Magueijo-Smolin DSR picture} \label{MS} Before moving on to more ambitious implementations of the notion of mixing coproduct, we find it appropriate to offer a brief aside showing that the mixing-coproduct techniques introduced in the previous section can also shed light on results previously obtained without abstracting the notion of mixing coproduct. Our main objective for this small aside is to show that a composition law introduced by Magueijo and Smolin in Ref.\cite{magueijo2003generalized} can be rephrased in the language of mixing coproducts.
The key ingredient is an operator $$U[\ell]\equiv \exp(\ell P_0 D), $$ acting on the Poincar\'{e} algebra, with $$D\equiv P_0{\partial \over \partial P_0}+P_1{\partial \over \partial P_1}.$$ Then, taking the generators transformed by this map, $$X[\ell]\equiv U^{-1} X U,$$ one can easily check that the $X[\ell]$ still obey the Poincar\'{e} commutation rules. Thus, we have again the Poincar\'{e} algebra but with a modified coalgebraic sector and with generators of infinitesimal translations given by $$P_i[\ell]\equiv U[\ell] P_i = {P_i \over 1+\ell P_0},$$ and its inverse $$P_i={P_i[\ell] \over 1-\ell P_0[\ell]}.$$ We also define the map $u[\ell]$ $$X \mapsto U X[\ell] U^{-1},$$ which for the generators $P_i$ reads $$P_i \mapsto {P_i[\ell] \over 1-\ell P_0[\ell]}.$$ Following the formalism used in the previous section for the mixing coproduct, we construct the chain of maps \begin{equation} \label{cat1} P\Bigl[{\ell \over 2}\Bigr]\xrightarrow{u^{-1}\bigl[{\ell \over 2}\bigr]}P\xrightarrow{\Delta}P\otimes P\xrightarrow{u[\ell]\otimes u[\ell]}P[\ell]\otimes P[\ell], \end{equation} which, when applied backwards, gives on momentum space the Magueijo-Smolin composition relation \cite{magueijo2003generalized} $${p_i \over 1- {\ell \over 2} p_0}={q_i \over 1-\ell q_0}+{k_i \over 1-\ell k_0},$$ i.e., \begin{equation} p_i={q_i+k_i-\ell q_0 k_i-\ell q_i k_0 \over 1-{\ell \over 2}(q_0+k_0)}. \label{eq:MS_composition_rule} \end{equation} As a further aside it is amusing to notice that there is also another route for obtaining the Magueijo-Smolin composition law. We start by noticing that we already have a map linking $P\bigl[{\ell \over 2}\bigr]$ to $P[\ell]$: $$P\Bigl[{\ell \over 2}\Bigr]\xrightarrow{u^{-1}\bigl[{\ell \over 2}\bigr]}P\xrightarrow{u[\ell]}P[\ell].$$ We call this map $w\bigl[{\ell \over 2},\ell\bigr]$ \begin{equation} \label{w} w\Bigl[{\ell \over 2},\ell\Bigr] \triangleright P_{i}\Bigl[{\ell \over 2}\Bigr] = {P_{i}[\ell] \over 1-{\ell \over 2}P_0[\ell]}.
\end{equation} Now we need to prove that the chain in Eq. \eqref{cat1} is indeed the same chain given by \begin{equation} \label{cat2} P\Bigl[{\ell \over 2}\Bigr]\xrightarrow{\Delta\bigl[{\ell \over 2}\bigr]}P\Bigl[{\ell \over 2}\Bigr]\otimes P\Bigl[{\ell \over 2}\Bigr]\xrightarrow{w\bigl[{\ell \over 2},\ell\bigr]\otimes w\bigl[{\ell \over 2},\ell\bigr]}P[\ell]\otimes P[\ell]. \end{equation} First we observe that the coproduct of $P_i\bigl[{\ell \over 2}\bigr]$ can be obtained using the relations $$P_i\Bigl[{\ell \over 2}\Bigr]={P_i \over 1+{\ell \over 2}P_0}, \quad P_i={P_i[{\ell \over 2}] \over 1-{\ell \over 2}P_0[{\ell \over 2}]},$$ from which it follows that \begin{align} \label{cop} \begin{split} \Delta\Bigl[{\ell \over 2}\Bigr]P_i\Bigl[{\ell \over 2}\Bigr]= & {P_i \otimes \mathbb{I}+\mathbb{I}\otimes P_i \over 1+{\ell \over 2}(P_0 \otimes \mathbb{I}+\mathbb{I}\otimes P_0)} \\ = & {{P_i\bigl[{\ell \over 2}\bigr] \over 1-{\ell \over 2}P_0\bigl[{\ell \over 2}\bigr]} \otimes \mathbb{I}+\mathbb{I}\otimes {P_i\bigl[{\ell \over 2}\bigr] \over 1-{\ell \over 2}P_0\bigl[{\ell \over 2}\bigr]} \over 1+{\ell \over 2}({P_0\bigl[{\ell \over 2}\bigr] \over 1-{\ell \over 2}P_0\bigl[{\ell \over 2}\bigr]} \otimes \mathbb{I}+\mathbb{I}\otimes {P_0\bigl[{\ell \over 2}\bigr] \over 1-{\ell \over 2}P_0\bigl[{\ell \over 2}\bigr]})}. \end{split} \end{align} At this point given two particles with momenta $q$ and $k$ respectively in $P[\ell]$ and applying backwards the chain in Eq. \eqref{cat2}, we obtain the transformation $$(q_i,k_i) \mapsto \Bigl({q_i \over 1-{\ell \over 2}q_0},{k_i \over 1-{\ell \over 2}k_0}\Bigr)$$ which has to be replaced in the coproduct of Eq. \eqref{cop}. 
In order to do so we first notice that if $$x_i\mapsto {x_i \over 1-{\ell \over 2}x_0}$$ then $${x_i \over 1-{\ell \over 2} x_0} \mapsto {{x_i \over 1-{\ell \over 2}x_0} \over 1-{\ell \over 2}{x_0 \over 1-{\ell \over 2}x_0}}={x_i \over 1-\ell x_0}.$$ Thus, substituting $\Bigl({q_i \over 1-{\ell \over 2}q_0},{k_i \over 1-{\ell \over 2}k_0}\Bigr)$ in the coproduct of Eq. \eqref{cop} we have \begin{align*} {{q_i \over 1-\ell q_0}+{k_i \over 1-\ell k_0} \over 1+{\ell \over 2}\Bigl({q_0 \over 1-\ell q_0}+{k_0 \over 1-\ell k_0}\Bigr)} ={q_i+k_i-\ell q_0 k_i-\ell q_i k_0 \over 1-{\ell \over 2}(q_0+k_0)}, \end{align*} which is exactly the Magueijo-Smolin composition rule of Eq. \eqref{eq:MS_composition_rule}. The map $w\bigl[{\ell \over 2},{\ell}\bigr]:P\bigl[{\ell \over 2}\bigr]\to P[\ell]$ is indeed a rescaling map between two $\kappa$-Poincar\'{e} algebras with distinct deformation scales $\ell$ and ${\ell \over 2}$, i.e., $${P_i\bigl[{\ell \over 2}\bigr] \over 1-{\ell \over 2}P_0\bigl[{\ell \over 2}\bigr]}\mapsto{{P_{i}[\ell] \over 1-{\ell \over 2}P_0[\ell]} \over 1-{\ell \over 2}{P_{0}[\ell] \over 1-{\ell \over 2}P_0[\ell]}}={P_{i}[\ell] \over 1-\ell P_0[\ell]}.$$ The equivalence between the chains in Eqs. \eqref{cat1} and \eqref{cat2} can also be interpreted as the equivalence between maps $$\Delta\circ u^{-1}[\ell]=(u^{-1}[\ell]\otimes u^{-1}[\ell])\circ \Delta[\ell],$$ which shows the compatibility between the coproducts of the $P$ and $P[\ell]$ algebras, and more generally between the coproducts of the $P[\ell']$ and $P[\ell]$ algebras. \section{Mixing coproducts between Poincar\'{e} and $\kappa$-Poincar\'{e} algebras} \label{sec:classificazione} So far we focused on coproducts mixing pairs of algebras that were isomorphic to one another (or two bases of the same algebra).
In this section we show that one can consistently introduce mixing coproducts also for non-isomorphic algebras, focusing on the case of mixing the standard Poincar\'{e} (Lie) algebra and the $\kappa$-Poincar\'{e} Hopf algebra. It should be noticed that this is rather challenging even though the Poincar\'{e} (Lie) algebra is obtained from the $\kappa$-Poincar\'{e} Hopf algebra in the limit in which the deformation parameter is removed. In fact, the mixing coproducts we analyzed in Section III all involve maps which are not analytic as the deformation parameters are removed ($\ell \, \rightarrow 0$). In this section we shall truly need a new type of mixing coproduct. For definiteness and simplicity we focus on the case of a 1+1-dimensional spacetime. We start by introducing notation for the most general composition law\footnote{Here and in the rest of this paper, if not otherwise specified, we denote by $\boxplus$ the mixing coproducts obtained following this procedure.} \begin{equation} p \boxplus k = \begin{cases} \epsilon(p_0,p_1,k_0,k_1)p_0+\zeta(p_0,p_1,k_0,k_1)k_0 \\ f(p_0,p_1,k_0,k_1)p_1+g(p_0,p_1,k_0,k_1)k_1 \end{cases}, \label{eq:comp1} \end{equation} where $p$ is the momentum of a $\kappa$-Poincar\'{e} particle and $k$ is the momentum of a Poincar\'{e} particle\footnote{By stating that $p$ is the momentum of a $\kappa$-Poincar\'{e} particle we mean that if $p'$ is another particle momentum of the same type, then $$p\oplus p'=\begin{cases}p_0+p_0',\\p_1+e^{-\ell p_0}p_1'\end{cases},$$ whereas if $k$ and $k'$ are two Poincar\'{e} particle momenta then of course $$k+k'=\begin{cases}k_0+k_0',\\k_1+k_1'\end{cases}.$$}, and we require that under a suitable boost generator $N_{[p,k]}$ \begin{equation}\label{conservazione_quadrimpulso}p \boxplus k=0 \Rightarrow [N_{[p,k]},p \boxplus k]=0.\end{equation} This requirement guarantees the covariance of the composition rule in \eqref{eq:comp1}.
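The two composition rules recalled in the footnote are easy to contrast numerically. The following minimal Python sketch (sample values, chosen arbitrarily) shows that the $\kappa$-Poincar\'{e} sum $\oplus$ is noncommutative in its spatial part, while its time part, like the ordinary Poincar\'{e} sum, is symmetric:

```python
import math

# kappa-Poincare composition from the footnote:
#   (p (+) p')_0 = p0 + p0',  (p (+) p')_1 = p1 + exp(-l p0) p1'.
# Unlike the ordinary Poincare sum k + k', it is noncommutative.
l = 0.1

def kp_sum(p, q):
    return (p[0] + q[0], p[1] + math.exp(-l * p[0]) * q[1])

p, q = (0.3, 0.2), (0.5, -0.4)
assert kp_sum(p, q) != kp_sum(q, p)          # spatial parts differ
assert kp_sum(p, q)[0] == kp_sum(q, p)[0]    # time parts agree
```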
In \eqref{eq:comp1}, $\epsilon$, $\zeta$, $f$ and $g$ are general functions of the momenta $p,k$, which for $\ell \to 0$ have to reduce to 1. We shall consistently denote by $p$ momenta of the $\kappa$-Poincar\'{e} type and by $k$ momenta of the Poincar\'{e} type. Firstly, let us focus on two ``complementary'' proposals, which are respectively \begin{equation}\label{sistemaEp0,p1}p \boxplus k=\begin{cases}\varepsilon(p_0,p_1)p_0+k_0 \\ f(p_0,p_1)p_1+k_1\end{cases}\end{equation} and \begin{equation}\label{sistemaEk0,k1}p \boxplus k=\begin{cases}p_0+\tilde{\varepsilon}(k_0,k_1)k_0 \\ p_1+\tilde{f}(k_0,k_1)k_1\end{cases},\end{equation} where the boost must be of the form $N_{[p,k]}=h(p_0,p_1)N_{[p]}+N_{[k]}$ for the former case, and $N_{[p,k]}=N_{[p]}+\tilde{h}(k_0,k_1)N_{[k]}$ for the latter. As we explicitly show in Appendix B, given the compatibility condition \eqref{conservazione_quadrimpulso}, all the solutions of these systems, i.e. \eqref{sistemaEp0,p1} and \eqref{sistemaEk0,k1}, are of the type \begin{equation}\label{box eps p0p1}p \boxplus k=\begin{cases} \varepsilon p_0+k_0 \\ \sqrt{\varepsilon^2p_0^2-{4 \over \ell^2}\sinh^2\Bigl({\ell\over 2}p_0\Bigr)+e^{ \ell p_0}p_1^2}{p_1 \over | p_1|}+k_1\end{cases}\end{equation} and \begin{equation}\label{box eps k0k1}p \boxplus k=\begin{cases} p_0+\tilde{\varepsilon}k_0 \\ p_1+e^{{\ell \over 2}\tilde{\varepsilon}k_0}\sqrt{{4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{\varepsilon}k_0\Bigr)-k_0^2+k_1^2}{k_1 \over | k_1|}\end{cases}\end{equation} respectively. Thus, we would be left with just one undetermined function: $\varepsilon$ for the first system, and $\tilde{\varepsilon}$ for the second. However, it is rather easy to understand that not all the choices for $\varepsilon$ and $\tilde{\varepsilon}$ are acceptable. Indeed, if we take a look at Eqs.
\eqref{box eps p0p1} and \eqref{box eps k0k1}, then it follows that we must have\footnote{Notice that these two inequalities can be obtained by imposing zero spatial momentum either for the particle $p$ or for the particle $k$ respectively.} \begin{equation} \varepsilon^2 p_0^2 \ge {4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}p_0\Bigr) \quad \to \quad \varepsilon \ge {2 \over \ell p_0}\sinh\Bigl({ \ell \over 2}p_0\Bigr) , \end{equation} for the first case, and \begin{equation} {4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{\varepsilon}k_0\Bigr)\ge k_0^2 \quad \to \quad \tilde{\varepsilon} \ge {2 \over \ell k_0}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr) \end{equation} in the second situation. Essentially, this is a direct consequence of the functional form of the expressions. Moreover, it is natural to require that $\varepsilon$ and $\tilde{\varepsilon}$ be both surjective and injective for any value of, respectively, $p_1$ and $k_1$. They must also go to $1$ for $\ell$ going to zero and, finally, need to have the same sign as the energy ($p_0$ or $k_0$) in order to guarantee that the above introduced mixing coproducts actually represent a good and consistent choice for composing the momenta (the energies) of two particles. Given that, we can now deduce several facts about the mixing coproducts we are trying to construct. For instance, if we concentrate on Eq. \eqref{box eps p0p1} and consider the case in which $\varepsilon$ depends only on $p_0$ and enjoys the aforementioned properties, then $\varphi(p_0)=\varepsilon p_0$ will have an inverse $\tilde{\varphi}(k_0)$ with the same characteristics. We can then choose $\tilde{\varepsilon}={\tilde{\varphi} \over k_0}$ in Eq. \eqref{box eps k0k1}. Notice that, with these choices, when we impose momentum conservation in a scattering process (i.e. $p\boxplus (-k)=0$), both composition laws give the same relations between $p$ and $k$.
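The first of these bounds can be checked directly. In the following Python sketch (sample values of $\ell$, $p_0$, $p_1$, chosen arbitrarily), the choice saturating the bound, $\varepsilon=\frac{2}{\ell p_0}\sinh\bigl(\frac{\ell}{2}p_0\bigr)$, makes the radicand in Eq. \eqref{box eps p0p1} reduce to $e^{\ell p_0}p_1^2$, so the square root stays real for all $p_1$:

```python
import math

# At saturation eps = (2/(l p0)) sinh(l p0/2), the radicand
#   eps^2 p0^2 - (4/l^2) sinh^2(l p0/2) + exp(l p0) p1^2
# collapses to exp(l p0) p1^2 >= 0, and the square root becomes
# exp(l p0/2) |p1|, i.e. the kappa-Poincare-type deformation factor.
l, p0, p1 = 0.1, 0.7, 0.25
eps_min = (2.0 / (l * p0)) * math.sinh(l * p0 / 2.0)

radicand = (eps_min**2 * p0**2
            - (4.0 / l**2) * math.sinh(l * p0 / 2.0)**2
            + math.exp(l * p0) * p1**2)
assert radicand >= 0.0
assert abs(math.sqrt(radicand) - math.exp(l * p0 / 2.0) * abs(p1)) < 1e-12
```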
From the former we find \begin{equation} p \boxplus (-k)=0 \quad \to \quad \begin{cases}k_0=\varphi(p_0) \\ k_1=\sqrt{\varphi^2(p_0)-{4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}p_0\Bigr)+e^{\ell p_0}p_1^2}{p_1 \over | p_1|}\end{cases} \end{equation} and \begin{equation} p\boxplus (-k)=0 \quad \to \quad \begin{cases}\tilde{\varphi}(k_0)=p_0\\ k_1=\sqrt{k_0^2-{4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{\varphi}(k_0)\Bigr)+p_1^2e^{\ell \tilde{\varphi}(k_0)}}{p_1 \over |p_1|}\end{cases} \end{equation} from the latter. The fact that we obtained two identical systems suggests that, as we shall show later, these two composition laws can be regarded one as the inverse of the other. This is a remarkable feature, since for the consistency of these composition laws we must always have \begin{align*}&\varphi\equiv\varepsilon p_0 \ge {2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr),\\ & \tilde{\varphi}\equiv\tilde{\varepsilon} k_0 \ge {2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr).\end{align*} From the former of the above inequalities we find \begin{equation} \varphi^{-1}\varphi(p_0)=\tilde{\varphi}\varphi(p_0)=p_0 \ge \tilde{\varphi}\Bigl[{2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr)\Bigr], \end{equation} or equivalently \begin{equation} \tilde{\varphi}(k_0) \le {2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr), \end{equation} where we have introduced the notation $k_0={2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr)$. Due to the properties we imposed on the functions, the same relation must hold also when we exchange the two sides, and thus \begin{equation} \tilde{\varphi}(k_0) = {2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr) \end{equation} and also \begin{equation} \varphi(p_0)={2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr).
\end{equation} With this procedure we thus conclude that there is only \textit{one} possible pair of mixing coproducts that deforms the momenta and satisfies a sort of inverse relation. This is given by \begin{equation}\label{(p+k)P p0}p\boxplus_{\mathcal{P}}k=\begin{cases}{2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr)+k_0 \\ e^{{\ell \over 2}p_0}p_1+k_1\end{cases}\end{equation} and \begin{equation}\label{bla}p\boxplus k=\begin{cases}p_0+{2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr) \\ p_1+\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)k_1\end{cases}.\end{equation} Let us now consider a particle of mass $m$ with momentum $p$ whose symmetries are described by the $\kappa$-Poincaré algebra. If we define \begin{equation} E={2 \over \ell}\sinh\Bigl({\ell \over 2}p_0\Bigr), \qquad \Pi=e^{{\ell \over 2}p_0}p_1 \end{equation} then \begin{equation} E^2-\Pi^2={4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}p_0\Bigr)-e^{\ell p_0}p_1^2=m^2 . \end{equation} This tells us that the map $(p_0,p_1)\to(E,\Pi)$ ``transforms'' the particle $p$ into a particle with standard Poincaré symmetries\footnote{Notice that this is true also for the mixing coproducts of Eq. \eqref{box eps p0p1}.}. Given that, it is possible to regard the mixing coproduct of Eq. \eqref{(p+k)P p0} as a trivial sum of momenta once we deform the momentum coordinates $(p_0 , p_1)$, just as we saw already in the previous sections. On the other hand, we cannot give a similar interpretation for the composition law in Eq. \eqref{bla}. In other words, we would like to have a sort of complementary map that transforms the momentum of a particle with Poincaré symmetries into the momentum associated to the $\kappa$-Poincaré algebra.
This is not possible, since if $k$ is a Poincaré particle with mass $m$ and we define \begin{equation} \tilde{E}={2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr), \qquad \tilde{\Pi}=\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)k_1 \end{equation} then $${4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{E}\Bigr)-e^{\ell \tilde{E}}\tilde{\Pi}^2 \ne m^2,$$ and the spatial part of the coproduct \eqref{bla} is not of the form $p_1+e^{-\ell p_0}\tilde{\Pi}$, regardless of the choice of $\tilde{\Pi}$. However, we can show that a suitable modification of Eq. \eqref{bla} actually allows us to overcome these obstructions and to obtain a composition law that admits an interpretation analogous to Eq. \eqref{(p+k)P p0}. To this end, let us consider a general mixing coproduct of the form \begin{equation}\label{p0k0}p\boxplus k=\begin{cases}\varepsilon(p_0,k_0)p_0+k_0 \\ f(p_0,k_0)p_1+k_1\end{cases},\end{equation} with $N_{[p,k]}=hN_{[p]}+N_{[k]}.$ Then we have \begin{equation} [N_{[p,k]},\varepsilon p_0+k_0]=h\Bigl({\partial \varepsilon \over \partial p_0}p_1p_0+\varepsilon p_1\Bigr)+{\partial \varepsilon \over \partial k_0}k_1p_0+k_1 \end{equation} and \begin{equation} [N_{[p,k]},fp_1+k_1]=h\Bigl[{\partial f \over \partial p_0}p_1^2+f\Bigl({1-e^{-2 \ell p_0} \over 2 \ell}-{\ell \over 2}p_1^2\Bigr)\Bigr]+{\partial f \over \partial k_0}k_1p_1+k_0.
\end{equation} Consequently, $\varepsilon$ and $f$ must satisfy the following system of differential equations \begin{equation}\label{sistema p0k0}\begin{cases}h\Bigl({\partial \varepsilon \over \partial p_0}p_1p_0+\varepsilon p_1\Bigr)-fp_1\Bigl({\partial \varepsilon \over \partial k_0}p_0+1\Bigr)=0 \\ h\Bigl[{\partial f \over \partial p_0}p_1^2+f\Bigl({1-e^{-2 \ell p_0} \over 2 \ell}-{\ell \over 2}p_1^2\Bigr)\Bigr]-f{\partial f \over \partial k_0}p_1^2-\varepsilon p_0=0 \\ \varepsilon p_0+k_0=0\end{cases}.\end{equation} We refer the reader to Appendix A for the detailed analysis of this system. The solution is given by any pair of functions $(f,\varepsilon)$ that reduces to Eqs. \eqref{(p+k)P p0} when $\varepsilon p_0+k_0=0$, provided that, as always, they also have the correct behavior for $\ell \rightarrow 0$ and do not present any sort of singularity. Notice that the fact that $\varepsilon={2 \over \ell p_0}\sinh\Bigl({\ell \over 2}p_0\Bigr)$ when $\varepsilon p_0+k_0=0$ allows us to rewrite the constraint as \begin{equation} {2 \over \ell }\sinh\Bigl({\ell \over 2}p_0\Bigr)+k_0=0, \end{equation} or also \begin{equation} p_0+{2 \over \ell }\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)=0. \end{equation} Thus, when the constraint is satisfied, we have \begin{equation} \varepsilon={2 \over \ell p_0}\sinh\Bigl({\ell \over 2}p_0\Bigr)={k_0 \over {2 \over \ell }\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)} \end{equation} and \begin{equation} f=e^{{\ell \over 2} p_0}=e^{-\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)}={1 \over {\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}}. \end{equation} Finally, defining $\tilde{\varepsilon}={1 / \varepsilon}, \quad \tilde{f}={1 / f}$, we can rewrite the system of Eq.
\eqref{p0k0} as \begin{equation}\label{p0k0invmolt}p\boxplus k=\begin{cases}\varepsilon(p_0,k_0)(p_0+\tilde{\varepsilon}(p_0,k_0)k_0) \\ f(p_0,k_0)(p_1+\tilde{f}(p_0,k_0)k_1)\end{cases},\end{equation} where now $\tilde{\varepsilon}$ and $\tilde{f}$ reduce to \eqref{bla} on the constraint surface. \\ It is rather easy to prove that, given a general mixing coproduct of the form \begin{equation} p\boxplus k=\begin{cases}\tilde{\zeta}(p_0+\tilde{\varepsilon}k_0) \\ \tilde{g}(p_1+\tilde{f}k_1)\end{cases} \end{equation} or also \begin{equation} p\boxplus k=\begin{cases}\zeta(\varepsilon p_0+k_0) \\ g(fp_1+k_1)\end{cases} , \end{equation} the solutions $\tilde{\varepsilon}$, $\tilde{f}$, $\varepsilon$ and $f$ do not depend on the common factors $\tilde{\zeta}$, $\tilde{g}$, $\zeta$ and $g$, respectively. In fact, considering for instance the former case and acting with a boost $N_{[p,k]}=\tilde{i}(N_{[p]}+\tilde{h}N_{[k]})$, we find that $N_{[p,k]} \triangleright (p\boxplus k)_0$ has the following form \begin{equation} [\tilde{i}(N_{[p]}+\tilde{h}N_{[k]})\triangleright \tilde{\zeta}](p_0+\tilde{\varepsilon}k_0)+\tilde{\zeta}\tilde{i}(N_{[p]}+\tilde{h}N_{[k]})\triangleright(p_0+\tilde{\varepsilon}k_0). \end{equation} If, as usual, we ask that $N_{[p,k]}\triangleright p\boxplus k=0$ when $p\boxplus k=0$, then $p_0+\tilde{\varepsilon}k_0=0$ in the above equation, and as a result it is easy to realize that both $\tilde{\zeta}$ and $\tilde{i}$ do not play any role in the identification of $\tilde{\varepsilon}$ (or $\tilde{h}$). Given that, we can ignore the functions $\varepsilon(p_0,k_0)$ and $f$ in Eq.
\eqref{p0k0invmolt}, which we can then rewrite as \begin{equation}\label{p0k0inv}p\boxplus k=\begin{cases}p_0+\tilde{\varepsilon}(p_0,k_0)k_0 \\ p_1+\tilde{f}(p_0,k_0)k_1\end{cases}\end{equation} with $\tilde{\varepsilon}={2 \over \ell k_0}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)$ and $\tilde{f}={\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}$ on the constraint solutions. This allows us to modify the mixing coproduct in Eq. \eqref{bla} in the way we needed. Indeed, keeping $\tilde{\varepsilon}$ unmodified with respect to the case in Eq. \eqref{bla} while changing $\tilde{f}$ as \begin{equation} \tilde{f}=e^{-\ell(p_0+\tilde{\varepsilon}k_0)}\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)={e^{-\ell p_0} \over {\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}} \end{equation} we eventually obtain the mixing coproduct \begin{equation}\label{(p+k)kP_k0}p\boxplus_{\kappa\mathcal{P}} k=\begin{cases}p_0+{2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr) \\ p_1+e^{-\ell p_0}{k_1 \over {\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}}\end{cases}.\end{equation} If we assume that $k$ is a particle with mass $m$ and define \begin{equation} \tilde{E}={2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr), \qquad \tilde{\Pi}={k_1 \over {\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}} \end{equation} we actually find that \begin{equation} {4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{E}\Bigr)-e^{\ell\tilde{E}}\tilde{\Pi}^2 = m^2, \end{equation} i.e. the particle $k$ has been ``deformed'' into a $\kappa$-Poincaré particle. It is worth noting that Eq.
\eqref{(p+k)kP_k0} can be interpreted as a non-commutative law, in the sense that we can hypothesize that by exchanging $p$ with $k$ one would have to write down \begin{equation}\label{(k+p)kP k0}k\boxplus_{\kappa\mathcal{P}} p=\begin{cases}{2 \over \ell}\ln\Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)+p_0 \\ {k_1 \over {\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}}+{p_1 \over \Bigl({\ell \over 2}k_0+\sqrt{{\ell^2 \over 4}k_0^2+1}\Bigr)^2}\end{cases},\end{equation} where the last term in the spatial part of the composition law can be rewritten simply as $e^{-\ell\tilde{\varepsilon}k_0}p_1$. Notice that we only multiplied the spatial part of the mixing coproduct \eqref{bla} by the function $e^{-\ell\tilde{\varepsilon}k_0}$, which, as noted above, does not alter the covariance of the composition law. In summary, the composition laws in Eqs. \eqref{(p+k)P p0} and \eqref{(p+k)kP_k0} (or also \eqref{(k+p)kP k0}) define a Poincaré-like and a $\kappa$-Poincaré-like sum, with total momentum associated to the Poincaré or the $\kappa$-Poincaré algebra, respectively. Without the need to provide all the details of the proof (which follows the line of reasoning used to derive Eq. \eqref{p0k0}), we can generalize the above discussion to the cases \begin{equation} p\boxplus k=\begin{cases}\varepsilon(p_0,p_1,k_0,k_1)p_0+k_0\\f(p_0,p_1,k_0,k_1)p_1+k_1\end{cases} \end{equation} and \begin{equation} p\boxplus k=\begin{cases}p_0+\tilde{\varepsilon}(k_0,k_1,p_0,p_1)k_0\\ p_1+\tilde{f}(k_0,k_1,p_0,p_1)k_1\end{cases}, \end{equation} where, on the constraint solutions, the above functions must coincide with either \eqref{box eps p0p1} or \eqref{box eps k0k1}.
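The two ``deformation'' maps discussed above are easy to verify numerically. In the following Python sketch (sample values of $\ell$ and of the momenta, with the squared mass defined from the respective shell condition), the map $(p_0,p_1)\to(E,\Pi)$ sends the $\kappa$-Poincar\'{e} mass shell to the standard one, and the tilded map of Eq. \eqref{(p+k)kP_k0} does the converse:

```python
import math

l = 0.1

# (i) kappa-Poincare particle (p0, p1): deformed mass shell
#     m^2 = (4/l^2) sinh^2(l p0/2) - exp(l p0) p1^2, mapped via
#     E = (2/l) sinh(l p0/2), Pi = exp(l p0/2) p1 to E^2 - Pi^2.
p0, p1 = 0.9, 0.4
m2_kappa = (4.0 / l**2) * math.sinh(l * p0 / 2.0)**2 - math.exp(l * p0) * p1**2
E, Pi = (2.0 / l) * math.sinh(l * p0 / 2.0), math.exp(l * p0 / 2.0) * p1
assert abs(E**2 - Pi**2 - m2_kappa) < 1e-10

# (ii) Poincare particle (k0, k1) with m^2 = k0^2 - k1^2, sent by the
#      tilded map onto the kappa-Poincare shell with the same m^2.
k0, k1 = 1.1, 0.4
m2 = k0**2 - k1**2
A = l * k0 / 2.0 + math.sqrt(l**2 * k0**2 / 4.0 + 1.0)
Et, Pit = (2.0 / l) * math.log(A), k1 / A
shell = (4.0 / l**2) * math.sinh(l * Et / 2.0)**2 - math.exp(l * Et) * Pit**2
assert abs(shell - m2) < 1e-10
```

Part (ii) works because $\sinh\bigl(\frac{\ell}{2}\tilde E\bigr)=\frac{\ell}{2}k_0$ and $e^{\ell\tilde E}\tilde\Pi^2=k_1^2$, exactly as used in the text.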
By doing so, for any $\tilde{\varepsilon}$ we also have the non-commutative law \begin{align*} &p\boxplus_{\kappa\mathcal{P}}k=\begin{cases}p_0+\tilde{\varepsilon}k_0\\ p_1+e^{-\ell p_0}e^{-{\ell \over 2}\tilde{\varepsilon}k_0}\sqrt{{4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{\varepsilon}k_0\Bigr)-k_0^2+k_1^2}{k_1 \over |k_1|}\end{cases},\\ &k\boxplus_{\kappa\mathcal{P}}p=\begin{cases}\tilde{\varepsilon}k_0+p_0\\e^{-{\ell \over 2}\tilde{\varepsilon}k_0}\sqrt{{4 \over \ell^2}\sinh^2\Bigl({\ell \over 2}\tilde{\varepsilon}k_0\Bigr)-k_0^2+k_1^2}{k_1 \over |k_1|}+e^{-\ell \tilde{\varepsilon}k_0}p_1\end{cases}.\end{align*} This concludes the classification of all the mixing coproducts involving the Poincaré and the $\kappa$-Poincaré algebras, which are of the form \begin{equation} p\boxplus k=\begin{cases}\varepsilon p_0+\tilde{\varepsilon} k_0\\ f p_1+\tilde{f}k_1\end{cases}. \end{equation} \section{Summary and outlook} The main goal of this work was to provide illustrative examples of consistent scenarios with non-universal deformation of relativistic symmetries. Clearly the key challenge for such scenarios concerns the introduction of suitable mixing coproducts, of which we provided several examples. Our results preliminarily suggest that scenarios with non-universal deformation of relativistic symmetries could in principle be realized in Nature and deserve dedicated experimental testing, such as the mentioned studies looking for effects in the neutrino sector at a level of magnitude which is already excluded for photons. While we feel that such tests should not be postponed, there are clearly still several tasks to be faced before fully establishing the consistency of these scenarios.
For example, it would be interesting to establish whether they are compatible with the setup of (possibly deformed) quantum field theories, though one should perhaps view this as a long-term goal, since even the understanding of quantum field theories with universal deformation of relativistic symmetries has still not reached full maturity. Our mixing coproducts are in principle directly applicable to interactions between composite and fundamental particles (when governed by different relativistic properties). However, in some cases it should be possible to apply a constructive approach for such mixing coproducts: ideally there might be cases in which one only needs to postulate some universally-deformed relativistic properties for fundamental particles, then deriving the implied (different) relativistic properties of various types of composite particles, and in such cases also the mixing coproducts could be derived, rather than requiring a dedicated postulate. We feel that performing such derivations, even just in some particularly simple toy model, would be an important contribution to the further development of the investigations reported here. Concerning the broader phenomenological picture, it will be interesting to identify some characteristic observable differences between the new scenarios of non-universal deformations of relativistic symmetries, on which we focused here, and the scenarios in which relativistic symmetries are broken in a particle-dependent manner, which have already been studied for several years~\cite{coleman1997cosmic,smeREVIEW}. \section*{Acknowledgements} We are grateful to Leonardo Barcaroli for contributing to the initial stages of this project.
\section{Introduction} Anomalies are image regions not conforming with the rest of the image. Detecting them is a challenging image analysis problem, as there seems to be no straightforward definition of what is (ab)normal for a given image. Anomalies in images can be high-level or low-level outliers. High-level anomalies are related to the semantic information presented in the scene. For example, human observers immediately detect a person inappropriately dressed for a given social event. In this work, we focus on the problem of detecting anomalies due to low or mid level rare local patterns present in images. This is an important problem in many industrial, medical or biological applications. \begin{figure} \small \centering \begin{tikzpicture} \newlength{\nextfigheighta} \newlength{\figwidtha} \newlength{\figsepa} \setlength{\nextfigheighta}{0cm} \setlength{\figwidtha}{0.15\textwidth} \setlength{\figsepa}{0.155\textwidth} \node[anchor=south, inner sep=0] (input) at (0,\nextfigheighta) {\includegraphics[width=\figwidtha]{Experiments/Toy/color}}; \node[anchor=south, inner sep=0] (conv21nod) at (\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/detections_conv21_nd_color}}; \node[anchor=south, inner sep=0] (conv21d) at (2*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/detections_conv21_color}}; \node [anchor=south] at (input.north) {\footnotesize \begin{tabular}{c}\;\\ Input \end{tabular}}; \node [anchor=south] at (conv21nod.north) {\footnotesize {\begin{tabular}{c} Detection on (a) \end{tabular}}}; \node [anchor=south] at (conv21d.north) {\footnotesize \begin{tabular}{c} \;\\ \textbf{Detection on (b)} \end{tabular}}; \setlength{\figwidtha}{0.080\textwidth} \setlength{\figsepa}{0.085\textwidth} \addtolength{\nextfigheighta}{-0.75\figsepa} \node[anchor=south, inner sep=0] (original1) at (0,\nextfigheighta) {\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_original1}}; \node[anchor=south, inner 
sep=0] () at (\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_original2}}; \node[anchor=south, inner sep=0] () at (2*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_original3}}; \node[anchor=south, inner sep=0] () at (3*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_original4}}; \node[anchor=south, inner sep=0] () at (4*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_original5}}; \addtolength{\nextfigheighta}{-0.75\figsepa} \node[anchor=south, inner sep=0] (residual1) at (0,\nextfigheighta) {\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_residual1}}; \node[anchor=south, inner sep=0] () at (\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_residual2}}; \node[anchor=south, inner sep=0] () at (2*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_residual3}}; \node[anchor=south, inner sep=0] () at (3*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_residual4}}; \node[anchor=south, inner sep=0] () at (4*\figsepa,\nextfigheighta){\includegraphics[width=\figwidtha]{Experiments/Toy/color_conv21_residual5}}; \node [anchor=east] at (original1.west) {\scriptsize (a)}; \node [anchor=east] at (residual1.west) {\scriptsize (b)}; \end{tikzpicture} \caption{Image anomalies are successfully detected by removing all self-similar content and then looking for structure in the residual noise. Top row: left, an image with a color anomaly (the red dot); middle, detections obtained from the top five principal components of CNN features shown in (a); right, detections on features shown in (b), obtained after removing the self-similar content. Cyan corresponds to good detection and orange to extremely salient detection.
} \label{fig:conv_denoising} \end{figure} We introduce in this paper an unsupervised method for detecting anomalies in an arbitrary image. The method does not rely on a training dataset of normal or abnormal images, nor on any other prior knowledge about the image statistics. It directly detects anomalies with respect to residual images estimated solely from the image itself. We only use a generic, qualitative background image model: we assume that anything that repeats in an image is \textit{not} an anomaly. In a nutshell, our method removes from the image its self-similar content (considered as being normal). The residual is modeled as colored Gaussian noise, but still contains the anomalies which, by definition, do not repeat. Detecting anomalies in noise is far easier and can be made rigorous and unsupervised by the \emph{a-contrario} theory~\cite{desolneux2007gestalt}, which is a probabilistic formalization of the \emph{non-accidentalness} principle~\cite{lowe1985perceptual}. The \emph{a-contrario} framework has produced impressive results in many different detection or estimation computer vision tasks, such as segment detection~\cite{grompone2010lsd}, spot detection~\cite{grosjean2009contrario}, vanishing point detection~\cite{lezama2014finding}, and mirror-symmetry detection~\cite{patraucean2013detection}, among others. The fundamental property of the \emph{a-contrario} theory is that it provides a way to automatically compute detection thresholds that yield a control on the number of false alarms (NFA). It favorably replaces the usual p-value when multiple testing is involved. It follows that not only can one detect anomalies in arbitrary images without complex modeling, but in addition each anomaly is assigned an NFA which is often very small and therefore offers a strong guarantee of the validity of the detection.
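To make the NFA idea concrete, here is a minimal illustrative Python sketch (not the exact detector of this paper, which operates on residual features): with $N$ tests and a Gaussian model for the noise-only hypothesis, the NFA of an observation is the number of tests times its tail probability, and thresholding it at $1$ bounds the expected number of false alarms per image by $1$.

```python
import math

# Illustrative a-contrario test: under the noise-only hypothesis each
# residual value is N(0, sigma^2); an observation x is a detection when
# NFA(x) = n_tests * P(|N(0, sigma^2)| >= |x|) < 1.
def nfa(x, sigma, n_tests):
    tail = math.erfc(abs(x) / (sigma * math.sqrt(2.0)))  # two-sided Gaussian tail
    return n_tests * tail

n_tests = 512 * 512                      # e.g. one test per residual pixel
assert nfa(6.0, 1.0, n_tests) < 1.0      # a 6-sigma residual value is detected
assert nfa(3.0, 1.0, n_tests) > 1.0      # 3-sigma values are expected in noise
```

The automatic threshold is precisely the point where the NFA crosses $1$, which is how the multiple-testing correction replaces a hand-tuned p-value.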
We shall show detections performed directly on the image residual, or alternatively on residuals extracted from dense low- and mid-level features of the VGG neural net~\cite{simonyan2014very}. The paper is organized as follows. Section~\ref{sec:relatedWork} discusses previous work while Section~\ref{sec:method} explains the proposed method and its implementation. Section~\ref{sec:experiments} presents results of the proposed method on real and synthetic data, and a comparison to other state-of-the-art anomaly detectors. We finally close in Section~\ref{sec:conclusions}. \vspace{-.5em} \section{Related Work} \label{sec:relatedWork} \vspace{-.5em} The 2009 review \cite{chandola2009anomaly}, examining about 400 papers on anomaly detection, aimed to cover all existing techniques and application fields. It is well complemented by the more recent review~\cite{pimentel2014review}. These reviews agree that classification techniques like SVM can be discarded, because anomalies are generally not observed in sufficient number and lack statistical coherence. There are exceptions like the recent method~\cite{ding2014experimental}, which defines anomalies as rare events that cannot be learned; but, after estimating a background density model, the right detection thresholds are nevertheless learned from anomalies. A broad related literature exists on saliency measures, for which learning from average fixation maps by humans is possible~\cite{tavakoli2011fast}. Saliency detectors try to mimic human visual perception and in general introduce semantic prior knowledge (e.g., face detectors). This approach works particularly well with neural networks trained on a base of detect/non-detect with ground truth obtained by, for example, gaze trackers~\cite{huang2015salicon}. Anomaly detection has been generally handled as a ``one class'' classification problem.
In~\cite{markou2003noveltyA}, the authors concluded that most research on anomaly detection was driven by modeling background data distributions, to estimate the probability that test data do not belong to such distributions~\cite{grosjean2009contrario,honda2001finding,goldman2004anomaly,aiger2010phase}. Autoencoder neural networks can be used to model the background~\cite{An2016,Schlegl2017}; the general idea is to compute the norm between the input and a reconstruction of the input. Another successful background-based method is the detection of anomalies in periodic patterns of textile~\cite{tsai2003automated,perng2010novel}. In~\cite{itti1998model,murray2011saliency}, center-surround detectors based on color, orientation and intensity filters are combined to produce a final saliency map. Detection in images and videos is also done in~\cite{gao2008discriminant} with center-surround saliency detectors which stem from~\cite{itti2000saliency}, adopting similar image features. In~\cite{honda2001finding}, the main idea is to estimate the probability of a region conditioned on its surroundings. A more recent non-parametric trend is to learn a sparse dictionary representing the background (i.e., \emph{normality}) and to characterize outliers by their non-sparsity~\cite{margolin2013makes,boracchi2014novelty,elhamifar2012see,adler2015sparse,carrera2015detecting}. The self-similarity principle has been successfully used in many different applications~\cite{efros1999texture,buades2005non}. The basic assumption of this generic background model is that in normal data, features are densely clustered. Anomalies instead occur far from their closest neighbors. This idea is implemented by clustering (anomalies being detected as far away from the centroid of their own cluster), or by simple rarity measurements based on nearest neighbor (NN) search~\cite{boiman2007detecting,seo2009static,goferman2012context}.
% Background probabilistic modeling is powerful when images belong to a restricted class of homogeneous objects, like textiles. But, regrettably, this method is nearly impossible to apply to generic images. Similarly, background reconstruction models based on CNNs are restrictive and do not rely on provable detection thresholds. Center-surround contrast methods are successful for saliency enhancement, but lack a formal detection mechanism. Being universal, the sparsity and self-similarity models are tempting and thriving. But again, they lack a rigorous detection mechanism, because they work on a feature space that is not easily modeled. We propose to benefit from the above methods while avoiding their limitations. To this aim, we do construct a probabilistic background model, but it is applied to a new feature image that we call the \emph{residual}. This residual is obtained by computing the difference between a self-similar version of the target image and the target itself. Lacking self-similar structure, this residual is akin to colored noise. Hence a hypothesis test can be applied, and more precisely multiple hypothesis testing (also called the \textit{a contrario} method), as proposed in~\cite{grosjean2009contrario}. In this way, we obtain a simple and universal method that detects anomalies with a rigorous threshold. It does not require learning, and it is easily made multiscale. \vspace{-.5em} \section{Method} \label{sec:method} \vspace{-.5em} Our method is built on two main blocks: the removal of the self-similar image component, and a simple statistical detection test on the residual based on the \textit{a contrario} framework. \vspace{-.5em} \subsection{Construction of the residual image} \label{denoising} \vspace{-.5em} The proposed self-similarity-based background subtraction is inspired by patch-based non-local denoising algorithms, where each patch is estimated from a set of similar patches~\cite{buades2005non}. 
This search is generally performed locally around each patch \cite{dabov2007image,buades2005non} to keep computational cost low and to avoid noise overfitting. The main difference from non-local denoisers is that we \textit{forbid} local comparisons. The nearest neighbor search is performed \textit{outside a square region surrounding each query patch}. This square region is defined as the union of all the patches intersecting the query patch. Otherwise, any anomaly with some internal structure might be considered a valid structure. What matters is that the event represented by the anomaly is unique, and this is checked away from it. For each patch $P$ in the image, the $n$ most similar patches, denoted $P_i$, are searched for and averaged to give a self-similar estimate, \begin{equation} \hat{P} = \frac{1}{Z}\sum_{i=1}^{n} \exp \left(-\frac{ \|P - P_i\|_2^2 }{h^2}\right) P_i \label{eq:nlmeans} \end{equation} where $Z=\sum_{i=1}^{n} \exp \left(-\frac{ \|P - P_i\|_2^2 }{h^2}\right)$ is a normalizing constant and $h$ is a filtering parameter. Since each pixel belongs to several different patches, it receives several distinct estimates, which are averaged. Algorithm \ref{algo:model} gives generic pseudocode for this process, which ends with the generation of a residual image $r(u)$ allegedly containing only noise and the anomalies (see Figure~\ref{fig:conv_denoising}). The intuition is that it is much easier to detect anomalies in $r(u)$ than in $u$. \begin{algorithm}[t] \caption{Computation of the unstructured residual} \begin{spacing}{1.0} \begin{algorithmic}[1] \REQUIRE Multichannel image $u$, number of nearest neighbors $n$ \ENSURE Model $\hat{u}$ of $u$, residual $r(u)=\hat{u}-u$. \FORALL{multichannel patches $P$ of $u$} \STATE Compute the $n$ nearest neighbors $\{P_i\}$ of $P$ \textbf{(outside the square region)}. 
\STATE Reconstruct the patch (using \eqref{eq:nlmeans}) \ENDFOR \FORALL{pixels $j$ in $u$} \STATE $\hat{u}(j) = \frac{\sum_{i \in \{s|j \in W_s, s \in \llbracket 1, N \rrbracket\}}^{} \hat{P}_i(j)}{\#\{s|j \in W_s, s \in \llbracket 1, N \rrbracket\}}$ \ENDFOR \end{algorithmic} \end{spacing} \textbf{Notation convention.} $W_s:$ set of pixels in the patch centered at $s$. $\hat{P}_i(j):$ value at pixel $j$ of the reconstructed patch centered at $i$. \label{algo:model} \end{algorithm} \subsection{Statistical detection by the \textbf{\textit{a contrario}} approach} Our goal is to detect structure in the residual image $r(u)=\hat{u}-u$. We are in a much better position to model $r(u)$ than $u$. Indeed, contrary to $u$, $r(u)$ is by construction \emph{unstructured} and akin to colored noise (as illustrated in Fig.~\ref{fig:conv_denoising}). In what follows, we assume that $r(u)$ is a stationary spatial random process and follow~\cite{grosjean2009contrario}, who proposed automatic detection thresholds in any colored Gaussian noise. Given a set of random variables $(X_i)_{i\in[|1,N|]}$, a function $f$ is called an NFA if it guarantees a bound on the expected number of false alarms under the null hypothesis, namely, $\forall{\epsilon>0}, \mathbb{E}[\#\{i, f(i, X_i) \le \epsilon\}] \le \epsilon$. In other words, thresholding all the $f(i, X_i)$ at $\epsilon$ should give at most $\epsilon$ false alarms on average when $(X_i)_{i\in[|1,N|]}$ verifies the null hypothesis. In our case, we consider \begin{equation} f(i, \mathbf{x}) = N \mathbb{P}(|X_i| \ge |x_i|), \label{eq:nfa} \end{equation} where $i$ indexes the $N$ executed tests (detailed below), $X_i$ is a random variable distributed as the residual at position $i$, and $x_i$ is the actual measured value (pixel or feature value) at position $i$. The null hypothesis is that each $X_i$ of the residual $(X_i)_{i\in[|1,N|]}$ follows a standard normal distribution. Independence is not required. 
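As a minimal numerical sketch of the test in~\eqref{eq:nfa}, under the standard-normal null model $\mathbb{P}(|X_i| \ge x) = \operatorname{erfc}(x/\sqrt{2})$ (the function name and the example values below are ours, for illustration only):

```python
import math

def nfa(x_abs, n_tests):
    """NFA of Eq. (2) for a measurement of absolute value x_abs among
    n_tests tests, under a standard-normal null model: the two-sided
    Gaussian tail P(|X| >= x_abs) equals erfc(x_abs / sqrt(2))."""
    return n_tests * math.erfc(x_abs / math.sqrt(2.0))

eps = 1e-2                 # allowed expected number of false alarms
small = nfa(1.2, 10**6)    # ordinary fluctuation: NFA far above eps
large = nfa(6.0, 10**6)    # 6-sigma value: below eps even among 10^6 tests
```

Note that the correction by the number of tests $N$ is what makes the threshold rigorous: a value that would look significant in a single test may be an ordinary extreme among $10^6$ tests.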
\noindent \textbf{Residual distribution.} In practice, the distribution of the residual $r(u)$ is not necessarily Gaussian. A careful study of the residual distribution led us to consider that it follows a generalized Gaussian distribution (GGD). We approximately estimate the GGD parameters, and then apply a non-linear mapping to make the residual normally distributed. \vspace{.25em} \noindent \textbf{Choice of NFA.} The choice of the NFA given in~\eqref{eq:nfa} makes it possible to detect anomalies in both tails of the Gaussian distribution (i.e., very bright or very dark spots). To detect anomalies of all sizes, the detection is carried out independently at $N_\text{scales}$ scales computed from the residual at the original resolution (by Gaussian subsampling by a factor of two). Let us denote by $\Omega_s$ the set of pixels in the residual image at scale $s$, each having $N_\text{feat}$ features. When working with colored noise, Grosjean and Moisan~\cite{grosjean2009contrario} propose to convolve the noise with a measure kernel to detect spots of a certain size. This corresponds to the generation of new image features $\bar{r}(u) = r(u) \ast K$, where $K$ is a disk of a given radius. This idea is used in our framework, where the residual is convolved with small kernels. Since we apply the detection at all dyadic scales, the tested radii are limited to a small set of $N_\text{kernel}$ values (from 1 to 3) at each scale. Because the residual is assumed to be a stationary Gaussian field, the result after filtering is also Gaussian. The variance is estimated and the filtered residuals are normalized to have unit variance. This is the input to the NFA~\eqref{eq:nfa} computation (i.e., $\textbf{x}_i$). Thus, the inputs to the detection phase are multi-channel images at different scales, where each pixel channel, representing a given feature, follows a standard normal distribution. 
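The GGD-based normalization of the \textbf{Residual distribution} paragraph can be sketched as follows. The paper does not specify its estimator, so the maximum-likelihood fit below (via SciPy's generalized normal distribution) is an assumption on our part; the Gaussianization goes through the probability integral transform:

```python
import numpy as np
from scipy.stats import gennorm, norm

def gaussianize(r):
    """Fit a generalized Gaussian distribution (GGD) to the residual
    values, then map them to a standard normal: first through the
    fitted GGD CDF (giving uniform values), then through the normal
    quantile function."""
    beta, loc, scale = gennorm.fit(r.ravel())
    u = gennorm.cdf(r.ravel(), beta, loc=loc, scale=scale)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)   # guard against infinite quantiles
    return norm.ppf(u).reshape(r.shape)

# heavy-tailed synthetic "residual" (GGD shape parameter beta < 2)
r = gennorm.rvs(0.8, size=4000, random_state=0)
z = gaussianize(r)                        # approximately N(0, 1)
```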
Then, the number of tests is $ N = N_\text{kernel} \cdot N_\text{feat} \cdot \sum_{s=0}^{N_\text{scales}-1} |\Omega_s|. $ \vspace{-.5em} \subsection{Choice of the image features} \vspace{-.5em} \label{features} Anomaly detectors work either directly on image pixels or on some feature space, but detection in the residual, which is akin to unstructured noise, is fairly independent of the choice of features. We used with equal success the raw image color pixels, or some intermediate feature representation extracted from the VGG convolutional neural network~\cite{simonyan2014very}. % To compress the dynamic range of the feature space we apply a square root function to the network features. In order to reduce the feature space dimension, we compute the principal components (PCA) and keep only the first five. This is done independently for each input image. \noindent \textbf{Parameters.} The main parameter of the method is the number of allowed false alarms in the statistical test. In all presented experiments, we set NFA~$=10^{-2}$. Hence, an anomaly is detected at pixel $\mathbf{x}$ in channel $i$ iff the NFA function $f(i, \mathbf{x})$ is below $\epsilon=10^{-2}$. This implies a (theoretical) expectation of fewer than $10^{-2}$ accidental detections per image under the null hypothesis that the residual image is noise. Obviously, the lower the NFA the better; most anomalies have a much lower NFA. For the basic method working on image pixels we used two disks of radii one and two, while for the neural network features we add a third disk of radius three. The number of scales is set to $N_\text{scales}=4$ in all tests. The patch size in Alg.~\ref{algo:model} is $8\! \times\! 8\! \times\! 3$ for the pixels variant, while when using neural network features we use a patch size of $5\! \times\! 5\! \times\! 5$. The number of nearest patches is always set to $n=16$, and $h=10$. Results presented herein use the outputs of VGG-19 layers \verb!conv1_1!, \verb!conv2_1! and \verb!conv3_1!. 
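Putting the pieces together at a single scale, the disk filtering, renormalization, and NFA thresholding can be sketched as follows (the synthetic bright square and all helper names are ours; here \texttt{n\_tests} counts only the tests of this toy single-scale, single-kernel setting):

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.special import erfc

def disk_kernel(radius):
    """Normalized disk indicator used as the measure kernel K."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def detect(residual, radius, eps=1e-2):
    """Filter an (assumed standard-normal) residual with a disk kernel,
    renormalize to unit variance, and threshold the two-sided NFA."""
    f = convolve(residual, disk_kernel(radius), mode="reflect")
    f /= f.std()                           # unit-variance filtered residual
    n_tests = f.size                       # one test per pixel in this toy case
    nfa = n_tests * erfc(np.abs(f) / np.sqrt(2.0))
    return nfa <= eps, nfa

rng = np.random.default_rng(0)
r = rng.standard_normal((128, 128))
r[60:68, 60:68] += 4.0                     # an 8x8 bright anomaly
mask, nfa = detect(r, radius=2)            # mask: detected pixels
```

The anomalous square is detected with an extremely small NFA, while the pure-noise background yields on average at most $\epsilon$ false detections, as guaranteed by the a contrario bound.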
% \vspace{-.5em} \section{Experiments} \label{sec:experiments} \vspace{-.5em} \begin{figure*}[th] \centering \begin{tikzpicture} \newlength{\nextfigheight} \newlength{\figwidth} \newlength{\figsep} \setlength{\nextfigheight}{0cm} \setlength{\figwidth}{0.105\textwidth} \setlength{\figsep}{0.11\textwidth} \node[anchor=south, inner sep=0] (example1) at (0,\nextfigheight) {\includegraphics[width=\figwidth]{Experiments/Toy/color}}; \node[anchor=south, inner sep=0] (nonn1) at (\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_color}}; \node[anchor=south, inner sep=0] (conv111) at (2*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv11_color}}; \node[anchor=south, inner sep=0] (conv211) at (3*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv21_color}}; \node[anchor=south, inner sep=0] (conv311) at (4*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv31_color}}; \node[anchor=south, inner sep=0] (salicon1) at (8*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/salicon_color}}; \node[anchor=south, inner sep=0] (itti1) at (6*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/itti_color}}; \node[anchor=south, inner sep=0] (cohen1) at (5*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/cohen_color}}; \node[anchor=south, inner sep=0] (drfi1) at (7*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/drfi_color}}; \node [anchor=south] at (example1.north) {\scriptsize Input}; \node [anchor=south] at (nonn1.north) {\scriptsize \vphantom{p}\texttt{pixels}\vphantom{p}}; \node [anchor=south] at (conv111.north) {\scriptsize \vphantom{p}\texttt{conv1\_1}\vphantom{p}}; \node [anchor=south] at (conv211.north) {\scriptsize \vphantom{p}\texttt{conv2\_1}\vphantom{p}}; \node [anchor=south] at (conv311.north) {\scriptsize 
\vphantom{p}\texttt{conv3\_1}\vphantom{p}}; \node [anchor=south] at (salicon1.north) {\scriptsize \vphantom{p}SALICON \cite{huang2015salicon}\vphantom{p}}; \node [anchor=south] at (itti1.north) {\scriptsize \vphantom{p}Itti \textit{et al.}\cite{itti1998model}\vphantom{p}}; \node [anchor=south] at (cohen1.north) {\scriptsize \vphantom{p}Mishne \!-\! Cohen \cite{mishne2013multiscale}\vphantom{p}}; \node [anchor=south] at (drfi1.north) {\scriptsize \vphantom{p}DRFI \cite{jiang2013salient}\vphantom{p}}; \addtolength{\nextfigheight}{-0.68\figsep} \node[anchor=south, inner sep=0] (example2) at (0,\nextfigheight) {\includegraphics[width=\figwidth]{Experiments/Toy/form}}; \node[anchor=south, inner sep=0] (nonn2) at (\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_form}}; \node[anchor=south, inner sep=0] (conv112) at (2*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv11_form}}; \node[anchor=south, inner sep=0] (conv212) at (3*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv21_form}}; \node[anchor=south, inner sep=0] (conv312) at (4*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv31_form}}; \node[anchor=south, inner sep=0] (salicon2) at (8*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/salicon_form}}; \node[anchor=south, inner sep=0] (itti2) at (6*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/itti_form}}; \node[anchor=south, inner sep=0] (cohen2) at (5*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/cohen_form}}; \node[anchor=south, inner sep=0] (drfi2) at (7*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/drfi_form}}; \addtolength{\nextfigheight}{-0.61\figsep} \node[anchor=south, inner sep=0] (example5) at (0,\nextfigheight) {\includegraphics[width=\figwidth]{Experiments/Toy/density}}; \node[anchor=south, inner 
sep=0] (nonn5) at (\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_density}}; \node[anchor=south, inner sep=0] (conv115) at (2*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv11_density}}; \node[anchor=south, inner sep=0] (conv215) at (3*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv21_density}}; \node[anchor=south, inner sep=0] (conv315) at (4*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/detections_conv31_density}}; \node[anchor=south, inner sep=0] (salicon5) at (8*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/salicon_density}}; \node[anchor=south, inner sep=0] (itti5) at (6*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/itti_density}}; \node[anchor=south, inner sep=0] (cohen5) at (5*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/cohen_density}}; \node[anchor=south, inner sep=0] (drfi5) at (7*\figsep,\nextfigheight){\includegraphics[width=\figwidth]{Experiments/Toy/drfi_density}}; \addtolength{\nextfigheight}{-0.72\figsep} \node[anchor=south, inner sep=0] (example6) at (0,\nextfigheight) {\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/noise}}; \node[anchor=south, inner sep=0] (nonn6) at (\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/detections_noise1}}; \node[anchor=south, inner sep=0] (conv116) at (2*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70, width=\figwidth]{Experiments/Toy/detections_conv11_noise1}}; \node[anchor=south, inner sep=0] (conv216) at (3*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/detections_conv21_noise1}}; \node[anchor=south, inner sep=0] (conv316) at (4*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/detections_conv31_noise1}}; 
\node[anchor=south, inner sep=0] (salicon6) at (8*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/salicon_noise}}; \node[anchor=south, inner sep=0] (itti6) at (6*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/itti_noise}}; \node[anchor=south, inner sep=0] (cohen6) at (5*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/cohen_noise}}; \node[anchor=south, inner sep=0] (drfi6) at (7*\figsep,\nextfigheight){\includegraphics[clip,trim=0 70 0 70,width=\figwidth]{Experiments/Toy/drfi_noise1}}; \end{tikzpicture} \vspace{.15em} \begin{tikzpicture} \newlength{\nextfigheightd} \newlength{\figwidthd} \newlength{\figsepd} \setlength{\nextfigheightd}{0cm} \setlength{\figwidthd}{0.105\textwidth} \setlength{\figsepd}{0.11\textwidth} \node[anchor=south, inner sep=0] (example1) at (0,\nextfigheightd) {\includegraphics[width=\figwidthd]{Experiments/Real/door_eq}}; \node[anchor=south, inner sep=0] (nonn1) at (\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_39}}; \node[anchor=south, inner sep=0] (conv111) at (2*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv11_39}}; \node[anchor=south, inner sep=0] (conv211) at (3*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv21_39}}; \node[anchor=south, inner sep=0] (conv311) at (4*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv31_39}}; \node[anchor=south, inner sep=0] (salicon1) at (8*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/salicon_door}}; \node[anchor=south, inner sep=0] (itti1) at (6*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/itti_door}}; \node[anchor=south, inner sep=0] (cohen1) at 
(5*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/cohen_door}}; \node[anchor=south, inner sep=0] (drfi1) at (7*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/drfi_39}}; \addtolength{\nextfigheightd}{-0.75\figsepd} \node[anchor=south, inner sep=0] (example2) at (0,\nextfigheightd) {\includegraphics[width=\figwidthd]{Experiments/Real/man}}; \node[anchor=south, inner sep=0] (nonn2) at (\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_114}}; \node[anchor=south, inner sep=0] (conv112) at (2*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv11_114}}; \node[anchor=south, inner sep=0] (conv212) at (3*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv21_114}}; \node[anchor=south, inner sep=0] (conv312) at (4*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Toy/detections_conv31_114}}; \node[anchor=south, inner sep=0] (salicon2) at (8*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/salicon_man}}; \node[anchor=south, inner sep=0] (itti2) at (6*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/itti_man}}; \node[anchor=south, inner sep=0] (cohen2) at (5*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/cohen_man}}; \node[anchor=south, inner sep=0] (drfi2) at (7*\figsepd,\nextfigheightd){\includegraphics[width=\figwidthd]{Experiments/Real/drfi_114}}; \addtolength{\nextfigheightd}{-0.9\figsepd} \node[anchor=south, inner sep=0] (example3) at (0,\nextfigheightd) {\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/mine2}}; \node[anchor=south, inner sep=0] (nonn3) at (\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Toy/detections_satellite}}; \node[anchor=south, inner sep=0] (conv113) at 
(2*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Toy/detections_conv11_satellite}}; \node[anchor=south, inner sep=0] (conv213) at (3*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Toy/detections_conv21_satellite}}; \node[anchor=south, inner sep=0] (conv313) at (4*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Toy/detections_conv31_satellite}}; \node[anchor=south, inner sep=0] (salicon3) at (8*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/salicon_mine2}}; \node[anchor=south, inner sep=0] (itti3) at (6*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/itti_mine2}}; \node[anchor=south, inner sep=0] (cohen3) at (5*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/cohen_mine2}}; \node[anchor=south, inner sep=0] (drfi3) at (7*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/drfi_satellite}}; \addtolength{\nextfigheightd}{-0.9\figsepd} \node[anchor=south, inner sep=0] (example6) at (0,\nextfigheightd) {\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/textile}}; \node[anchor=south, inner sep=0] (nonn6) at (\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/detections_textile}}; \node[anchor=south, inner sep=0] (conv116) at (2*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/detections_conv11_textile}}; \node[anchor=south, inner sep=0] (conv216) at (3*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/detections_conv21_textile}}; \node[anchor=south, inner sep=0] (conv316) at (4*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 
15,width=\figwidthd]{Experiments/Papers/detections_conv31_textile}}; \node[anchor=south, inner sep=0] (salicon6) at (8*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/salicon_textile}}; \node[anchor=south, inner sep=0] (itti6) at (6*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/itti_textile}}; \node[anchor=south, inner sep=0] (cohen6) at (5*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/cohen_textile}}; \node[anchor=south, inner sep=0] (drfi6) at (7*\figsepd,\nextfigheightd){\includegraphics[clip,trim=0 15 0 15,width=\figwidthd]{Experiments/Papers/drfi_textile}}; \end{tikzpicture} \caption{Detection results on synthetic (top four rows) and real (bottom four rows) images. Detections represented by circles, with radius proportional to detected scale and color to detection strength (NFA). White: weak detection - NFA $\in [10^{-3}, 10^{-2}]$, cyan: mild detection - NFA $\in [10^{-8}, 10^{-3}]$, green: strong detection - NFA $\in [10^{-21}, 10^{-8}]$, and orange: very strong detection - NFA $\le 10^{-21}$. Red: detection with lowest NFA. 
Examples in the fifth and sixth rows are from the Toronto dataset~\cite{bruce2006saliency}, while those in the seventh and eighth rows are from \cite{mishne2013multiscale} and \cite{tsai1999automated}, respectively.} \label{fig:toy_real} \end{figure*} In the absence of a validated test image database for anomalies, we used the most common images proposed in the literature (see Fig.~\ref{fig:toy_real}) and adopted the following comparison methodology, applied to our method and to four other state-of-the-art ones: a) \textit{Sanity check:} verify that, for the toy examples proposed in the literature, the sole detection is the anomaly; b) \textit{Theoretical sanity check:} verify the \textit{a contrario} principle: ``no detection in white noise''; c) \textit{Classic challenging images:} verify the detection power on classic challenging images from the literature: side-scan sonar, textile, mammography, and natural images. In the case of the mammography, where one paper computed an NFA, we crucially verify that computing the NFA on the residual instead of the image gains a huge factor: the NFA is divided by eleven orders of magnitude. \vspace{.5em} We tested our proposed anomaly detector on two kinds of input image representations: the basic one, \texttt{pixels}, directly applies the anomaly detection procedure to the residuals obtained from the color channels, and three variants use as input features extracted at different levels of the VGG network~\cite{simonyan2014very}, namely, very low-level~(\texttt{conv1\_1}), low-level~(\texttt{conv2\_1}), and medium-level~(\texttt{conv3\_1}) features. As we shall see, the four detections are similar and can be fused by a simple pixel-wise union of all detections. Existing anomaly detectors are often tuned for specific applications, which probably explains the scarce availability of code. 
We compared with Mishne and Cohen~\cite{mishne2013multiscale}, a state-of-the-art anomaly detector with available code, with the salient object detector DRFI~\cite{jiang2013salient} (which is state-of-the-art according to \cite{borji2015salient}), and with the state-of-the-art human gaze predictor SALICON~\cite{huang2015salicon}. We also compared with the Itti~\textit{et al.} salient object detector \cite{itti1998model}, which works reasonably well for anomaly detection. All methods produce saliency maps where anomalies have the highest score. Anomalies for Mishne and Cohen are colored red, while the other methods do not provide a detection threshold for anomalies. More results are available in the supplementary material. \vspace{.35em} \noindent \textbf{Synthetic images.} The proposed method performs well on the synthetic examples, as shown in Figure~\ref{fig:toy_real}. Some weak false detections appear when using as input features extracted at different layers of the VGG net. All the other compared methods miss some detections. SALICON successfully detects the anomalous density example but misses several anomalies in the others or introduces numerous wrong detections. The method of Itti~\textit{et al}. successfully detects the anomalous color structure in the first example, but fails to detect the other ones. The Mishne and Cohen and DRFI methods do not perform well on any of the synthetic examples. \vspace{.35em} \noindent \textbf{Real images.} The comparison on real images is more intricate and requires looking in detail to find out whether detections make sense (Figure~\ref{fig:toy_real}). In the garage door example (fifth row), two detections stand out (lens flare and red sign); some others, less visible, can be found (door scratches or holes in the brick wall). For our method, the main detections are present in all the variants. There are also specific anomalies that can be detected only at a given layer of the neural network. 
For example, \texttt{conv1\_1} detects the holes in the brick wall and the gap between the garage door and the wall, in addition to the ones detected with the \texttt{pixels} input. The variants \texttt{conv2\_1} and \texttt{conv3\_1} detect a missing part of a brick in the wall. Saliency methods detect the red sign but not the lens flare. The method of Mishne and Cohen only detects the garage door gap. The second real example is a man walking in front of some trees. Our method detects the man with \texttt{pixels} and \texttt{conv1\_1}. DRFI and SALICON detect the man, while Mishne and Cohen and Itti~\textit{et al}. do not. The third real example is a side-scan sonar image showing a mine, while the last example is a defect in a periodic textile. All methods detect the anomalies, with more or less precision. Note that the detection in the top right corner for both \texttt{pixels} and \texttt{conv1\_1} (and only these) corresponds to a defect inside the periodic pattern. \vspace{.5em} \noindent \textbf{Comparison with the \textit{a contrario} method of Grosjean and Moisan~\cite{grosjean2009contrario}.} This \textit{a contrario} method is designed to detect spots in colored noise textures, and was applied to the detection of tumors in mammograms. This detection algorithm is the only other one computing NFAs, and we can directly compare them with ours. The detection results on a real mammogram (containing a tumor) are shown in Figure~\ref{fig:grosjean}. With our method the tumor is detected with a far more significant NFA ($10^{-12}$, versus $0.15$ in~\cite{grosjean2009contrario}). Our self-similar anomaly detection method also shows fewer false detections, which actually correspond to rare events like crossings of arteries. 
\begin{figure}[ht] \includegraphics[clip,trim=0 0 0 60,width=0.32\linewidth]{Experiments/Grosjean/orig} \includegraphics[clip,trim=0 0 0 60,width=0.32\linewidth]{Experiments/Grosjean/detections} \includegraphics[clip,trim=0 0 0 60,width=0.32\linewidth]{Experiments/Grosjean/grosjean} \caption{The region represented by the large white spot in the left image is a tumor. The proposed self-similarity anomaly detector successfully detects the tumor with a far more significant NFA than the method of Grosjean and Moisan~\cite{grosjean2009contrario} (an NFA of $10^{-12}$ versus their reported NFA of $0.15$), while making fewer false detections.} \label{fig:grosjean} \end{figure} \vspace{-.6em} \section{Conclusion} \label{sec:conclusions} \vspace{-.5em} We have shown that anomalies are more easily detected in the residual image, computed by removing the self-similar component of the input, and then performing hypothesis testing on this residual. It is reassuring to see that our method finds all anomalies proposed in the literature with very low NFAs. In addition, we have experimentally shown that the method verifies the non-accidentalness principle: no anomalies are detected in white noise. We plan to build a database of test images with anomalies to run extensive validation and comparison. We also plan to extend the method to videos by analyzing anomalies in the motion field. \vspace{-.5em} \bibliographystyle{IEEEbib}
\section{Introduction} \subsection{Multi-Agent Reinforcement Learning in Energy Systems} With the increased controllability of power consumption and generation at the edge of modern power systems, devising centralized control approaches to manage flexible devices within these systems is becoming nearly impossible. In particular, the dynamical models of these interconnected systems present considerable nonlinearities, which challenges the applicability of classical control methods. In addition, the conflicting operational costs/objectives of heterogeneous devices often obstruct the formulation of a system-wide operational objective. Thus, decentralized, data-driven control approaches, where edge controllers utilize data to derive effective local control policies, provide a viable pathway to realizing the resilient and efficient operation of future energy systems. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{img/architecture.png} \caption{PowerGridworld architecture for an $N$-agent environment comprised of both single-component agents (Agent 1) and multi-component agents, which can be added in any number and combination. Given a base system load and the agents' individual power profiles, a power flow solution is computed at each control step and may be used to update the agents' states and rewards.} \label{fig-architecture} \end{figure} Reinforcement learning (RL) approaches have shown great potential in several power systems control and load management tasks \cite{claessens2016convolutional, xu2019optimal, yang2019two, duan2019deep}. In addition, multi-agent RL (MARL) approaches have advanced and have been applied in many complex systems, including games \cite{vinyals2019grandmaster} and autonomous driving \cite{shalev2016safe}. Recently, MARL approaches have also found applications in the power systems domain, with an emphasis on voltage regulation problems \cite{gao2021consensus, chen2021powernet, pigott2021gridlearn}. 
These applications utilize the capabilities of MARL to devise local control policies without any knowledge of the models of the underlying complex systems. However, despite the significant focus on applying MARL to decentralized power system control tasks, there has been no standardized RL test environment that supports both the development of heterogeneous power system components and the deployment of off-the-shelf MARL approaches. \subsection{Related Software} Table \ref{tab-software} summarizes the features of MARL frameworks for power systems. \begin{table*}[t] \centering \caption{Comparison of software features for MARL environments for power systems.} \label{tab-software} \begin{tabular}{l|p{2.4cm}|p{1.9cm}|p{1.5cm}|p{2cm}|p{3.5cm}} \Xhline{4\arrayrulewidth} \textbf{Package} & \textbf{Power Systems \qquad Application} & \textbf{Agent Customization} & \textbf{Composable Agents} & \textbf{Control Step} & \textbf{MARL Training Interfaces}\\ \Xhline{4\arrayrulewidth} PettingZoo & None & Unlimited & No & User-defined & RLLib, OpenAI \\ \hline CityLearn & Demand response & Buildings and subsystems & No & 1-hour & Package-specific \\ \hline GridLearn & Demand response, voltage regulation & Buildings and subsystems & No & Sub-hourly & Package-specific \\ \hline PowerGridworld & Any energy management/optimization & Unlimited & Yes & User-defined & RLLib, OpenAI\\ \hline \end{tabular} \vspace{-5pt} \end{table*} \subsubsection{General MARL Framework} The development of MARL is relatively new compared to its single-agent counterpart, and there is currently no widely used MARL framework. PettingZoo \cite{terry2020pettingzoo}, a Python library, has the goal of developing a universal application programming interface (API) for formulating MARL problems, as OpenAI Gym \cite{brockman2016openai} did for single-agent RL problems. 
However, because the key advantages of PettingZoo---namely, an efficient formulation suitable for turn-based games and an ability to handle agent creation and death within episodes---are less relevant to power system control problems, PowerGridworld does not adopt the PettingZoo APIs at this stage for simplicity. \subsubsection{Multi-Agent Energy Systems} CityLearn \cite{vazquez2019citylearn,vazquez2020citylearn} is an open-source library aimed at implementing and evaluating MARL for building demand response and energy coordination. By design, the heating and cooling demands of buildings in CityLearn are guaranteed to be satisfied, allowing researchers to focus on energy balance and load shifting for the control problem. To achieve this, building thermal models and associated energy demands are precomputed using EnergyPlus \cite{crawley2001energyplus}, and control actions are limited to active energy storage decisions rather than those affecting passive (thermal) mass. CityLearn is intended to provide a benchmark MARL environment from the standpoint of building demand response and, as such, it is highly constrained in terms of the types of agents and models that are available. CityLearn energy models include buildings, heat pumps, energy storage, and batteries, while the state and action spaces of the agents themselves must be constructed from a predefined variable list. Control steps in CityLearn are restricted to 1-hour resolution, and grid physics is not modeled. To address this, GridLearn \cite{pigott2021gridlearn} utilizes the building models provided in CityLearn and extends its functionality to include power system simulation. The added power flow model, implemented using pandapower \cite{thurner2018pandapower}, allows researchers studying decentralized control to consider both building-side and grid-level objectives.
The GridLearn case study presented in \cite{pigott2021gridlearn} demonstrates that this platform can be used to train MARL controllers to achieve voltage regulation objectives in a distribution system by controlling behind-the-meter resources. \subsubsection{MARL Training} While many open-source choices exist for MARL training, we highlight two of the most popular: RLLib (multiple algorithms available) and OpenAI's multi-agent deep deterministic policy gradient (MADDPG). RLLib \cite{liang2018rllib,rllib-webpage} is a framework for scalable RL training built on the Ray Python library \cite{moritz2018ray}, and it supports a variety of training paradigms for single-agent, multi-agent, hierarchical, and offline learning. RLLib can be deployed on both cloud and high performance computing (HPC) systems, and it provides a number of training ``abstractions,'' enabling users to develop custom, distributed RL algorithms. The multi-agent API in PowerGridworld is derived from RLLib's own \texttt{MultiAgentEnv} API and thus is readily integrated into this framework. OpenAI \cite{openai} has played a central role in the evolution of both theory and software for RL and MARL. In addition to creating the Gym API, OpenAI released a series of tutorials and implementations in the mid-2010s that have continued to hold traction in the RL community, including the SpinningUp blog\footnote{\url{https://spinningup.openai.com/en/latest/}} and the baselines GitHub repository.\footnote{\url{https://github.com/openai/baselines}} The OpenAI implementation \cite{maddpg-webpage} of MADDPG \cite{lowe2017multi} is a popular choice for MARL with continuous control. As described in greater detail in Section \ref{sec-software-description}, PowerGridworld makes it easy for users to leverage the RL algorithm implementations of both RLLib and OpenAI.
To the best of our knowledge, no previous software packages exist that enable users to implement arbitrary multi-agent scenarios with a power systems focus---in particular, with the ability to incorporate power flow solutions into the agents' observation spaces and rewards. We believe that PowerGridworld begins to bridge this gap by enabling highly modular, customizable environments that readily integrate with open-source, scalable MARL training frameworks such as RLLib. \section{PowerGridworld} \subsection{Description of Software}\label{sec-software-description} PowerGridworld is designed to provide users with a lightweight, modular, and customizable framework for creating power-systems-focused, multi-agent Gym environments that readily integrate with existing RL training frameworks. The purpose of this software, which is available as an open-source Python package\footnote{\href{https://github.com/NREL/PowerGridworld}{\url{https://github.com/NREL/PowerGridworld}}}, is to enable researchers to rapidly prototype simulators and RL algorithms for power systems applications at the level of detail of their choice, while also enabling the use of cloud and HPC via integration with scalable training libraries such as RLLib \cite{liang2018rllib} and Stable Baselines \cite{raffin2019stable}. \subsubsection{Architecture} The PowerGridworld design pattern is based on the OpenAI Gym API, which has become the \emph{de facto} standard interface for training RL algorithms. The Gym API essentially consists of the following two methods: \begin{itemize} \item \texttt{reset}: Initialize the simulation instance and return an observation of the initial state space, $s_0$. \item \texttt{step}: For each control step, apply an input control action, $a_t$, and return a new state space observation, $s_t$; a step reward, $r_t$; a termination flag; and any desired metadata.
\end{itemize} A simulator that is wrapped in the Gym API is often referred to as an \emph{environment}, and one instance of the simulation is often called an \emph{episode}. The core functionality of the PowerGridworld package is to extend the Gym API to include multi-agent simulations and to allow a user to combine environments that simulate individual devices or subsystems into a single, multi-component agent. This ``plug-and-play'' functionality is highly useful in power systems applications because it enables the user to rapidly create heterogeneous agents using basic building blocks of distributed energy resources (DERs). We illustrate the PowerGridworld architecture in Fig. \ref{fig-architecture}. Here, the \texttt{MultiAgentEnv} environment (blue) encapsulates $N$ agents that subclass one of two types: \begin{enumerate}[a)] \item \texttt{ComponentEnv} environments (green), which implement a single, independent agent. This class is a slight extension of the OpenAI Gym API. \item \texttt{MultiComponentEnv} environments (yellow), which are a composition of component environments. For example, Agent $N$ could represent a smart building agent composed of building thermodynamics, photovoltaics (PV), and battery physics, each implemented as a separate \texttt{ComponentEnv}. \end{enumerate} The multi-agent Gym API can be readily plugged into an RL training framework such as RLLib (grey), where agent-level policies (red) are learned. Once the individual device physics has been implemented according to the \texttt{ComponentEnv} API, the software automates the creation of \texttt{MultiComponentEnv} environments. Any number of \texttt{ComponentEnv} and \texttt{MultiComponentEnv} agents can then be added to the \texttt{MultiAgentEnv} environment. \subsubsection{Power Flow Solver Integration} Another key feature of PowerGridworld is the integration of a power flow solver for simulating the grid physics that underlies the multi-agent environment.
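As a minimal illustration of this integration pattern, a toy solver exposing a single solve step might look as follows. All names here are hypothetical stand-ins, not the package's actual \texttt{PowerFlowSolver} API, and the linearized voltage model is an assumption made for illustration only.

```python
# Toy power-flow "solver" illustrating the integration pattern (hypothetical
# names, not the package's actual PowerFlowSolver API). A real deployment
# would wrap a solver such as OpenDSS behind the same kind of interface.
class ToyPowerFlowSolver:
    def __init__(self, base_load_kw=100.0, sensitivity=1e-3):
        self.base_load_kw = base_load_kw   # fixed system base load (kW)
        self.sensitivity = sensitivity     # p.u. voltage drop per kW of net load

    def solve(self, agent_powers_kw):
        # Linearized stand-in for a power flow solution: the bus voltage
        # sags linearly with the total net load at the bus.
        net_load_kw = self.base_load_kw + sum(agent_powers_kw.values())
        return {"v_bus": 1.0 - self.sensitivity * net_load_kw}

solver = ToyPowerFlowSolver()
pf = solver.solve({"building": 20.0, "ev_station": 30.0})  # v_bus ~ 0.85 p.u.
```

The returned voltage can then be fed back into the agents' observations and rewards, which is the coupling mechanism described in this section.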
Although our examples utilize the open distribution system simulator (OpenDSS) \cite{opendss} to solve the power flow on a test feeder, any power flow solver wrapped in the \texttt{PowerFlowSolver} API can be utilized. \subsection{Advantages} The advantages of using PowerGridworld over existing MARL software packages are as follows. First, the plug-and-play modularity with a three-tier hierarchy (\emph{cf.} Fig. \ref{fig-architecture}) allows environments to be created from simpler components. Second, the multi-agent environment design allows both homogeneous and heterogeneous agent types. Third, the power flow solution can be used in agent-level states and rewards. Finally, PowerGridworld adheres to RLLib's multi-agent API, with converters for both CityLearn/GridLearn and OpenAI's MADDPG interfaces. \subsection{Limitations} Next, we list some of the limitations of PowerGridworld. First, time stepping is synchronous and of fixed frequency. However, we have a road map for implementing both hierarchical and multi-frequency time stepping. Second, the communication model is limited. Centralized communication, whereby the process driving the environment collects and communicates variables between agents, is relatively straightforward to implement using only the Gym API. More advanced paradigms require custom implementations. Finally, the initial version of the \texttt{MultiAgentEnv} serializes calls to the agents (i.e., there is no parallelism). \section{Case Studies} In this section, we present two examples of how PowerGridworld can be used to formulate multi-agent control tasks in energy systems. \subsection{Multi-Agent Building Coordination Environment}\label{subsec-building-coordination} In the first example, we consider three RL agents in a homogeneous setting. Each agent controls three components within one building: one heating, ventilation, and air conditioning (HVAC) system, one on-site PV system, and one energy storage (ES) system.
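To make the component structure concrete, the following toy ES component follows the two-method reset/step Gym pattern described above; the class name, physics, and parameters are illustrative assumptions, not the package's actual ES model.

```python
# Toy energy storage (ES) component following the reset/step Gym pattern.
# Names and dynamics are illustrative assumptions, not the package's ES model.
class ToyStorageEnv:
    def __init__(self, capacity_kwh=10.0, dt_hours=1.0):
        self.capacity_kwh = capacity_kwh
        self.dt_hours = dt_hours
        self.soc_kwh = 0.0  # state of charge

    def reset(self):
        self.soc_kwh = 0.5 * self.capacity_kwh
        return [self.soc_kwh / self.capacity_kwh]  # initial observation s_0

    def step(self, action_kw):
        # Positive action charges, negative discharges; SOC stays in [0, capacity].
        self.soc_kwh = min(max(self.soc_kwh + action_kw * self.dt_hours, 0.0),
                           self.capacity_kwh)
        obs = [self.soc_kwh / self.capacity_kwh]
        reward = 0.0  # the ES reward is kept at zero here for simplicity
        return obs, reward, False, {}

env = ToyStorageEnv()
s0 = env.reset()                    # [0.5]
s1, r, done, info = env.step(2.0)   # SOC 5.0 -> 7.0 kWh, obs [0.7]
```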
Using this setup, this example demonstrates how to use the PowerGridworld package to model a learning environment that allows agent coordination while achieving each agent's own objective. To this end, the MARL system is implemented as follows: \begin{enumerate} \item For each agent/building, the HVAC system needs to be controlled so that thermal comfort can be realized with minimal energy consumption. As a result, the HVAC component reward, $r_t^{i, HVAC}$, includes penalties for both thermal discomfort and energy consumption. \item The PV and ES systems are two additional components that are controlled by an agent to modify the building's net power consumption, but for simplicity, the rewards related to these two components are set to be zero, i.e., $r_t^{i, PV} = r_t^{i, ES}=0$. \item We designed a simple scenario with a sudden PV generation drop when the system loading level is high. If all three buildings, which connect to the same bus in the distribution system, only care about their own objective, the voltage at the common bus might fall below the limit, i.e., $v_{comm} < \underline{v}$. As a result, voltage support -- maintaining $v_{comm} \geq \underline{v}$ -- requires the three buildings to coordinate with one another. \end{enumerate} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{img/homo_mae_maddpg.png} \caption{Learning curves of using MADDPG to train control policies for the multi-agent coordinated building control. All x-axes represent training iterations. 
Losses, i.e., $\mathcal{L}_{actor}$ and $\mathcal{L}_{critic}$, are averaged among the three agents.} \label{fig-homo-mae} \end{figure} Based on the setup above, at control step $t$ and for agent $i$, the total agent reward is \begin{equation} r_t^i = r_t^{i, agent} + r_t^{i, sys} \end{equation} in which $r_t^{i, agent}= r_t^{i, HVAC} + r_t^{i, PV} + r_t^{i, ES}$ is the agent-level reward and $r_t^{i, sys} = -\lambda [\text{max}(0, v_{comm}-\overline{v})+\text{max}(0, \underline{v}-v_{comm})] / 3$ represents the system-level reward, shared evenly among the three agents. Here, $\lambda$ is a large penalty parameter. Through MARL training, each agent should be able to optimize its own objective (i.e., keep $r_t^{i, agent}$ low) and also able to work with other agents to avoid any voltage violation (i.e., keep $r_t^{i, sys}$ low). To train control policies for this problem, we use OpenAI's MADDPG implementation. Specifically, agent $i$ trains a critic network (i.e., $Q_{\theta^i}(\mathbf{s}, \mathbf{a})$) in an off-policy manner to minimize the mean squared Bellman error (MSBE): \begin{equation} \mathcal{L}_{critic}^i (\theta^i) = \mathbb{E}_{\mathbf{s}, \mathbf{a}, r^i, \mathbf{s'}}[[Q_{\theta^i}(\mathbf{s}, \mathbf{a})-(r^i + \gamma Q_{\theta^{i,-}}(\mathbf{s'}, \mathbf{a'}))]^2] \end{equation} and the actor (i.e., the control policy $\mu_{\phi^i}(s^i)$) is trained by minimizing the following actor loss: \begin{equation} \mathcal{L}_{actor}^i (\phi^i) = -\mathbb{E}_{\mathbf{s}, \mathbf{a}}[Q_{\theta^i}(\mathbf{s}, [\dots, a^{i-1}, \mu_{\phi^i}(s^i), a^{i+1}, \dots])] \end{equation} In the above equations, $\theta^i$ and $\phi^i$ are the RL parameters to be optimized, and $\theta^{i, -}$ represents the target value network parameters (a common off-policy learning trick; see \cite{mnih2015human} for details).
In our notation, $a^i$ and $s^i$ are the action and state of agent $i$, respectively, and the collection of all agents' actions and states are written as $\mathbf{a}$ and $\mathbf{s}$. The states at the next step are denoted $\mathbf{s'}$, and $\mathbf{a'}=[\mu_{\phi^i}(s'^i) \,|\, i=1,\dots]$ are the actions chosen by the policies at $\mathbf{s'}$. Fig. \ref{fig-homo-mae} shows the learning curves over 350 training iterations. By the end of the training, both the agent costs and the total cost converge to a low level. $\mathcal{L}_{critic}$ starts at a large value and gradually decreases to a value close to zero, indicating that the state-action values can be estimated accurately. As the value estimation becomes more reliable, $\mathcal{L}_{actor}$ also decreases, implying that the control policies are improving to achieve a higher reward level for each agent. Finally, the episodic voltage violation sum, $v_{vio}$, is high at the beginning; as the agents learn to coordinate with one another, the voltage violations are eliminated, leading to $v_{vio}=0$. In summary, this example demonstrates using PowerGridworld to formulate a MARL problem with both competition (building comfort) and collaboration (system voltage) among agents. Admittedly, instead of na\"ively splitting the system penalty evenly among agents to encourage agents' coordination, a more advanced approach could be flexibly implemented using this framework by modifying the corresponding interfacing functions. \subsection{Multi-Agent Environment With Heterogeneous Agents} A key feature of the PowerGridworld package is that it enables users to model heterogeneous agents that interact both with one another and with the grid.
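The ``plug-and-play'' composition behind such agents can be sketched as follows, with component observations concatenated and rewards summed; all names are hypothetical, and the package's \texttt{MultiComponentEnv} automates this pattern rather than requiring users to write it by hand.

```python
# Sketch of composing per-device components into one multi-component agent:
# observations are concatenated and rewards summed. All names are hypothetical.
class ToyComponent:
    def __init__(self, obs, reward_per_step):
        self._obs = list(obs)
        self._reward = reward_per_step

    def reset(self):
        return list(self._obs)

    def step(self, action):
        return list(self._obs), self._reward, False, {}

class ToyMultiComponentAgent:
    def __init__(self, components):
        self.components = components

    def reset(self):
        return [x for c in self.components for x in c.reset()]

    def step(self, actions):
        obs, total = [], 0.0
        for c, a in zip(self.components, actions):
            o, r, _, _ = c.step(a)
            obs.extend(o)
            total += r
        return obs, total, False, {}

# e.g., a building thermal component plus a PV component
agent = ToyMultiComponentAgent([ToyComponent([0.5], -1.0),
                                ToyComponent([0.9, 0.1], 0.0)])
obs0 = agent.reset()                               # [0.5, 0.9, 0.1]
obs1, reward, done, info = agent.step([0.0, 0.0])  # reward == -1.0
```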
To demonstrate this feature, we developed a simple example with three different agents consisting of one smart building---simulated as a \texttt{MultiComponentEnv} composed of a five-zone building, a solar panel, and a battery component environment---and two independent component environments representing a PV array and an electric vehicle (EV) charging station. The agents here are loosely coupled according to their reward structures and observation spaces, as described next. \emph{Smart building ($i=1$).} The five-zone building has a simple reward function characterized by a soft constraint that zone temperatures be maintained within a comfort range. The building thermal model used is the same as in Section \ref{subsec-building-coordination}; the reward function is similar, except that it does not take power consumption into account. \emph{PV array ($i=2$)}. Next, we include a controllable PV array as a source of real power injection, with the purpose of mitigating voltage violations stemming from high real power demand on the distribution feeder. We model a simple control whereby the real power injection can be curtailed between 0\% and 100\% of available power from the panels; the observation space consists of both real power available from the panels and the minimum bus voltage on the feeder, $v_{min}$. (The scenario we consider is stable with respect to maximum feeder voltage.) The reward function is given by a soft penalty on the minimum bus voltage, which is computed using OpenDSS. \emph{EV charging station ($i=3$)}. Finally, we consider an EV charging station with an aggregate, continuous control, $a_3 \in [0, 1]$, representing the rate of charging for all charging vehicles. For example, with action $a_3 = 0.25$, all charging vehicles will charge at 25\% of the maximum possible rate. The distribution of vehicles is control dependent because, as vehicles become fully charged, they leave the station and thus reduce the aggregate load profile. 
Furthermore, each vehicle has prespecified (exogenous) arrival and departure times, before and after which it cannot charge. The observation space consists of a handful of continuous variables characterizing the station's current occupancy and aggregate power consumption, as well as aggregate information about the state of the charging vehicles. The reward function balances the local task of meeting demand with a grid-supportive task of keeping the total real power consumption under a peak threshold. Note that, while the charging station does not directly respond to grid signals, the soft constraint on peak power incentivizes load shifting. Using RLLib's multi-agent training framework, we train separate proximal policy optimization (PPO) \cite{schulman2017proximal} policies for each agent, with each agent attempting to optimize its own reward. Although training multi-agent policies in this way is generally challenging due to nonstationarity, here, the agents are only loosely coupled through bus voltages in the PV agent's reward function, and training converges without issue---see Fig. \ref{fig-heterogeneous-ppo}. The lower panel in the figure shows the PPO loss function for each agent's policy, \begin{align} \mathcal{L} = \mathbb{E} \Bigg[ \sum_{t=0}^T \min \left( \frac{\pi_{\theta}(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\hat{A}(a_t, s_t), g\left(\epsilon, \hat{A}(a_t, s_t)\right) \right)\Bigg] \label{eqn-ppo-loss} \end{align} where $\hat{A}$ is the advantage estimator, $g(\epsilon, \cdot)$ is a clipping function with threshold $\epsilon$, and $\theta_{old}$ refers to the policy weights from the previous training iteration. We refer the reader to \cite{schulman2017proximal} for additional details about the PPO algorithm and loss function. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{img/heterogeneous-ppo.png} \caption{Learning curves for independent PPO policies for the heterogeneous control problem trained using RLLib. 
The PPO loss function, $\mathcal{L}$, is given in (\ref{eqn-ppo-loss}). All x-axes represent training iterations.} \label{fig-heterogeneous-ppo} \end{figure} \section{Conclusion} PowerGridworld fills a gap in MARL for power systems by providing users with a lightweight framework for rapidly prototyping customized, grid-interactive, multi-agent simulations with a bring-your-own-model philosophy. The multi-agent Gym API and other API converters enable users to rapidly integrate with existing MARL training frameworks, including RLLib (multiple algorithms) and OpenAI's MADDPG implementation. Unlike the CityLearn and GridLearn software packages, PowerGridworld does not provide carefully designed benchmarks for a given application, such as demand response with voltage regulation. Rather, it provides the user with abstractions that streamline experimentation with novel multi-agent scenarios, component Gym environments, and MARL algorithms where the power flow solutions are essential to the problem. Integration with RLLib, in particular, paves the way for the use of supercomputing and HPC resources for RL training, which will become ever more important as the complexity of MARL simulations continues to increase. \bibliographystyle{IEEEtran}
\section{Introduction} Future wireless communication systems are envisioned to provide high data-rate communication services \cite{wong2017key}. Inspired by recent advances in electromagnetic metamaterials, revolutionary new metasurfaces, called intelligent reflecting surfaces (IRSs), have been proposed for deployment in conventional communication networks to satisfy this demand \cite{cui2014coding}. In particular, comprising a number of programmable elements, IRSs can be smartly adapted to the channel conditions so as to proactively customize the radio propagation environment for enhancing the system performance \cite{yu2021smart}. Moreover, due to the passive nature of the reflecting elements, e.g., diodes and phase shifters, the power required for maintaining the IRS operation is typically very small \cite{cui2014coding}. Furthermore, commonly fabricated as thin rectangular surfaces, IRSs can be flexibly deployed to coexist with existing infrastructure and smoothly integrated into conventional communication systems. \par These favorable properties have motivated numerous works to study IRSs for performance enhancement of conventional communication systems \cite{wu2019intelligent,8741198,yu2020power}. Yet, in practice, the end-to-end path loss of the base station (BS)-IRS-receiver link is in general much larger than that of the unobstructed direct link due to the double path loss effect \cite{9306896}. Hence, employing passive IRSs may not effectively enhance the system performance. To compensate for the severe double path loss in the cascaded IRS channel, one has to adopt a large passive IRS comprising hundreds if not thousands of phase shift elements to achieve a significant passive beamforming gain \cite{9306896}, \cite{xu2021optimal}.
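For intuition, the severity of the double path loss effect can be illustrated with a generic distance-based path loss model; this is an illustrative assumption, not a channel model adopted in this paper. With path loss exponent $\alpha$, BS-IRS distance $d_1$, IRS-user distance $d_2$, and direct-link distance $d_0$ (all normalized to a $1\,$m reference),

```latex
% Illustrative only: generic distance-based path loss with normalized distances.
\frac{L_{\mathrm{cascaded}}}{L_{\mathrm{direct}}}
  \propto \frac{(d_1 d_2)^{-\alpha}}{d_0^{-\alpha}}
  = \left(\frac{d_0}{d_1 d_2}\right)^{\alpha},
\qquad\text{e.g., } \alpha = 2,\; d_0 = 100,\; d_1 = d_2 = 50
  \;\Rightarrow\; \left(\tfrac{100}{2500}\right)^{2}
  = 1.6\times 10^{-3}\;(\approx -28\,\text{dB}).
```

This order-of-magnitude gap is why a passive IRS needs very many elements before the reflected link becomes competitive with the direct link.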
However, deploying a large number of passive IRS elements significantly increases the signaling overhead for channel estimation and the complexity of IRS optimization \cite{wu2019intelligent,8741198,yu2020power}, which makes the design of IRS-assisted wireless systems challenging in practice. To circumvent these issues, the authors of \cite{zhang2021active} recently proposed a new IRS structure, namely, active IRSs. In particular, equipped with reflection-type amplifiers \cite{lonvcar2019ultrathin}, \cite{you2021wireless}, active IRSs can not only reflect the incident signals by manipulating the programmable IRS elements, but also amplify the reflected signal with the support of an extra power supply. We note that active IRSs are fundamentally different from full-duplex amplify-and-forward (FD-AF) relays in terms of hardware architecture and the mode of transmission. Specifically, equipped with radio frequency (RF) chains, FD-AF relays are able to receive the incident signal and then transmit it after amplification at the expense of self-interference. This process introduces a delay incurred by the signal processing at the relay. In contrast, equipped with low-power reflection-type amplifiers, active IRSs reflect and amplify the incident signal instantaneously, and the resulting delay between the direct link and the reflected link is negligibly small compared to the symbol duration \cite{wu2019intelligent}. Moreover, the signals received at different relay antennas are jointly amplified via an amplification matrix. In contrast, for active IRSs, the signals received at different IRS elements are individually amplified. On the other hand, compared to conventional passive IRSs, active IRSs can effectively compensate for the double path loss effect without significantly complicating the IRS design \cite{zhang2021active}.
To illustrate this, the authors of \cite{zhang2021active} studied the joint transmit and reflect beamforming design for maximization of the spectral efficiency of an active IRS-assisted multiuser communication system. The resource allocation algorithm design was formulated as a series of quadratically constrained quadratic programming (QCQP) problems which were tackled in an alternating manner. In fact, to realize the potential gains facilitated by active IRSs, the appropriate amount of power has to be assigned to each element of the active IRS from the limited available power. As a result, compared to systems assisted by conventional passive IRSs, it is more important to carefully design the BS beamforming such that the power consumption of the whole system is still affordable and the quality-of-service (QoS) requirements of the users can be satisfied. Alternating optimization (AO)-based optimization frameworks cannot effectively handle the aforementioned power minimization problem. In particular, such problems cannot be easily transformed to standard QCQP or second-order cone program (SOCP) problems with convex constraints that can be efficiently solved by employing AO-based algorithms \cite{wu2019intelligent}, \cite{zhang2021active}. Moreover, by dividing the coupled optimization variables into disjoint groups, AO-based algorithms inevitably eliminate the joint optimality of the BS beamforming vectors, the IRS amplification factor matrix, and the IRS phase shift matrix in the considered power minimization problem, which may lead to unsatisfactory performance \cite{bezdek2002some}. Furthermore, for the considered power minimization problem, the monotonicity of the objective value during AO cannot be guaranteed because of the required Gaussian randomization \cite{yu2020power}.
\par Motivated by the above discussion, in this paper, we investigate the resource allocation algorithm design for active IRS-assisted communication systems, where the active IRS can amplify the reflected signal exploiting an additional power source. To this end, we aim to minimize the transmit power of the BS by jointly designing the BS beamformers and the IRS reflection matrix, taking into account the QoS requirements of the users and the maximum power budget of the active IRS. Since the optimization variables are highly coupled in the resulting non-convex optimization problem, the corresponding globally optimal solution is challenging to obtain. As a compromise, by capitalizing on bilinear transformation, inner approximation, and semidefinite relaxation, we develop a novel iterative algorithm, which enjoys low computational complexity. The proposed algorithm is guaranteed to converge to a locally optimal solution of the considered problem. Our simulation results reveal that active IRSs are a promising solution to fully exploit the potential of IRS-assisted wireless systems, especially when non-negligible direct links exist. \par \textit{Notation:} Vectors and matrices are denoted by boldface lower case and boldface capital letters, respectively. $\mathbb{R}_+^{N\times M}$ and $\mathbb{C}^{N\times M}$ denote the spaces of $N\times M$ positive real-valued matrices and complex-valued matrices, respectively. $\Re\left \{ \cdot \right \}$ extracts the real part of a complex number. $|\cdot|$ and $||\cdot||$ denote the absolute value of a complex scalar and the Euclidean norm of its argument, respectively. $\mathbf{I}_{N}$ refers to the identity matrix of dimension $N$. $\mathbb{H}^{N}$ denotes the set of complex Hermitian matrices of dimension $N$. $\mathbf{A}^H$ refers to the conjugate transpose of matrix $\mathbf{A}$. $\mathbf{A}\succeq\mathbf{0}$ indicates that $\mathbf{A}$ is a positive semidefinite matrix. 
$||\mathbf{A}||_F$, $\mathrm{Tr}(\mathbf{A})$, and $\mathrm{Rank}(\mathbf{A})$ denote the Frobenius norm, the trace, and the rank of matrix $\mathbf{A}$, respectively. $\mathrm{diag}(\mathbf{a})$ represents a diagonal matrix whose main diagonal elements are extracted from vector $\mathbf{a}$; $\mathrm{Diag}(\mathbf{A})$ denotes a vector whose elements are extracted from the main diagonal elements of matrix $\mathbf{A}$. $\mathcal{E}\left \{ \cdot \right \}$ represents statistical expectation. $\overset{\Delta }{=}$ and $\sim$ refer to ``defined as'' and ``distributed as'', respectively. $\mathcal{CN}(\mu ,\sigma^2)$ indicates the distribution of a circularly symmetric complex Gaussian random variable with mean $\mu$ and variance $\sigma^2$. $\mathbf{X}^*$ refers to the optimal value of optimization variable $\mathbf{X}$. \section{System Model} \begin{figure}[t] \vspace*{-6mm} \centering \includegraphics[width=2.2in]{Active_IRS_System.eps} \vspace*{-6mm} \caption{An active IRS-assisted communication system consisting of one multi-antenna BS and $K=2$ users. The active IRS is supported by a power supply. The direct links and reflected links between the BS and the users are denoted by red dashed lines and blue dashed lines, respectively.} \label{system_model}\vspace*{-4mm} \end{figure} We consider an active IRS-assisted multiuser multiple-input single-output (MISO) communication system, cf. Figure \ref{system_model}. The BS is equipped with $N_\mathrm{T}$ antennas while all $K$ users are single-antenna devices. To enhance the performance of the considered system, an active IRS is employed to assist the information transmission from the BS to the users. In particular, the active IRS is composed of $M$ elements and is supported by an additional power source. Equipped with an integrated active reflection-type amplifier, each IRS element can not only smartly alter the phase of the incident signals, but also amplify the reflected signal for effective beamforming.
To establish a performance upper bound for the considered system, we assume that the perfect channel state information (CSI) of the entire system is available at the BS. The CSI can be acquired with one of the existing channel estimation schemes proposed for IRS-assisted wireless systems \cite{9366805}, \cite{9087848}. To simplify the notation, we collect the indices of the users and IRS elements in sets $\mathcal{K}=\left \{1,\cdots ,K \right \}$ and $\mathcal{M}=\left \{1,\cdots ,M \right \}$, respectively. \par In each scheduled time slot, the signal vector $\mathbf{x}$ transmitted by the BS is constructed as follows \begin{equation} \mathbf{x}=\underset{k\in\mathcal{K}}{\sum }\mathbf{w}_kb_k, \end{equation} where $\mathbf{w}_k\in\mathbb{C}^{N_\mathrm{T}\times 1}$ and $b_k\in \mathbb{C}$ denote the beamforming vector for user $k$ and the corresponding information symbol. We assume $\mathcal{E}\{\left |b_k\right|^2\}=1$, $\forall\mathit{k} \in \mathcal{K}$, without loss of generality. \par Employing reflection-type amplifiers \cite{lonvcar2019ultrathin} driven by a common power supply, the signal reflected and amplified by the active IRS is given by \begin{equation} \mathbf{y}=\mathbf{A}\bm{\Theta}\mathbf{G}\mathbf{x}+\underbrace{\mathbf{A}\bm{\Theta}\mathbf{d}}_{\text{dynamic noise}}+\underbrace{\mathbf{s}}_{\text{static noise}},\label{IRSsignal} \end{equation} where $\mathbf{A}\overset{\Delta}{=}\mathrm{diag}(a_1,\cdots,a_M)\in\mathbb{R}^{M\times M}_+$ and $\bm{\Theta}\overset{\Delta}{=}\mathrm{diag}(e^{j\psi_1 },\cdots,e^{j\psi _M})\in\mathbb{C}^{M\times M}$ denote the amplification factor matrix and the phase shift matrix of the active IRS, respectively. Matrix $\mathbf{G}\in\mathbb{C}^{M\times N_\mathrm{T}}$ denotes the channel between the BS and the IRS. Moreover, we observe from \eqref{IRSsignal} that the noises at the IRS can be divided into two categories, i.e., dynamic noise and static noise \cite{zhang2021active}. 
In particular, the dynamic noise is generated due to the power amplification \cite{you2021wireless}, where $\mathbf{d}\in\mathbb{C}^{M\times 1}$ is modelled as additive white Gaussian noise (AWGN) with variance $\sigma_d^2$, i.e., $\mathbf{d}\sim\mathcal{CN}(\mathbf{0}_{M},\sigma_d^2\mathbf{I}_{M})$ \cite{zhang2021active}. The static noise $\mathbf{s}\in\mathbb{C}^{M\times 1}$ is modelled as AWGN with variance $\sigma_s^2$, i.e., $\mathbf{s}\sim\mathcal{CN}(\mathbf{0}_{M},\sigma_s^2\mathbf{I}_{M})$; it is not affected by $\mathbf{A}$, and its power is usually negligibly small compared to that of the dynamic noise $\mathbf{A}\bm{\Theta}\mathbf{d}$ \cite{6047578}. \par The received signal at user $k$ is given by \begin{eqnarray} r_k\hspace*{-6mm}&&=\underbrace{(\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{G})\mathbf{w}_kb_k}_{\text{desired signal}}+\underbrace{(\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{G})\underset{\substack{r\in\mathcal{K}\\r\neq k}}{\sum }\mathbf{w}_rb_r}_{\text{multiuser interference}}\notag\\ \hspace*{-6mm}&&+\underbrace{\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{d}}_{\text{dynamic noise introduced by the IRS}}+\underbrace{n_k}_{\text{noise introduced at user $k$}}, \end{eqnarray} where $\mathbf{h}_{\mathrm{D},k}\in\mathbb{C}^{N_\mathrm{T}\times 1}$ and $\mathbf{h}_{\mathrm{R},k}\in\mathbb{C}^{M\times 1}$ denote the channel vectors of the BS-user $k$ link (direct link) and the IRS-user $k$ link (reflected link), respectively. $n_k$ represents the AWGN at user $k$ with zero mean and variance $\sigma_{n_k}^2$, i.e., $n_k\sim\mathcal{CN}(0,\sigma_{n_k}^2)$.
\par \section{Problem Formulation} The received signal-to-interference-plus-noise ratio (SINR) of user $k$ is given by \begin{eqnarray} \Gamma _k= \frac{\left | (\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{G})\mathbf{w}_k\right |^2}{\underset{\substack{r\in\mathcal{K}\\r\neq k}}{\sum }\left | (\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{G})\mathbf{w}_r\right |^2\hspace*{-1mm}+\sigma_d^2\left \|\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta} \right \|^2\hspace*{-1mm}+\sigma_{n_k}^2}. \end{eqnarray} \par In this paper, we aim to minimize the BS transmit power while satisfying the QoS requirements of the users and the maximum power allowance of the active IRS. In particular, the joint design of the BS beamforming vectors, the IRS amplification factor matrix, and the IRS phase shift matrix, i.e., $\left \{\mathbf{w}_k,\mathbf{A},\bm{\Theta} \right \}$, is obtained by solving the following optimization problem \begin{eqnarray} \label{prob1} &&\hspace*{-12mm}\underset{\mathbf{w}_k,\mathbf{A},\bm{\Theta}}{\mino} \,\, \,\, \underset{k\in\mathcal{K}}{\sum }\left \|\mathbf{w}_k\right \|^2\notag\\ &&\hspace*{-12mm}\mbox{subject to}\hspace*{2mm} \mbox{C1:}\hspace*{1mm}\Gamma_{\mathrm{req}_k}\leq\Gamma_k,\hspace*{1mm}\forall k,\hspace*{2mm}\mbox{C2:}\hspace*{1mm}\underset{k\in\mathcal{K}}{\sum }\left \|\mathbf{A}\bm{\Theta}\mathbf{G} \mathbf{w}_k \right \|^2+\sigma_d^2\left \|\mathbf{A}\bm{\Theta}\right \|_F^2\leq P_{\mathrm{A}}. \end{eqnarray} Here, $\Gamma_{\mathrm{req}_k}$ in constraint C1 is the minimum required SINR of user $k$. Constraint C2 indicates that the amplification power of the active IRS should be less than or equal to the maximum power allowance $P_{\mathrm{A}}$. We note that the optimization problem in \eqref{prob1} is non-convex due to the coupled optimization variables and the fractional constraint C1. 
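To make the quantities entering constraints C1 and C2 concrete, the SINR $\Gamma_k$ can be evaluated directly for a given channel realization. The following sketch implements the SINR expression above (illustrative only; all function and variable names are ours, not from the paper):

```python
import numpy as np

def sinr(k, hD, hR, A, Theta, G, W, sigma_d2, sigma_n2):
    """Received SINR of user k for given channels and beamformers.

    hD: list of (N_T,) direct-link channels h_{D,k}; hR: list of (M,)
    reflected-link channels h_{R,k}; A, Theta: (M, M) diagonal amplification
    and phase-shift matrices; G: (M, N_T) BS-IRS channel; W: list of (N_T,)
    beamforming vectors w_k.
    """
    h_eff = hD[k].conj() + hR[k].conj() @ A @ Theta @ G   # effective 1 x N_T channel
    signal = abs(h_eff @ W[k]) ** 2
    interference = sum(abs(h_eff @ W[r]) ** 2 for r in range(len(W)) if r != k)
    dyn_noise = sigma_d2 * np.linalg.norm(hR[k].conj() @ A @ Theta) ** 2
    return signal / (interference + dyn_noise + sigma_n2)
```

For a single-antenna, single-element toy example ($h_{\mathrm{D}}=h_{\mathrm{R}}=g=w=1$, $a=2$, $\psi=0$, unit noise powers), the effective channel is $3$ and the SINR evaluates to $9/5$.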
Next, by employing the bilinear transformation and IA, we develop an iterative low-complexity algorithm which is guaranteed to converge to a locally optimal solution of the problem in \eqref{prob1}. \par \begin{Remark} Compared to passive IRS design, active IRS design sidesteps the unit-modulus constraint; however, it introduces the additional non-convex constraint C2 which aggravates the coupling between the optimization variables. In fact, for resource allocation design for IRS-assisted systems, the coupling between the optimization variables is an unavoidable obstacle. For passive IRSs, this obstacle is commonly tackled by employing AO-based algorithms \cite{zhang2021active}, \cite{8930608} or IA-based algorithms \cite{yu2020power}. However, employing AO-based algorithms destroys the joint optimality of the optimization variables, which may lead to unsatisfactory system performance. Moreover, it has been shown in \cite{yu2020power} that for power minimization problems, the commonly adopted AO-based algorithm with Gaussian randomization is not guaranteed to generate a monotonically decreasing sequence of the objective function values during the iterations. On the other hand, when directly applying IA, the matrix $\bm{\Theta}=\mathrm{diag}(e^{j\psi_1 },\cdots,e^{j\psi _M})$ at the IRS is first transformed into a vector $\mathbf{v}=[e^{j\psi_1 },\cdots,e^{j\psi _M}]^H$ \cite{8930608}. Then, a new optimization variable $\mathbf{V}$ is defined as $\mathbf{V}=\mathbf{v}\mathbf{v}^H$, which imposes three additional constraints on the considered optimization problem, i.e., $\mathbf{V}\succeq \mathbf{0}$, $\mathrm{Diag}(\mathbf{V})=\mathbf{1}$, and a non-convex constraint $\mathrm{Rank}(\mathbf{V})=1$. In the literature, the rank-one constraint is usually removed by employing SDR. However, by doing so, the rank of the obtained solution is in general larger than one \cite{luo2010semidefinite}.
Alternatively, $\mathrm{Rank}(\mathbf{V})=1$ can be equivalently transformed into a difference of norm functions, and then be tackled by a penalty-based algorithm \cite{xu2020resource}. However, since the penalty factor cannot be infinitely large in practice, such an approach can only guarantee a suboptimal solution. To circumvent these obstacles, in this paper, for active IRSs, we employ bilinear transformation and IA and develop a low-complexity iterative algorithm which is guaranteed to converge to a locally optimal solution of the optimization problem in \eqref{prob1} \cite{marks1978general}. \end{Remark} \section{Solution of the Optimization problem} \subsection{Bilinear Transformation} Note that matrices $\mathbf{A}$ and $\bm{\Theta}$ in \eqref{prob1} always appear in product form. Hence, we rewrite the product term $\mathbf{A}\bm{\Theta}$ as $\bm{\Psi}=\mathrm{diag}(a_1e^{j\psi_1 },\cdots,a_Me^{j\psi _M})\in\mathbb{C}^{M\times M}$. Then, the quadratic term $\sigma_d^2\left \| \mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\right \|^2$ in constraint C1 can be rewritten as follows \begin{equation} \sigma_d^2\left \| \mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\right \|^2=\sigma_d^2\mathrm{Tr}(\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}\bm{\Psi} ), \end{equation} where $\mathbf{H}_{\mathrm{R},k}\in\mathbb{C}^{M\times M}$ is defined as $\mathbf{H}_{\mathrm{R},k}=\mathbf{h}_{\mathrm{R},k}\mathbf{h}_{\mathrm{R},k}^H$. 
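This vector-norm-to-trace rewriting is easily verified numerically; a quick sketch with a random channel and random active-IRS coefficients (illustrative dimensions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6
h_R = rng.standard_normal(M) + 1j * rng.standard_normal(M)     # h_{R,k}
amps = rng.uniform(0.5, 2.0, M)                                # amplification factors a_m > 0
Psi = np.diag(amps * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)))
H_R = np.outer(h_R, h_R.conj())                                # H_{R,k} = h_{R,k} h_{R,k}^H

lhs = np.linalg.norm(h_R.conj() @ Psi) ** 2                    # ||h_{R,k}^H Psi||^2
rhs = np.trace(Psi.conj().T @ H_R @ Psi).real                  # Tr(Psi^H H_{R,k} Psi)
assert np.isclose(lhs, rhs)
```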
To facilitate the application of the IA algorithm, we define $\mathbf{W}_k=\mathbf{w}_k\mathbf{w}_k^H$, $\forall k$, and rewrite the quadratic term $\left | (\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\bm{\Psi}\mathbf{G})\mathbf{w}_r\right |^2$ in constraint C1 as follows \begin{eqnarray} \label{vectornormtotrace} &&\left | (\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\bm{\Psi}\mathbf{G})\mathbf{w}_r\right |^2\notag\\ =\hspace*{-4mm}&&\mathbf{h}_{\mathrm{D},k}^H\mathbf{W}_r\mathbf{h}_{\mathrm{D},k}+\mathbf{h}_{\mathrm{R},k}^H\bm{\Psi}\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{h}_{\mathrm{R},k} +2\Re\left \{\mathbf{h}_{\mathrm{D},k}^H\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{h}_{\mathrm{R},k} \right \}\notag\\ =\hspace*{-4mm}&&\mathrm{Tr}\left ( \begin{bmatrix} \mathbf{h}_{\mathrm{R},k} \\ \mathbf{h}_{\mathrm{D},k} \end{bmatrix} \begin{bmatrix} \mathbf{h}_{\mathrm{R},k}^H & \mathbf{h}_{\mathrm{D},k}^H \end{bmatrix} \begin{bmatrix} \mathbf{0} & \bm{\Psi}\mathbf{G}\mathbf{W}_r^H \\ \mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H & \mathbf{0} \end{bmatrix}\right )\notag\\ +\hspace*{-4mm}&&\mathrm{Tr}(\mathbf{H}_{\mathrm{D},k}\mathbf{W}_r)+\mathrm{Tr}(\bm{\Psi}\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}), \end{eqnarray} where $\mathbf{H}_{\mathrm{D},k}\in\mathbb{C}^{N_\mathrm{T}\times N_\mathrm{T}}$ is defined as $\mathbf{H}_{\mathrm{D},k}=\mathbf{h}_{\mathrm{D},k}\mathbf{h}_{\mathrm{D},k}^H$. 
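The expansion in \eqref{vectornormtotrace} can likewise be checked numerically; the sketch below compares the squared magnitude with the sum of the three trace terms in the second line of \eqref{vectornormtotrace} for random data (illustrative; $\mathbf{W}_r=\mathbf{w}_r\mathbf{w}_r^H$):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N_T = 5, 4
h_D = rng.standard_normal(N_T) + 1j * rng.standard_normal(N_T)
h_R = rng.standard_normal(M) + 1j * rng.standard_normal(M)
G = rng.standard_normal((M, N_T)) + 1j * rng.standard_normal((M, N_T))
Psi = np.diag(rng.uniform(0.5, 2.0, M) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)))
w = rng.standard_normal(N_T) + 1j * rng.standard_normal(N_T)
W = np.outer(w, w.conj())                                      # W_r = w_r w_r^H
H_D = np.outer(h_D, h_D.conj())
H_R = np.outer(h_R, h_R.conj())

# |(h_D^H + h_R^H Psi G) w|^2 expanded into trace terms
lhs = abs((h_D.conj() + h_R.conj() @ Psi @ G) @ w) ** 2
rhs = (np.trace(H_D @ W)
       + np.trace(Psi @ G @ W @ G.conj().T @ Psi.conj().T @ H_R)
       + 2 * (h_D.conj() @ W @ G.conj().T @ Psi.conj().T @ h_R).real).real
assert np.isclose(lhs, rhs)
```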
Then, we recast the optimization problem in \eqref{prob1} equivalently as follows \begin{eqnarray} \label{prob2} &&\underset{\bm{\Psi},\mathbf{W}_k\in\mathbb{H}^{N_{\mathrm{T}}}}{\mino} \,\, \,\, \hspace*{2mm}\underset{k\in\mathcal{K}}{\sum }\mathrm{Tr}(\mathbf{W}_k)\notag\\ &&\mbox{subject to}\hspace*{4mm} \mbox{C1:}\hspace*{1mm}\Gamma _k\geq \Gamma_{\mathrm{req}_k},\hspace*{1mm}\forall k,\notag\\ &&\hspace*{21mm}\mbox{C2:}\hspace*{1mm}\underset{k\in\mathcal{K}}{\sum }\mathrm{Tr}(\bm{\Psi}\mathbf{G} \mathbf{W}_k\mathbf{G}^H\bm{\Psi}^H)+\sigma_d^2\mathrm{Tr}(\bm{\Psi}\bm{\Psi}^H)\leq P_{\mathrm{A}},\notag\\ &&\hspace*{21mm}\mbox{C3:}\hspace*{1mm}\mathbf{W}_k\succeq\mathbf{0},\hspace*{1mm}\forall k,\hspace*{10mm}\mbox{C4:}\hspace*{1mm}\mathrm{Rank}(\mathbf{W}_k)\leq 1,\hspace*{1mm}\forall k. \end{eqnarray} We note that the coupling between $\mathbf{W}_k$ and $\bm{\Psi}$ in constraints C1 and C2 and the rank-one constraint C4 are obstacles to solving \eqref{prob2}. Next, we take the term $\mathrm{Tr}(\bm{\Psi}\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k})$ as an example to illustrate how to construct a convex subset for the non-convex constraint C1. Note the fact that for arbitrary matrices $\mathbf{C}$ and $\mathbf{D}$ having the same dimensions, we have $\mathrm{Tr}(\mathbf{C}\mathbf{D})=\frac{1}{2}\left \|\mathbf{C}+\mathbf{D} \right \|_F^2-\frac{1}{2}\mathrm{Tr}(\mathbf{C}^H\mathbf{C})-\frac{1}{2}\mathrm{Tr}(\mathbf{D}^H\mathbf{D})$. 
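This identity follows from expanding the squared Frobenius norm; for complex-valued matrices the expansion produces the Hermitian inner product $\Re\{\mathrm{Tr}(\mathbf{C}^H\mathbf{D})\}$. A quick numerical check of the expansion (a sketch, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# (1/2)||C+D||_F^2 - (1/2)Tr(C^H C) - (1/2)Tr(D^H D) = Re{Tr(C^H D)}
lhs = (0.5 * np.linalg.norm(C + D, 'fro') ** 2
       - 0.5 * np.trace(C.conj().T @ C).real
       - 0.5 * np.trace(D.conj().T @ D).real)
rhs = np.trace(C.conj().T @ D).real
assert np.isclose(lhs, rhs)
```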
Hence, we first rewrite the coupling term $\mathrm{Tr}(\bm{\Psi}\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k})$ as follows \begin{eqnarray} \label{decoupletrans} \mathrm{Tr}(\bm{\Psi}\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k})=&&\hspace*{-6mm}\frac{1}{2}\left \|\bm{\Psi}+\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}\right \|_F^2-\frac{1}{2}\mathrm{Tr}( \bm{\Psi}^H\bm{\Psi})\notag\\ -&&\hspace*{-6mm}\frac{1}{2}\mathrm{Tr}(\mathbf{H}_{\mathrm{R},k}^H \bm{\Psi}\mathbf{G}\mathbf{W}_r^H\mathbf{G}^H \mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}). \end{eqnarray} We note that the right-hand side term of \eqref{decoupletrans} contains a bilinear function of optimization variables $\mathbf{W}_r$ and $\bm{\Psi}$, i.e., $\mathbf{G}\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}$, which is still non-convex. To circumvent this challenge, we further define a new optimization variable $\mathbf{Z}_r=\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H$, where $\mathbf{Z}_r\in\mathbb{C}^{N_\mathrm{T}\times M}$. Then, we introduce the following lemma to transform the constraint $\mathbf{Z}_r=\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H$ to a more tractable form. 
\par \textit{Lemma 1}:\hspace*{1mm}The equality constraint $\mathbf{Z}_r=\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H$ is equivalent to the following inequality constraints: \begin{eqnarray} &&\mbox{C5:}\hspace*{1mm}\begin{bmatrix} \mathbf{U}_r & \mathbf{Z}_r & \mathbf{W}_r \mathbf{G}^H \\ \mathbf{Z}_r^H & \mathbf{V}_r & \bm{\Psi}\\ \mathbf{G}\mathbf{W}_r^H & \bm{\Psi}^H & \mathbf{I}_M \end{bmatrix}\succeq\mathbf{0},\hspace*{1mm}\forall r\in\mathcal{K},\\[-2mm]\notag\\ &&\mbox{C6:}\hspace*{1mm}\mathrm{Tr}\left(\mathbf{U}_r-\mathbf{W}_r \mathbf{G}^H\mathbf{G}\mathbf{W}_r^H\right)\leq 0,\hspace*{1mm}\forall r\in\mathcal{K}, \end{eqnarray} where $\mathbf{U}_r\in\mathbb{C}^{N_\mathrm{T}\times N_\mathrm{T}}$ and $\mathbf{V}_r\in\mathbb{C}^{M\times M}$ are auxiliary optimization variables. \par \textit{Proof:~}The equality constraint $\mathbf{Z}_r=\mathbf{W}_r \mathbf{G}^H\bm{\Psi}^H$ has a similar structure as the constraint in \cite[Eq. (3)]{6698281} and Lemma 1 can be proved by closely following the same steps as in \cite[Appendix A]{6698281}. Due to space limitations, we omit the detailed proof of Lemma 1. \par \subsection{Inner Approximation} After employing the proposed bilinear transformation, we can rewrite the right-hand side term of \eqref{decoupletrans} as follows \begin{eqnarray} \frac{1}{2}\left \|\bm{\Psi}+\mathbf{G}\mathbf{Z}_r\mathbf{H}_{\mathrm{R},k}\right \|_F^2-\frac{1}{2}\mathrm{Tr}\left( \bm{\Psi}^H\bm{\Psi}\right)-\frac{1}{2}\mathrm{Tr}\left(\mathbf{H}_{\mathrm{R},k}^H \mathbf{Z}_r^H\mathbf{G}^H\mathbf{G}\mathbf{Z}_r\mathbf{H}_{\mathrm{R},k}\right). \end{eqnarray} We note that the quadratic terms $\mathrm{Tr}( \bm{\Psi}^H\bm{\Psi})$ and $\mathrm{Tr}(\mathbf{H}_{\mathrm{R},k}^H \mathbf{Z}_r^H\mathbf{G}^H\mathbf{G}\mathbf{Z}_r\mathbf{H}_{\mathrm{R},k})$ are obstacles for efficient algorithm design.
To handle this issue, we construct respective global underestimators for these terms by employing their first-order Taylor approximations via the iterative IA approach. In particular, we have \begin{eqnarray} &&\mathrm{Tr}\left (\bm{\Psi}^H\bm{\Psi}\right )\geq \mathrm{Tr}\left (\left (2\bm{\Psi}^{(j)}\right )^H\bm{\Psi}\right )-\left \|\bm{\Psi}^{(j)} \right \|_F^2,\label{taylorapproximation1}\\ &&\mathrm{Tr}\left (\mathbf{H}_{\mathrm{R},k}^H \mathbf{Z}_r^H\mathbf{G}^H\mathbf{G}\mathbf{Z}_r\mathbf{H}_{\mathrm{R},k}\right )\geq\mathrm{Tr}\left (\left (2\mathbf{G}^H\mathbf{G}\mathbf{Z}_r^{(j)}\mathbf{H}_{\mathrm{R},k}\mathbf{H}_{\mathrm{R},k}^H\right )^H\mathbf{Z}_r\right)-\left \|\mathbf{G}\mathbf{Z}_r^{(j)}\mathbf{H}_{\mathrm{R},k}\right \|_F^2,\label{taylorapproximation2} \end{eqnarray} where $\bm{\Psi}^{(j)}$ and $\mathbf{Z}_r^{(j)}$ are intermediate solutions obtained in the $j$-th iteration and superscript $j$ denotes the iteration index of the optimization variables. Moreover, by applying steps similar to \eqref{vectornormtotrace}, \eqref{decoupletrans}, \eqref{taylorapproximation1}, and \eqref{taylorapproximation2}, we construct an upper bound for the term $-\left | (\mathbf{h}_{\mathrm{D},k}^H+\mathbf{h}_{\mathrm{R},k}^H\mathbf{A}\bm{\Theta}\mathbf{G})\mathbf{w}_k\right |^2$ in constraint C1.
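Both underestimators follow from the convexity of the squared Frobenius norm: for any $X$, $\|X\|_F^2 \geq 2\Re\{\mathrm{Tr}((X^{(j)})^H X)\} - \|X^{(j)}\|_F^2$, with equality at $X = X^{(j)}$, since the gap equals $\|X-X^{(j)}\|_F^2\geq 0$. A numerical sanity check (a sketch with random data; for \eqref{taylorapproximation2} we use the dimension-consistent gradient $\mathbf{G}^H\mathbf{G}\mathbf{Z}_r^{(j)}\mathbf{H}_{\mathrm{R},k}\mathbf{H}_{\mathrm{R},k}^H$ and take the real part of the linear term):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N_T = 5, 4

# Bound (taylorapproximation1): ||Psi||_F^2 >= 2 Re{Tr((Psi^(j))^H Psi)} - ||Psi^(j)||_F^2
Psi = np.diag(rng.uniform(0.5, 2.0, M) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)))
Psi_j = np.diag(rng.uniform(0.5, 2.0, M) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)))
lower1 = 2.0 * np.trace(Psi_j.conj().T @ Psi).real - np.linalg.norm(Psi_j, 'fro') ** 2
assert np.linalg.norm(Psi, 'fro') ** 2 >= lower1 - 1e-12

# Bound (taylorapproximation2): ||G Z H_R||_F^2 lower-bounded by its linearization at Z^(j)
h_R = rng.standard_normal(M) + 1j * rng.standard_normal(M)
H_R = np.outer(h_R, h_R.conj())
G = rng.standard_normal((M, N_T)) + 1j * rng.standard_normal((M, N_T))
Z = rng.standard_normal((N_T, M)) + 1j * rng.standard_normal((N_T, M))
Z_j = rng.standard_normal((N_T, M)) + 1j * rng.standard_normal((N_T, M))
grad = G.conj().T @ G @ Z_j @ H_R @ H_R.conj().T   # gradient of ||G Z H_R||_F^2 at Z^(j)
lower2 = 2.0 * np.trace(grad.conj().T @ Z).real - np.linalg.norm(G @ Z_j @ H_R, 'fro') ** 2
assert np.linalg.norm(G @ Z @ H_R, 'fro') ** 2 >= lower2 - 1e-9
```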
As a result, a convex subset of constraint C1 is obtained as \begin{eqnarray} \overline{\mbox{C1}}\mbox{:}\hspace*{1mm}&&\frac{\Gamma_{\mathrm{req}_k}}{2}\underset{r\in\mathcal{K}\setminus \left \{k\right \}}{\sum }\left \|\bm{\Psi}+\mathbf{G}\mathbf{Z}_r\mathbf{H}_{\mathrm{R},k}\right \|_F^2-[\Gamma_{\mathrm{req}_k}(K-1)-1]\left [\mathrm{Tr}\left (\left (\bm{\Psi}^{(j)}\right )^H\bm{\Psi}\right )-\frac{1}{2}\left \|\bm{\Psi}^{(j)} \right \|_F^2\right ] \notag\\ &&-\Gamma_{\mathrm{req}_k}\underset{r\in\mathcal{K}\setminus \left \{k\right \}}{\sum }\left [\mathrm{Tr}\left (\left (\mathbf{G}^H\mathbf{G}\mathbf{Z}_r^{(j)}\mathbf{H}_{\mathrm{R},k}\mathbf{H}_{\mathrm{R},k}^H\right )^H\mathbf{Z}_r\right)-\frac{1}{2}\left \|\mathbf{G}\mathbf{Z}_r^{(j)}\mathbf{H}_{\mathrm{R},k}\right \|_F^2\right ]\notag\\ &&+\Gamma_{\mathrm{req}_k}\left (\underset{r\in\mathcal{K}\setminus \left \{k\right \}}{\sum }\mathrm{Tr}(\mathbf{H}_{\mathrm{D},k}\mathbf{W}_r)+\sigma_d^2\mathrm{Tr}(\bm{\Psi}^H\mathbf{H}_{\mathrm{R},k}\bm{\Psi} )+\sigma_{n_k}^2\right )-\mathrm{Tr}(\mathbf{H}_{\mathrm{D},k}\mathbf{W}_k)\notag\\ &&-\frac{1}{2}\left \|\bm{\Psi}+\mathbf{G}\mathbf{Z}_k\mathbf{H}_{\mathrm{R},k}\right \|_F^2+\mathrm{Tr}\left (\left (\mathbf{G}^H\mathbf{G}\mathbf{Z}_k^{(j)}\mathbf{H}_{\mathrm{R},k}\mathbf{H}_{\mathrm{R},k}^H\right )^H\mathbf{Z}_k\right)-\frac{1}{2}\left \|\mathbf{G}\mathbf{Z}_k^{(j)}\mathbf{H}_{\mathrm{R},k}\right \|_F^2\notag\\ &&+\mathrm{Tr}\left ( \widetilde{\mathbf{h}}_k\widetilde{\mathbf{h}}_k^H \begin{bmatrix} \mathbf{0} & \Gamma_{\mathrm{req}_k}\underset{r\in\mathcal{K}\setminus \left \{k\right \}}{\sum }\mathbf{Z}_r^H-\mathbf{Z}_k^H \\ \Gamma_{\mathrm{req}_k}\underset{r\in\mathcal{K}\setminus \left \{k\right \}}{\sum }\mathbf{Z}_r-\mathbf{Z}_k & \mathbf{0} \end{bmatrix}\right )\leq 0,\hspace*{2mm}\forall k, \end{eqnarray} where $\widetilde{\mathbf{h}}_k^H\in\mathbb{C}^{1\times (M+N_\mathrm{T})}$ is defined as $\widetilde{\mathbf{h}}_k^H=[\mathbf{h}_{\mathrm{R},k}^H
\hspace*{2mm}\mathbf{h}_{\mathrm{D},k}^H]$. Similarly, constraint C2 can be approximated by the following convex constraint: \begin{eqnarray} \overline{\mbox{C2}}\mbox{:}\hspace*{1mm}&&\underset{k\in\mathcal{K}}{\sum }\left[\frac{1}{2}\left \|\bm{\Psi}+\mathbf{G}\mathbf{Z}_k\right \|_F^2-\mathrm{Tr}\left (\left (\mathbf{G}^H\mathbf{G}\mathbf{Z}_k^{(j)}\right )^H\mathbf{Z}_k\right)+\frac{1}{2}\left \|\mathbf{G}\mathbf{Z}_k^{(j)}\right \|_F^2\right]\notag\\ &&-K\left[\mathrm{Tr}\left (\left (\bm{\Psi}^{(j)}\right )^H\bm{\Psi}\right )-\frac{1}{2}\left \|\bm{\Psi}^{(j)} \right \|_F^2\right]+\sigma_d^2\mathrm{Tr}(\bm{\Psi}\bm{\Psi}^H)\leq P_{\mathrm{A}}. \end{eqnarray} \par On the other hand, we note that constraint C6 is in the canonical form of a difference of convex functions, which is non-convex. To tackle this obstacle, we again construct a global underestimator of $\mathrm{Tr}(\mathbf{W}_r \mathbf{G}^H\mathbf{G}\mathbf{W}_r^H)$. Specifically, we have \begin{equation} \mathrm{Tr}(\mathbf{W}_r \mathbf{G}^H\mathbf{G}\mathbf{W}_r^H)\geq-\left \|\mathbf{W}_r^{(j)} \mathbf{G}^H \right \|_F^2+2\mathrm{Tr}\left((\mathbf{G}^H\mathbf{G}\mathbf{W}_r^{(j)})^H\mathbf{W}_r\right). \end{equation} Then, constraint C6 can be approximated by the following convex constraint: \begin{equation} \overline{\mbox{C6}}\mbox{:}\hspace*{1mm}\mathrm{Tr}\left(\mathbf{U}_r\right)+\left \|\mathbf{W}_r^{(j)} \mathbf{G}^H \right \|_F^2-2\mathrm{Tr}\left((\mathbf{G}^H\mathbf{G}\mathbf{W}_r^{(j)})^H\mathbf{W}_r\right)\leq 0,\hspace*{1mm}\forall r\in\mathcal{K}.
\end{equation} \par Therefore, the optimization problem to be solved in the $(j+1)$-th iteration of the IA-based algorithm is given by \begin{eqnarray} \label{prob3} \hspace*{2mm}&&\underset{\substack{\bm{\Psi},\mathbf{W}_k\in\mathbb{H}^{N_{\mathrm{T}}},\\\mathbf{Z}_k,\mathbf{U}_k,\mathbf{V}_k}}{\mino} \,\, \,\, \hspace*{2mm}F(\mathbf{W}_k)\overset{\Delta }{=}\underset{k\in\mathcal{K}}{\sum }\mathrm{Tr}(\mathbf{W}_k)\notag\\ &&\mbox{subject to}\hspace*{6mm} \overline{\mbox{C1}},\overline{\mbox{C2}},\mbox{C3},\mbox{C4},\mbox{C5},\overline{\mbox{C6}}. \end{eqnarray} We note that the only obstacle to efficiently solving \eqref{prob3} is the rank-one constraint C4. To convexify the optimization problem in \eqref{prob3}, we apply SDR and remove constraint C4 from the formulation. Then, the resulting relaxed version of \eqref{prob3} becomes a standard convex optimization problem which can be optimally solved by convex program solvers such as CVX \cite{grant2008cvx}. Next, we introduce the following theorem to reveal the tightness of SDR. \par \textit{Theorem 1:~}Given any positive $\Gamma_{\mathrm{req}_k}$, the optimal beamforming matrix obtained from the relaxed version of \eqref{prob3}, i.e., $\mathbf{W}_k^*$, is always a rank-one matrix. \par \textit{Proof:~}Problem \eqref{prob3} has a structure similar to \cite[Problem (17)]{yu2020power} and Theorem 1 can be proved following the same steps as in \cite[Appendix]{yu2020power}. The detailed proof of Theorem 1 is omitted for brevity. \hfill \ensuremath{\blacksquare} \par \begin{algorithm}[t] \caption{IA-based Algorithm} \begin{algorithmic}[1] \small \STATE Set initial point $\mathbf{W}_k^{(j)}$, $\bm{\Psi}^{(j)}$, $\mathbf{Z}_k^{(j)}$, $\mathbf{U}_k^{(j)}$, $\mathbf{V}_k^{(j)}$, iteration index $j=1$, and error tolerance $0<\epsilon\ll1$.
\REPEAT \STATE For given $\mathbf{W}_k^{(j)}$, $\bm{\Psi}^{(j)}$, $\mathbf{Z}_k^{(j)}$, $\mathbf{U}_k^{(j)}$, $\mathbf{V}_k^{(j)}$, obtain the intermediate solution $\mathbf{W}_k^{(j+1)}$, $\bm{\Psi}^{(j+1)}$, $\mathbf{Z}_k^{(j+1)}$, $\mathbf{U}_k^{(j+1)}$, $\mathbf{V}_k^{(j+1)}$ by solving the rank constraint-relaxed version of problem \eqref{prob3} \STATE Set $j=j+1$ \UNTIL $\frac{F(\mathbf{W}_k^{(j-1)})-F(\mathbf{W}_k^{(j)})}{F(\mathbf{W}_k^{(j)})}\leq \epsilon$ \end{algorithmic} \end{algorithm} \par We summarize the proposed algorithm in \textbf{Algorithm 1}. Note that the objective function of \eqref{prob3} is monotonically non-increasing in each iteration of \textbf{Algorithm 1}. Moreover, according to \cite[Theorem 1]{marks1978general}, the proposed algorithm is guaranteed to converge to a locally optimal solution of \eqref{prob1} in polynomial time. The per iteration computational complexity of \textbf{Algorithm 1} is given by $\mathcal{O}\Big(\mathrm{log}(1/\epsilon)\big((3K+1)^3+(3K+1)^2N_{\mathrm{T}}^2+(3K+1)N_{\mathrm{T}}^3+(2K+1)^3+(2K+1)^2M^2+(2K+1)M^3\big)\Big)$, where $\mathcal{O}\left ( \cdot \right )$ is the big-O notation \cite[Theorem 3.12]{polik2010interior} and $\epsilon$ is the convergence tolerance of \textbf{Algorithm 1}. \section{Simulation Results} \begin{table}[t]\vspace*{0mm}\caption{System simulation parameters.}\vspace*{-2mm}\label{tab:parameters}\footnotesize \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \begin{tabular}{|l|l|l|}\hline \hspace*{-1mm}$f_c$ & Carrier center frequency & $2.4$ GHz \\ \hline \hspace*{-1mm}$\sigma_{n_k}^2$& Noise power at the users & $-114$ dBm \\ \hline \hspace*{-1mm}$\sigma_d^2$& Dynamic noise power & $-100$ dBm \cite{zhang2021active}\\ \hline \hspace*{-1mm}$\epsilon$ & Convergence tolerance & $10^{-3}$ \\ \hline \end{tabular} \vspace*{-2mm} \end{table} In this section, the system performance of the proposed resource allocation scheme is evaluated via simulations.
The BS is equipped with $N_{\mathrm{T}}=4$ antennas and serves one sector of a cell with a radius of $R$ m, where $K=3$ users are randomly and uniformly distributed in this sector. The active IRS comprises $M$ elements and is deployed at the edge of the sector. Moreover, the fading coefficients of all the channels are generated as independent and identically distributed Rician random variables with Rician factor $3$ dB. In addition, the path loss exponents for the direct links and the reflected links between the BS and the users are $\alpha_{\mathrm{d}}$ and $\alpha_{\mathrm{r}}$, respectively. For ease of presentation, we assume that the minimum required SINRs of all users are identical, i.e., $\Gamma_{\mathrm{req}_k}=\Gamma_{\mathrm{req}}$, $\forall k$. The adopted simulation parameter values are listed in Table \ref{tab:parameters}. \par For comparison, we consider two baseline schemes. For baseline scheme 1, we assume that an IRS is not deployed. Then, we optimize the beamforming vector $\mathbf{w}_k$ for minimization of the transmit power at the BS. For baseline scheme 2, we divide the power available at the active IRS, $P_{\mathrm{A}}$, equally among the IRS elements, i.e., $a_m=\sqrt{\frac{P_{\mathrm{A}}}{M}}$, $\forall m\in\mathcal{M}$, and generate the phases of the IRS elements in a random manner. Moreover, we adopt zero-forcing (ZF) beamforming at the BS. Then, we solve a problem similar to problem \eqref{prob1}, where we optimize the power allocated to user $k$, i.e., $p_k\in\mathbb{R}_+$. 
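For baseline scheme 2, the ZF beamformer can be sketched as the right pseudo-inverse of the stacked user channels; with ZF directions, the effective channel matrix becomes diagonal, i.e., the multiuser interference is nulled (illustrative sketch with random direct-link channels only; the reflected links are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
K, N_T = 3, 4
H = rng.standard_normal((K, N_T)) + 1j * rng.standard_normal((K, N_T))  # rows: h_{D,k}^H

# ZF directions: right pseudo-inverse of the stacked channel matrix; the
# per-user powers p_k then scale the unit-norm columns.
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_zf /= np.linalg.norm(W_zf, axis=0)          # normalize each beam direction
H_eff = H @ W_zf                              # effective channel after ZF

# Off-diagonal entries vanish, so each user sees no multiuser interference.
assert np.allclose(H_eff - np.diag(np.diag(H_eff)), 0.0)
```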
\subsection{Transmit Power Minimization} \begin{figure}[t]\vspace*{-2mm} \centering \includegraphics[width=3.8in]{power_sinr_conf.eps} \vspace*{-4mm} \caption{Average BS transmit power (dBm) versus minimum required SINR of the users for $K=3$, $N_{\mathrm{T}}=4$, $M=10$, $\alpha_{\mathrm{d}}=3.8$, $\alpha_{\mathrm{r}}=2.3$, and $R=100$ m.}\vspace*{-2mm}\label{powersinr_ActiveIRS} \end{figure} \par In Figure \ref{powersinr_ActiveIRS}, we investigate the average BS transmit power versus the minimum required SINR of the users for a scenario where the direct links are severely shadowed ($\alpha_{\mathrm{d}}=3.8$). We can observe from Figure \ref{powersinr_ActiveIRS} that the transmit power of the proposed scheme and the two baseline schemes monotonically increases with $\Gamma_{\mathrm{req}}$. This is attributed to the fact that to satisfy a more stringent minimum SINR requirement, the BS has to transmit with a higher power. Yet, the proposed scheme yields substantial power savings compared to the two baseline schemes even if we account for the total transmit power. For example, for $\Gamma_{\mathrm{req}}=4$, the proposed scheme with $P_{\mathrm{A}}=10$ mW consumes $10^{(1.5)}+10\approx41.6$ mW, while baseline scheme 1 and baseline scheme 2 require $100$ mW and $73.1$ mW, respectively. In particular, for baseline scheme 1, since there is no IRS, there are no degrees of freedom (DoFs) available for customizing favorable wireless channels. As for baseline scheme 2, both the BS and the active IRS cannot fully exploit the DoFs available for resource allocation due to the partially fixed beamforming policy and the randomly generated IRS phase shifts, respectively. This highlights the effectiveness of the proposed scheme for jointly optimizing the beamformers at the BS and the active IRS elements. Moreover, as expected, increasing the maximum power allowance at the active IRS from $10$ mW to $15$ mW leads to further transmit power savings at the BS. 
This is because the additional power budget at the active IRS can be utilized to facilitate more accurate beamforming and to mitigate multiuser interference in a more effective manner. \par \subsection{Energy Efficiency Evaluation} \begin{figure}[t]\vspace*{-2mm} \centering \includegraphics[width=3.8in]{energy_efficiency_element_conf.eps} \vspace*{-4mm} \caption{Average energy efficiency versus the number of IRS elements with $K=3$, $N_{\mathrm{T}}=4$, $P_{\mathrm{A}}=20$ mW, $\Gamma_{\mathrm{req}}=10$ dB, $\alpha_{\mathrm{d}}=2.9$, $\alpha_{\mathrm{r}}=2.3$, and $R=200$ m.}\vspace*{-2mm}\label{powerSINR_SWIPT} \end{figure} To further investigate the performance of active IRSs, we also compare with a conventional IRS where the IRS elements just passively reflect the incident signals without amplification. In particular, we employ the IA-based algorithm developed in \cite{yu2020power} and solve a problem similar to \eqref{prob1} but replacing constraint C2 with a unit-modulus constraint induced by the passive IRS. For a fair comparison, we adopt the energy efficiency (bits/J/Hz) as the performance metric which is defined as\footnote{We set $P_\mathrm{A}=0$ when computing the energy efficiency of the system with the conventional passive IRS.} \cite[Eq. (19)]{yu2020power} \begin{equation} \xi=\frac{\underset{k\in\mathcal{K}}{\sum }\mathrm{log}_{\mathrm{2}}(1+\Gamma_k)}{\frac{1}{\eta }\underset{k\in\mathcal{K}}{\sum }\left \|\mathbf{w}_k\right \|^2+N_\mathrm{T}P_\mathrm{T}+P_\mathrm{C}+MP_\mathrm{I}+\frac{1}{\eta }P_\mathrm{A}}, \end{equation} where $\eta=0.5$ is the power amplifier efficiency, $P_\mathrm{T}=100$ mW is the circuit power that maintains one BS antenna element operational, $P_\mathrm{C}=85$ mW is the static circuit power of the BS \cite{yu2020power}, $P_\mathrm{I}=2$ mW is the circuit power required to support one IRS element\footnote{In this paper, we adopt the same $P_\mathrm{I}$ for passive and active IRS elements. 
Yet, in practice, depending on the specific hardware structure and components, active IRS elements may consume slightly more power for supporting the required amplifier \cite{lonvcar2019ultrathin}.} \cite{pei2021prototype}, and $P_\mathrm{A}=20$ mW is the power allowance of the active IRS \cite{zhang2021active}. Figure \ref{powerSINR_SWIPT} illustrates the average energy efficiency versus the number of IRS elements for a scenario where the direct links are slightly shadowed ($\alpha_{\mathrm{d}}=2.9$). As can be seen from Figure \ref{powerSINR_SWIPT}, the energy efficiencies of the proposed scheme, the scheme employing a conventional IRS, and baseline scheme 2 monotonically increase with the number of IRS elements. In particular, due to the low-power consumption of IRS phase shifters, deploying more IRS elements does not significantly increase the operational power of the IRS. Moreover, additional IRS elements introduce extra DoFs that can be exploited to proactively configure the wireless channel which yields transmit power savings. Besides, for the proposed scheme, additional IRS elements allow the active IRS to strike a balance between effectively mitigating the dynamic noise amplification and amplifying the desired signals. On the other hand, we observe that the proposed scheme outperforms the scheme employing a conventional passive IRS and the two baseline schemes. In particular, for the scenario where the direct links are slightly shadowed, deploying passive IRSs cannot effectively enhance performance due to the double path loss effect. In contrast, the proposed scheme employing the active IRS can simultaneously adjust the phase and the amplitude of the reflected signal to combat the double path loss effect, which yields a performance enhancement at the expense of supplying extra power to the IRS. This observation strongly motivates the application of active IRSs to further improve the system performance, especially when the direct links are not weak.
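For reference, the energy efficiency metric can be computed directly from the achieved SINRs and the power terms defined above; the sketch below uses the stated parameter values, while the SINR values and the BS transmit power are illustrative placeholders:

```python
import math

def energy_efficiency(sinrs, bs_tx_power, M, P_A, N_T=4,
                      eta=0.5, P_T=0.1, P_C=0.085, P_I=0.002):
    """Energy efficiency in bits/J/Hz; all powers in watts."""
    sum_rate = sum(math.log2(1.0 + g) for g in sinrs)
    total_power = bs_tx_power / eta + N_T * P_T + P_C + M * P_I + P_A / eta
    return sum_rate / total_power

# Illustrative: K = 3 users at the required SINR of 10 dB (Gamma_k = 10),
# 40 mW BS transmit power, M = 10 elements, P_A = 20 mW (set P_A = 0 for a
# passive IRS, per the footnote above).
ee = energy_efficiency([10.0, 10.0, 10.0], bs_tx_power=0.04, M=10, P_A=0.02)
```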
\section{Conclusion} In this paper, we investigated the deployment of active IRSs, where, unlike conventional passive IRSs, each IRS element is equipped with an amplifier, and studied the resulting resource allocation algorithm design problem for a multiuser communication system. In particular, we jointly optimized the beamforming vectors at the BS and the IRS parameters for minimization of the BS transmit power. To tackle the formulated non-convex optimization problem, we developed a novel low-complexity algorithm, based on the bilinear transformation and IA. The developed algorithm is guaranteed to converge to a locally optimal solution of the considered problem. Simulation results showed that the proposed scheme achieves considerable power savings compared to two baseline schemes. Moreover, our results revealed that active IRSs are a promising means to combat the performance degradation caused by the double path loss effect in IRS-assisted communication systems. \vspace*{-1mm} \bibliographystyle{IEEEtran}
\section{Introduction} The existence of cold dark matter in the universe is well--established by recent astrophysical observations. While the particle nature of dark matter remains a mystery, its effect of promoting structure formation via hierarchical formation of gravitational potential wells from primordial density fluctuations is well--documented in both high resolution N--body simulations and observations of large--scale structure \cite{bertone-2005-405,2006astro.ph..9541P}. The properties required for a thermally--produced particulate cold dark matter include high mass, lack of strong or electromagnetic couplings, and stability over the lifetime of the universe. No standard model particle is capable of simultaneously satisfying these demands. Minimal supersymmetry provides a number of viable candidates among its spectrum of fundamental particles \cite{primack-seckel-88,jungman-1996-267}. The lightest of the four neutralinos, which in the models we will consider is also the lightest supersymmetric particle (LSP), is a linear combination of the neutral bino, wino, and Higgsino states. The LSP has been proposed as an excellent candidate not only because of the above properties, but also because the expected cross section gives rise naturally to the observed dark matter mass density during thermal decoupling. If supersymmetry is to solve the gauge hierarchy problem, it should be broken in such a way that the partner particles attain mass corrections on the order of the electroweak scale, $\sim 100$ GeV. A weakly-interacting dark matter particle has the correct cross section to produce the observed relic density. Attempts to discover the particle nature of dark matter take on a three--pronged approach.
Terrestrial experiments to directly detect dark matter particles passing through the earth are underway \cite{sadoulet-2007-,leclercq-2006-,sanglard-2005-71}, and it is expected that the next generation of particle accelerators will be capable of producing a weakly interacting particle with a mass near the electroweak scale \cite{baltz-2006-74}. In contrast to direct detection and production of dark matter, indirect detection techniques search for the products of dark matter annihilation. Self-annihilations between clustered particles are expected to produce a variety of high-energy cosmic rays, photons, and neutrinos. Sites for these events include the central density spikes of dark matter halos \cite{ullio-2001-64,bertone-2002-337,gondolo-1999-83,gnedin-2004-93,prada-barcomp,merritt-2004-92,merritt-2007-75,hall-2006-74}, diffuse radiation from the halo at large \cite{stoehr-2003-345,deboer-2007}, substructure within the galactic halo \cite{diemand-2006,pieri-2007}, and within astronomical bodies such as stars and planets, including our own earth and sun \cite{bergstrom-2000-,press-spergel-1985,halzen-2006-73}. As mentioned in \cite{prada-barcomp}, the gamma ray flux per energy bin from neutralino annihilations in a region of space is determined by two factors: a particle physics factor, given by the cross section times the expected number of gammas produced per annihilation, and an astrophysical factor determined by the distance and density profile of the region. In addition to assuming a supersymmetric model, calculation of the signal from the center of our galaxy requires knowledge about the profile and normalization of the inner halo. On scales larger than about 1 kpc, it is well--established through N--body simulations that the halo has a power--law radial density profile with index -1 to -1.5 \cite{navarro-1997-490,moore-1999,klypin-2000,power-2003-338}.
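To illustrate the astrophysical factor, the sketch below evaluates an NFW density profile and a simple line-of-sight integral of $\rho^2$ toward the Galactic center (all parameter values, including the profile normalization and the inner truncation radius, are illustrative placeholders, not the normalization discussed in the text):

```python
import math

def rho_nfw(r, rho_s=1.0, r_s=20.0):
    """NFW profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]; units arbitrary."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def los_rho2_integral(psi, d_sun=8.5, r_min=0.01, l_max=50.0, n=20000):
    """Trapezoidal line-of-sight integral of rho^2 at angle psi (radians)
    away from the Galactic center; r_min truncates the central divergence."""
    dl = l_max / n
    total = 0.0
    for i in range(n + 1):
        l = i * dl
        r = math.sqrt(d_sun ** 2 + l ** 2 - 2.0 * d_sun * l * math.cos(psi))
        f = rho_nfw(max(r, r_min)) ** 2
        total += f * dl if 0 < i < n else 0.5 * f * dl
    return total
```

The annihilation flux then scales as the product of this integral with the particle physics factor, so lines of sight closer to the Galactic center dominate the expected signal.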
Frequent gravitational scattering events between dark matter and stars lead to an equilibrium profile of $\rho \sim r^{-3/2}$ in the inner 2 parsecs \cite{gnedin-2004-93}, where dark matter is a negligible proportion of the total mass. Since the normalization is indeterminate, as a fiducial model we interpolate inwards from an NFW profile \cite{navarro-1997-490}. While this may be a reasonable assumption, it is likely that baryonic compression, in which baryonic matter losing energy through radiative processes falls inward and consequently redistributes dark matter, has increased the central density, possibly by several orders of magnitude \cite{prada-barcomp}. For a weakly--interacting particle of mass greater than 100 GeV, it is plausible that a dark matter signal will be observed from the Galactic center by the current generation of ground--based gamma--ray detection experiments, provided that baryonic compression has increased the central density by at least a modest amount over the fiducial value \cite{gnedin-2004-93}, although astrophysical processes could be a potentially troubling background for these searches \cite{zaharijas-2006-73}. A recent observation of a TeV gamma--ray signal from the Galactic center by the H.E.S.S. atmospheric Cherenkov telescope \cite{aharonian-2004-425,aharonian-2006-97} has motivated this determination of a theoretical upper limit on the mass of neutralino dark matter. This result was confirmed recently by the MAGIC telescope \cite{albert-2006-638}. The signal was also observed earlier by the CANGAROO--II experiment \cite{tsuchiya-2004-606}, though that result is inconsistent with the newer H.E.S.S. data: the large flux observed by CANGAROO at low energies was not seen by H.E.S.S. All groups have stated that the signal is consistent with a point source, and H.E.S.S. and MAGIC agree on a logarithmic slope of about $-2.25$.
Without a clear indication of an annihilating particle, such as line features in the spectrum, or an observation that disfavors an annihilation scenario, such as time variability, the source of the signal remains uncertain, although the extended power--law nature of the observed spectrum does not fit well with the expected rollover shape of an annihilation spectrum. An analysis by the H.E.S.S. group of their 2004 data, which extended to approximately 30 TeV, was unable to find any annihilation spectra which reproduced the observed power law, and the group proposed that the signal must be primarily non--dark matter in origin. While the newest data from H.E.S.S. does not seem to be consistent with the annihilation spectrum of either a supersymmetric neutralino or a Kaluza--Klein dark matter particle \cite{bergstrom-2005-94}, models in which a signal from annihilating dark matter is masked by emissions of astrophysical sources are still possible \cite{aharonian-2006-97}. Another analysis of the 2004 data by Profumo searched for optimal spectral fits based on final state channels; it was determined that dark matter annihilation remains a possible interpretation of the H.E.S.S. data for a restricted set of final states \cite{profumo-2005-}. The 2003 H.E.S.S. data, which extend to 9 TeV, can similarly be fitted with fewer constraints on the final states of the annihilating particle. Mambrini and collaborators \cite{mambrini-2006-0601} searched for a neutralino annihilation spectrum in a non--universal supergravity model which could fit the 2003 data. They were successful in finding reasonable fits, though none of the points in parameter space for their high--mass candidates were consistent with WMAP constraints. Possible astrophysical sources of TeV gamma--rays include jets or the shocks in the accretion flow into the central black hole \cite{yuan-2002-383,narayan-1998-492}.
Another possibility, which may be ruled out by lack of time variability, is a signal from particles accelerating near the event horizon of a rotating super--massive central black hole \cite{levinson-2000,aharonian-2004-425}. While various authors have conducted surveys of supersymmetric parameter space while categorizing dark matter candidates (e.g. \cite{baltz-2004-0410,baer-2005-0507,pierce-2004-70}), few papers have explicitly searched for upper mass limits on these candidates. The effects of coannihilation in a low--energy effective MSSM model were considered in \cite{bednyakov-2002-66}, which calculated the degree to which the relic density was reduced by various channels. We conduct a similar survey, but with the goal of examining the regions of maximal mass in Higgsino coannihilation. In \cite{edsjo-2003-0304}, the DarkSUSY software package was used to explore coannihilation in the mSUGRA parameter space. In the chargino coannihilation region, the LSP is mostly Higgsino, and coannihilates with a nearly--degenerate chargino and second Higgsino. That work reported a cosmological limit of $\sim$1500 GeV resulting from chargino coannihilation, which is nearly consistent with our findings for a pure Higgsino in this region. The authors also examined coannihilations of a bino--type LSP with sleptons and argued that coannihilation processes do not allow arbitrarily high masses, in contrast to some previous authors. Higgsino dark matter in an mSUGRA framework was also considered in \cite{chattopadhyay-2006-632}, where a WMAP--favored mass range of approximately 1 TeV is found. Wino--type dark matter appears in minimal anomaly--mediated supersymmetry breaking (mAMSB) models, and has a cosmological mass bound of over 2 TeV \cite{chattopadhyay-wino}.
Profumo \cite{profumo-2005-} looked at several different scenarios, including a mAMSB wino, annihilation through a heavy Higgs resonance, and QCD effects in gluino annihilations which can allow dark matter masses in the hundreds of TeV. We use a model--independent approach in the calculation of the $m_{LSP}$ upper bound, one which could be applicable to many individual supersymmetry--breaking models. To this end, we have used model inputs at the electroweak scale, allowing us to control individual inputs to the mass parameters and compute the dark matter relic density $\Omega_{LSP}$. Calculations were done using the MicrOMEGAs \cite{belanger-2004-} and DarkSUSY \cite{gondolo-2004-0407} software packages. Both of these codes compute the SUSY relic density via a numerical solution of the Boltzmann equation, including the cross sections of any relevant coannihilation processes. Both programs also provide several options for inputting the supersymmetric particle spectrum, including electroweak and GUT--scale minimal inputs, or determining the mass of each particle independently. MicrOMEGAs reports the contribution of individual annihilation channels to standard model products as fractions of $\Omega^{-1}$, there being an approximate inverse relationship between the total cross section and the final relic density. DarkSUSY also provides the necessary tools to calculate the flux of high energy gamma--rays from halo annihilation, as well as a variety of other direct and indirect detection signals. \subsection{The Boltzmann Equation} The density evolution of any particle $\chi$ in the thermal bath of the early universe is governed by the Boltzmann equation: \begin{equation} \label{boltzdiff} a^{-3}\frac{d(n_\chi a^3)}{dt}= \langle\sigma v \rangle \left( (n^{(0)}_\chi)^2 - n_{\chi}^2 \right ). \end{equation} Here $a$ is the cosmological scale factor, and $n_\chi$ and $n^{(0)}_\chi$ are the number density and equilibrium number density of the particle species.
The thermally--averaged cross section $\langle\sigma v \rangle$ must include all channels by which $\chi$ can interact, including coannihilation with other particles, in which the number densities of both species are important. At some point the SUSY particle will no longer be able to remain in thermal equilibrium with its surroundings (``freeze--out'') and its co--moving number density will be nearly constant. We limit ourselves to models which obey a discrete symmetry, R--parity, which prevents decays (but not two--body scattering) of SUSY particles into standard model particles \cite{jungman-1996-267}. Any SUSY particle other than the LSP in existence at freeze--out will decay to the LSP state. Following the derivation in \cite{dodelson}, the expression for the present relic density of a dark matter particle in terms of the cross section and freeze--out temperature is \begin{equation} \label{bolt} \Omega_{dm} h^2 \approx 0.3 x_f \sqrt{g_*} \frac {10^{-41} \mbox{cm}^2}{\langle\sigma v \rangle}. \end{equation} Here $x_f \equiv m_\chi/T_f$, where $T_f$ is the freeze--out temperature, and $g_*$ is the number of effective relativistic degrees of freedom of all species contributing to annihilation. The important features here are that the total annihilation cross section controls the relic density, and that the mass does not enter the equation except through a weak dependence in $g_*$ and $x_f$. \subsection{Relic--Density Constraints} The WMAP survey, when combined with recent observations of large--scale structure, currently provides the best constraints on the quantity $\Omega_{dm} h^2$, where $\Omega_{dm}$ is the ratio of the dark matter density to the critical density $\rho_c = 1.88h^2 \times 10^{-29} \mbox{ g cm}^{-3}$.
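To make the scale concrete, the relic--density formula above can be inverted to estimate the thermally--averaged cross section required by a given $\Omega_{dm} h^2$. The sketch below assumes typical freeze--out values $x_f \approx 20$ and $g_* \approx 90$; these are illustrative choices, not fitted quantities.

```python
import math

def sigma_v_required(omega_h2, x_f=20.0, g_star=90.0):
    """Invert Omega h^2 ~ 0.3 * x_f * sqrt(g_*) * 1e-41 cm^2 / <sigma v>
    (the Dodelson-style estimate quoted in the text) for <sigma v>, in cm^2."""
    return 0.3 * x_f * math.sqrt(g_star) * 1e-41 / omega_h2

# Cross section needed to match Omega_dm h^2 ~ 0.111:
sv = sigma_v_required(0.111)
print(f"<sigma v> ~ {sv:.2e} cm^2")
```

Since the mass enters only weakly through $x_f$ and $g_*$, the required cross section is essentially fixed by the measured relic density; the mass bounds derived below are therefore driven by how large a cross section each annihilation mechanism can supply.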
For our analysis, we use the most recent third year WMAP data combined with Sloan Digital Sky Survey large--scale structure data \cite{spergel-2006} to arrive at the tightest constraint on relic density, $\Omega_{dm} h^2=0.111^{+0.0056}_{-0.0075}$, where $h=0.709^{+0.024}_{-0.032}$ is the Hubble parameter in units of 100 km/s/Mpc. We apply bounds equal to twice these $1\sigma$ limits for the following analysis, \begin{equation} 0.096 \leq \Omega_{dm} h^2 \leq 0.122. \end{equation} These limits are sufficient to incorporate other recent measurements of $\Omega_{dm} h^2$ \cite{2006PhRvD..74l3507T,2006JCAP...10..014S}, which do not differ from our figure by more than $\sim 1\sigma$. It should be noted that our results are not particularly sensitive to the relic density constraints, and there are larger sources of error involved in the calculation than the relatively minor variations in experimental determinations of $\Omega_{dm} h^2$. \subsection{The Supersymmetric Neutralino} The dark matter candidate we address in this paper is the lightest supersymmetric neutralino, denoted $\tilde{\chi}$, in the context of the minimal supersymmetric standard model (MSSM). To express the neutralino mass states as linear combinations of the Higgsino, bino, and neutral wino particle states, we diagonalize the mass matrix. \begin{widetext} \[ M_{\tilde{\chi}}^0 = \left( \begin{array}{cccc} M_1 & 0 & -m_z \cos \beta \sin \theta_w & m_z \sin \beta \sin \theta_w \\ 0 & M_2 & m_z \cos \beta \cos \theta_w & -m_z \sin \beta \cos \theta_w \\ -m_z \cos \beta \sin \theta_w & m_z \cos \beta \cos \theta_w & 0 & -\mu \\ m_z \sin \beta \sin \theta_w & -m_z \sin \beta \cos \theta_w & -\mu & 0 \\ \end{array} \right) \] \end{widetext} Here $\tan\beta$ is the ratio of vacuum expectation values between the two Higgs doublets, $m_z$ is the mass of the $Z^0$, $\theta_w$ is the weak mixing angle, and $M_1$, $M_2$, and $\mu$ are the U(1) and SU(2) gaugino and the Higgsino mass parameters, respectively.
The physical states then become \begin{equation} \label{neutmix} {\tilde{\chi}_{i}}^0 = A_i\tilde{B} + B_i\tilde{W}^3 + C_i\tilde{H}_1^0 + D_i\tilde{H}_2^0, \end{equation} with $A_i^2+B_i^2+C_i^2+D_i^2=1$. Here $i=$ 1 to 4 is a particle index that will be suppressed in cases where the LSP is being discussed. For our first models, we will be discussing instances in which the LSP is entirely Higgsino-- or wino--like, corresponding to the conditions $C^2+D^2 \approx 1$ or $B^2 \approx 1$, respectively. The masses we investigate are strictly $>1$ TeV, while the off--diagonal blocks of the neutralino mass matrix are $< m_z$, so mixings are not large and the mass of the LSP is tightly controlled by the smallest of the three mass parameters. The Higgsino parameter $\mu$ and the SU(2) parameter $M_2$ also appear in the chargino mass matrix: \[ M_{\tilde{\chi}^\pm} = \left( \begin{array}{cc} M_2 & \sqrt{2} m_w \sin \beta \\ \sqrt{2} m_w \cos \beta & \mu \\ \end{array} \right). \] Again, the off--diagonal entries here are small compared to the mass scale of interest. Thus our Higgsino-- and wino--type dark matter models come with a nearly--degenerate chargino (fermionic partner of the charged Higgs and W bosons) built into the model at high energies. This chargino will account for a large fraction of the total annihilation cross section for these two dark matter types. Incidentally, because we are interested in high--mass dark matter, $>1$ TeV, we will not be addressing the possibility of bino--type dark matter in this paper. Because there is no degenerate chargino state in this case, the total annihilation cross sections and mass limits on bino dark matter are much lower than for the other two varieties, even with strong coannihilation from sfermionic particles.
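The composition of the LSP at a given parameter point can be read off by diagonalizing the neutralino mass matrix above numerically. The sketch below uses illustrative parameter values (not results from this paper) to show that for $\mu \ll M_1, M_2$ the lightest eigenstate is almost purely Higgsino, with its mass tracking $\mu$.

```python
import numpy as np

# Illustrative parameter point, all in GeV: heavy gauginos, TeV-scale mu.
M1, M2, mu = 5000.0, 5000.0, 1000.0
mz = 91.19
sw = np.sqrt(0.231)                 # sin(theta_w), approximate value
cw = np.sqrt(1.0 - sw**2)
beta = np.arctan(30.0)              # tan(beta) = 30
sb, cb = np.sin(beta), np.cos(beta)

# Neutralino mass matrix in the (bino, wino, H1, H2) basis, as in the text.
M = np.array([
    [ M1,        0.0,        -mz*cb*sw,  mz*sb*sw],
    [ 0.0,       M2,          mz*cb*cw, -mz*sb*cw],
    [-mz*cb*sw,  mz*cb*cw,    0.0,      -mu      ],
    [ mz*sb*sw, -mz*sb*cw,   -mu,        0.0     ],
])

# The matrix is real and symmetric, so eigh applies; physical masses
# are the absolute values of the eigenvalues.
vals, vecs = np.linalg.eigh(M)
lsp = np.argmin(np.abs(vals))
A, B, C, D = vecs[:, lsp]           # mixing coefficients of the LSP

print(f"m_LSP ~ {abs(vals[lsp]):.1f} GeV, Higgsino fraction C^2+D^2 = {C*C + D*D:.4f}")
```

With the gaugino parameters far above $\mu$, the Higgsino fraction comes out extremely close to unity, confirming that the LSP mass is controlled by the smallest of the three mass parameters.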
\section{Mass Limit Results} \subsection{Pure States} To explore cases in which the LSP is a pure wino or Higgsino, all SUSY masses are set to high values ($>10$ TeV) except for either the Higgsino or the SU(2) (wino) mass parameter. The other relevant particle which appears in this situation is a slightly heavier chargino, and in the Higgsino case a second, nearly degenerate Higgsino--type neutralino. As the other particles are at a significantly higher mass scale, they will be thermally suppressed prior to dark matter freeze--out and will not affect the relic density. Our results for this particular region show a monotonically increasing relic density with increasing mass, with no dependence on the $\tan\beta$ parameter. Our bounds from combined WMAP and SDSS for these cases are: \begin{eqnarray} 0.99 & \leq & m_{\tilde{\chi}} (\mbox{TeV}) \leq 1.12 \textrm{ (Higgsino) } \\ 2.10 & \leq & m_{\tilde{\chi}} (\mbox{TeV}) \leq 2.38 \textrm{ (wino) } \end{eqnarray} Our mass limit for a pure wino state is consistent with that mentioned by Profumo \cite{profumo-2005-}, who quoted a function $\Omega_{dm}h^2 = c(m_{\tilde{\chi}}(\mbox{TeV}))^\gamma$ with $0.0225 \leq c \leq 0.0255$ and $1.90 \leq \gamma \leq 1.92$. \subsection{Coannihilation with a Sfermion} In order to systematically test the effects of coannihilation with a Higgsino, we took each sfermion mass parameter, originally set to a high value, and shifted it down to the coannihilation region, to a mass $m_{ca}$ slightly larger than $\mu$. Beginning with the limits set by Higgsino--chargino coannihilation, we attempt to find regions where these processes allow a larger Higgsino mass by increasing the effective cross section for annihilation. Depending on the specific interaction strengths for processes involving this new particle, the relic density may be increased or decreased as $m_{ca}$ is brought lower; that is, coannihilation may have a positive or negative effect on the mass limit.
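The pure--wino scaling relation quoted above can be inverted as a cross--check on our wino window: solving $\Omega_{dm}h^2 = c\,m^\gamma$ for $m$ at the two relic--density bounds, using the quoted ranges of $c$ and $\gamma$, reproduces limits close to ours. The pairing of $c$ and $\gamma$ values below is an illustrative choice spanning the quoted ranges.

```python
# Invert Omega h^2 = c * m^gamma (m in TeV) for the wino mass at the
# relic-density bounds; the c and gamma ranges are those quoted from Profumo.
def wino_mass(omega_h2, c, gamma):
    return (omega_h2 / c) ** (1.0 / gamma)

m_lo = wino_mass(0.096, c=0.0255, gamma=1.90)   # lower density bound, upper c
m_hi = wino_mass(0.122, c=0.0225, gamma=1.92)   # upper density bound, lower c
print(f"wino mass window ~ {m_lo:.2f} - {m_hi:.2f} TeV")
```

The resulting window, roughly 2.0--2.4 TeV, brackets the $2.10 \leq m_{\tilde{\chi}} \leq 2.38$ TeV range we obtain from the full calculation.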
In our low--energy effective supersymmetric model the sfermion masses are all free parameters. For our notation we write $m_{\tilde{q}i}$, $m_{\tilde{u}i}$, $m_{\tilde{d}i}$, $m_{\tilde{l}i}$, and $m_{\tilde{e}i}$, with $i$ = 1,2,3 being the generation index, for the left--handed squark doublet, the right--handed up and down singlets, the left--handed slepton doublet, and the right--handed slepton singlet, respectively. This gives 15 free mass parameters to examine in this coannihilation calculation. Using MicrOMEGAs and confirming our results with DarkSUSY, we determine that only coannihilation with the third generation of squarks allows the Higgsino mass to be increased beyond the limits of the previous section while conforming to experimental bounds. This is shown graphically in Figure \ref{fig:coannihilation_contours}, which shows regions in $m_{ca}$--$\mu$ space that fall within the $2\sigma$ bounds on relic density. For these figures, all masses other than $\mu$ and $m_{ca}$ are set to 50 TeV. At this mass these particles are effectively removed from the early--universe Boltzmann equation, as their number densities are exponentially suppressed prior to freeze--out. The ratio of Higgs vacuum expectation values $\tan\beta$ is set to 30, and the supersymmetry breaking parameters $A_i$ are set to 0. No other sfermion increases the maximum Higgsino mass when allowed to coannihilate. Also, the effect of coannihilation with two or more sfermions is not found to be cumulative in general. When $m_{\tilde{q}3} \approx \mu$, bringing down any other mass either increases $\Omega_{dm} h^2$ or has a negligible effect. We did not find any cases in which compound coannihilation with several sfermions had a substantial effect on the mass bound. \begin{figure*}[htpb] \centering \epsfig{file=ca_higgsino_plot_g.eps, width=15cm} \caption{Results for Higgsino--squark coannihilation.
We show allowed regions in $m_{\tilde{f}}$--$\mu$ space, where $m_{\tilde{f}}$ is the mass parameter for the 4 sfermion species shown here: $\tilde{q}3$, the left--handed doublet of 3rd--generation squarks (lower left); $\tilde{u}3$ and $\tilde{d}3$, the right--handed singlet partners of the top and bottom quarks (lower and upper right); and $\tilde{l}3$, the left--handed doublet of 3rd--generation sleptons (upper left). The two lines indicate the upper and lower $2\sigma$ bounds on relic density. The physical masses of the sfermions in each case are very nearly equal to our mass parameter, with corrections $<5$ GeV that can be ignored. Thus, the contour lines must end at $m_{\tilde{f}} = \mu$, for the Higgsino to remain the LSP. The highest vertical point in the regions for $\tilde{u}3$ and $\tilde{q}3$ indicates the highest possible Higgsino mass. In the case of $\tilde{l}3$, the effect of coannihilation on the upper mass bound is strictly negative. The $\tilde{d}3$ parameter has more complex behavior, but the overall effect is negative.} \label{fig:coannihilation_contours} \end{figure*} The ratio of Higgs doublet vacuum expectation values $\tan \beta$ has a significant effect in the sfermion coannihilation region. Our previous analysis was done with a typical value of $\tan \beta = 30$, but with a higher value the effects of coannihilation are increased. The third generation right--handed singlet parameters $m_{\tilde{u}3}$ and $m_{\tilde{d}3}$ can also have a slight enhancement effect in this regime. The optimal combination is found to be $\mu \approx m_{\tilde{q}3} = 0.9 m_{\tilde{u}3} = 0.9 m_{\tilde{d}3}$. In this case, $m_{\tilde{\chi}}$ may grow as large as 1.8 TeV while remaining within experimental density bounds. This is the greatest mass which was possible under any combination of coannihilating sfermions in our Higgsino scenario.
For a wino--type LSP, no sfermion coannihilation arrangement was found to raise the mass bound; all sfermion coannihilation schemes cause the mass limit to decrease. Therefore the highest mass limit for this type of dark matter is that found in the previous section. \subsection{Annihilation through a Massive Higgs Resonance} Another mechanism through which the annihilation cross section of a neutralino LSP might be greatly enhanced is a heavy Higgs resonance, $m_A\approx 2m_{LSP}$. At the multi--TeV scale, the masses of the heavy CP--even Higgs, the CP--odd Higgs, and the charged Higgs are all approximately degenerate: \[ m_A \approx m_H \approx m_{H^\pm}. \] Under this arrangement, not only is the cross section of LSP annihilation enhanced by the resonance, but so are coannihilations between any nearly degenerate charginos or next--to--lightest neutralinos. It is expected that the cross section will be dominated by the CP--odd Higgs channel, as the contribution from CP--even Higgses vanishes in the low velocity limit due to the requirement of CP conservation in the intermediate state \cite{jungman-1996-267}. Profumo \cite{profumo-2005-} analyzed this region of parameter space in minimal supergravity (mSUGRA) and anomaly--mediated (mAMSB) SUSY breaking models with non--universal Higgs masses. He found upper LSP mass limits of approximately 5 and 12 TeV for mSUGRA and mAMSB, respectively, utilizing $2\sigma$ WMAP bounds and $\tan \beta = 40$. We have followed a similar program, taking the low--energy neutralino and chargino mass matrix inputs as free parameters, with the aforementioned constraint on the LSP mass, thereby exploring the space of neutralino and chargino mixings.
For two multi--TeV neutralinos interacting at zero velocity, the cross section for annihilation through a CP--odd Higgs is \cite{griest-1991,profumo-2005-} \begin{equation} \langle \sigma v \rangle = \frac{g_{A \tilde{\chi}\tilde{\chi}}^2}{8\pi\Gamma_{A}^2} \sum_f c_f |g_{Aff}|^2 \approx \frac{2 \pi g_{A \tilde{\chi}\tilde{\chi}}^2}{m_{\tilde{\chi}}^2 \sum_f c_f |g_{Aff}|^2}, \end{equation} where $g_{A\tilde{\chi}\tilde{\chi}}$ and $g_{Aff}$ are the vertex factors for the coupling of the Higgs to the neutralino and to the final--state fermion species $f$, respectively, and $\Gamma_{A}$ is the Higgs width. The vertex factors appearing in a neutralino--Higgs or chargino--Higgs junction involve products of gaugino and Higgsino mixing factors, and are therefore sensitive to the exact choice of mass parameters. For two neutralino LSPs annihilating through a CP--odd Higgs, we have \cite{edsjo-1997-} \begin{eqnarray} & g_{A \tilde{\chi}\tilde{\chi}} = (gB-g'A)(C\sin\beta-D\cos\beta) \\ \vspace{2 mm} & g_{Auu}=\frac{gm_{u}\cot\beta}{2m_W} \\ \vspace{2mm} & g_{Add}=\frac{gm_{d}\tan\beta}{2m_W} \end{eqnarray} where the LSP composition is denoted by the parameters $A$ through $D$ as in equation \ref{neutmix}. Here `u' refers to up--type quarks and neutrinos, `d' refers to down--type quarks and charged leptons, and $g$ and $g'$ are the SU(2) and U(1) coupling constants. Because the $g_{A \tilde{\chi}\tilde{\chi}}$ vertex factor is determined by products of gaugino and Higgsino fractions, the largest factors and highest annihilation enhancements tend to occur when the neutralino is an even mixture of gaugino and Higgsino particle eigenstates. As derived in \cite{profumo-2005-}, the maximum cross section for the CP--odd Higgs channel occurs at $\tan\beta \approx \sqrt{m_t/m_b} \approx 6.4$; our investigation of different values of $\tan\beta$ found the highest mass limits at approximately this value. Our results for this type of model can be seen in Figure \ref{fig:schan}.
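The optimal $\tan\beta$ follows directly from the vertex factors above: since $\langle\sigma v\rangle \propto 1/\sum_f c_f |g_{Aff}|^2$, the cross section peaks where the fermionic sum, dominated by the top ($\propto\cot\beta$) and bottom ($\propto\tan\beta$) couplings, is smallest. A quick numerical check, with quark and W masses as illustrative inputs:

```python
import numpy as np

m_t, m_b, m_w = 172.5, 4.18, 80.4   # illustrative quark and W masses in GeV
g = 0.65                             # approximate SU(2) gauge coupling

def fermion_sum(tan_beta):
    """Sum_f c_f |g_Aff|^2 restricted to the dominant third-generation
    quarks (color factor c_f = 3), using the vertex factors in the text."""
    g_tt = g * m_t / (2.0 * m_w * tan_beta)   # up-type: cot(beta) coupling
    g_bb = g * m_b * tan_beta / (2.0 * m_w)   # down-type: tan(beta) coupling
    return 3.0 * g_tt**2 + 3.0 * g_bb**2

# <sigma v> scales as 1/fermion_sum, so minimize the sum over tan(beta):
tb = np.linspace(1.0, 50.0, 50000)
tb_opt = tb[np.argmin(fermion_sum(tb))]
print(f"optimal tan(beta) ~ {tb_opt:.2f}; analytic sqrt(m_t/m_b) = {np.sqrt(m_t/m_b):.2f}")
```

Minimizing $m_t^2\cot^2\beta + m_b^2\tan^2\beta$ analytically gives $\tan^4\beta = m_t^2/m_b^2$, i.e. $\tan\beta = \sqrt{m_t/m_b} \approx 6.4$, matching the quoted result.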
We systematically calculated mass limits corresponding to the upper $2\sigma$ experimental bound over a large number of neutralino mixtures. This was done by setting $m_A = 2\mu \approx 2m_{\tilde{\chi}}$, and scanning over values of $\mu-M_1$ and $\mu-M_2$. These differences alter the particle state content of the LSP via the neutralino mass matrix. The final results are shown in terms of the fractional LSP composition. As $\tan\beta$ is also an independent, relevant parameter, we included it as a variable and searched from $\tan\beta = 2$ to $50$. The results from this scan are plotted as mass contours as a function of fractional neutralino composition. We show plots for 3 values of $\tan\beta$ in Figure \ref{fig:schan}. \begin{figure*}[htpb] \centering \epsfig{file=higgschancont_tanb2.ps, width=10 cm} \epsfig{file=higgschancont_tanb10.ps, width=10 cm} \epsfig{file=higgschancont_tanb50.ps, width=10 cm} \caption{Resonant heavy Higgs annihilation mass limit plots for $\tan \beta$= 2 (top), 10 (middle), and 50 (bottom). The axes are the fractional wino and bino states of the LSP, with the Higgsino fraction being the remainder. The contours show the upper mass bounds in TeV.} \label{fig:schan} \end{figure*} \section{Detection Rates} For each of the model types discussed thus far, we have calculated the resulting local gamma--ray flux from annihilations in the galactic center. For our halo model, we utilized the predictions of \cite{gnedin-2004-93}, together with their fiducial value for the density normalization at the maximum radius of the central black hole's sphere of influence, $\sim 2$ pc. This factor is highly uncertain, and it is possible that it has been increased by baryonic infall. Dark matter annihilation can also affect the density profile of the innermost part of the cusp, changing the signal intensity by a multiplier referred to as the boost parameter \cite{bertone-2005-72}, an effect not considered here.
A reader wishing to use a different normalization can simply raise or lower our flux measurements by the square of the normalization multiplier, or by the boost factor in the case of modifications to the inner halo profile. The total cross sections for all annihilations to continuous and line features in the spectrum were calculated using the DarkSUSY program. In order to show a typical spectrum as seen by a detector, we have folded the distribution with a Gaussian with an energy width of 15 percent, as was done in \cite{bergstrom-2005-95}. This also allows a reasonable visual comparison of the prominence of the line emission feature against the continuous output. In Figure \ref{fig:dmodflux} we show the gamma--ray flux from 5 different models, corresponding to the scenarios we have discussed. Clearly, with the fiducial halo normalization none of these annihilation models can account for a significant fraction of the flux observed by the H.E.S.S. telescopes and confirmed by MAGIC. However, the highly uncertain contribution to the central density from baryonic compression has not been taken into account in these results. As the flux increases quadratically with particle density, a rather modest compression factor of 10 would increase the flux by 2 orders of magnitude, enough to bring our more strongly annihilating models to the levels observed by these atmospheric Cherenkov telescopes. However, the H.E.S.S. data also maintain an approximate power--law shape over 2 logarithmic decades, something that none of our models can reproduce even with a carefully adjusted density normalization. Even the most massive particles we found to be capable of satisfying relic density constraints exhibit a roll--off behavior that is not seen in the observed spectrum. Two plots for the annihilation through a heavy Higgs resonance are shown.
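The 15--percent energy--resolution folding described above amounts to convolving the annihilation spectrum with a Gaussian whose width scales with energy. A minimal sketch, using a purely illustrative toy spectrum rather than a DarkSUSY output:

```python
import numpy as np

E = np.linspace(0.1, 3.0, 600)        # observed energy grid in TeV (illustrative)
m_chi = 1.0                            # hypothetical neutralino mass in TeV

# Toy spectrum: a hard continuum cut off at m_chi, plus a narrow line at m_chi.
continuum = np.where(E < m_chi, E**-1.5, 0.0)
line = np.zeros_like(E)
line[np.argmin(np.abs(E - m_chi))] = 50.0 / (E[1] - E[0])   # line of total area 50

def fold(E_grid, dNdE, frac=0.15):
    """Convolve dN/dE with a Gaussian of width sigma = frac * E_true,
    i.e. a detector with 15 percent energy resolution."""
    sigma = frac * E_grid                                    # width set by true energy
    kern = np.exp(-0.5 * ((E_grid[:, None] - E_grid[None, :]) / sigma[None, :])**2)
    kern /= np.sqrt(2.0 * np.pi) * sigma[None, :]
    dE = E_grid[1] - E_grid[0]
    return (kern * dNdE[None, :]).sum(axis=1) * dE           # Riemann sum over E_true

observed = fold(E, continuum + line)
# The line emerges as a bump of fractional width ~15 percent on the continuum.
```

The smearing broadens the line into a bump whose visibility against the continuum can then be judged by eye, as in the figure below the fiducial models.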
We have chosen a mass of 20 TeV here, the heaviest neutralino for which a cross section could be computed without resorting to extrapolation in certain DarkSUSY routines. Because mean halo velocities are much lower than those at freeze--out, halo annihilation cross sections are more sensitive to the exact relation between $m_A$ and $m_{\tilde{\chi}}$, as there is little smearing of the center--of--mass energy due to thermal velocities. Thus models that yield similar relic densities can have very different halo annihilation cross sections. To illustrate this we have displayed both an optimized model (orange) with $m_A = 2m_{\tilde{\chi}}$ and a more typical model (purple) where the relation is only approximate. \begin{figure*}[htpb] \centering \epsfig{file=dmodflux_wbrem.ps, width=14 cm} \caption{In the upper plot, we summarize our findings by showing the resulting local gamma--ray flux from the galactic center in several annihilation scenarios using the halo model of \cite{gnedin-2004-93} with fiducial normalization (no baryonic compression), and compare to the latest observations of the H.E.S.S. experiment (black data points, \cite{aharonian-2006-97}). The dashed lines show the true continuous distribution, while the solid lines show the total (continuous plus discrete) emission spectra as seen by a detector with an energy resolution of 15 percent. The blue line is a 1 TeV Higgsino, coannihilating with a nearly degenerate chargino and second Higgsino. The red line shows the same model with coannihilation from a 3rd--generation squark, at a mass of 1.8 TeV. The green line is a 2.4 TeV wino. The purple and orange lines are both a mixed--type neutralino annihilating through a heavy Higgs resonance. The orange model has been optimized by fine tuning of the resonance, so that the cross section and resulting flux are maximized, while the purple line shows a more typical model. The lower plot demonstrates an attempt to fit a Higgs resonance model to the H.E.S.S.
data. A factor 10 density boost is applied, resulting in a $10^2$ increase in flux above the fiducial value.} \label{fig:dmodflux} \end{figure*} \section{Discussion} Higgsino-- and wino--type dark matter annihilate much more strongly than bino dark matter at a given mass, and are capable of satisfying the relic density constraints with masses at the TeV scale, something that is not possible for a bino particle, even including sfermion coannihilations. Primarily Higgsino--type dark matter can arise, for example, in the `focus point' region of minimal supergravity (mSUGRA) \cite{feng-2000-84}. In this model, the supersymmetric scalar quarks and leptons are set to high masses by a single GUT--scale parameter, and the neutralino becomes a Higgsino--bino mixture. As this scalar mass term is increased, the neutralino LSP becomes more Higgsino--like, and features an increasing cross section \cite{edsjo-2003-0304,feng-2000-482}. Since the highest mass LSPs in the focus point region are nearly pure Higgsino, we have constrained this and other Higgsino--type models by considering this limiting case. Dark matter which is predominantly wino--type appears in minimal anomaly--mediated supersymmetry breaking \cite{feng:1999hg}. MicrOMEGAs provides information on the annihilation channels relevant to the total LSP cross section in a particular model. For the case of Higgsino--chargino coannihilation, a large number of channels provide modest contributions to the total cross section. Chargino and neutralino annihilation to quarks are the most important processes, with annihilations to leptons and gauge bosons, and double chargino annihilation to quarks, being the other relevant channels. In the $>$TeV mass range of interest, even the heaviest standard model particles are essentially massless, and there is little difference in available phase space between generations, hence little variation in cross section.
Not surprisingly, there is also relatively little variation in channel contributions as the mass scale and resulting relic density are altered. The introduction of a coannihilating squark opens the possibility of tree--level annihilations to gluons. Annihilations to gluons, as well as coannihilations between Higgsinos and squarks to quarks and gauge bosons, are the major new channels available. Decreasing the mass of a given sfermion increases the t--channel amplitude for scattering from neutralinos to fermions, but only for that particular sfermion flavor. These same vertices explain why the third generation of squarks is unique as the only effective set of coannihilating partners: the appearance of the corresponding quark mass terms limits these channels to cases involving heavy quark masses. The quantity $\beta$ appears in several vertex factors involving Higgs and Higgsino interactions with matter \cite{edsjo-1997-}. Processes with a factor of $\tan\beta$ or $(\sin\beta)^{-1}$ in the amplitude include chargino and neutralino annihilation with a sfermion to a gauge boson and fermion, and these are primarily responsible for the decrease in relic density with increasing $\tan\beta$ in squark coannihilation scenarios. An effect which is relevant for Higgsino and wino models is the Sommerfeld enhancement, which appears for weakly--interacting non--relativistic particles with a mass much greater than that of the W boson. This non--perturbative effect has been shown to decrease the relic abundance by as much as 50$\%$ for a wino--type LSP and 10$\%$ for a Higgsino--type \cite{hisano-2007-646}. Those authors study wino dark matter as an example and find upper WMAP bounds of 2.7 to 3 TeV. Neither DarkSUSY nor MicrOMEGAs accounts for this effect; in addition to these considerably higher bounds for wino dark matter, our bounds on pure Higgsino dark matter could be raised by a slight amount.
When annihilation through a heavy Higgs s--channel resonance is considered, allowed masses can go into the tens of TeV. The examination of this scenario was done using the DarkSUSY software. As expected, the neutralinos which had the largest cross section were approximately even mixtures of Higgsino and gaugino states. From Figure \ref{fig:schan}, it is clear that there is little change in the general topography of the relation between neutralino mixture and mass bounds with changing $\tan\beta$. The maximum mass limits do change as a function of $\tan\beta$, rising a small amount from 32 TeV at $\tan\beta = 2$ to about 34 TeV at $\tan\beta =$ 5 to 8, and then decreasing from that point down to 18 TeV at $\tan\beta = 50$. Our results with MicrOMEGAs were tested against the DarkSUSY code, and the programs were found to generally be in agreement over the parameter space of interest, except in the case of annihilation through a Higgs resonance. For our calculations involving pure Higgsino and wino states, the difference in relic density calculations was no larger than 2.5 percent, and in certain cases where sfermion coannihilation was considered it was no greater than 9 percent. While investigating sfermion coannihilation, it was noted that the two codes produced highly disparate results in certain situations. These problems were traced to an error in the DarkSUSY software that only appears at mass scales higher than we have considered here, and did not appear to be an issue for our results in the coannihilation region. For the Higgs resonance models, there was a significant difference in the predictions of the two codes, sometimes as large as a factor of 2. The DarkSUSY results, which we have presented, tended to yield higher mass limits than MicrOMEGAs. While this does mean that our results in this area should be taken only as approximate, our conclusion that this scenario is unlikely and cannot explain current gamma ray observations is not altered.
While there is no concrete upper bound on the scale of supersymmetry breaking, a mass well into the TeV range is certainly disfavored by constraints from gauge coupling unification \cite{ellis-1992-287,bourilkov-2005-20}. Another `absolute' bound comes from partial wave unitarity, which provides an upper limit on the mass that any thermally produced dark matter particle can have, by placing a constraint on the cross section in Eq.~(\ref{boltzdiff}). This bound is applicable as long as the annihilation cross section arises primarily from s--wave terms. The relic density bound set by unitarity is $\Omega_{dm}h^2 \geq 1.7\times10^{-6}\sqrt{x_f}\,(m_{dm}/\mathrm{TeV})^2$ \cite{griest-kamionkowski}, which leads to a maximum mass of about 120 TeV. While this mass is well above that of any of the MSSM models we examined, it may become important when considering thermally--produced heavy dark matter candidates from other particle physics extensions to the standard model. However, it should be noted that the unitarity bound can be violated in the case of a strong resonance, in which the assumption of s--wave dominance breaks down \cite{hui-2002}. Another case in which the unitarity bound is not applicable comes from possible non--perturbative factors in the annihilation cross section which could affect heavy ($>$500 GeV) Higgsino--type neutralinos \cite{hisano-2005-71}. These factors appear only at low velocity; they would therefore not affect the physics of the dark matter during freeze--out, but could affect halo interactions, greatly increasing flux levels from annihilations. The mass limits we have set in this paper apply only to neutralino dark matter in the MSSM that attains a relic density through thermal freeze--out in a standard cosmology. We can make no claims about cases in which the dark matter is produced through non--thermal processes.
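As a back--of--the--envelope check of this number, the unitarity relation can be inverted for the largest mass consistent with the observed density (the values of $\Omega_{dm}h^2$ and $x_f$ used below are illustrative assumptions):

```python
import math

def unitarity_mass_bound(omega_h2, x_f=28.0):
    """Largest dark matter mass (in TeV) allowed by the partial-wave
    unitarity relation Omega h^2 >= 1.7e-6 * sqrt(x_f) * (m/TeV)^2."""
    return math.sqrt(omega_h2 / (1.7e-6 * math.sqrt(x_f)))

# For Omega h^2 ~ 0.11 and x_f ~ 28 this reproduces roughly the ~120 TeV
# scale quoted in the text.
print(f"m_max ~ {unitarity_mass_bound(0.11):.0f} TeV")
```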
Such cases include non--thermal production, for example by a late--decaying scalar field \cite{gelmini-2006-74}, and scenarios in which entropy is produced after freeze--out. This latter case could happen for a variety of reasons (see \cite{jungman-1996-267} for a review) and would violate the standard assumption of constant comoving entropy density, which would reduce the relic density. The unitarity bound would not apply in this situation, as a very massive dark matter particle with an ordinary cross section could still attain the correct density today. \section{Summary} We have determined the masses of pure Higgsino-- and wino--type thermally--produced dark matter which are consistent with the latest density constraints on dark matter, defined here as twice the 1$\sigma$ bound determined by combined SDSS and WMAP--3 data. In the absence of any coannihilation processes with scalar fermions, the suitable mass range is found to be between 0.99 and 1.12 TeV for a pure Higgsino and between 2.10 and 2.38 TeV for a pure wino state. Coannihilation with the superpartners of the third--generation quarks is found to increase this limit modestly for Higgsino--type dark matter to an upper limit of about 1.80 TeV, with fine tuning in the mass parameters and $\tan\beta$, but no coannihilation model can increase the mass limit for a wino--type particle. Allowing the dark matter to exist as a pure or mixed bino state tends to sharply decrease mass limits, and bino mass limits were always found to be in the sub--TeV range. The other class of models which we examined utilized annihilation of the LSP through a heavy Higgs resonance. Viable models with LSP masses as high as 34 TeV were found, though these scenarios are sensitive to both the neutralino mixture and the resonance condition $m_A = 2m_{\tilde{\chi}}$, and are therefore dependent on fine--tuning.
A computation of the VHE gamma ray spectrum which could be observed with an atmospheric Cherenkov telescope showed that even the largest masses we found are not adequate to fit the observed H.E.S.S. spectrum. The observed event rate is also considerably higher than our predictions, although this is not necessarily a problem given the uncertain normalization of the dark matter profile in its innermost regions. The author wishes to thank Joel Primack, Patrick Fox, Andreas Birkedal, Gordon Kane, and Stefano Profumo for advice and guidance. This project would not have been possible without the MicrOMEGAs and DarkSUSY teams, and in particular Fawzi Boudjema and Genevieve Belanger from the former, and Paolo Gondolo, Joakim Edsjo, and Edward Baltz from the latter, who provided assistance on various issues. This work was supported in part by NSF grant AST--0607712 and by the University of California, Santa Cruz Division of Graduate Studies.
\section{Preliminaries} Here we present some supplemental lemmas and define some notations to be used throughout the proof. We write a target quantum state, which is a pure state to be approximated by a PQC, as $|\phi\rangle$, and let the actual circuit output be $|\psi(\bm{\theta})\rangle$. We may simply write $|\psi\rangle$ if the circuit parameters $\bm{\theta}$ are unimportant in the context. Moreover, let $F(\ket{\phi}, \ket{\psi})=|\langle\psi|\phi\rangle|^2$ be the fidelity. Unless otherwise specified, $\{\ket{k}\}_{k=0}^{2^N-1}$ are the $N$-qubit computational bases. We denote $\widetilde{\operatorname{CNOT}}\equiv\prod_{i=0}^{N-1}\operatorname{CNOT}(i, N+i)$, which is a composition of $N$ $\operatorname{CNOT}$s controlled and targeted on the qubit pairs $(i, N+i)$ for all $i=0,\dots,N-1$. Let $\bin{i}{}$ be the vector of binary digits of $i$, so that $\bin{i}{j}$, where $0\leq j<\lceil\log_2 i\rceil$, is the $j$-th leading digit of the expansion. \subsection{$t$-design and integration over the unitary group}\label{Supp:unitary-design} We first recall the definition of a $t$-design. Assume $\mathcal{U} = \{U_k\}_{k=1}^K$ is a finite set of unitary operators on $\mathbb{C}^D$, and $P_{t,t}(U)$ is a polynomial of degree at most $t$ in the matrix elements of $U$ and at most $t$ in those of $U^{\dagger}$. Then $\mathcal{U}$ is a $t$-design if for every such polynomial $P_{t,t}(U)$ we have \begin{align} \frac{1}{K}\sum_{k=1}^K P_{t,t}(U_k) = \int P_{t,t}(U)d\eta(U), \label{def:t-design} \end{align} where the integral is taken with respect to the Haar measure over the unitary group. In particular, when $t=2$, this definition is equivalent to the following one \cite{dankert2009exact}.
\renewcommand{\thedefinition}{S\arabic{definition}} \renewcommand{\theproposition}{S\arabic{proposition}} \begin{definition} $\{U_k\}_{k=1}^K$ forms a unitary $2$-design if and only if for any linear operators $C,D,\rho \in L(\mathbb{C}^D)$, we have \begin{align} \frac{1}{K}\sum_{k=1}^K U_k^{\dagger}CU_k\rho U_k^{\dagger}DU_k &=\int_{\mathcal{U}(d)}U^{\dagger}CU\rho U^{\dagger}DUd\eta(U). \label{Appendix: def-2-design} \end{align} \end{definition} Based on this definition, we can determine whether a set of unitaries forms a 2-design by comparing the two sides of Eq.~\eqref{Appendix: def-2-design}. Fortunately, by Schur's lemma \cite{feit1982representation}, the RHS of Eq.~\eqref{Appendix: def-2-design} has the closed form shown in Lemma~\ref{Appendix-lem:2-design-U}, whose proof can be found in \cite{emerson2005scalable}. \begin{lemma} \label{Appendix-lem:2-design-U} Let $\{U_k\}_{k=1}^K$ form a unitary $t$-design with $t\geq 2$. For any linear operators $C,D,\rho \in L(\mathbb{C}^d)$, we have \begin{align} \int_{\mathcal{U}(d)}U^{\dagger}CU\rho U^{\dagger}DUd\eta(U) = \frac{\tr[CD]\tr[\rho]}{d}\frac{I}{d}+\left(\frac{d\tr[C]\tr[D]-\tr[CD]}{d(d^2-1)}\right)\left(\rho-\tr[\rho]\frac{I}{d}\right).\label{eq:Appendix-lem:2-design-U} \end{align} \end{lemma} Furthermore, we present the following lemma, which extends this result to bipartite systems.
\renewcommand\theproposition{\ref{lem:2-design-UxU}} \setcounter{proposition}{\arabic{proposition}-1} \begin{lemma} For any bipartite state $\rho_{AB}$ ($d_A=d_B=d$) and arbitrary linear operators $C,D \in L(\mathbb{C}^{d^2})$, we have \begin{align} \int_{\mathcal{U}(d)\otimes \mathcal{U}(d)} d \eta(U) U^{\dagger} C U \rho U^{\dagger} D U = t_0 \rho + t_1 \frac{\rho_A\ox I_B}{d} + t_2\frac{I_A\ox\rho_B}{d} + t_3 I_{AB}\tr(\rho_{AB}), \end{align} where $\rho_A = \tr_B[\rho_{AB}]$, $\rho_B = \tr_A[\rho_{AB}]$, and $\{t_j\}_{j=0}^3$ can be computed from the following linear system of equations \begin{align} \tr(CD) & = t_0d^2 + t_1d^2 +t_2d^2 + t_3d^4, \label{eq:U1U2-part1} \\ \tr(C_A D_A) & = t_0d^3+t_1d+t_2d^3+t_3d^3,\label{eq:U1U2-part2} \\ \tr(C_B D_B) & = t_0d^3+t_1d^3+t_2d+t_3d^3,\label{eq:U1U2-part3} \\ \tr(C)\tr(D) & = t_0d^4 + t_1d^2 + t_2d^2+t_3d^2,\label{eq:U1U2-part4} \end{align} that is, \begin{equation} \left[ {\begin{array}{*{20}{c}} {t_0}\\ {t_1}\\ t_2\\ t_3 \end{array}} \right] =\frac{1}{\left(d^{2}-1\right)^{2}}\left[ {\begin{array}{*{20}{c}} \frac{\tr[CD]}{d^2}-\frac{\tr[C_AD_A]}{d}-\frac{\tr[C_BD_B]}{d}+\tr[C]\tr[D]\\ -\tr[CD]+\frac{\tr[C_AD_A]}{d}+d\tr[C_BD_B]-\tr[C]\tr[D] \\ -\tr[CD]+d\tr[C_AD_A]+\frac{\tr[C_BD_B]}{d}-\tr[C]\tr[D] \\ \tr[CD]-\frac{\tr[C_AD_A]}{d}-\frac{\tr[C_BD_B]}{d}+\frac{\tr[C]\tr[D]}{d^2} \end{array}} \right]. \end{equation} \end{lemma} \begin{proof} Similar to the proof of Lemma~\ref{Appendix-lem:2-design-U}, Schur's lemma implies that the LHS of Eq.~\eqref{eq: u1-tensor-u2} has an explicit expression, which we write as follows: \begin{align} \int_{\mathcal{U}(d)\otimes \mathcal{U}(d)} d \eta(U) U^{\dagger} C U \rho U^{\dagger} D U = t_0 \rho + t_1 \frac{\rho_A\ox I_B}{d} + t_2\frac{I_A\ox\rho_B}{d} + t_3 I_{AB}\tr(\rho_{AB}).
\end{align} For simplicity, we define the following two functions: \begin{align} &f^{(1)}(\rho)\equiv t_0 \rho + t_1 \rho_A\ox I_B/d + t_2I_A\ox\rho_B/d + t_3 I_{AB}\tr(\rho_{AB}),\\ &f^{(2)}(\rho)\equiv \int_{\mathcal{U}(d)\otimes \mathcal{U}(d)} d \eta(U) U^{\dagger} C U \rho U^{\dagger} D U, \end{align} so that $f^{(1)} = f^{(2)}$. It then suffices to determine the coefficients $t_0,t_1,t_2,t_3$. To calculate the coefficients, we define the following four functionals: \begin{align} &T_1(f)\equiv\sum_{i,j}\bra{ij}f(I)\ket{ij},\\ &T_2(f)\equiv\sum_{i,j,k}\bra{ik}f(\ketbra{i}{j}_A\otimes I_B)\ket{jk},\\ &T_3(f)\equiv\sum_{i,k,l}\bra{ik}f(I_A \otimes \ketbra{k}{l}_B)\ket{il},\\ &T_4(f)\equiv\sum_{i,j,k,l}\bra{ij}f(\ketbra{ij}{kl})\ket{kl}. \end{align} For Eq.~\eqref{eq:U1U2-part1}, we have \begin{align*} T_1(f^{(1)})&=t_0\sum_{i,j}\bra{ij}I\ket{ij}+t_1\sum_{i,j}\bra{ij}I_A\otimes I_B\ket{ij}+t_2\sum_{i,j}\bra{ij}I_A\otimes I_B\ket{ij}+t_3d^4\\ &=t_0d^2 + t_1d^2 +t_2d^2 + t_3d^4,\\ T_1(f^{(2)})&=\sum_{i,j}\bra{ij}\int_{\mathcal{U}(d)\otimes \mathcal{U}(d)} d \eta(U) U^{\dagger} C U I U^{\dagger} D U\ket{ij}\\ &=\tr(CD). \end{align*} Since $T_1(f^{(1)})=T_1(f^{(2)})$, we obtain $\tr(CD) = t_0d^2 + t_1d^2 +t_2d^2 + t_3d^4$. For Eq.~\eqref{eq:U1U2-part3}, we have \begin{align*} T_2(f^{(1)})&=t_0\sum_{i,j,k}\bra{ik}\ketbra{i}{j}\otimes I_B\ket{jk}+t_1\sum_{i,j,k}\bra{ik}\ketbra{i}{j}\otimes I_B\ket{jk}\\ &\quad +\frac{t_2}{d}\sum_{i,j,k}\bra{ik}I_A\otimes\tr[\ketbra{i}{j}]I_B\ket{jk}+t_3\sum_{i,j,k}\bra{ik}I_{AB}\tr[\ketbra{i}{j}\otimes I_{B}]\ket{jk}\\ &=t_0d^3 + t_1d^3 +t_2d + t_3d^3,\\ T_2(f^{(2)})&=\sum_{i,j,k}\bra{ik}\int_{\mathcal{U}(d)\otimes \mathcal{U}(d)} d \eta(U) U^{\dagger} C U (\ketbra{i}{j}_A\otimes I_B) U^{\dagger} D U \ket{jk}\\ &=\sum_{i,j,k}\int_{\mathcal{U}(d)\otimes \mathcal{U}(d)}d \eta(U) \bra{ik} U^{\dagger} C U (\ketbra{i}{j}_A\otimes I_B) U^{\dagger} D U \ket{jk} \\ &=\tr(C_B D_B).
\end{align*} Since $T_2(f^{(1)})=T_2(f^{(2)})$, we obtain $\tr(C_B D_B) = t_0d^3+t_1d^3+t_2d+t_3d^3$. Similarly, we have \begin{align*} T_3(f^{(1)})&=t_0d^3+t_1d+t_2d^3+t_3d^3,\\ T_3(f^{(2)})&=\tr(C_A D_A),\\ T_4(f^{(1)})&=t_0d^4+t_1d^2+t_2d^2+t_3d^2,\\ T_4(f^{(2)})&=\tr(C)\tr(D). \end{align*} Since $T_3(f^{(1)})=T_3(f^{(2)})$ and $T_4(f^{(1)})=T_4(f^{(2)})$, we obtain \begin{align*} \tr(C_A D_A) & = t_0d^3+t_1d+t_2d^3+t_3d^3,\\ \tr(C)\tr(D) & = t_0d^4 + t_1d^2 + t_2d^2+t_3d^2. \end{align*} This proves Eqs.~\eqref{eq:U1U2-part1}--\eqref{eq:U1U2-part4}. Let \begin{equation} R = \left(\begin{array}{llll} d^{2} & d^{2} & d^{2} & d^{4} \\ d^{3} & d & d^{3} & d^{3} \\ d^{3} & d^{3} & d & d^{3} \\ d^{4} & d^{2} & d^{2} & d^{2} \end{array}\right). \end{equation} Then \begin{equation} R^{-1} = \frac{1}{\left(d^{2}-1\right)^{2}}\left(\begin{array}{cccc} \frac{1}{d^{2}} & -\frac{1}{d} & -\frac{1}{d} & 1 \\ -1 & \frac{1}{d} & d & -1 \\ -1 & d & \frac{1}{d} & -1 \\ 1 & -\frac{1}{d} & -\frac{1}{d} & \frac{1}{d^{2}} \end{array}\right).
\end{equation} Hence, we have \begin{equation} \left[ {\begin{array}{*{20}{c}} {t_0}\\ {t_1}\\ t_2\\ t_3 \end{array}} \right] = R^{-1}\left[ {\begin{array}{*{20}{c}} \tr(CD)\\ \tr(C_A D_A) \\ \tr(C_B D_B) \\ \tr(C)\tr(D) \end{array}} \right]=\frac{1}{\left(d^{2}-1\right)^{2}}\left[ {\begin{array}{*{20}{c}} \frac{\tr[CD]}{d^2}-\frac{\tr[C_AD_A]}{d}-\frac{\tr[C_BD_B]}{d}+\tr[C]\tr[D]\\ -\tr[CD]+\frac{\tr[C_AD_A]}{d}+d\tr[C_BD_B]-\tr[C]\tr[D] \\ -\tr[CD]+d\tr[C_AD_A]+\frac{\tr[C_BD_B]}{d}-\tr[C]\tr[D] \\ \tr[CD]-\frac{\tr[C_AD_A]}{d}-\frac{\tr[C_BD_B]}{d}+\frac{\tr[C]\tr[D]}{d^2} \end{array}} \right]. \end{equation} \end{proof} \section{Proof of Theorem~\ref{pro:non-2-design}} \label{appendix:proof-TA-NOT-2-design} \renewcommand{\theproposition}{S\arabic{proposition}} \begin{lemma}\label{lem:cnot} $\widetilde{\operatorname{CNOT}}=\sum_{i=0}^{2^n-1}\ketbra{i}{i}_A\otimes V_{B_i}$, where $A,B$ are subsystems and $V_{B_i}=\bigotimes_{j=0}^{n-1}X_j^{\bin{i}{j}}$ with $\bin{i}{j}\in\{0, 1\}$; that is, $V_{B_i}$ acts on subsystem $B$ by applying Pauli $X$ to the $j$-th qubit if the $j$-th binary digit of $i$ is $1$, and the identity otherwise. Then $V_{B_i}$ has the following properties: \begin{enumerate} \item $[V_{B_i},V_{B_j}]=0$; \item $V_{B_i}=V_{B_i}^{\dagger}$; \item $\bra{0}^{\otimes n}V_{B_i}V_{B_j}\ket{0}^{\otimes n}=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. \end{enumerate} \end{lemma}
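As a numerical sanity check of Lemma~\ref{lem:cnot}, the decomposition of $\widetilde{\operatorname{CNOT}}$ and the three listed properties can be verified directly for a small instance (a sketch with $n=2$ qubits per subsystem; the big--endian bit ordering is an implementation choice):

```python
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

n = 2            # qubits per subsystem (illustrative choice)
d = 2 ** n

def V_B(i):
    """V_{B_i}: apply X on qubit j of B iff the j-th binary digit of i is 1."""
    bits = [(i >> (n - 1 - j)) & 1 for j in range(n)]
    return reduce(np.kron, [X if b else I2 for b in bits])

# widetilde{CNOT} = sum_i |i><i|_A (x) V_{B_i}
proj = lambda i: np.outer(np.eye(d)[i], np.eye(d)[i])
V = sum(np.kron(proj(i), V_B(i)) for i in range(d))

def cnot(c, t, nq):
    """CNOT with control qubit c and target qubit t on nq qubits (big-endian)."""
    U = np.zeros((2 ** nq, 2 ** nq))
    for b in range(2 ** nq):
        b2 = b ^ (1 << (nq - 1 - t)) if (b >> (nq - 1 - c)) & 1 else b
        U[b2, b] = 1.0
    return U

# The same operator as a product of CNOT(i, n+i) gates
W = reduce(np.matmul, [cnot(i, n + i, 2 * n) for i in range(n)])
assert np.allclose(V, W)

# Properties 1-3 of the lemma
zero = np.eye(d)[0]
for i, j in product(range(d), repeat=2):
    assert np.allclose(V_B(i) @ V_B(j), V_B(j) @ V_B(i))   # commutation
    assert np.allclose(V_B(i), V_B(i).conj().T)            # self-adjointness
    assert np.isclose(zero @ V_B(i) @ V_B(j) @ zero,       # orthogonality on |0...0>
                      1.0 if i == j else 0.0)
print("Lemma checks passed")
```

The same check passes for any $n$, at exponentially growing cost, since the operators are $2^{2n}$-dimensional.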
\renewcommand\theproposition{\ref{pro:non-2-design}} \setcounter{proposition}{\arabic{proposition}-1} \begin{theorem} {SEA} with $U_i\ (i=1,2,3)$ being local 2-designs and $\widetilde{\operatorname{CNOT}}$ as the entangling layer does not form a 2-design on the global system. \end{theorem} \begin{proof} Without loss of generality, we only consider the SEA with $2n$ qubits in this proof. Lemma~\ref{Appendix-lem:2-design-U} implies that if an ansatz forms a 2-design, Eq.~\eqref{eq:Appendix-lem:2-design-U} must hold for any linear operators $C,D,\rho \in L(\mathbb{C}^{d^2})$. Let $d=2^{n}$ and $C_0=D_0=\rho=\ketbra{0}{0}^{\otimes 2n}$. If {SEA} formed a 2-design, we would have \begin{align} &\int_{U(\theta)}\bra{0}^{\otimes 2n}U^{\dagger}C_0U\rho U^{\dagger}D_0U\ket{0}^{\otimes 2n}d\eta(U) \label{eq: TA_not_2-design eq 1}\\ =&\int_{\mathcal{U}(d^2)}\bra{0}^{\otimes 2n}U^{\dagger}C_0U\rho U^{\dagger}D_0U\ket{0}^{\otimes 2n}d\eta(U)\\ =&\frac{2}{d^2(d^2+1)}, \label{eq:2n-qubit} \end{align} where $U(\theta)$ denotes the subset of the unitary group generated by SEA, and $d\eta(U)$ denotes the Haar measure. In the following steps, we calculate Eq.~\eqref{eq: TA_not_2-design eq 1} explicitly and show that it does not equal $\frac{2}{d^2(d^2+1)}$. For the {SEA} with $U_i\ (i=1,2,3)$ being local 2-designs and $\widetilde{\operatorname{CNOT}}=V=\sum_{i=0}^{2^n-1}\ketbra{i}{i}_A\otimes V_{B_i}$, the unitary of this ansatz is \begin{align} U&= \sum_{i=0}^{2^n-1}(U_2\otimes U_3)\cdot \ketbra{i}{i}\otimes V_{B_i} \cdot (U_1\otimes I_B)\label{eq:2n-U}. \end{align} Therefore, \begin{align*} U\ket{0}^{\otimes 2n}&= \sum_{i=0}^{2^n-1}(U_2\otimes U_3)\cdot \ketbra{i}{i}\otimes V_{B_i} \cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n},\\ \bra{0}^{\otimes 2n}U^{\dagger} &=\sum_{i=0}^{2^n-1}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot\ketbra{i}{i}\otimes V_{B_i} \cdot (U_2^{\dagger}\otimes U_3^{\dagger}).
\end{align*} Then \begin{align*} &\quad\bra{0}^{\otimes 2n}U^{\dagger}C_0U\rho U^{\dagger}D_0U\ket{0}^{\otimes 2n}\\ &=\bra{0}^{\otimes 2n}U^{\dagger}(\ketbra{0}{0})^{\otimes 2n}U(\ketbra{0}{0})^{\otimes 2n}U^{\dagger} (\ketbra{0}{0})^{\otimes 2n}U\ket{0}^{\otimes 2n}\\ &=\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot (U_2^{\dagger}\otimes U_3^{\dagger})(\ketbra{0}{0})^{\otimes 2n} (U_2\otimes U_3)\cdot V\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\\ &\quad \cdot (U_2^{\dagger}\otimes U_3^{\dagger})(\ketbra{0}{0})^{\otimes 2n}(U_2\otimes U_3)\cdot V \cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}\\ &:=\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot (U_2^{\dagger}\otimes U_3^{\dagger})\cdot C_0 \cdot(U_2\otimes U_3)\cdot\rho_1\cdot(U_2^{\dagger}\otimes U_3^{\dagger})\cdot D_0\cdot(U_2\otimes U_3) \cdot V\\ &\quad\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}, \end{align*} where \begin{align*} V&=\sum_{i=0}^{2^n-1}\ketbra{i}{i}_A\otimes V_{B_i},\\ \rho_1&=V\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}.
\end{align*} Therefore, we have \begin{align*} \rho_{1}^A & = \tr_B[\rho_1]\\ &=\sum_{k=0}^{d-1}I_A\otimes\bra{k}_B\cdot V\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}\cdot (U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot I_A\otimes\ket{k}_B\\ &=\sum_{k=0}^{d-1}I_A\otimes\bra{k}_B\cdot (\sum_{i=0}^{d-1}\ketbra{i}{i}_A\otimes V_{B_i})\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}\cdot (U_1^{\dagger}\otimes I_B) \cdot (\sum_{j=0}^{d-1}\ketbra{j}{j}_A\otimes V_{B_j}^{\dagger})\cdot I_A\otimes\ket{k}_B\\ &=\sum_{i,j,k=0}^{d-1}I_A\otimes\bra{k}_B\cdot (\ketbra{i}{i}_A\otimes V_{B_i})\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}\cdot (U_1^{\dagger}\otimes I_B) \cdot (\ketbra{j}{j}_A\otimes V_{B_j}^{\dagger})\cdot I_A\otimes\ket{k}_B\\ &=\sum_{i,j,k=0}^{d-1}\ketbra{i}{i}_AU_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger}\ketbra{j}{j}_A\otimes\bra{k}_BV_{B_i}(\ketbra{0}{0})^{\otimes n}V_{B_j}^{\dagger}\ket{k}_B\\ &=\sum_{i,j=0}^{d-1}\ketbra{i}{i}_AU_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger}\ketbra{j}{j}_A(\sum_{k=0}^{d-1}\bra{k}_BV_{B_i}(\ketbra{0}{0})^{\otimes n}V_{B_j}^{\dagger}\ket{k}_B)\\ &=\sum_{i,j=0}^{d-1}\ketbra{i}{i}_AU_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger}\ketbra{j}{j}_A\tr(V_{B_i}(\ketbra{0}{0})^{\otimes n}V_{B_j}^{\dagger})\\ &=\sum_{i,j=0}^{d-1}\ketbra{i}{i}_AU_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger}\ketbra{j}{j}_A\tr(\bra{0}^{\otimes n}V_{B_i}V_{B_j}^{\dagger}\ket{0}^{\otimes n})\\ &=\sum_{i=0}^{d-1}\ketbra{i}{i}U_1\ketbra{0}{0}^{\otimes n}U_1^{\dagger}\ketbra{i}{i},\quad \text{(due to lemma~\ref{lem:cnot})}\\ \rho_{1}^B& = \tr_A[\rho_1]\\ &=\sum_{k=0}^{d-1}\bra{k}_A\otimes I_B\cdot V\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}\cdot (U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot \ket{k}_A\otimes I_B\\ &=\sum_{k=0}^{d-1}\bra{k}_A\otimes I_B\cdot (\sum_{i=0}^{d-1}\ketbra{i}{i}_A\otimes V_{B_i})\cdot (U_1\otimes I_B)(\ketbra{0}{0})^{\otimes 2n}\cdot (U_1^{\dagger}\otimes I_B) \cdot (\sum_{j=0}^{d-1}\ketbra{j}{j}_A\otimes 
V_{B_j}^{\dagger})\cdot \ket{k}_A\otimes I_B\\ &=\sum_{i,j,k=0}^{d-1}\bra{k}_A\ketbra{i}{i}_AU_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger}\ketbra{j}{j}_A\ket{k}_A\cdot V_{B_i}(\ketbra{0}{0})^{\otimes n}V_{B_j}\\ &=\sum_{k=0}^{d-1}\bra{k}U_1\ketbra{0}{0}^{\otimes n}U_1^{\dagger}\ket{k}\cdot V_{B_k}\ketbra{0}{0}^{\otimes n}V_{B_k}. \end{align*} Then, combining Lemma~\ref{Appendix-lem:2-design-U} and Lemma~\ref{lem:2-design-UxU}, we have \begin{align} & \quad\int_{U(\theta)}\bra{0}^{\otimes 2n}U^{\dagger}C_0U\rho U^{\dagger}D_0U\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\int_{U(\theta)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot (U_2^{\dagger}\otimes U_3^{\dagger})\cdot C_0 \cdot(U_2\otimes U_3)\cdot\rho_1\cdot(U_2^{\dagger}\otimes U_3^{\dagger})\cdot D_0\cdot(U_2\otimes U_3) \cdot V \nonumber\\ &\quad\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger} \cdot(t_0\rho_1+t_1\frac{\rho_{1}^A\otimes I_B}{d}+t_2\frac{I_A\otimes \rho_{1}^B}{d} +t_3 I_{AB}\tr(\rho_1))\cdot V\nonumber\\ &\quad \cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &:=M_0+M_1+M_2+M_3\label{eq:U1U2}, \end{align} where \begin{align*} M_0&=t_0\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger} \cdot\rho_1\cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\\ & =t_0,\\ M_1&=\frac{t_1}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot\rho_{1}^A\otimes I_B \cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U),\\ M_2&=\frac{t_2}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot I_A\otimes \rho_{1}^B \cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U),\\ M_3&=t_3\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot I_{AB}\tr(\rho_1) \cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\\ &=t_3,\\
t_0&=t_3=\frac{1}{d^2(d+1)^2}, \\ t_1&=t_2=\frac{1}{d(d+1)^2}. \end{align*} Here the coefficients $t_0,\dots,t_3$ follow from Lemma~\ref{lem:2-design-UxU} with $C=C_0$ and $D=D_0$. It remains to calculate $M_1$ and $M_2$. Combining with $V=\sum_{i=0}^{d-1}\ketbra{i}{i}_A\otimes V_{B_i}$, we have \begin{align} M_1&=\frac{t_1}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot\rho_{1}^A\otimes I_B\cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\frac{t_1}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B)(\sum_{i=0}^{d-1}\ketbra{i}{i}\otimes V_{B_i}^{\dagger}) \cdot(\sum_{k=0}^{d-1}\ketbra{k}{k}U_1\ketbra{0}{0}^{\otimes n}U_1^{\dagger}\ketbra{k}{k}\otimes I_{B}) \nonumber\\ &\quad \cdot (\sum_{j=0}^{d-1}\ketbra{j}{j}\otimes V_{B_j})(U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\frac{t_1}{d}\sum_{i=0}^{d-1}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes n}U_1^{\dagger}\ketbra{i}{i}U_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger} \ketbra{i}{i}U_1\ket{0}^{\otimes n}\cdot (\braket{0}{0})^{\otimes n}d\eta(U) \nonumber\\ &=\frac{2t_1}{d(d+1)},\label{eq:M1}\\ M_2&=\frac{t_2}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B) \cdot V^{\dagger}\cdot I_A\otimes\rho_{1}^B\cdot V\cdot (U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\frac{t_2}{d}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes 2n}(U_1^{\dagger}\otimes I_B)(\sum_{i=0}^{d-1}\ketbra{i}{i}\otimes V_{B_i}^{\dagger}) \cdot(I_A\otimes\sum_{k=0}^{d-1}\bra{k}U_1\ketbra{0}{0}^{\otimes n}U_1^{\dagger}\ket{k} V_{B_k}\ketbra{0}{0}^{\otimes n}V_{B_k}) \nonumber\\ &\quad \cdot (\sum_{j=0}^{d-1}\ketbra{j}{j}\otimes V_{B_j})(U_1\otimes I_B)\ket{0}^{\otimes 2n}d\eta(U)\nonumber\\ &=\frac{t_2}{d}\sum_{i=0}^{d-1}\int_{\mathcal{U}_1(d)}\bra{0}^{\otimes n}U_1^{\dagger}\ketbra{i}{i}U_1(\ketbra{0}{0})^{\otimes n}U_1^{\dagger} \ketbra{i}{i}U_1\ket{0}^{\otimes n}\cdot (\braket{0}{0})^{\otimes n}d\eta(U) \nonumber\\ &=\frac{2t_2}{d(d+1)}.\label{eq:M2} \end{align} Therefore, \begin{align}
\text{Eq.}~\eqref{eq:U1U2}&=M_0+M_1+M_2+M_3\nonumber\\ &=\frac{2d+6}{d^2(d+1)^3}\nonumber\\ &\neq\frac{2}{d^2(d^2+1)}=\text{Eq.}~\eqref{eq:2n-qubit} \quad (n\geq1,\ d=2^n>1). \end{align} Thus, {SEA} with $U_i\ (i=1,2,3)$ being local 2-design ansatzes and $\widetilde{\operatorname{CNOT}}$ as the entangling layer does not form a unitary 2-design. \end{proof} \section{Proof of Property~\ref{pro: state} and Property~\ref{pro: truncation-general}} \label{appendix:proof-of-TA-state } \renewcommand\theproperty{\ref{pro: state}} \setcounter{property}{\arabic{property}-1} \begin{property} If $U_1$ can generate an arbitrary $N$-qubit pure state, then given any $2N$-qubit pure state $\ket{\phi}$, a $2N$-qubit SEA can generate $\ket{\phi}$ with a certain set of parameters $\bm{\hat{\theta}} = \{\hat{\bm{\theta}}_1,\hat{\bm{\theta}}_2,\hat{\bm{\theta}}_3\}$, that is, \begin{align} S(\bm{\hat{\theta}})\ket{0}^{\otimes 2N} = \ket{\phi}. \end{align} \end{property} \begin{proof} By the Schmidt decomposition, we can write the target state $\ket{\phi}$ as $$\ket{\phi}=(\hat{U}_2\otimes \hat{U}_3)\sum_{k=0}^{2^N-1}\lambda_k\ket{k}_A\ket{k}_B,$$ where $A$ and $B$ are two $N$-qubit subsystems, $\hat{U}_2$ and $\hat{U}_3$ are two unitary operators acting on these two subsystems, $\{\ket{k}\}_{k=0}^{2^N-1}$ are the $N$-qubit computational bases, and $\{\lambda_k\}_{k=0}^{2^N-1}$ are the Schmidt coefficients. Because $U_1(\bm{\theta_1})$ can generate an arbitrary $N$-qubit pure state, and $U_2(\bm{\theta_2}), U_3(\bm{\theta_3})$ are universal, we can choose a certain set of parameters $\bm{\hat{\theta}} = \{\hat{\bm{\theta}}_1,\hat{\bm{\theta}}_2,\hat{\bm{\theta}}_3\}$ such that \begin{align} U_1(\hat{\bm{\theta}}_1)\ket{0}^{\otimes N} &=\sum_{k=0}^{2^N-1} \lambda_k\ket{k},\\ U_2(\hat{\bm{\theta}}_2) &= \hat{U}_2,\\ U_3(\hat{\bm{\theta}}_3) &= \hat{U}_3.
\end{align} Combining with $V$, which is a composition of $N$ $\operatorname{CNOT}$s controlled and targeted on the qubit pairs $\{(i, N+i)\}_{i=0}^{N-1}$, we have \begin{align*} V (U_1(\bm{\theta_1})\otimes I)\ket{0}_A^{\otimes N}\ket{0}_B^{\otimes N} = & \sum_{k=0}^{2^N-1} \lambda_k V \ket{k}_A\ket{0}_B^{\otimes N} \\ = & \sum_{k=0}^{2^N-1} \lambda_k\ket{k}_A\ket{k}_B. \end{align*} Therefore, we have \begin{align*} S(\hat{\bm{\theta}})\ket{0}^{\otimes 2N} &=(U_2(\hat{\bm{\theta}}_2)\otimes U_3(\hat{\bm{\theta}}_3))V(U_1(\hat{\bm{\theta}}_1)\otimes I)\ket{0}^{\otimes N}\ket{0}^{\otimes N}\\ &=(\hat{U}_2\otimes \hat{U}_3)\sum_{k=0}^{2^N-1} \lambda_k\ket{k}_A\ket{k}_B \\ &=\ket{\phi}. \end{align*} \end{proof} \renewcommand{\theproposition}{S\arabic{proposition}} \begin{lemma}\label{lemma:average} For any descending sequence $\{x_i\}_{i=1}^{n}$ (i.e., $x_i\geq x_{i+1}$) and any $n\geq N\geq M\geq 1$ ($n, N, M \in \mathbb{Z}$), the following holds: $$\frac{1}{M}\sum_{i=1}^Mx_i\geq \frac{1}{N}\sum_{i=1}^Nx_i.$$ \end{lemma} \begin{proof} Because $N\geq M\geq 1$ and $x_i\geq x_{i+1}$, we have \begin{align*} &\quad\frac{1}{M}\sum_{i=1}^Mx_i-\frac{1}{N}\sum_{i=1}^Nx_i\\ &=\frac{1}{M}\sum_{i=1}^Mx_i-(\frac{1}{N}\sum_{i=1}^{M}x_i+\frac{1}{N}\sum_{i=M+1}^Nx_i)\\ &=(\frac{1}{M}-\frac{1}{N})\sum_{i=1}^Mx_i-\frac{1}{N}\sum_{i=M+1}^Nx_i\\ &=\frac{1}{N}(\frac{N-M}{M}\sum_{i=1}^Mx_i-\sum_{i=M+1}^Nx_i)\\ &=\frac{N-M}{N}(\frac{1}{M}\sum_{i=1}^Mx_i-\frac{1}{N-M}\sum_{i=M+1}^Nx_i)\\ &\geq \frac{N-M}{N}(\frac{1}{M}\sum_{i=1}^Mx_M-\frac{1}{N-M}\sum_{i=M+1}^Nx_{M+1}) \quad \text{(because $x_i\geq x_{i+1}$)}\\ &= \frac{N-M}{N}(x_M-x_{M+1})\\ &\geq 0, \end{align*} thus we obtain $\frac{1}{M}\sum_{i=1}^Mx_i-\frac{1}{N}\sum_{i=1}^Nx_i\geq 0$, that is, $\frac{1}{M}\sum_{i=1}^Mx_i\geq \frac{1}{N}\sum_{i=1}^Nx_i$.
\end{proof} \renewcommand\theproperty{\ref{pro: truncation-general}} \setcounter{property}{\arabic{property}-1} \begin{property} If $U_1$ can generate any $N$-qubit pure state that is a superposition of at most $K$ computational basis states, then for any $\ket{\phi}$, there exists an {SEA} output state $\ket{\psi}$ with $F(\ket{\phi},\ket{\psi})\geq \min\left\{\frac{K}{r}, 1\right\}$, where $F(\ket{\phi},\ket{\psi})$ is the fidelity between $\ket{\phi}$ and $\ket{\psi}$, and $r$ is the Schmidt rank of $\ket\phi$. \end{property} \begin{proof} For any $2N$-qubit target state $\ket{\phi}=\sum_{k=0}^{r-1}\lambda_k\ket{v_k}_A\ket{v_k}_B$, we explicitly construct an {SEA} such that $\ket{\psi}=S(\hat{\bm{\theta}})\ket{0}^{\otimes 2N}$ and $F(\ket{\phi},\ket{\psi})\geq\min\{\frac{K}{r}, 1\}$, where $\hat{\bm{\theta}}=\{\hat{\bm{\theta}}_1,\hat{\bm{\theta}}_2,\hat{\bm{\theta}}_3\}$ is a certain set of parameters. Suppose $\{\lambda_k\}_{k=0}^{r-1}$ are the Schmidt coefficients of $\ket\phi$ sorted in descending order. Similar to the proof of Property~\ref{pro: state}, we can choose a certain set of parameters $\hat{\bm{\theta}}$ such that \begin{align} U_1(\hat{\bm{\theta}}_1)\ket{0}_A^{\otimes N} &=\frac{1}{\sqrt{M}}\sum_{k=0}^{\min\{K,r\}-1} \lambda_k\ket{k},\\ U_2(\hat{\bm{\theta}}_2)\ket{k}_A &= \ket{v_k}_A,\\ U_3(\hat{\bm{\theta}}_3)\ket{k}_B &=\ket{v_k}_B, \end{align} where $M=\sum_{k=0}^{\min\{K,r\}-1} \lambda_k^2$.
Then $$\ket{\psi}=S(\hat{\bm{\theta}})\ket{0}^{\otimes 2N}=\frac{1}{\sqrt{M}}\sum_{k=0}^{\min\{K,r\}-1}\lambda_k\ket{v_k}_A\ket{v_k}_B.$$ Hence \begin{align*} F(\ket{\phi}, \ket{\psi}) &=|\langle\phi|\psi\rangle|^2\\ &=\Big(\frac{1}{\sqrt{M}}\sum_{k=0}^{\min\{K,r\}-1}\lambda_k^2\Big)^2 \\ &=\frac{1}{M}\Big(\sum_{k=0}^{\min\{K,r\}-1}\lambda_k^2\Big)^2 \\ &=\sum_{k=0}^{\min\{K,r\}-1}\lambda_k^2 \\ &\geq \frac{\min\{K,r\}}{r}\sum_{k=0}^{r-1}\lambda_k^2 \quad(\text{due to lemma \ref{lemma:average}})\\ &=\frac{\min\{K,r\}}{r}\\ &=\min\{\frac{K}{r},1\}, \end{align*} with equality if and only if $\lambda_i=\lambda_j$ for all $i,j=0,\dots,r-1$. \end{proof} \renewcommand\theproperty{\ref{pro:TApara}} \setcounter{property}{\arabic{property}-1} \begin{property} Constructing a $2N$-qubit {SEA} that can generate an arbitrary pure state requires at most $O(4^N)$ independent parameters. \end{property} \begin{proof} As proved in Property~\ref{pro: state}, {SEA} composed of a universal Schmidt layer, two local universal $N$-qubit PQCs, and $N$ $\operatorname{CNOT}$s can generate an arbitrary pure state. Therefore, the number of independent parameters of this SEA is $f(N)=4^N+2\times4^N-1$, that is, $f(N)=O(4^N)$. However, the dimension of the $2N$-qubit unitary group is $4^{2N}$. Thus, a $2N$-qubit universal ansatz needs at least $O(4^{2N})$ independent parameters. \end{proof} \section{Proof of Property~\ref{cor:TAforVQE}} \label{appendix:proof-of-vqe} \renewcommand\theproperty{\ref{cor:TAforVQE}} \setcounter{property}{\arabic{property}-1} \begin{property} For any $2N$-qubit Hamiltonian $H$, it holds that \begin{align} \min_{S} \bra{0}^{\otimes 2N}S^{\dagger}HS\ket{0}^{\otimes 2N} = E_0, \end{align} where $E_0$ is the ground state energy of $H$ and the optimization is over all unitaries reachable by {SEA} with $U_1$ having universal wavefunction expressibility.
\end{property} \begin{proof} A given $2N$-qubit Hamiltonian $H$ admits the spectral decomposition $$H =\sum_{i=0}^m E_iP_i,$$ where $\{E_i\}_{i=0}^{m}$ are the eigenvalues of $H$ with $E_0 < E_1 < \cdots < E_{m}$, and $P_i$ is the projector onto the eigenspace $V_i$ corresponding to the eigenvalue $E_i$. Then for an arbitrary pure state $\ket{\psi} \in V_0$, we have $$H \ket{\psi} = E_0\ket{\psi}.$$ Note that when the optimization is over all unitaries reachable by {SEA} with $U_1$ having universal wavefunction expressibility, there exists an $S$ such that $\ket \psi = S\ket{0}^{\otimes 2N}$. Therefore, \begin{align} \bra{0}^{\otimes 2N}S^{\dagger}HS\ket{0}^{\otimes 2N} &= \bra{\psi}H\ket{\psi}\\ &= E_0\braket{\psi}{\psi}\\ &= E_0. \end{align} Since it is trivial that $ \min_{S} \bra{0}^{\otimes 2N}S^{\dagger}HS\ket{0}^{\otimes 2N} \geq E_0$, we have \begin{align} \min_{S} \bra{0}^{\otimes 2N}S^{\dagger}HS\ket{0}^{\otimes 2N} = E_0. \end{align} \end{proof} \section{Supplementary Description of Experiments} \label{appendix:exp} In this section, we present results from supplementary numerical experiments. Table~\ref{tab:param} shows how the number of parameters is counted for each ansatz. Fig.~\ref{fig:VQE-LiH} displays the results of VQE using the SEA whose Schmidt coefficient layer is $R_y(\bm{\theta})^{\otimes 6}$, whose entangling layer consists of 6 CNOTs, and whose LBC layers are two $6$-qubit ALTs. Fig.~\ref{Fig: BP-sub.2} shows the variance of the largest partial derivative in each sample. \label{app:exp} \renewcommand{\thetable}{S\arabic{table}} \begin{table}[htbp] \centering \caption{Comparison of the number of parameters in different ansatzes.
$N$ represents the qubit number, and $l_i$ $(i=1,2,3)$ is the number of layers of $U_i$.} \begin{ruledtabular} \begin{tabular}{cccc} & SEA & ALT & Random \\ \colrule parameter number & $3N+6(N-1)l_1$ & $2N+2(2N-1)l_2$ & $2Nl_3$ \end{tabular} \end{ruledtabular} \label{tab:param} \end{table} \renewcommand{\thefigure}{S\arabic{figure}} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{paper-figures/LiH_L0.1_dif_ansatzes.png} \caption{\textbf{Numerical experiment of VQE on LiH (12-qubit)}. The blue dotted line is the theoretical ground energy of LiH, and the lines from top to bottom represent the experimental results of ALT, the random circuit, {SEA} with $R_y(\bm{\theta})$ as $U_1$, two ALTs as $U_2, U_3$ and $6$ CNOTs as entangling layer, {SEA} with three ALTs as $U_i (i=1,2,3)$ and $3$ CNOTs as entangling layer, and {SEA} with three ALTs as $U_i (i=1,2,3)$ and $6$ CNOTs as entangling layer, respectively. Here "$j$ CNOTs" ($j=3, 6$) means that we use a composition of $j$ $\operatorname{CNOT}$s controlled and targeted on the qubit pairs $\{(i, N+i)\}_{i=0}^{j-1}$.} \label{fig:VQE-LiH} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth]{paper-figures/Heisenberg_ALT_max.png} \caption{\textbf{Comparison of the scaling of variance between different ansatzes on the Heisenberg model}. It shows the semi-log plot of the variance of the largest partial derivative among parameters in each round of sampling. We ensure that different ansatzes have similar numbers of parameters by setting different depths. The solid part of the fitted lines represents the range we experimented with, while the dotted part represents the expected performance on a larger range.} \label{Fig: BP-sub.2} \end{figure} \renewcommand{\theproposition}{A\arabic{proposition}} \end{document}
\section*{Introduction} Recently, Smith studied in \cite{Sm} a remarkable graded Calabi-Yau algebra $B$ of dimension 3 constructed from the octonions. Amongst other things, Smith proved that $B$ is a graded Ore extension of an Artin-Schelter regular algebra of global dimension 2 and used that fact to show that $B$ is graded 3-Calabi-Yau and graded coherent. In this note, we show that the Calabi-Yau property and the coherence of $B$ do not occur incidentally. A large class of graded algebras that are Ore extensions of graded Calabi-Yau algebras are themselves graded Calabi-Yau. The main result of this note is the following. \begin{thm}\label{thm} Let $V$ be a finite-dimensional vector space with basis $\{x_1,\dots,x_n\}$, let $M$ be an invertible $n\times n$ anti-symmetric matrix, and define $$r=(x_1,\dots,x_n)M\left( \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right) \in T(V).$$ Let $A=T(V)/\langle r\rangle$, where $\langle r\rangle$ is the ideal of $T(V)$ generated by $r$. Let $\delta$ be a degree-one graded derivation of $T(V)$ such that $\delta(r)=0$. Then $\delta$ induces a graded derivation $\bar{\delta}$ on $A$. Let $B=A[z;\bar{\delta}]$ be the Ore extension of $A$ defined by $\bar{\delta}$. Then the following hold: \begin{itemize} \item[(i)] $B$ is a graded 3-Calabi-Yau algebra. \item[(ii)] Let $\widehat{V}=V\op\k z$, and $Q=\left( \begin{array}{cc} -1 & 0 \\ 0 & M \\ \end{array} \right)$. Let $w=(z,x_1,\dots,x_n)Q\left( \begin{array}{c} r \\ r_1 \\ \vdots \\ r_n \end{array} \right)$, where $r_i=z\ot x_i-x_i\ot z-\delta(x_i)\in \widehat{V}\ot \widehat{V}$ for all $i=1,\dots,n$. Then $(\alpha\ot1\ot1)(w)=(1\ot1\ot\alpha)(w)$ for all $\alpha\in (\widehat{V})^*$, and $A[z;\bar{\delta}]\cong T(\widehat{V})/\langle \partial_{x_i}(w):i=0,\dots,n\rangle$, where we set $x_0=z$ and $\partial_{x_i}(w)$ is the cyclic partial derivative of $w$ with respect to $x_i$. \item[(iii)] Write $\delta(x_i)=\sum_{s,t=1}^nk^i_{st}x_s\ot x_t$ for all $i=1,\dots,n$.
Assume there is an integer $j$ such that $k^i_{jj}=0$ for all $i=1,\dots,n$, and $M$ is a standard anti-symmetric matrix. Then $B$ is graded coherent. \end{itemize} \end{thm} Most of this note is devoted to the proof of Theorem \ref{thm}. However, we will go a bit further to discuss the properties of the algebra $B$. Smith's algebra in \cite{Sm} is an example satisfying the conditions in the theorem. We will provide a few more examples. We remark that any quadratic algebra $A$ defined by an invertible anti-symmetric matrix as in the above theorem is isomorphic to a quadratic algebra defined by a standard anti-symmetric matrix (see Convention \ref{con} for the definition), because, for every invertible anti-symmetric matrix $M$, there is an invertible matrix $P$ such that $P^tMP$ is a standard anti-symmetric matrix. \begin{rem}\label{rrem}{\rm Let $V$ be a finite dimensional vector space with basis $\{x_1,\dots,x_n\}$. Take an element $r\in V\ot V$. Since $V\ot V\cong \Hom_\k(V^*,V)$, the element $r$ corresponds to a linear map $f_r:V^*\to V$. The {\it rank} of $r$, denoted by rank($r$), is defined to be the rank of $f_r$ (cf. \cite[Introduction]{Z2}). One sees $$\text{rank}(r)=\min\{m|r=u_1\ot v_1+\cdots+u_m\ot v_m, \text{ for some }u_i,v_i\in V\}.$$ It has been shown that certain features of the algebra $T(V)/\langle r\rangle$ entirely depend on rank($r$) (cf. \cite[Theorem 0.1]{Z2}). If $M$ is an $n\times n$ matrix and $r=(x_1,\dots,x_n)M\left( \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right) \in V\ot V$, then rank($r$)=rank($M$). Therefore, the condition that $M$ is invertible in Theorem \ref{thm} is equivalent to the condition that rank($r$)=$n$. } \end{rem} Throughout $\k$ is a fixed field. The unadorned $\ot$ means $\ot_\k$. Let $U=\op_{n\in \mathbb{Z}}U_n$ be a graded vector space, and $l$ an integer. We write $U(l)$ for the graded vector space with degree $k$ component $U(l)_k=U_{k+l}$. 
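To illustrate Remark \ref{rrem} in the smallest case, take $n=2$ and the standard anti-symmetric matrix $$M=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right),$$ so that $r=x_1\ot x_2-x_2\ot x_1$ and $\text{rank}(r)=\text{rank}(M)=2$. In this case $A=T(V)/\langle r\rangle$ is the commutative polynomial algebra $\k[x_1,x_2]$.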
A connected graded algebra $A$ is called a {\it graded Calabi-Yau algebra} of dimension $d$, or simply {\it graded d-CY algebra} (cf. \cite{Gin}), if \begin{itemize} \item[(i)] $A$ is homologically smooth; that is, $A$ has a finite resolution by finitely generated graded projective left $A^e$-modules, where $A^e=A\ot A^{op}$ is the enveloping algebra of $A$; \item[(ii)] the projective dimension of $A$ as a left $A^e$-module is $d$, and $\Ext^i_{A^e}(A,A\ot A)=0$ if $i\neq d$ and $\Ext_{A^e}^d(A,A\ot A)\cong A(l)$ for some integer $l$ as a right $A^e$-module. \end{itemize} We refer to \cite{Z} (also, cf. \cite{Ber} and \cite{DV}) for the basic properties of a graded 2-CY algebra. \section{Ore extensions of graded Calabi-Yau algebras of dimension 2} Let $V$ be a vector space with basis $\{x_1,\dots,x_n\}$. Let $A$ be a graded quotient algebra of $T(V)$. If $A$ is a graded 2-CY algebra, then it is defined by an $n\times n$ invertible anti-symmetric matrix $M$ \cite{Z} (also, cf. \cite[Proposition 3.4]{Ber}); that is, $A\cong T(V)/\langle r\rangle$ with $r=(x_1,\dots,x_n)M(x_1,\dots,x_n)^t$. Henceforth, we assume $A=T(V)/\langle r\rangle$ with \\ $r=(x_1,\dots,x_n)M(x_1,\dots,x_n)^t$ for some fixed anti-symmetric matrix $M$. Let $\pi:T(V)\to A$ be the natural projection map. Since degree($r$)=2, we can, and we will, identify $V$ with $A_1$ through the projection $\pi$. Let $\delta:V\to V\ot V$ be a linear map. Then $\delta$ extends in a unique way to a degree-one derivation (also denoted by $\delta$) of $T(V)$. If $\delta(r)\in \langle r\rangle$, then $\delta$ induces a derivation $\bar{\delta}$ on $A$. From now on, we assume that $\delta(r)\in\langle r\rangle$. Let $B=A[z;\bar{\delta}]$ be the graded Ore extension of $A$ by the derivation $\bar{\delta}$; that is, we view $z$ as an element of degree 1, and $za=az+\bar{\delta}(a)$ for all $a\in A$. 
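Before proceeding, we record a small worked example of such a derivation; it is the case $(a,b,c)=(0,0,1)$ of Example \ref{exa2} below. Let $n=2$, $r=x_1\ot x_2-x_2\ot x_1$, and define $\delta$ on generators by $\delta(x_1)=x_2\ot x_2$ and $\delta(x_2)=0$. Since $\delta$ is a derivation, $$\delta(r)=\delta(x_1)\ot x_2+x_1\ot\delta(x_2)-\delta(x_2)\ot x_1-x_2\ot\delta(x_1)=x_2\ot x_2\ot x_2-x_2\ot x_2\ot x_2=0,$$ so $\delta(r)\in\langle r\rangle$ trivially, and $\delta$ induces a derivation $\bar{\delta}$ on $A\cong\k[x_1,x_2]$ with $\bar{\delta}(x_1)=x_2^2$ and $\bar{\delta}(x_2)=0$.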
Zhang proved in \cite{Z} that $A$ is a Koszul algebra of global dimension 2, and the minimal projective resolution of ${}_A\k$ can be written as follows: \begin{equation}\label{eq1} 0\longrightarrow A\ot \k r\overset{\overline{d}^{-2}}\longrightarrow A\ot V\overset{\overline{d}^{-1}}\longrightarrow A\overset{\varepsilon}\longrightarrow{}_A\k\longrightarrow0, \end{equation} where $\varepsilon$ is the augmentation map, $\overline{d}^{-1}(1\ot x)=\pi(x)$ for all $x\in V$, and $\overline{d}^{-2}(1\ot r)=r\in A_1\ot V$. Since $B$ is an Ore extension of $A$, $B$ is a Koszul algebra of global dimension 3. Note that $B_A$ is a free $A$-module. Applying $B\ot_A-$ to the sequence (\ref{eq1}), we obtain the exact sequence \begin{equation}\label{eq2} 0\longrightarrow B\ot\k r\overset{d^{-2}}\longrightarrow B\ot V\overset{d^{-1}}\longrightarrow B\longrightarrow B/BA_{\ge1}\longrightarrow0, \end{equation} where the unlabeled map is the natural projection map, $d^{-1}(1\ot x)=\pi(x)\in B_1$ for all $x\in V$, and $d^{-2}(1\ot r)=r\in B_1\ot V$. \begin{lem}\label{lemr1} Suppose that $\delta(r)=0$ and let $B=A[z;\bar{\delta}]$ be as above. We have the following morphism of cochain complexes: $$\xymatrix{B\ot \k r\ar[r]^{d^{-2}}\ar[d]_{f^{-2}} &B\ot V\ar[d]_{f^{-1}}\ar[r]^{d^{-1}}&B\ar[d]^{f^{0}}\\ B\ot \k r\ar[r]^{d^{-2}} & B\ot V\ar[r]^{d^{-1}}& B, }$$ where the vertical arrows are left $B$-module morphisms $f^{-2}(1\ot r)=z\ot r$, $f^{-1}(1\ot x)=z\ot x-\delta(x)$ for all $x\in V$, and $f^{0}(1)=z$. \end{lem} \proof We write $r=\sum_{i=1}^nu_i\ot x_i$ with all $u_i\in V$, and assume $\delta(x_i)=\sum_{j=1}^n y_{ij}\ot x_j$ for all $i=1,\dots,n$ with all $y_{ij}\in V$. We prove the commutativity of the left square. The commutativity of the right one is easy. The identity $\delta(r)=0$ is equivalent to $\sum_{i=1}^n\delta(u_i)\ot x_i+\sum_{i=1}^n u_i\ot \delta(x_i)=0$, which is in turn equivalent to $\sum_{i=1}^n\delta(u_i)\ot x_i+\sum_{i=1}^n \sum_{j=1}^nu_i\ot y_{ij}\ot x_j=0$. 
Applying the map $\pi\ot 1:T(V)\ot V\to A\ot V$ to the last identity, we obtain $\sum_{i=1}^n\bar{\delta}(u_i)\ot x_i+\sum_{i=1}^n \sum_{j=1}^nu_iy_{ij}\ot x_j=0$. Hence \begin{equation}\label{eqq1} \bar{\delta}(u_i)=-\sum_{j=1}^nu_jy_{ji} \end{equation} for all $i=1,\dots,n$. The following equations hold: \begin{eqnarray} \nonumber f^{-1}\circ d^{-2}(1\ot r)&=&f^{-1}(\sum_{i=1}^nu_i\ot x_i) \\ \nonumber &=& \sum_{i=1}^nu_iz\ot x_i- \sum_{i=1}^n\sum_{j=1}^n u_i y_{ij}\ot x_j\\ \nonumber &=& \sum_{i=1}^n(u_i z- \sum_{j=1}^n u_j y_{ji})\ot x_i, \end{eqnarray} and \begin{eqnarray} \nonumber d^{-2}\circ f^{-2}(1\ot r) &=& d^{-2}(z\ot r)= \sum_{i=1}^nzu_i\ot x_i\\ \nonumber &=& \sum_{i=1}^n(u_i z+\bar{\delta}(u_i))\ot x_i. \end{eqnarray} By Equation (\ref{eqq1}), $f^{-1}\circ d^{-2}(1\ot r)=d^{-2}\circ f^{-2}(1\ot r)$. Hence the left square of the diagram commutes. \qed The mapping cone of the morphism in Lemma \ref{lemr1} provides a graded projective resolution of the trivial module ${}_B\k$ (see also \cite{GS,P}). \begin{lem} Let $r$ and $B$ be the same as in Lemma \ref{lemr1}. The minimal projective resolution of ${}_B\k$ is as follows: $$0\longrightarrow B\ot \k r\overset{\partial^{-3}}\longrightarrow B\ot \k r\op B\ot V\overset{\partial^{-2}}\longrightarrow B\ot V\op B\overset{\partial^{-1}}\longrightarrow B\longrightarrow\k\longrightarrow0,$$ where $\partial^{-3}=\left( \begin{array}{c} f^{-2} \\ -d^{-2} \\ \end{array} \right) $, $\partial^{-2}=\left( \begin{array}{cc} d^{-2} & f^{-1} \\ 0 & -d^{-1} \\ \end{array} \right) $, and $\partial^{-1}=\left(d^{-1},f^{0}\right)$. \end{lem} Let $A^!$ be the quadratic dual of $A$. As graded vector spaces $A^!_0\cong \k$, $A^!_1\cong V^*$ and $A^!_2\cong \k r^*$, where $r^*\in (\k r)^*$ is defined by $r^*(r)=1$. The multiplication on $A^!$ is given by: $\alpha\beta=(a_1,\dots,a_n)M(b_1,\dots,b_n)^tr^*$, for $\alpha=a_1x_1^*+\cdots+a_nx_n^*$ and $\beta=b_1x_1^*+\cdots+b_nx_n^*$ in $V^*$ (cf.
\cite[Section 3]{HVZ2}), where $\{x^*_1,\dots,x^*_n\}$ is the basis of $V^*$ dual to the basis $\{x_1,\dots,x_n\}$. Write $E^i(B):=\Ext_B^i({}_B\k,{}_B\k)$ and $E(B):=\op_{i\ge0}E^i(B)$. Then $E(B)$ is a graded algebra with the degree $i$ component $E^i(B)$. The minimal projective resolution of ${}_B\k$ above implies that, as graded vector spaces, \begin{equation}\label{eq3} E(B)\cong A^!\op A^!(-1). \end{equation} We write an element in $E(B)$ as $(\alpha,\beta)$ for some $\alpha, \beta\in A^!$, and denote the Yoneda product on $E(B)$ by $(\alpha,\beta)*(\alpha',\beta')$. \begin{prop} \label{prop1} Assume $\delta(r)=0$. Then $A[z;\bar{\delta}]$ is a 3-CY algebra. \end{prop} \proof By \cite[Proposition 3.3]{HVZ1} in the Koszul case, $B=A[z;\bar{\delta}]$ is Calabi-Yau if and only if $E(B)$ is a graded symmetric algebra. Recall that a finite-dimensional graded algebra $E=\op_{i\ge0}E^i$ is graded symmetric if there is an integer $d$ and a homogeneous nondegenerate bilinear form $\langle-,-\rangle:E\times E\longrightarrow \k(d)$ such that $\langle \alpha\beta,\gamma\rangle=\langle \alpha,\beta\gamma\rangle$ and $\langle \alpha,\beta\rangle=(-1)^{ij}\langle \beta,\alpha\rangle$ for all homogeneous elements $\alpha\in E^i,\beta\in E^j$ and $\gamma\in E^k$. Since the global dimension of $B$ is 3 and $\dim E^3(B)=1$, $E(B)$ is graded symmetric if and only if, for all elements $\Phi\in E^1(B), \Theta\in E^2(B)$, $\Phi*\Theta=\Theta*\Phi$. Let $\Phi=(\alpha,k)$ with $\alpha\in A^!_1=V^*$ and $k\in\k$, and $\Theta=(r^*,\beta)$ with $\beta\in V^*$. The element $\Phi$ induces a $B$-module morphism $g:B\ot V\op B\longrightarrow {}_B\k$ by $g(1\ot x,1)=\alpha(x)+k$ for all $x\in V$, and the element $\Theta$ induces a $B$-module morphism $h:B\ot \k r\op B\ot V\longrightarrow {}_B\k$ by $h(1\ot r,1\ot x)=1+\beta(x)$ for all $x\in V$.
Consider the following diagram: $$\xymatrix{0\ar[r]&B\ot \k r\ar[r]^{\partial^{-3}}\ar[d]_{g_{2}} &B\ot \k r\op B\ot V\ar[d]_{g_{1}}\ar[r]^{\quad\partial^{-2}}&B\ot V\op B\ar[d]_{g_{0}}\ar[r]^{\quad\partial^{-1}}\ar[dr]^{g} &B\cdots\\ \cdots\ar[r]&B\ot \k r\op B\ot V\ar[r]^{\partial^{-2}} & B\ot V\op B\ar[r]^{\partial^{-1}}& B\ar[r]&{}_B\k, }$$ where the vertical arrows are $B$-module morphisms defined as follows. As before, we write $r=\sum_{i=1}^nu_i\ot x_i$ with all $u_i\in V$, and assume $\delta(x_i)=\sum_{j=1}^n y_{ij}\ot x_j$ for all $i=1,\dots,n$ with all $y_{ij}\in V$. Then {\small\begin{eqnarray} \nonumber g_0(1\ot x_j,1)&=&\alpha(x_j)1+k1;\\ \nonumber g_1(1\ot r,1\ot x_j)&=&\left(\sum_{i=1}^n1\ot u_i\alpha(x_i)-1\ot k x_j-\sum_{i=1}^n1\ot y_{ji}\alpha(x_i),\ \alpha(x_j)1\right);\\ \nonumber g_2(1\ot r)&=&(1\ot k r,\ \sum_{i=1}^n1\ot u_i\alpha (x_i)), \end{eqnarray}} for all $j=1,\dots,n$. Since $\delta(r)=0$, it follows that $\displaystyle\sum_{i=1}^n\delta(u_i) \ot x_i+\sum_{i=1}^n\sum_{j=1}^n u_i\ot y_{ij}\ot x_j=0$. Applying the linear map $1\ot 1\ot \alpha$ to this identity, one obtains: \begin{equation}\label{eqq4} \sum_{i=1}^n\delta(u_i)\alpha(x_i)+\sum_{i=1}^n\sum_{j=1}^nu_i\ot y_{ij}\alpha(x_j)=0. \end{equation} Using Equation (\ref{eqq4}) and the following computations: {\small\begin{eqnarray} \nonumber g_1\circ\partial^{-3}(1\ot r) &=& g_1(z\ot r,-r) \\ \nonumber &=&\left( \sum_{i=1}^nz\ot u_i\alpha(x_i)+\sum_{i=1}^nu_i\ot k x_i+\sum_{i=1}^n\sum_{j=1}^nu_i\ot y_{ij}\alpha(x_j),-\sum_{i=1}^n u_i\alpha(x_i)\right), \end{eqnarray} \begin{eqnarray} \nonumber \partial^{-2}\circ g_2(1\ot r) &=& \partial^{-2} \left(1\ot k r,1\ot\sum_{i=1}^nu_i\alpha(x_i)\right) \\ \nonumber &=& \left(k r+\sum_{i=1}^nz\ot u_i\alpha(x_i)-\sum_{i=1}^n\delta(u_i)\alpha(x_i),-\sum_{i=1}^n u_i\alpha(x_i)\right), \end{eqnarray}} we obtain the identity $g_1\circ\partial^{-3}(1\ot r)=\partial^{-2}\circ g_2(1\ot r)$. Hence $g_1\circ\partial^{-3}=\partial^{-2}\circ g_2$.
Similar computations show that the second square in the diagram commutes. The commutativity of the triangle in the diagram is obvious. Thus, we have $h\circ g_2(1\ot r)=h(1\ot k r, \sum_{i=1}^n1\ot u_i\alpha(x_i))=k+\sum_{i=1}^n\beta(u_i)\alpha(x_i)$. By the definition of the Yoneda product, we have $\Theta*\Phi=(r^*,\beta)*(\alpha,k)=kr^*+\beta\alpha$, where $\beta\alpha$ is the product in $A^!$. Similarly, we can show that $\Phi*\Theta=kr^*-\alpha\beta$. Now $A$ is Calabi-Yau, hence $A^!$ is graded symmetric; that is, $\alpha\beta=-\beta\alpha$ for all $\alpha,\beta\in A^!_1$. It follows that $\Phi*\Theta=\Theta*\Phi$. Therefore, $B=A[z;\bar{\delta}]$ is Calabi-Yau. \qed The computation in the proof of Proposition \ref{prop1} has given us the formulas of the Yoneda product of $E(B)$. \begin{cor} \label{cor1} As vector spaces, $E(B)\cong A^!\op A^!(-1)$. The Yoneda product of $E(B)$ is given as follows: for $\alpha,\beta\in A^!_1$ and $k,k'\in\k$, $$(r^*,\beta)*(\alpha,k)=(\alpha,k)*(r^*,\beta)=kr^*+\beta\alpha,$$ and $$(\beta,k')*(\alpha,k)=(\beta\alpha,k'\alpha-k\beta-(\beta\ot\alpha)\circ\delta),$$ where $r^*$ is the basis element of $A^!_2$ such that $r^*(r)=1$. \end{cor} \proof The first identity is proved in the proof of Proposition \ref{prop1}. We keep the same notations as in the proof of Proposition \ref{prop1}. The element $(\beta,k')$ induces a $B$-module morphism $g':B\ot V\op B\longrightarrow {}_B\k$ by $g'(1\ot x,1)=\beta(x)+k'$ for all $x\in V$, and $(\beta\alpha,k'\alpha-k\beta-(\beta\ot\alpha)\circ\delta)$ induces a $B$-module morphism $f:B\ot \k r\op B\ot V\longrightarrow {}_B\k$ by $f(1\ot r,1\ot x_j)=\sum_{i=1}^n\beta(u_i)\alpha(x_i)+k'\alpha(x_j)-k\beta(x_j)-\sum_{i=1}^n\beta(y_{ji})\alpha(x_i)$ for all $j=1,\dots,n$. By the definition of the Yoneda product, $(\beta,k')*(\alpha,k)$ is represented by $g'\circ g_1$.
Now $g'\circ g_1(1\ot r,1\ot x_j)=\sum_{i=1}^n\beta(u_i)\alpha(x_i)-k\beta(x_j)-\sum_{i=1}^n\beta(y_{ji})\alpha(x_i)+k'\alpha(x_j)=f(1\ot r,1\ot x_j)$ for all $j=1,\dots,n$. Therefore the second identity follows. \qed Let $\epsilon:A^!\to A^!$ be the automorphism of $A^!$ defined by $\epsilon(\alpha)=-\alpha$ for $\alpha\in A^!_1$ and $\epsilon(\beta)=\beta$ for all $\beta\in A^!_2$. Let ${}_\epsilon A^!$ be the graded $A^!$-bimodule whose right $A^!$-action is the regular action, and whose left $A^!$-action is twisted by the automorphism $\epsilon$; that is, for all $\gamma,\theta\in A^!$, the left $A^!$-action $\gamma\cdot \theta=\epsilon(\gamma)\theta$. Let $I={}_\epsilon A^!(-1)$, and let $E(A^!;I)$ be the trivial extension of $A^!$ by the $A^!$-bimodule $I$. By Corollary \ref{cor1}, $E(B)$ is isomorphic to $E(A^!;I)$. \begin{cor} \label{cor2} The Yoneda algebra $E(B)$ is isomorphic to the trivial extension of $A^!$ by the $A^!$-bimodule $I$. \end{cor} \begin{exa} \label{exa1} {\rm Consider the Calabi-Yau algebra studied by Smith in \cite{Sm}. Let $\k \langle x_1,\dots,x_6\rangle$ be the free algebra generated by six elements. Let $A=\k \langle x_1,\dots,x_6\rangle/\langle r\rangle$, where $$r=(x_1,\dots,x_6)\left( \begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0& 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) \left( \begin{array}{c} x_1 \\ \vdots \\ x_6 \\ \end{array} \right).$$ Define a derivation $\delta:\k \langle x_1,\dots,x_6\rangle\to \k \langle x_1,\dots,x_6\rangle$ by $$\begin{array}{ccccccc} \delta(x_1)&=&x_4x_2-x_2x_4+x_3x_5-x_5x_3&&\delta(x_2)&=&x_1x_4-x_4x_1+x_3x_6-x_6x_3\\ \delta(x_3)&=&x_5x_1-x_1x_5+x_6x_2-x_2x_6&&\delta(x_4)&=&x_2x_1-x_1x_2+x_5x_6-x_6x_5\\ \delta(x_5)&=&x_1x_3-x_3x_1+x_6x_4-x_4x_6&&\delta(x_6)&=&x_2x_3-x_3x_2+x_4x_5-x_5x_4. 
\end{array} $$ Then $\delta(r)=0$, and $B=A[z;\bar{\delta}]$ is 3-CY \cite{Sm}.} \end{exa} Keep the assumption that $\delta(r)=0$. Let $\widehat{V}=V\op \k z$. Then $B=A[z;\bar{\delta}]$ is a quotient algebra of $T(\widehat{V})$. Since $B$ is 3-CY, $B$ is defined by a superpotential \cite[Theorem 3.1]{Bo}. Let $\{x_1^*,\dots,x_n^*\}$ be the basis of $V^*$ dual to $\{x_1,\dots,x_n\}$. Recall that a {\it superpotential} is an element $w\in \widehat{V}\ot \widehat{V}\ot \widehat{V}$ such that $[\alpha w]=[w\alpha]$ for all $\alpha\in(\widehat{V})^*$, where $[\alpha w]=(\alpha\ot 1\ot 1)(w)$ and $[w\alpha]=(1\ot 1\ot \alpha)(w)$. Given a superpotential $w$, the {\it partial derivative} of $w$ by $x_i$ is defined by $\partial_{x_i}(w)=[x^*_iw]$ (cf. \cite{BSW}). By \cite[Theorem 3.1]{Bo}, there is a superpotential $w\in \widehat{V}\ot \widehat{V}\ot \widehat{V}$ such that $B\cong T(\widehat{V})/\langle \partial_{x_i}(w):i=0,\dots,n\rangle$, where $x_0=z$. We next show that the superpotential $w$ may be written out explicitly. For $i=1,\dots,n$, let $r_i=z\ot x_i-x_i\ot z-\delta(x_i)\in\widehat{V}\ot \widehat{V}$. Clearly $r,r_1,\dots,r_n$ are linearly independent in $\widehat{V}\ot \widehat{V}$, and moreover $B\cong T(\widehat{V})/\langle r,r_1,\dots,r_n\rangle$. Before we construct the general form of the superpotentials, let us look at the following example. \begin{exa}\label{exa2} {\rm Let $\k \langle x,y\rangle$ be the free algebra generated by two elements. Let $\delta:\k \langle x,y\rangle\to \k \langle x,y\rangle$ be a derivation defined by $\delta(x)=bx^2+cy^2$ and $\delta(y)=ax^2-bxy-byx$, where $(a,b,c)\in\k^3$. Let $r=xy-yx$. Then it is easy to see that $\delta(r)=0$. Therefore, $\delta$ induces a derivation $\bar{\delta}$ on $A=\k[x,y]$. Now $B=A[z;\bar{\delta}]$ is 3-CY.
A straightforward verification shows that $w=yxz+zyx+xzy-xyz-zxy-yzx-ax^3+cy^3+bxyx+bx^2y+byx^2$ is a superpotential, and $B\cong \k \langle x,y,z \rangle/\langle\partial_x(w),\partial_{y}(w),\partial_{z}(w)\rangle$. Explicitly, the generating relations are $r_1=zy-yz-ax^2+byx+bxy$, $r_2=xz-zx+cy^2+bx^2$ and $r_3=yx-xy$.} \end{exa} \begin{prop} \label{prop3} Assume $\delta(r)=0$. Let $Q=\left( \begin{array}{cc} -1 & 0 \\ 0 & M \\ \end{array} \right)$, and let $w=(z,x_1,\dots,x_n)Q\left( \begin{array}{c} r \\ r_1 \\ \vdots \\ r_n \end{array} \right)$, where $M$ is an invertible $n\times n$ anti-symmetric matrix, and $r_i=z\ot x_i-x_i\ot z-\delta(x_i)\in \widehat{V}\ot \widehat{V}$ for all $i=1,\dots,n$. Then {\rm (i)} $w$ is a superpotential; {\rm (ii)} $A[z;\bar{\delta}]\cong T(\widehat{V})/\langle \partial_{x_i}(w):i=0,\dots,n\rangle$, where we set $x_0=z$. \end{prop} \proof Let $\{m_{ij}|i,j=1,\dots,n\}$ be the entries of $M$. Then $r=\sum_{i,j=1}^nm_{ij}x_i\ot x_j$. Since $\delta(r)=0$, we have $\displaystyle\sum_{i,j=1}^nm_{ij}\delta(x_i)\ot x_j=-\sum_{i,j=1}^nm_{ij}x_i\ot \delta(x_j)$. Let us compute the element $w$. $$\begin{array}{ccl} w&=&-z\ot r+(x_1,\dots,x_n)M\left( \begin{array}{c} r_1\\ \vdots \\ r_n \end{array} \right)\\ &=&\displaystyle-\sum_{i,j=1}^nm_{ij}z\ot x_i\ot x_j+\sum_{i,j=1}^nm_{ij}x_i\ot r_j\\ &=&\displaystyle-\sum_{i,j=1}^nm_{ij}z\ot x_i\ot x_j+\sum_{i,j=1}^nm_{ij}x_i\ot z\ot x_j\\ &&\displaystyle-\sum_{i,j=1}^nm_{ij}x_i\ot x_j\ot z -\sum_{i,j=1}^nm_{ij}x_i\ot \delta(x_j)\\ &=&-\displaystyle\sum_{i,j=1}^nm_{ij}(z\ot x_i-x_i\ot z)\ot x_j-\sum_{i,j=1}^nm_{ij}x_i\ot x_j\ot z +\sum_{i,j=1}^nm_{ij}\delta(x_i)\ot x_j\\ &=&-\displaystyle\sum_{i,j=1}^nm_{ij}(z\ot x_i-x_i\ot z-\delta(x_i))\ot x_j-\sum_{i,j=1}^nm_{ij}x_i\ot x_j\ot z\\ &=&-r\ot z+(r_1,\dots,r_n)M^t\left( \begin{array}{c} x_1\\ \vdots \\ x_n \end{array} \right). \end{array} $$ Now it is clear that $[x_i^*w]=[wx_i^*]$, and $\partial_{x_i}(w)=r_i$ for all $i=0,1,\dots,n$, where $r_0=r$.
\qed \section{Coherence of $A[z;\bar{\delta}]$} Notation and conventions are as in the previous section. By \cite[Theorem 0.2]{Z}, $A$ is Noetherian if and only if $\dim (V)=2$. Since $B$ is an Ore extension of $A$ in the variable $z$, $B/Bz$ is isomorphic to $A$ as a graded left $B$-module. Since $A$ is not left Noetherian when $\dim (V)>2$, neither is $B$. Similarly, $B$ is not right Noetherian when $\dim(V)>2$. Summarizing the foregoing argument, we obtain the following property. \begin{lem} $B=A[z;\bar{\delta}]$ is Noetherian if and only if $\dim (V)=2$. \end{lem} Piontkovski showed in \cite[Theorem 4.1]{Pi} that any connected graded algebra with a single quadratic relation is graded coherent. Hence $A$ is a graded coherent algebra. So, it is natural to ask whether $B$ is a graded coherent algebra. The answer is affirmative. However, the proof of this property is not trivial because an Ore extension of a coherent ring need not be coherent. In fact, there is a commutative coherent ring $R$ such that the polynomial extension $R[z]$ is not coherent \cite{So}. Some other results about the coherence of polynomial rings may be found in \cite{GV}. Let us recall the definition of a graded coherent algebra. A graded algebra $D$ is called a {\it graded left coherent} algebra if one of the following equivalent conditions is satisfied: \begin{itemize} \item [(i)] every finitely generated graded left ideal of $D$ is finitely presented; that is, if $I$ is a finitely generated graded left ideal of $D$ then there is a finitely generated graded free $D$-module $F$ and a surjective morphism $g:F\to I$ of graded modules such that $\ker g$ is also a finitely generated $D$-module; \item[(ii)] every finitely generated graded submodule of a finitely presented graded module is finitely presented; \item [(iii)] the category of all finitely presented graded left $D$-modules is an abelian category. \end{itemize} Similarly we can define a {\it graded right coherent} algebra.
If a graded algebra is both graded left and right coherent, then it is called a {\it graded coherent} algebra. Let $W=\op_{i\ge0}W_i$ be a graded vector space with $\dim (W_i)<\infty$ for all $i$. Recall that the Hilbert series of $W$ is defined to be the power series $H_W(t)=\sum_{i\ge0}\dim(W_i) t^i$. \begin{lem}\label{lem2} Let $V$ be a vector space of dimension $n\ge4$ with basis $\{x_1,\dots,x_n\}$, and let \begin{equation}\label{eq4} M=\left( \begin{array}{cccccc} 0 & \cdots & 0 & 0 & \cdots & 1 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & 1 & \cdots & 0 \\ 0 & \cdots & -1 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ -1 & \cdots & 0 &0 & \cdots & 0 \\ \end{array} \right)\end{equation} be the invertible $n\times n$ anti-symmetric matrix whose entries on the anti-diagonal are $1$ or $-1$ and whose other entries are $0$. Let $r=(x_1,\dots,x_n)M(x_1,\dots,x_n)^t$, and $A=T(V)/\langle r\rangle$. Let $\delta$ be a derivation on $T(V)$ of degree 1. We write $\delta(x_i)=\sum_{s,t=1}^nk^i_{st}x_s\ot x_t$ for all $i=1,\dots,n$. Assume that $k^i_{nn}=0$ for all $i=1,\dots,n$ and $\delta(r)=0$. Let $\bar{\delta}$ be the derivation on $A$ induced by $\delta$. Write $B=A[z;\bar{\delta}]$. Then the following hold. \begin{itemize} \item [(i)] Let $I$ be the ideal of $B$ generated by the elements $x_1,\dots,x_{n-1}$. Then $B/I\cong \k[X,Z]$, where $\k[X,Z]$ is the commutative polynomial algebra in variables $X$ and $Z$; \item [(ii)] Let $L=\k x_1\op\cdots\op\k x_{n-1}$ and $L'=\k x_2\op\cdots\op\k x_{n-1}$. Then, as left $B$-modules, $I\cong B\ot(L\op L'x_n\op L'x_n^2\op\cdots)$, where $L'x_n^k$ ($k\ge1$) is the vector space spanned by the elements $x_2x_n^k,\dots,x_{n-1}x_n^k$. \end{itemize} \end{lem} \begin{con}\label{con} {\rm We call an $n\times n$ ($n\ge2$) invertible anti-symmetric matrix of the form (\ref{eq4}) a {\it standard anti-symmetric matrix}.
If $M$ is an invertible anti-symmetric matrix, there is an invertible matrix $P$ such that $P^tMP$ is standard.} \end{con} \noindent{\it Proof of Lemma \ref{lem2}}. (i) By assumption, $\delta(x_n)=\sum_{s,t=1}^nk^n_{st}x_s\ot x_t$ and $k^n_{nn}=0$. Therefore $\bar{\delta}(x_n)\in I$ and $B/I$ is a commutative algebra. There is an algebra morphism $g:\k[X,Z]\longrightarrow B/I$ defined by $g(X)=x_n$ and $g(Z)=z$. Next, we want to construct an algebra morphism from $B/I$ to $\k[X,Z]$. As before, write $\widehat{V}=V\oplus\k z$. Firstly, we define $f:T(\widehat{V})\longrightarrow \k[X,Z]$ by letting $f(x_i)=0$ for all $i=1,\dots,n-1$, $f(x_n)=X$ and $f(z)=Z$. Denote by $\langle x_1,\dots,x_{n-1}\rangle$ and by $\langle z\ot x_n-x_n\ot z\rangle$ the ideals of $T(\widehat{V})$ respectively generated by $x_1,\dots,x_{n-1}$ and by $z\ot x_n-x_n\ot z$. Obviously, $\langle x_1,\dots,x_{n-1}\rangle+\langle z\ot x_n-x_n\ot z\rangle\subseteq\ker f$. Recall that $B$ is a Koszul algebra and $B=T(\widehat{V})/J$ where $J=\langle r,z\ot x_1-x_1\ot z-\delta(x_1),\dots,z\ot x_n-x_n\ot z-\delta(x_n)\rangle$. Since $\delta(x_i)=\sum_{s,t=1}^nk^i_{st}x_s\ot x_t$ such that $k^i_{nn}=0$ for all $i=1,\dots,n$, it follows that $\delta(x_i)\in\langle x_1,\dots,x_{n-1}\rangle$ for all $i=1,\dots,n$. Hence $r,z\ot x_1-x_1\ot z-\delta(x_1),\dots,z\ot x_{n-1}-x_{n-1}\ot z-\delta(x_{n-1})\in \langle x_1,\dots,x_{n-1}\rangle\subseteq\ker f$. Now $z\ot x_n-x_n\ot z-\delta(x_n)\in \langle z\ot x_n-x_n\ot z\rangle+\langle x_1,\dots,x_{n-1}\rangle\subseteq\ker f$. Hence $J\subseteq\ker f$. Therefore, $f$ induces an algebra morphism $\overline{f}:B\longrightarrow \k[X,Z]$. Obviously, $\ker\overline{f}\supseteq I$. Hence $\overline{f}$ in turn induces an algebra morphism $\hat{f}:B/I\longrightarrow \k[X,Z]$. Now it is easy to see that $\hat{f}\circ g=id=g\circ\hat{f}$. The statement (i) follows. (ii) Here we make use of the technique from \cite[Proposition 7.3]{Sm}.
Let $\mu:B\ot B\to B$ be the multiplication of $B$. Then the restriction of $\mu$ defines a left $B$-module morphism (also denoted by $\mu$): $$\mu:B\ot(L\op L'x_n\op L'x_n^2\op\cdots)\longrightarrow I.$$ We claim that $\mu$ is surjective. In fact, if we can show that the image $I'=\text{im}(\mu)$ is also an ideal of $B$, then $I=I'$. So, it suffices to show that $I'x_n\subseteq I'$ and $I'z\subseteq I'$. Following the generating relation of $A$, we have $x_1x_n=x_nx_1+(x_2x_{n-1}-x_{n-1}x_2)+\cdots+(x_{\frac{n}{2}}x_{\frac{n}{2}+1}-x_{\frac{n}{2}+1}x_{\frac{n}{2}})\in BL\subseteq I'$. Therefore $I'x_n\subseteq I'$. In particular, $\bar{\delta}(x_i)\in I'$ for all $i=1,\dots,n$. On the other hand, since $x_iz=zx_i-\bar{\delta}(x_i)$, it follows that $x_iz\in I'$ for all $i=1,\dots,n-1$. For $2\leq i\leq n-1$, we have $x_ix_nz=x_i(zx_n-\bar{\delta}(x_n))=x_izx_n-x_i\bar{\delta}(x_n)\in I'x_n+x_iI'\subseteq I'$. Now assume $x_ix_n^jz\in I'$ for all $j<k$ and $2\leq i\leq n-1$. Then $$x_ix_n^kz=x_ix_n^{k-1}(zx_n-\bar{\delta}(x_n))=(x_ix_n^{k-1}z)x_n-x_ix_n^{k-1}\bar{\delta}(x_n)\in I'x_n+x_ix_n^{k-1}I'\subseteq I'.$$ Hence $I'z\subseteq I'$. The claim follows. To show that $\mu$ is injective, we only need to compare the Hilbert series of $I$ and that of $F:=B\ot(L\op L'x_n\op L'x_n^2\op\cdots)$. Write $W=L\op L'x_n\op L'x_n^2\op\cdots$. Clearly $H_F(t)=H_B(t)\cdot H_W(t)$. We have $$H_W(t)=(n-1)t+(n-2)t^2+(n-2)t^3+\cdots=((n-1)t-t^2)(1-t)^{-1}.$$ The exact sequence $0\to I\to B\to B/I\to0$ implies $H_I(t)=H_B(t)-H_{B/I}(t)$. Since $B$ is Koszul of global dimension 3, it follows that $H_B(t)=\left(1-(n+1)t+(n+1)t^2-t^3\right)^{-1}$ by \cite[Theorem 5.9]{Sm1} and the isomorphism (\ref{eq3}) of the previous section. By (i), $H_{B/I}(t)=(1-t)^{-2}$. Hence $$\begin{array}{ccl} H_{I}(t)&=&\left(1-(n+1)t+(n+1)t^2-t^3\right)^{-1}-(1-t)^{-2}\\ &=& \left(1-(n+1)t+(n+1)t^2-t^3\right)^{-1}\cdot\left((n-1)t-t^2\right)(1-t)^{-1}\\ &=&H_B(t)\cdot H_W(t)\\ &=&H_F(t). 
\end{array} $$ Therefore $\mu$ is injective. So, (ii) follows. \qed {\it\textbf{ Proof of the statement (iii) of Theorem \ref{thm}}}. If $n=2$, then $A=\k[x_1,x_2]$. We obtain that $B=A[z;\bar{\delta}]$ is Noetherian, and hence coherent. Now assume $n\ge4$. We only prove the statement when $j=n$ in the assumption, that is, $k^i_{nn}=0$ for all $i=1,\dots,n$. When $j\neq n$, the statement can be proved similarly. By Lemma \ref{lem2}, there is an exact sequence $0\longrightarrow I\longrightarrow B\longrightarrow B/I\longrightarrow0$ such that $B/I$ is a polynomial algebra in two variables and $I$ is a free graded left $B$-module. By \cite[Proposition 3.2]{Pi}, $B$ is graded right coherent. Note that the left version of Lemma \ref{lem2}(ii) holds too. Hence $B$ is also graded left coherent. \qed As a special case of the statement (iii) of Theorem \ref{thm}, we have the following result, which can be viewed as a noncommutative version of \cite[Theorem 4.3]{GV}. \begin{prop}\label{prop4} Let $A$ be a connected graded 2-CY algebra. Then $A[z]$ is a graded coherent algebra. \end{prop} \proof By \cite[Theorem 0.1]{Z} (also, cf. \cite[Proposition 3.4]{Ber}), $A$ is defined by an invertible anti-symmetric matrix $M$, that is, $A=T(V)/\langle r\rangle$ with $r=(x_1,\dots,x_n)M(x_1,\dots,x_n)^t$. For an invertible anti-symmetric matrix $M$, there is an invertible matrix $P$ such that $P^tMP$ is a standard invertible anti-symmetric matrix. Then the algebras defined by $M$ and $P^tMP$ respectively are isomorphic to each other. Hence we may assume that the anti-symmetric matrix $M$ itself is standard. Now by (iii) of Theorem \ref{thm}, we see that $A[z]$ is graded coherent. \qed Now assume that $B=A[z;\bar{\delta}]$ is graded coherent. We may form a noncommutative projective space from $B$. 
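The Hilbert series identity at the heart of the proof of Lemma \ref{lem2}(ii) can be checked order by order as a formal power series. The following sketch (illustrative only, not part of the proof) expands both sides numerically for a few even values of $n$ (the standard anti-symmetric form requires $n$ even), using the recurrence for the coefficients of $H_B(t)=\left(1-(n+1)t+(n+1)t^2-t^3\right)^{-1}$; the truncation order is arbitrary:

```python
# Order-by-order check of the identity from the proof of Lemma 2(ii):
#   H_B(t) - (1-t)^{-2}  =  H_B(t) * H_W(t),
# where H_B(t) = (1 - (n+1)t + (n+1)t^2 - t^3)^{-1} and
#       H_W(t) = (n-1)t + (n-2)t^2 + (n-2)t^3 + ...
ORDER = 12  # truncation order (illustrative)

def hilbert_B(n, order):
    # Coefficients b_k of H_B, from b_k = (n+1)b_{k-1} - (n+1)b_{k-2} + b_{k-3}.
    b = [1]
    for k in range(1, order + 1):
        bk = (n + 1) * b[k - 1]
        if k >= 2:
            bk -= (n + 1) * b[k - 2]
        if k >= 3:
            bk += b[k - 3]
        b.append(bk)
    return b

def check(n, order=ORDER):
    b = hilbert_B(n, order)
    w = [0, n - 1] + [n - 2] * (order - 1)   # coefficients of H_W
    for k in range(order + 1):
        lhs = b[k] - (k + 1)                 # coefficient of H_B - (1-t)^{-2}
        rhs = sum(b[k - j] * w[j] for j in range(k + 1))  # of H_B * H_W
        assert lhs == rhs, (n, k, lhs, rhs)

for n in (4, 6, 8):
    check(n)
```

Agreement of all coefficients up to the truncation order is of course only evidence; the equality of rational functions in the proof is what establishes the identity.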
Following \cite{Po}, we denote by $\text{coh}B$ the category of all finitely presented graded left $B$-modules, and by $\text{fdim}B$ the category of all finite dimensional graded left $B$-modules. Since $B$ is graded coherent, fdim$B$ is a Serre subcategory of coh$B$. Hence the quotient category $$\text{cohproj}B:=\text{coh}B/\text{fdim}B$$ is also an abelian category. Since $B$ is Koszul and 3-CY, $B$ is Artin-Schelter regular with Gorenstein parameter $-3$. Hence the Beilinson algebra of $B$ (for the terminology, see \cite[Definition 4.7]{MM}) is $$\nabla B=\left( \begin{array}{ccc} \k & B_1 & B_2 \\ 0 & \k & B_1 \\ 0 & 0 & \k \\ \end{array} \right). $$ Let $\text{mod}\nabla B$ be the category of finite dimensional left $\nabla B$-modules. Then by \cite[Theorem 4.14]{MM}, we have the following corollary. \begin{cor} If the conditions of Theorem \ref{thm} are satisfied, then there is an equivalence of triangulated categories: $$D^b(\text{\rm cohproj}B)\cong D^b(\text{\rm mod}\nabla B),$$ where $D^b(-)$ is the bounded derived category of the corresponding abelian category. \end{cor} \vspace{5mm} \subsection*{Acknowledgement} The authors are very grateful to the referee for his/her careful reading of the manuscript, numerous comments and suggestions, which have greatly improved the paper. In particular, Remark \ref{rrem} was suggested to the authors by the referee. The work is supported by an FWO-grant and grants from NSFC (No. 11171067) and NSF of Zhejiang Province (No. LY12A01013). This work is also in part supported by SRF for ROCS, SEM. \vspace{5mm}
\section[Introduction]{Introduction} \label{introduction} Since J. Fourier's introduction of his \emph{M\'emoire sur la propagation de la chaleur dans les corps solides} in 1807, the theory of what is now called Fourier Analysis has shown its importance, not only from the purely theoretical point of view but also from the viewpoint of its applications. Many important mathematicians contributed greatly to the theory, and in the mid 1930s it received a great impulse with the development of the theory of topological groups by A. Weil, L. Pontryagin, J. von Neumann and D. van Dantzig, among others. From then on, the theory has been called \emph{Abstract Harmonic Analysis}, and it has been developed on many particular topological groups. The long and complete description made by E. Hewitt and K.A. Ross in \cite{HR1,HR2} shows the importance of the subject. In this article, Fourier analysis on one dimensional solenoids is developed by following the path traced by H. Bohr in his celebrated theory of \emph{Almost periodic functions} (see \cite{Bohr}). By character theory, one dimensional solenoids are the dual groups of additive subgroups of the group of rational numbers $\mathbb{Q}$ with the discrete topology, and hence they are homomorphic images of the so called \textsf{universal one dimensional solenoid} $\mathsf{S}$, which is the dual group of $\mathbb{Q}$. The group $\mathsf{S}$ is a compact abelian topological group and also has the structure of a one dimensional foliated space. By considering the free and properly discontinuous action of $\mathbb{Z}$ on $\mathbb{R}\times \widehat{\mathbb{Z}}$ given by \[ \gamma\cdot (x,t) := (x+\gamma,t-\gamma) \quad (\gamma\in \mathbb{Z}), \] the group $\mathsf{S}$ appears as the orbit space of this action, i.e. $\mathsf{S}=\mathbb{R}\times_{\mathbb{Z}} \widehat{\mathbb{Z}}$. Here, $\mathbb{Z}$ acts on $\mathbb{R}$ by covering transformations and on $\widehat{\mathbb{Z}}$ by right translations.
The group $\displaystyle{\widehat{\mathbb{Z}} := \varprojlim_n \mathbb{Z}/n\mathbb{Z}}$ is the profinite completion of $\mathbb{Z}$; it is a compact abelian topological group, perfect and totally disconnected, and hence homeomorphic to the Cantor set. Being the profinite completion of $\mathbb{Z}$, $\widehat{\mathbb{Z}}$ admits a canonical inclusion of $\mathbb{Z}$ whose image is dense. As a topological group, $\mathsf{S}$ is also isomorphic to the projective limit \[ \mathsf{S} \cong \varprojlim_n \{ \mathbb{S}^{1}, p_{nm} \} \] with canonical projection $\mathsf{S}\longrightarrow \mathbb{S}^{1}$, determined by projection onto the first coordinate, which gives a locally trivial $\widehat{\mathbb{Z}}$ -- bundle structure $\widehat{\mathbb{Z}}\hookrightarrow \mathsf{S} \longrightarrow \mathbb{S}^{1}$. In the classical theory over the circle, it is very well known that there exists a one to one correspondence between the set \[ \{ \mathbb{Z} - \text{invariant continuous functions } \mathbb{R}\longrightarrow \mathbb{C} \} \] and \[ \{ \text{Continuous functions } \mathbb{S}^{1}\longrightarrow \mathbb{C} \}, \] via the universal covering projection $\pi:\mathbb{R}\longrightarrow \mathbb{S}^{1}$. By using the `covering' projection $\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathsf{S}$, an analogous one to one correspondence is established between \[ \{ \mathbb{Z} - \text{invariant continuous functions } \mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathbb{C} \} \] and \[ \{ \text{Continuous functions } \mathsf{S}\longrightarrow \mathbb{C} \}. \] This is the starting point for the development of the theory in this article. Once this context is settled, the inspiration is Bohr's treatment.
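The projective limit description of $\widehat{\mathbb{Z}}$ can be made concrete: an element is a coherent system of residues, one for each modulus, compatible under the reduction maps. A minimal sketch, with an illustrative finite set of moduli:

```python
# An element of the profinite completion  lim Z/nZ  is a compatible system
# of residues (a_n): whenever n divides m, reduction mod n sends a_m to a_n.
MODULI = [1, 2, 3, 4, 6, 8, 12, 24]  # illustrative finite stage of the limit

def is_compatible(a):
    """a maps each modulus n in MODULI to a residue a[n] of Z/nZ."""
    return all(a[m] % n == a[n] % n
               for n in MODULI for m in MODULI if m % n == 0)

def embed(k):
    """The canonical (dense) image of the integer k."""
    return {n: k % n for n in MODULI}

assert is_compatible(embed(10))
assert is_compatible(embed(-1))           # the residues (n-1 mod n) cohere
bad = embed(10)
bad[8] = (bad[8] + 1) % 8
assert not is_compatible(bad)             # perturbing one residue breaks coherence
```

An honest element of $\widehat{\mathbb{Z}}$ is of course a compatible system over all moduli at once; the finite list above only illustrates the coherence condition.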
The notion of the mean value is introduced for this class of functions (see Section \ref{solenoidal_BohrFourier-series}): for any $\mathbb{Z}$ -- invariant function $\Phi:\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathbb{C}$, \[ \mathcal{M}(\Phi) := \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) dx dt, \] whenever this limit exists. Using this mean value, the Bohr -- Fourier transform of any such function $\Phi$ is: \[ \widehat{\Phi}(\chi_{\lambda,\varrho}) := \mathcal{M} \big( \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} \big), \] where $\chi_{\lambda,\varrho}=\chi_\lambda\cdot \chi_\varrho$ is any character of the product $\mathbb{R}\times \widehat{\mathbb{Z}}$, identified with $\mathbb{R}\times \mathbb{Q}/\mathbb{Z}$ by duality. Now, to any transversal variable $t\in \widehat{\mathbb{Z}}$ there corresponds a limit periodic function $\Phi_t:\mathbb{R}\longrightarrow \mathbb{C}$. For $t=0$, the function $\Phi_0$ is the corresponding function on the base leaf $\mathcal{L}_{0}=\mathbb{R}\times \{0\}$. If $\widehat{\Phi}_0(\chi_\lambda)=M(\Phi_0 \cdot\overline{\chi_\lambda})$ is the classical Bohr -- Fourier coefficient of $\Phi_0$, the continuous variation $\widehat{\mathbb{Z}}\longrightarrow \mathrm{C_{ap}}(\mathbb{R})$ implies: \smallskip \textbf{Theorem \ref{Bohr-Fourier_coefficients}:} $$ \widehat{\Phi}(\chi_{\lambda,\varrho}) = \widehat{\Phi}_0(\chi_\lambda) \cdot \int_{\widehat{\mathbb{Z}}} A_{\lambda}(t)\cdot \overline{\chi_{\varrho}(t)} dt, $$ where $A_{\lambda} : \widehat{\mathbb{Z}}\longrightarrow \mathbb{T}$ defines a character on $\widehat{\mathbb{Z}}$ determined by the transversal variation. \smallskip The fact that characters form an orthonormal system provides the relation \[ \widehat{\Phi}(\chi_{\lambda,\varrho}) = \widehat{\Phi}_0(\chi_\lambda), \] when $\varrho=\lambda \mod \mathbb{Z}$, and $\widehat{\Phi}(\chi_{\lambda,\varrho})=0$ otherwise.
\smallskip The Bohr -- Fourier series of $\Phi$ can now be written as: $$ \overline{\Phi}(x,t) = \sum_{(\lambda,\varrho)\in \Omega_\Phi} \widehat{\Phi}(\lambda,\varrho) \chi_{\lambda,\varrho}(x,t), $$ where $\Omega_\Phi$ is a countable subset of the character group $\mathbb{R}\times \mathbb{Q}/\mathbb{Z}$. Denote by $\mathrm{C}(\mathsf{S})$ the set consisting of all continuous $\mathbb{Z}$ -- invariant functions $\Phi:\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathbb{C}$. Parseval's identity is established as: \smallskip \noindent \textbf{Theorem \ref{solenoidal_parseval-identity}:} For any $\Phi\in \mathrm{C}(\mathsf{S})$, \[ \sum_{(\lambda,\varrho)\in \Omega_\Phi} \abs{\widehat{\Phi}(\lambda,\varrho)}^2 = \mathcal{M}(\abs{\Phi}^2). \] \smallskip The Approximation theorem follows: \smallskip \noindent \textbf{Theorem \ref{solenoidal_approximation-theorem}:} Any $\Phi\in \mathrm{C}(\mathsf{S})$ can be approximated arbitrarily well by finite partial sums of its Fourier series. \smallskip It is important to point out that several attempts have been made to describe the theory of Fourier series on solenoids, most notably the work \cite{HRit}, where the authors used characters on solenoids composed with trigonometric polynomials on the circle. As has already been said, the approach in this article is to use the theory of $\mathbb{Z}$ -- invariant functions on the covering $\mathbb{R}\times \widehat{\mathbb{Z}}$ which descend to the appropriate functions on $\mathsf{S}$, and to follow the line of ideas of Bohr's treatment in order to introduce the notion of mean value and to describe the frequencies forming the Fourier series.
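Parseval's identity can be illustrated numerically in the simplest classical case, a trigonometric polynomial on the base leaf. For $\varphi(x)=2e^{ix}+3e^{i\sqrt{2}x}$ the identity predicts $M(|\varphi|^2)=2^2+3^2=13$; the sketch below (the function, the frequencies and the tolerance are illustrative) approximates the mean value by a Riemann sum over a long window:

```python
import math

# For phi(x) = 2 e^{ix} + 3 e^{i sqrt(2) x} one has
#   |phi(x)|^2 = 4 + 9 + 12 cos((1 - sqrt(2)) x) = 13 + 12 cos((1 - sqrt(2)) x),
# and Parseval predicts that the mean value of |phi|^2 tends to 13.
A = 1.0 - math.sqrt(2.0)

def mean_abs2(T, steps=200_000):
    """Midpoint Riemann sum for (1/T) * integral_0^T |phi(x)|^2 dx."""
    h = T / steps
    s = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        s += 13.0 + 12.0 * math.cos(A * x)
    return s * h / T

# The oscillating cross term averages out at rate O(1/T):
assert abs(mean_abs2(2000.0) - 13.0) < 0.05
```

The cross term contributes $12\sin(AT)/(AT)$ to the finite-window mean, which explains the $O(1/T)$ decay observed above.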
Further development of the theory presented here goes in two different directions: on the one hand, the full generalization to the $L^p$ theory should be possible, and, on the other, the extension of these ideas to the so called Sullivan solenoidal manifolds looks promising, since, according to Sullivan (see \cite{Sul} and \cite{Ver}), any compact one dimensional orientable solenoidal manifold is the suspension of a homeomorphism of the Cantor set. The universal solenoid itself is precisely one instance of this construction. All these themes are the subject of recent investigations. Section \ref{universal_solenoid} presents the relevant definitions concerning solenoids, characters and measures. Section \ref{Bohr_theory} gives a brief account of the facts from the classical Bohr theory most relevant to this article. Section \ref{solenoidal_theory} is dedicated to the description of the basic ingredients of the solenoidal theory, and in Section \ref{solenoidal_BohrFourier-series} the Bohr -- Fourier series is described and compared with the classical Fourier series on $\mathsf{S}$, reminiscent of the series on an arbitrary compact abelian group. \section[The universal solenoid]{The universal solenoid} \label{universal_solenoid} This section introduces the basic objects relevant to this article: the universal solenoid exhibited as an orbit space, as a projective limit and also as a quotient group, the basic definitions and examples of duality theory, and the required elements of measure theory. A complete account of most of these concepts and properties is documented in the treatise \cite{HR1}. More specific descriptions of most of the objects presented here can be consulted in the recent article \cite{CLV}.
\subsection[The universal solenoid]{The universal solenoid} \label{solenoid} For every integer $n\geq 1$, covering space theory provides the unbranched covering space of degree $n$, $p_n:\mathbb{S}^{1} \longrightarrow \mathbb{S}^{1}$, given by $z\longmapsto z^n$. If $n,m\in \mathbb{Z}^+$ and $n$ divides $m$, then there exists a unique covering map $p_{nm}:\mathbb{S}^{1}\longrightarrow \mathbb{S}^{1}$ such that $p_n \circ p_{nm} = p_m$. This determines a projective system of covering spaces $\{\mathbb{S}^{1},p_{nm}\}$ whose projective limit is the \textsf{universal one dimensional solenoid} \[ \mathsf{S} := \varprojlim_n \{ \mathbb{S}^{1}, p_{nm} \} \] with canonical projection $\mathsf{S}\longrightarrow \mathbb{S}^{1}$, determined by projection onto the first coordinate, which produces a locally trivial $\widehat{\mathbb{Z}}$ -- bundle structure $\widehat{\mathbb{Z}}\hookrightarrow \mathsf{S} \longrightarrow \mathbb{S}^{1}$, where $\widehat{\mathbb{Z}} := \displaystyle{\varprojlim_n \mathbb{Z}/n\mathbb{Z}}$ is the profinite completion of $\mathbb{Z}$, which is a compact, perfect and totally disconnected abelian topological group homeomorphic to the Cantor set. The image of the inclusion $\mathbb{Z}\hookrightarrow \widehat{\mathbb{Z}}$ is dense. By considering the free and properly discontinuous action of $\mathbb{Z}$ on $\mathbb{R}\times \widehat{\mathbb{Z}}$ given by \[ \gamma\cdot (x,t) := (x+\gamma,t-\gamma) \quad (\gamma\in \mathbb{Z}), \] $\mathsf{S}$ is identified with the orbit space $\mathbb{R}\times_{\mathbb{Z}} \widehat{\mathbb{Z}} \equiv \mathbb{R}\times \widehat{\mathbb{Z}} / \mathbb{Z}$. Here, $\mathbb{Z}$ acts on $\mathbb{R}$ by covering transformations and on $\widehat{\mathbb{Z}}$ by translations. The path-connected component of the identity element $0\in \mathsf{S}$ is called the \textsf{base leaf} and it is denoted by $\mathcal{L}_{0}$.
Clearly, $\mathcal{L}_{0}$ is the image of $\mathbb{R}\times \{0\}$ under the canonical projection $\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathsf{S}$ and it is homeomorphic to $\mathbb{R}$. \smallskip In summary, $\mathsf{S}$ is a compact, connected, abelian topological group and also a one dimensional lamination where each ``leaf'' is a simply connected one dimensional manifold, homeomorphic to the universal covering space $\mathbb{R}$ of $\mathbb{S}^{1}$, and a typical ``transversal'' is isomorphic to the Cantor group $\widehat{\mathbb{Z}}$. \subsection[Characters]{Characters} \label{characters} The \textsf{group of characters} or \textsf{dual group} of $\mathbb{R}$ is the group $\mathrm{Hom}_{\mathrm{cont}}(\mathbb{R},\mathbb{S}^{1})$ of continuous homomorphisms, denoted by $\Char(\mathbb{R})$; the character group of any abelian topological group is defined similarly. By the classical theory, the group of characters of a compact abelian group is a discrete abelian group and, vice versa, the character group of a discrete abelian group is a compact abelian group. Also, the character group of a product of two abelian groups is the product of the character groups. The following examples and facts are well known and relevant for this work: \begin{enumerate}[(a)] \item $\Char(\mathbb{R}) \cong \mathbb{R}$, \item $\Char(\widehat{\mathbb{Z}})\cong \mathbb{Q}/\mathbb{Z}$, where $\mathbb{Q}/\mathbb{Z}$ is identified with the group of roots of unity, \item $\mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}})\cong \mathrm{Char}(\mathbb{R})\times \mathrm{Char}(\widehat{\mathbb{Z}})\cong \mathbb{R}\times \mathbb{Q}/\mathbb{Z}$. \end{enumerate} The statement (c) says that any character of $\mathbb{R}\times \widehat{\mathbb{Z}}$ has the form \[ \chi_{\lambda,\varrho} = \chi_\lambda \cdot \chi_\varrho, \] for some $\lambda\in \mathbb{R}$ and $\varrho\in \mathbb{Q}/\mathbb{Z}$.
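Concretely, under the identification in (b), the character of $\widehat{\mathbb{Z}}$ attached to $\varrho=p/q\in\mathbb{Q}/\mathbb{Z}$ is determined by its values $t\mapsto e^{2\pi i pt/q}$ on the dense subgroup $\mathbb{Z}$; since these values depend only on $t \bmod q$, the character factors through the finite quotient $\mathbb{Z}/q\mathbb{Z}$ and extends continuously to all of $\widehat{\mathbb{Z}}$. A quick numerical sketch (the values of $p$ and $q$ are illustrative):

```python
import cmath

# The character of the profinite completion attached to rho = p/q in Q/Z,
# given on the dense subgroup Z by t |-> e^{2 pi i p t / q}.
p, q = 3, 8  # illustrative choice of rho = 3/8

def chi(t):
    return cmath.exp(2j * cmath.pi * p * t / q)

# chi is a homomorphism on Z ...
for a in range(q):
    for b in range(q):
        assert abs(chi(a + b) - chi(a) * chi(b)) < 1e-12

# ... and depends on t only through t mod q, so it factors through Z/qZ:
for t in range(3 * q):
    assert abs(chi(t) - chi(t % q)) < 1e-12
```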
An important character group for this development is described in the following remark. \begin{remark} $\mathrm{Char}(\mathsf{S})\cong \mathbb{Q}$. \end{remark} Classically this isomorphism is deduced from the fact that there is an isomorphism of topological groups between the solenoid $\mathsf{S}$ and the so called \emph{Ad\`ele Class Group} of the rational numbers $\mathbb A_\mathbb{Q}/\mathbb{Q}$, where $\mathbb A_\mathbb{Q}$ is the ad\`ele group of $\mathbb{Q}$ and $\mathbb{Q}\hookrightarrow \mathbb A_\mathbb{Q}$ is a discrete cocompact subgroup. However, for the purposes of this article it is convenient to calculate the character group of $\mathsf{S}$ in an alternative way, as follows. The solenoid $\mathsf{S}$ can also be realized as the quotient group $\mathbb{R}\times \widehat{\mathbb{Z}}/\mathbb{Z}$, where $\mathbb{Z}$ is immersed diagonally as a discrete subgroup by \[ \mathbb{Z}\hookrightarrow \mathbb{R}\times \widehat{\mathbb{Z}}, \qquad n\longmapsto (-n,n). \] In order to compute the dual group of a quotient group, duality theory establishes an isomorphism \[ \mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}}/\mathbb{Z}) \cong \mathrm{Ann}(\mathbb{Z}), \] where $\mathrm{Ann}(\mathbb{Z})$ is the annihilator subgroup of $\mathbb{Z}$ in $\mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}})$. It happens that the characters of $\mathbb{R}\times \widehat{\mathbb{Z}}$ which annihilate the generator $(-1,1)$ of $\mathbb{Z}$ in the product are precisely the characters determined by elements in $\mathbb{Z}\times \mathbb{Q}/\mathbb{Z}$. By duality theory, the surjective homomorphism $\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathsf{S}$ induces a monomorphism between the dual groups $\mathbb{Q}\longrightarrow \mathbb{R}\times \mathbb{Q}/\mathbb{Z}$ whose image is in one to one correspondence with $\mathbb{Z}\times \mathbb{Q}/\mathbb{Z}$.
This identification is very important in this work: \begin{remark} \label{frequencies_up-down} There is a one to one correspondence between the discrete abelian groups $\mathbb{Q}$ and $\mathbb{Z}\times \mathbb{Q}/\mathbb{Z}$. \end{remark} \subsection[Haar measure]{Haar measure} \label{haar_measure} Denote by $dx$ the usual Haar measure on $\mathbb{R}$ and by $dt$ the Haar measure on $\widehat{\mathbb{Z}}$ normalized in such a way that \[ \int_{\widehat{\mathbb{Z}}} dt = 1. \] So, the Haar measure on $\mathbb{R}\times \widehat{\mathbb{Z}}$ is the product measure $dx\times dt=dxdt$ and it induces the normalized Haar measure $d\mu$ on $\mathsf{S}$, i.e. \[ \int_{\mathsf{S}}\phi d\mu= \int_{\widehat{\mathbb{Z}}}\int_{0}^{1} \Phi dxdt, \] for any lifting $\Phi:\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathbb{C}$ of $\phi:\mathsf{S}\longrightarrow \mathbb{C}$, integrated over the fundamental domain $[0,1)\times \widehat{\mathbb{Z}}$. \section[Classical Bohr's theory]{Classical Bohr's theory} \label{Bohr_theory} This section is a brief r\'esum\'e of Bohr's theory of almost periodic functions. We follow closely Bohr's seminal work \cite{Bohr}. \smallskip Let $\mathrm{C}(\mathbb{R})$ be the space of bounded complex valued continuous functions equipped with the uniform norm. Define the action by translations of $\mathbb{R}$ on $\mathrm{C}(\mathbb{R})$ by \[ \mathbb{R}\times \mathrm{C}(\mathbb{R})\to \mathrm{C}(\mathbb{R}),\quad (t,\varphi)\longmapsto \varphi^t = \varphi\circ R_t, \] where $\varphi^t:\mathbb{R}\longrightarrow \mathbb{C}$ is given by $\varphi^t(x) := \varphi\circ R_t(x) = \varphi(x+t)$. Denote by $\mathfrak O_{\mathbb{R}}(\varphi)$ the orbit of $\varphi$ under this action, and by $\mathrm{Hull}(\varphi)$ the closure of $\mathfrak O_{\mathbb{R}}(\varphi)$ in $\mathrm{C}(\mathbb{R})$.
Given $\varphi\in \mathrm{C}(\mathbb{R})$ and $\epsilon>0$, the number $\tau=\tau(\epsilon)\in \mathbb{R}$ is called a \textsf{translation number} of $\varphi$ (corresponding to $\epsilon$) whenever \[ \norm{\varphi^{\tau(\epsilon)} - \varphi}_{\infty}\leq\epsilon. \] \begin{definition} $\varphi\in \mathrm{C}(\mathbb{R})$ is called \textsf{almost periodic} if given $\epsilon>0$, there exists a relatively dense set of translation numbers of $\varphi$ corresponding to $\epsilon$, i.e. for all $\epsilon$, there exists a length $L=L(\epsilon)$ such that each interval of length $L$ contains at least one translation number corresponding to $\epsilon$. \end{definition} Denote by $\mathrm{C_{ap}}(\mathbb{R})$ the complex vector space consisting of all almost periodic functions. \begin{example} Any periodic function is obviously an almost periodic function. \end{example} Some important properties of almost periodic functions are summarized in the following: \begin{properties} The following properties are satisfied: \begin{enumerate} \item If $\varphi\in \mathrm{C_{ap}}(\mathbb{R})$, then $\varphi$ is a uniformly continuous function. \item The sum of almost periodic functions is an almost periodic function. \item The uniform limit of almost periodic functions is an almost periodic function. \end{enumerate} \end{properties} Since the sum of arbitrary periodic functions is an almost periodic function, in particular the trigonometric polynomials are almost periodic functions. An interesting observation is that every function $\varphi$ which can be approximated uniformly by trigonometric polynomials is an almost periodic function (see Theorem \ref{Bohr_theorem}). The main interest in this article is the subspace of all limit periodic functions $\mathrm{C_{lp}}(\mathbb{R})\subset \mathrm{C_{ap}}(\mathbb{R})$, which consists of all functions $\varphi$ such that $\varphi$ is the uniform limit of periodic functions.
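The definition of a translation number can be tested concretely on the almost periodic function $\varphi(x)=e^{ix}+e^{i\sqrt{2}x}$: since $|\varphi(x+\tau)-\varphi(x)|\leq |e^{i\tau}-1|+|e^{i\sqrt{2}\tau}-1|$ uniformly in $x$, any $\tau$ making both exponentials close to $1$ is a translation number. The sketch below is illustrative; the particular $\tau$ comes from the continued fraction approximation $\sqrt{2}\approx 239/169$, and the value of $\epsilon$ is arbitrary:

```python
import cmath
import math

# For phi(x) = e^{ix} + e^{i sqrt(2) x} we have, uniformly in x,
#   |phi(x + tau) - phi(x)| <= |e^{i tau} - 1| + |e^{i sqrt(2) tau} - 1|,
# so tau is a translation number for epsilon whenever this bound is <= epsilon.
def translation_defect(tau):
    return (abs(cmath.exp(1j * tau) - 1)
            + abs(cmath.exp(1j * math.sqrt(2) * tau) - 1))

# Since 169 * sqrt(2) = 239.002..., the choice tau = 2*pi*169 makes both
# exponentials nearly 1 simultaneously:
tau = 2 * math.pi * 169
assert translation_defect(tau) < 0.02   # tau is a translation number for eps = 0.02
```

Relative density of such $\tau$ comes from the theory of simultaneous Diophantine approximation, which is exactly the classical route to almost periodicity of such sums.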
\begin{definition} For every almost periodic function there exists the \textsf{mean value} \[ M(\varphi) = \lim_{T\to \infty} \frac{1}{T} \int_{0}^{T} \varphi(x) dx. \] \end{definition} It is clear that if $\varphi\in \mathrm{C_{ap}}(\mathbb{R})$ and $t\in\mathbb{R}$, then $\varphi^t \in \mathrm{C_{ap}}(\mathbb{R})$, and therefore there exists $M(\varphi^t)$. \begin{theorem} \label{Mean_properties} $M : \mathrm{C_{ap}}(\mathbb{R}) \longrightarrow \mathbb{C}$ is a continuous linear functional which is invariant under right translations. That is, \begin{enumerate} \item $M(\varphi+\psi) = M(\varphi) + M(\psi)$, for any $\varphi,\psi\in \mathrm{C_{ap}}(\mathbb{R})$. \item $M(\varphi^t) = M(\varphi)$, for any $\varphi\in \mathrm{C_{ap}}(\mathbb{R})$ and $t\in \mathbb{R}$. \item If $\varphi$ is the uniform limit of a sequence $(\varphi_n)_{n\in \mathbb{N}}$, then \[ M(\varphi)=\lim_{n\to \infty} M(\varphi_n). \] \end{enumerate} \end{theorem} Now recall the concept of the Fourier series of an almost periodic function. The \textsf{normalized orthogonal system} $\{ e^{i\lambda x} \}_{\lambda\in\mathbb{R}}$ satisfies \[ M(e^{i\lambda_1 x} e^{-i\lambda_2 x}) = \delta(\lambda_1,\lambda_2), \] where $\delta(\lambda_1,\lambda_2)=1$ if $\lambda_1=\lambda_2$ and $0$ otherwise. The elements of this system are called \textsf{basic elements} and this set can be identified with $\mathrm{Char}(\mathbb{R})$. \smallskip Consider $\varphi\in \mathrm{C_{ap}}(\mathbb{R})$ and $\lambda\in \mathbb{R}$. The function $\varphi(x) e^{-i\lambda x}$ is the product of an almost periodic function and a purely periodic function, so it is an almost periodic function and its mean value $$ M(\varphi(x) e^{-i\lambda x}) = \lim_{T\to \infty} \frac{1}{T} \int_{0}^{T} \varphi(x) e^{-i\lambda x} dx $$ exists. The next theorem is of fundamental importance for the theory.
\begin{theorem} \label{Bohr_frequencies} The function $a(\lambda) := M(\varphi(x) e^{-i\lambda x})$ is zero for all values of $\lambda$ with the exception of at most an enumerable set of numbers $\lambda$. \end{theorem} This theorem allows one to carry the theory of Bohr -- Fourier series into the theory of almost periodic functions, in the sense that it is possible to associate to an almost periodic function $\varphi$ its unique Bohr -- Fourier series \[ \sum_{n\in \mathbb{N}} a(\lambda_n) e^{i\lambda_n x}. \] \begin{remark} \label{Parseval} Parseval's identity holds for any almost periodic function $\varphi$: \[ \sum_{n\in \mathbb{N}} \abs{a(\lambda_n)}^2 = M(\abs{\varphi}^2). \] \end{remark} The main result of the theory goes as follows. \begin{theorem} \textsf{(Bohr)} \label{Bohr_theorem} Every almost periodic function can be uniformly approximated by finite sums $s_N(x)=\sum_1^N a_n e^{i\lambda_n x}$. The exponents in the approximating sums $s_N(x)$ can be chosen to be precisely the Fourier exponents $\lambda_n$ of the function $\varphi$. \end{theorem} \section[Solenoidal Bohr -- Fourier theory]{Solenoidal Bohr -- Fourier theory} \label{solenoidal_theory} This section presents the main basic elements required for the development of the theory of the solenoidal Bohr -- Fourier series. First, the relevant spaces of continuous functions, both on $\mathsf{S}$ and on $\mathbb{R}\times \widehat{\mathbb{Z}}$, will be analyzed, together with the continuous variation of the functions with respect to the transversal variable. This allows one to define the appropriate notion of mean value and to describe its transversal variation. \subsection[Continuous invariant functions on $\mathsf{S}$]{Continuous invariant functions on $\mathsf{S}$} \label{continuous_functions} Denote by $\mathrm{C_{lp}}(\mathbb{R})$ the space of limit periodic functions $\mathbb{R}\longrightarrow \mathbb{C}$ in the sense of Bohr.
Let $\mathrm{C}(\mathsf{S})$ be the space of continuous functions $\phi:\mathsf{S}\longrightarrow\mathbb{C}$. It is well known that there is a one to one correspondence between $\mathrm{C}(\mathsf{S})$ and the space $\mathrm{C}_\mathbb{Z}(\mathbb{R}\times \widehat{\mathbb{Z}})$ of continuous functions $\Phi:\mathbb{R}\times \widehat{\mathbb{Z}}\longrightarrow \mathbb{C}$ such that $\Phi$ is invariant under the action of $\mathbb{Z}$, i.e. \[ \Phi(\gamma\cdot (x,t)) = \Phi(x+\gamma,t-\gamma) = \Phi(x,t), \qquad ((x,t)\in \mathbb{R}\times \widehat{\mathbb{Z}}, \gamma\in \mathbb{Z}). \] In order to develop the Bohr -- Fourier theory for $\mathrm{C}(\mathsf{S})$ we will work on the space $\mathrm{C}_\mathbb{Z}(\mathbb{R}\times \widehat{\mathbb{Z}})$, which, after projection, provides the Bohr -- Fourier theory of $\mathrm{C}(\mathsf{S})$ described at the end of Section \ref{solenoidal_BohrFourier-series}. From now on, we will denote both spaces indistinctly by $\mathrm{C}(\mathsf{S})$. For every $t\in \widehat{\mathbb{Z}}$, the function $\Phi_t:\mathbb{R}\longrightarrow \mathbb{C}$ defined by \[ \Phi_t(x) = \Phi(x,t) \] is continuous. The invariance condition can be written as \begin{equation} \label{invariant_condition} \Phi_{t-\gamma} (x + \gamma) = \Phi_t(x), \qquad ((x,t)\in \mathbb{R}\times \widehat{\mathbb{Z}}, \gamma \in \mathbb{Z}). \end{equation} \begin{remark} According to \cite{Lop}, Theorem 2.4, for every $t\in \widehat{\mathbb{Z}}$, the function $\Phi_t:\mathbb{R}\longrightarrow \mathbb{C}$ is limit periodic. \end{remark} A nice consequence of this remark is the following: \begin{proposition} For each $\Phi\in \mathrm{C}(\mathsf{S})$, the map \[ \widehat{\mathbb{Z}}\longrightarrow \mathrm{C_{lp}}(\mathbb{R}), \qquad t\longmapsto \Phi_t \] is uniformly continuous.
That is, if $(t_n)_{n\geq 1}$ is a sequence of points in $\mathbb{Z}\subset \widehat{\mathbb{Z}}$ which converges to $t\in \widehat{\mathbb{Z}}$ in the profinite topology, then the sequence $(\Phi_{t_n})_{n\geq 1}$ in $\mathrm{C_{lp}}(\mathbb{R})$ converges to $\Phi_t\in \mathrm{C_{lp}}(\mathbb{R})$ in the uniform topology of $\mathrm{C_{lp}}(\mathbb{R})$. \end{proposition} This Proposition implies \[ \mathrm{C}(\mathsf{S}) \cong \mathrm{C}(\widehat{\mathbb{Z}},\mathrm{C_{lp}}(\mathbb{R})). \] \begin{remark} \label{translation-variation} As a matter of notation, it is important to notice that the function $\Phi_t$ \textbf{does not coincide} exactly with the usual right translation on $\mathrm{C}(\mathbb{R})$, which is denoted by $\Phi^t$ with $t\in \mathbb{R}$. This notation emphasizes the dependence of $\Phi$ on the transversal variable. However, when $t=0$ in $\widehat{\mathbb{Z}}$, the invariance condition restricted to $\mathcal{L}_0$ implies that \begin{align*} \Phi_0^s(x) &= \Phi_0\circ R_s(x) \\ &= \Phi_0(x+s) \\ &= \Phi(x+s,0) \\ &= \Phi(x+s+(-s),0-(-s)) \\ &= \Phi(x,s)\\ &= \Phi_s(x), \end{align*} for any $s\in \mathbb{Z}$ and $x\in \mathcal{L}_0$. Furthermore, for any $t,s\in \mathbb{Z}\subset \widehat{\mathbb{Z}}$ and $x\in \mathcal{L}_t$, the relation above reads: \begin{align*} \Phi_t^s(x) &= \Phi_t\circ R_s(x) \\ &= \Phi_t(x+s) \\ &= \Phi(x+s,t) \\ &= \Phi(x+s+(-s),t-(-s)) \\ &= \Phi(x,t+s)\\ &= \Phi_{t+s}(x). \end{align*} \end{remark} \subsection[The mean value]{The mean value} \label{solenoidal_mean-value} For any function $\Phi\in \mathrm{C}(\mathsf{S})$, the \textsf{mean value} of $\Phi$ is given by \[ \mathcal{M}(\Phi) = \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) dx dt, \] whenever this limit exists. \begin{theorem} \label{MeanComparison} $\mathcal{M}(\Phi) = M(\Phi_t)$, for any choice of $t\in \widehat{\mathbb{Z}}$ fixed.
\end{theorem} \begin{proof} If $t\in \mathbb{Z}$ and $s\in\mathbb{Z}$, Remark \ref{translation-variation} implies that $\Phi_{t+s}=\Phi_t\circ R_s$. By translation invariance of the mean value (see Theorem \ref{Mean_properties}(2)), it follows that \[ M(\Phi_{t+s}) = M(\Phi_t\circ R_s) = M(\Phi_t) \quad (t\in \mathbb{Z}). \] Now, by what has been said before, if $(t_n)_{n\geq 1}$ is a sequence of points in $\mathbb{Z}\subset \widehat{\mathbb{Z}}$ which converges to $t\in \widehat{\mathbb{Z}}$ in the profinite topology, then the sequence $(\Phi_{t_n})_{n\geq 1}$ converges to $\Phi_t$. By the properties of the mean value (see Theorem \ref{Mean_properties}(3)), $\displaystyle{M(\Phi_t)=\lim_{n\to \infty} M(\Phi_{t_n})}$. This means that the function $t\mapsto M(\Phi_t)$ is constant on $\widehat{\mathbb{Z}}$. Therefore \begin{align*} \mathcal{M}(\Phi) &= \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) dx dt \\ &= \int_{\widehat{\mathbb{Z}}} M(\Phi_t) dt \\ &= M(\Phi_t). \end{align*} \end{proof} \begin{theorem} The invariant mean $\mathcal{M} : \mathrm{C}(\mathsf{S})\longrightarrow \mathbb{C}$ is a continuous linear functional which is invariant under right translations. That is, \begin{enumerate} \item $\mathcal{M}(\Phi+\Psi) = \mathcal{M}(\Phi) + \mathcal{M}(\Psi)$, for any $\Phi,\Psi\in \mathrm{C}(\mathsf{S})$. \item $\mathcal{M}(\Phi\circ R_s) = \mathcal{M}(\Phi)$, for any $\Phi\in \mathrm{C}(\mathsf{S})$ and $s\in \mathbb{R}$. \item If $\Phi$ is the uniform limit of a sequence $(\Phi_n)_{n\in \mathbb{N}}$, then \[ \mathcal{M}(\Phi) = \lim_{n\to \infty} \mathcal{M}(\Phi_n).
\] \end{enumerate} \end{theorem} \subsection[Bohr -- Fourier transform]{Bohr -- Fourier transform} \label{solenoidal_Bohr-Fourier_transform} Given any function $\Phi\in \mathrm{C}(\mathsf{S})$ and any character $\chi_{\lambda,\varrho} \in \mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}})$, the Fourier transform of $\Phi$ in the \textsf{mean sense} is given by $$ \widehat{\Phi}(\chi_{\lambda,\varrho}) = \mathcal{M} \big( \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} \big) = \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} dx dt. $$ In fact, \begin{theorem} \label{transform_expression} If $\Phi$ is any function in $\mathrm{C}(\mathsf{S})$ and $\chi_{\lambda,\varrho}$ is any element in $\mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}})$, then \[ \widehat{\Phi}(\chi_{\lambda,\varrho}) = \int_{\widehat{\mathbb{Z}}} M(\Phi_t e^{-i\lambda x})\overline{\chi_{\varrho}(t)} dt. \] \end{theorem} \begin{proof} \begin{align*} \widehat{\Phi}(\chi_{\lambda,\varrho}) &= \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} dx dt \\ &= \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) \overline{\chi_{\lambda}(x)} \overline{\chi_{\varrho}(t)} dx dt \\ &= \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) e^{-i\lambda x} \overline{\chi_{\varrho}(t)} dx dt \\ &= \int_{\widehat{\mathbb{Z}}} \lim_{T\to \infty}\frac{1}{T}\int_0^T \Phi_t(x) e^{-i\lambda x} dx \cdot\overline{\chi_{\varrho}(t)} dt \\ &= \int_{\widehat{\mathbb{Z}}} M(\Phi_t e^{-i\lambda x})\overline{\chi_{\varrho}(t)} dt. \end{align*} \end{proof} Since $\Phi_t$ is limit periodic for all $t\in \widehat{\mathbb{Z}}$, $\mathrm{Hull}(\Phi_t)$ is a quotient group of the solenoid (see \cite{Lop}, Theorem 2.2). By duality, $\mathrm{Char}(\mathrm{Hull}(\Phi_t))$ is a subgroup of the group $\mathrm{Char}(\mathsf{S})\cong \mathbb{Q}$. 
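The $\widehat{\mathbb{Z}}$ -- integral appearing in Theorem \ref{transform_expression} is directly computable: since $\chi_\varrho$ with $\varrho=p/q$ factors through $\mathbb{Z}/q\mathbb{Z}$, the normalized Haar integral $\int_{\widehat{\mathbb{Z}}}\chi_\varrho(t)\,dt$ reduces to the average of $e^{2\pi i pt/q}$ over the residues $t \bmod q$, which is $1$ for $\varrho=0$ and $0$ otherwise. A quick numerical check (the values of $p$ and $q$ are illustrative):

```python
import cmath

# Haar integral of the character attached to rho = p/q over the profinite
# completion, computed as the finite average over Z/qZ:
#   (1/q) * sum_{t mod q} e^{2 pi i p t / q}  =  1 if rho = 0 in Q/Z, else 0.
def haar_average(p, q):
    return sum(cmath.exp(2j * cmath.pi * p * t / q) for t in range(q)) / q

assert abs(haar_average(0, 8) - 1) < 1e-12   # rho = 0
assert abs(haar_average(3, 8)) < 1e-12       # rho = 3/8, a sum of roots of unity
```

This is the mechanism behind the orthogonality used to single out the frequencies $\varrho=\lambda \bmod \mathbb{Z}$ in the introduction.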
\begin{remark} \label{rational_frequencies} The function $M(\Phi_{t} e^{-i\lambda x})$ is zero for all values of $\lambda$ with the exception of at most a countable subset $\Omega_{\Phi_t}$ of $\mathbb{Q}$. \end{remark} Theorem \ref{transform_expression} shows that one must study how $M(\Phi_t e^{-i\lambda x})$ varies with the transversal variable $t$. The following discussion deals with this issue. \smallskip First, fix $t=0$, the identity element in $\widehat{\mathbb{Z}}$. The function $\Phi_0\in \mathrm{C_{lp}}(\mathbb{R})$ is a limit periodic function defined on the base leaf $\mathcal{L}_{0} = \mathbb{R}\times \{0\}\subset \mathbb{R}\times \widehat{\mathbb{Z}}$. According to Bohr's theory: \begin{enumerate}[$\bullet$] \item The frequency module of $\Phi_0$ is a countable subset of rational numbers $\Omega_{\Phi_0}\subset \mathbb{R}$, \item the invariant mean $M(\Phi_0)$ defined as \[ M(\Phi_0) = \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x) dx \] exists, and \item the $\lambda^{th}$ Fourier coefficient of $\Phi_0$, $$ \widehat{\Phi}_0(\lambda) = M(\Phi_0(x)e^{-i\lambda x}) = \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x) e^{-i\lambda x} dx $$ is well defined. \end{enumerate} \begin{remark} Using the fact that $\mathbb{R}$ is self-dual, sometimes we will also write $$ \widehat{\Phi}_0(\chi_\lambda) = M(\Phi_0(x) \overline{\chi_\lambda(x)}) = \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x) \overline{\chi_\lambda(x)} dx, $$ emphasizing the use of the character $\chi_\lambda \in \Char(\mathbb{R})$ associated with $\lambda$. \end{remark} The Fourier series of $\Phi_0$ is written as \[ \Phi_0(x) = \sum_{\lambda \in \Omega_{\Phi_0}} \widehat{\Phi}_0(\lambda) \chi_\lambda(x) = \sum_{\lambda \in \Omega_{\Phi_0}} \widehat{\Phi}_0(\lambda) e^{i\lambda x}.
\] \begin{theorem} \label{transversal-variation} If $\Phi\in\mathrm{C}(\mathsf{S})$ then \[ M(\Phi_t e^{-i\lambda x}) = A_{\lambda}(t) M(\Phi_0(x)e^{-i\lambda x}), \] where $A_{\lambda} : \widehat{\mathbb{Z}}\longrightarrow \mathbb{T}$ is a continuous function. \end{theorem} \begin{proof} The first part of Remark \ref{translation-variation} implies that for any $t\in \mathbb{Z}\subset \widehat{\mathbb{Z}}$, the identity $\Phi_0(x+t) = \Phi_t(x)$ holds for every $x\in \mathcal{L}_{0}$. The mean value of $\Phi_0$ is invariant under these translations, i.e. $M(\Phi_0(x+t)) = M(\Phi_0(x))$. Hence, \begin{align*} M(\Phi_t(x)e^{-i\lambda x}) &= \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_t(x) e^{-i\lambda x} dx \\ &= \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x+t) e^{-i\lambda x} dx \\ &= e^{-i\lambda t} \lim_{T\to \infty}\frac{1}{T} \int_0^T \Phi_0(x) e^{-i\lambda x}dx\\ &= e^{-i\lambda t} M(\Phi_0(x)e^{-i\lambda x}). \end{align*} This means that for any $t\in \mathbb{Z}\subset \widehat{\mathbb{Z}}$, the mean value $M(\Phi_t(x)e^{-i\lambda x})$ is transformed into $e^{-i\lambda t} M(\Phi_0(x)e^{-i\lambda x})$. This calculation, together with the continuous variation of $t\mapsto \Phi_t$, can be used to determine the mean value $M(\Phi_t(x)e^{-i\lambda x})$ for any $t\in \widehat{\mathbb{Z}}$. Choose a sequence $(t_{n})_{n\in \mathbb{N}}$ in $\mathbb{Z}\subset \widehat{\mathbb{Z}}$ such that $t_n\to t$.
Note that since $\Phi_{t_n}\longrightarrow \Phi_t$, \begin{align*} M(\Phi_t(x)e^{-i\lambda x}) &= \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_t(x) e^{-i\lambda x} dx \\ &=\lim_{n\to \infty}\lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_{t_n}(x) e^{-i\lambda x} dx \\ &=\lim_{n\to \infty}\lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x+t_n) e^{-i\lambda x} dx \\ &= \lim_{n\to \infty} e^{-i\lambda t_n} \lim_{T\to \infty} \frac{1}{T} \int_0^T \Phi_0(x) e^{-i\lambda x}dx\\ &= \lim_{n\to \infty} e^{-i\lambda t_{n}} M(\Phi_0(x)e^{-i\lambda x})\\ &= A_{\lambda}(t) M(\Phi_0(x)e^{-i\lambda x}), \end{align*} where \[ A_{\lambda}(t) := \lim_{n\to \infty} e^{-i\lambda t_n} \] exists and does not depend on the choice of the sequence $(t_n)_{n\in \mathbb{N}}$. This determines a continuous function $A_{\lambda} : \widehat{\mathbb{Z}}\longrightarrow \mathbb{T}$. \end{proof} \begin{remark} \label{A_character} Note that $A_{\lambda}$ can be written as $$ A_{\lambda}(t) = \frac{M(\Phi_{t}(x)e^{-i\lambda x})}{M(\Phi_0(x)e^{-i\lambda x})} = \frac{\widehat{\Phi}_t(\chi_{\lambda}) }{ \widehat{\Phi}_0(\chi_\lambda)}. $$ The function $A_{\lambda} : \widehat{\mathbb{Z}}\longrightarrow \mathbb{T}$ defines a character on $\widehat{\mathbb{Z}}$.
\end{remark} The results proved before, Theorem \ref{transform_expression}, Theorem \ref{transversal-variation} and Remark \ref{A_character}, can be used to compute the Fourier transform of any function $\Phi\in \mathrm{C}(\mathsf{S})$ in the following way: for any character $\chi_{\lambda,\varrho}\in \mathrm{Char}(\mathbb{R}\times \widehat{\mathbb{Z}})$, \begin{align*} \widehat{\Phi}(\chi_{\lambda,\varrho}) &= \mathcal{M} \big( \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} \big) \\ &= \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} dx dt \\ &= \int_{\widehat{\mathbb{Z}}} M(\Phi_t e^{-i\lambda x})\overline{\chi_{\varrho}(t)} dt \\ &= M(\Phi_0 \cdot\overline{\chi_\lambda}) \cdot \int_{\widehat{\mathbb{Z}}} A_{\lambda}(t)\cdot \overline{\chi_{\varrho}(t)} dt. \end{align*} According to Remark \ref{rational_frequencies}, the mean value $M(\Phi_0 \cdot\overline{\chi_\lambda})$ is zero for all values of $\lambda$ with the exception of at most a countable subset of $\mathbb{Q}$. The integral in the last equality is evaluated by integration of characters of $\widehat{\mathbb{Z}}$: \[ \int_{\widehat{\mathbb{Z}}} A_{\lambda}(t)\cdot \overline{\chi_{\varrho}(t)} dt = 1 \] if $A_{\lambda}=\chi_{\varrho}$, and $0$ otherwise. Moreover, $A_{\lambda}$ and $\chi_{\varrho}$ define the same character if and only if $\varrho=\lambda \mod \mathbb{Z}$. So, the final form of the Fourier coefficient of any function $\Phi\in \mathrm{C}(\mathsf{S})$ is given in the following: \begin{theorem} \label{Bohr-Fourier_coefficients} $$ \widehat{\Phi}(\chi_{\lambda,\varrho}) = \widehat{\Phi}_0(\chi_\lambda) \cdot \int_{\widehat{\mathbb{Z}}} A_{\lambda}(t)\cdot \overline{\chi_{\varrho}(t)} dt, $$ where $\widehat{\Phi}_0(\chi_\lambda)=M(\Phi_0 \cdot\overline{\chi_\lambda})$. In particular, $\widehat{\Phi}(\chi_{\lambda,\varrho}) = \widehat{\Phi}_0(\chi_\lambda)$ when $\varrho=\lambda \mod \mathbb{Z}$, and $\widehat{\Phi}(\chi_{\lambda,\varrho})=0$ otherwise.
\end{theorem} \begin{remark} \label{frequencies_decomposition} As a consequence of the above theorem, the Fourier coefficients vanish except at most on a countable set $\Omega_{\Phi}\cong \Omega_{\Phi_0}$ (compare Theorem \ref{Bohr_frequencies}). In fact, any $\lambda\in \Omega_{\Phi}$ can be written as $\lambda = [\lambda] + \varrho$, where $[\lambda]$ is the integer part of $\lambda$ and its fractional part $\varrho$ is such that $\varrho=\lambda \mod \mathbb{Z}$. \end{remark} \section[Solenoidal Bohr -- Fourier series]{Solenoidal Bohr -- Fourier series} \label{solenoidal_BohrFourier-series} This final section describes the Bohr -- Fourier series for a complex -- valued continuous function $\phi$ on the solenoid $\mathsf{S}$ through the associated continuous $\mathbb{Z}$ -- invariant function on $\mathbb{R}\times \widehat{\mathbb{Z}}$. It also presents the solenoidal version of Parseval's identity and the Approximation theorem. Finally, this theory is compared with the classical theory on $\mathsf{S}$ viewed as a compact abelian group. \subsection[Classical Fourier series on $\mathsf{S}$]{Classical Fourier series on $\mathsf{S}$} \label{fourier-series_S} According to the classical harmonic analysis on the compact abelian topological group $\mathsf{S}$, given a function $\phi:\mathsf{S}\longrightarrow\mathbb{C}$, the Fourier series can be defined abstractly as \[ \overline{\phi}(z) = \sum_{q\in\mathbb{Q}} \widehat{\phi}(\chi_q) \chi_q(z), \] where $\chi_q$ is the character of $\mathsf{S}$ associated to $q\in \mathbb{Q}$ and \[ \widehat{\phi}(\chi_q) = \int_{\mathsf{S}} \phi(z) \overline{\chi}_q(z) d\mu. \] In what follows, the corresponding Bohr -- Fourier series is described through the theory developed previously. \subsection[Solenoidal Bohr -- Fourier series]{Solenoidal Bohr -- Fourier series} \label{solenoidal_BF-series} Denote by $\overline{\Phi}$ the Bohr -- Fourier series associated to a given function $\Phi\in \mathrm{C}(\mathsf{S})$.
According to Remark \ref{frequencies_decomposition}, the Bohr -- Fourier series of $\Phi$ is $$ \overline{\Phi}(x,t) = \sum_{(\lambda,\varrho)\in \Omega_\Phi} \widehat{\Phi}(\lambda,\varrho) \chi_{\lambda,\varrho}(x,t). $$ \begin{remark} Since $\chi_{\lambda,\varrho}=\chi_\lambda\cdot \chi_\varrho$ and $\chi_\varrho(0)=1$ for any $\varrho$, Theorem \ref{Bohr-Fourier_coefficients} gives $\widehat{\Phi}(\lambda,\varrho)=\widehat{\Phi}_0(\lambda)$, and Remark \ref{frequencies_decomposition} shows that $\Omega_{\Phi}\cong \Omega_{\Phi_0}$. This allows us to identify the Fourier series introduced here with the usual Bohr -- Fourier series when restricting to the base leaf $\mathcal{L}_{0}$: $$ \overline{\Phi}_0(x) = \sum_{\lambda\in \Omega_{\Phi_0}} \widehat{\Phi}_0(\lambda) \chi_\lambda (x). $$ \end{remark} Following the order of ideas presented by Bohr (see \cite{Bohr}, Sections 70 and 84), the solenoidal versions of the main results of Bohr's theory, such as Parseval's identity, the uniqueness theorem and the approximation theorem, are now discussed. \begin{theorem}[Parseval's identity] \label{solenoidal_parseval-identity} For any $\Phi\in \mathrm{C}(\mathsf{S})$ \[ \sum_{(\lambda,\varrho)\in \Omega_\Phi} \abs{\widehat{\Phi}(\lambda,\varrho)}^2 = \mathcal{M}(\abs{\Phi}^2). \] \end{theorem} \begin{proof} According to Remark \ref{frequencies_decomposition} and Theorem \ref{Bohr-Fourier_coefficients}, \[ \abs{\widehat{\Phi}(\lambda,\varrho)}^2 = \abs{\widehat{\Phi}_0(\lambda)}^2. \] Therefore, considering the Bohr -- Fourier series of $\Phi_0$, the classical Parseval's identity (see Theorem \ref{Parseval}) and Theorem \ref{MeanComparison} imply that \begin{align*} \sum_{(\lambda,\varrho)\in \Omega_\Phi} \abs{\widehat{\Phi}(\lambda,\varrho)}^2 &= \sum_{\lambda\in \Omega_{\Phi_0}} \abs{\widehat{\Phi}_0(\lambda)}^2\\ &= M(\abs{\Phi_0}^2)\\ &= \mathcal{M}(\abs{\Phi}^2).
\end{align*} \end{proof} \begin{theorem}[Uniqueness] Any $\Phi\in \mathrm{C}(\mathsf{S})$ is uniquely determined by its Fourier series. \end{theorem} \begin{proof} Uniqueness follows from Parseval's identity as in Bohr (see \cite{Bohr}, Section 71). By Theorem \ref{MeanComparison}, if $\mathcal{M}(\abs{\Phi}^2)=0$ then $M(\abs{\Phi_0}^2)=0$, which implies that $\Phi_0\equiv 0$. Finally, $\Phi_0\equiv 0$ implies $\Phi_t\equiv 0$ for every $t\in \widehat{\mathbb{Z}}$ and therefore $\Phi\equiv 0$. \end{proof} \begin{remark} As was established by Bohr, these theorems are equivalent and play a fundamental role in the development of the theory. \end{remark} Since any function can be approximated on the base leaf by its Fourier series in the classical sense, and this series coincides with the restriction of the solenoidal one, the argument extends to the solenoid by limits and the approximation theorem follows immediately. \begin{theorem} [Approximation theorem] \label{solenoidal_approximation-theorem} Any $\Phi\in \mathrm{C}(\mathsf{S})$ can be approximated arbitrarily well by finite partial sums of its Fourier series. \end{theorem} \subsection[Invariance of the Bohr -- Fourier series]{Invariance of the Bohr -- Fourier series} \label{invariance_BohrFourier-series} To conclude the analysis of the Bohr -- Fourier series, it should be verified that the theory just developed descends naturally to the universal solenoid. This is done as follows. First recall that the invariance of any $\Phi\in \mathrm{C}(\mathsf{S})$ under the action of $\mathbb{Z}$ reads as $$\Phi_{t-\gamma} (x + \gamma) = \Phi_t(x) \qquad ((x,t)\in \mathbb{R}\times \widehat{\mathbb{Z}},\ \gamma \in \mathbb{Z}).$$ From this expression and the definition of the Fourier coefficient it follows immediately that the Bohr -- Fourier coefficients are invariant under the action of $\mathbb{Z}$.
Hence the corresponding Fourier coefficients of the induced function $\phi$ are given by (see Section \ref{haar_measure}) \begin{align*} \widehat{\Phi}(\chi_{\lambda,\varrho})& = \lim_{T\to \infty} \frac{1}{T} \int_{\widehat{\mathbb{Z}}} \int_0^T \Phi(x,t) \overline{\chi_{\lambda,\varrho}(x,t)} dx dt\\ &=\int_{\mathsf{S}}\phi(z) \overline{\chi_q}(z) d\mu\\ &=\widehat{\phi}(q), \end{align*} where $q=\lambda+\varrho$. Finally, this allows us to `project' the Bohr -- Fourier series of any $\mathbb{Z}$ --invariant function $\Phi:\mathbb{R}\times\widehat{\mathbb{Z}}\longrightarrow \mathbb{C}$ to the classical Bohr -- Fourier series of a function $\phi:\mathsf{S}\to \mathbb{C}$ as: \begin{align*} \overline{\phi}(z) &= \sum_{q\in\mathbb{Q}} \widehat{\phi}(\chi_q) \chi_q(z). \end{align*}
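As an elementary numerical consistency check of Parseval's identity (Theorem \ref{solenoidal_parseval-identity}) on the base leaf, the truncated mean of $\abs{\Phi_0}^2$ approaches the sum of the squared moduli of the coefficients. The sketch below assumes numpy; the trigonometric polynomial with rational frequencies is an arbitrary illustrative choice:

```python
import numpy as np

# Trigonometric polynomial on the base leaf with rational frequencies
coeffs = {0.5: 2.0, 1.0 / 3.0: 3.0, 0.25: -1.0}
f = lambda x: sum(c * np.exp(1j * lam * x) for lam, c in coeffs.items())

T, n = 20000.0, 2_000_000
x = np.linspace(0.0, T, n, endpoint=False)
mean_sq = np.mean(np.abs(f(x)) ** 2)                  # approximates M(|Phi_0|^2)
parseval = sum(abs(c) ** 2 for c in coeffs.values())  # exact value: 4 + 9 + 1 = 14
print(mean_sq, parseval)
```

The cross terms contribute $O(1/T)$ to the truncated mean, so the two quantities agree up to the truncation error.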
\section{Introduction} \label{s1} In the present paper we study a relation between the Lorentzian twistor equation and CR-geometry. Besides the Dirac operator there is a second important conformally covariant differential operator acting on the spinor fields $\Gamma(S)$ of a smooth semi-Riemannian spin manifold $(M,g)$ of dimension $n$ and index $k$, the so-called {\em twistor operator} ${\cal D}$. The twistor operator is defined as the composition of the spinor derivative $\nabla^S$ with the projection $p$ onto the kernel of the Clifford multiplication $\mu$ \[ {\cal D}:\Gamma(S)\stackrel{\nabla^S}{\longrightarrow} \Gamma(T^*M\otimes S) \stackrel{g}{\approx} \Gamma(TM\otimes S)\stackrel{p}{\longrightarrow} \Gamma(\mbox{Ker}\,\mu). \] The elements of the kernel of ${\cal D}$ are called {\em twistor spinors}. A spinor field $\varphi$ is a twistor spinor if and only if it satisfies the {\em twistor equation} \[ \nabla^S_X \varphi + \frac{1}{n} X \cdot D \varphi = 0 \] for each vector field $X$, where $D$ is the Dirac operator. Each twistor spinor $\varphi$ defines a conformal vector field $V_{\varphi}$ on $M$ by \[ g(V_\varphi , X) = i^{k+1}\,\langle X \cdot \varphi , \varphi \rangle \,. \] Twistor spinors were introduced by R.\,Penrose in General Relativity (see \cite{Penrose:67}, \cite{Penrose/Rindler:86}, \cite{Nieuwenhuizen/Warner:84}). They are related to Killing vector fields in semi-Riemannian supergeometry (see \cite{Alekseevski/Cortes/ua:97}). In Riemannian geometry the twistor equation first appeared as an integrability condition for the canonical almost complex structure of the twistor space of an oriented four-dimensional Riemannian manifold (see \cite{Atiyah/Hitchin/ua:78}). In the second half of the 1980s, Lichnerowicz and Friedrich started the systematic investigation of twistor spinors on Riemannian spin manifolds from the viewpoint of conformal differential geometry.
Nowadays one has a lot of structure results and examples for ma\-ni\-folds with twistor spinors in the Riemannian setting (see \cite{Lichnerowicz1:88}, \cite{Lichnerowicz2:88}, \cite{Lichnerowicz1:89}, \cite{Friedrich:89} \cite{Lichnerowicz2:90}, \cite{Friedrich/Pokorna:91}, \cite{Baum/Friedrich/ua:91}, \cite{Habermann:90}, \cite{Habermann:93}, \cite{Habermann:94}, \cite{Kuehnel/Rademacher:94}, \cite{Kuehnel/Rademacher:95}, \cite{Kuehnel/Rademacher1:96}, \cite{Kuehnel/Rademacher2:96}). Crucial results were obtained by studying the properties of the conformal vector field $V_{\varphi}$ of a twistor spinor $\varphi$. Twistor operators also turned out to be a useful tool in proving sharp eigenvalue estimates for coupled Dirac operators on compact Riemannian manifolds (see e.g. \cite{Baum1:94}). \\ In contrast to this, not much is known about solutions of the twistor equation in the general Lorentzian setting. In 1991 Lewandowski studied local solutions of the twistor equation on 4-dimensional space-times (\cite{Lewandowski:91}). In particular, he proved that a 4-dimensional space-time admitting a twistor spinor $\varphi$ without zeros and with twisting conformal vector field $V_{\varphi}$ is locally conformally equivalent to a Fefferman space. On the other hand, on 4-dimensional Fefferman spaces there exist {\em local} solutions of the twistor equation. The aim of the present paper is the generalisation of this result. \\ Fefferman spaces were defined by Fefferman (\cite{Fefferman:76}) in case of strictly pseudoconvex hypersurfaces in $\C^n$, its definition was extended by Burns, Diederich, Shnider (\cite{Burns/Diederich/ua:77}), Farris (\cite{Farris:86}) and Lee (\cite{Lee:86}) to general non-degenerate CR-manifolds. Sparling (\cite{Sparling:85}), Lee (\cite{Lee:86}), Graham (\cite{Graham:87}) and Koch (\cite{Koch:88}) studied geometric properties of Fefferman spaces.
A Fefferman space is the total space of a certain $S^1$-principal bundle over a non-degenerate CR-manifold $M$ equipped with a semi-Riemannian metric defined by means of the Webster connection. By changing the topological type of the $S^1$-bundle defining the Fefferman space, we can prove that there are {\em global} solutions of the twistor equation on the (modified) Fefferman spaces of strictly pseudoconvex spin manifolds of arbitrary dimension. These solutions have very special geometric properties which are only possible on Fefferman spaces. More exactly, we prove (see Theorem \ref{t1}, Theorem \ref{t2}):\\[0.3cm] {\em Let $\,(M^{2n+1},T_{10},\theta)\,$ be a strictly pseudoconvex spin manifold and $\,(\sqrt{F},h_\theta)\,$ its Fefferman space. Then, on the Lorentzian spin manifold $\,(\sqrt{F},h_\theta)\,$ there exists a non-trivial twistor spinor $\phi$ such that \begin{enumerate} \item The canonical vector field $V_\phi$ of $\phi$ is a regular isotropic Killing vector field. \item $\, V_\phi \cdot \phi = 0\,$. In particular, $\phi$ is a pure or partially pure spinor field. \item $\,\nabla_{V_\phi} \phi = i\,c\,\phi\,, \quad c= \,\mbox{const}\,\in \R \setminus \{0\}\,$. \end{enumerate} On the other hand, if $(B,h)$ is a Lorentzian spin manifold with a non-trivial twistor spinor satisfying 1.
- 3., then $B$ is an $S^1$-principal bundle over a strictly pseudoconvex spin manifold $\,(M,T_{10},\theta)\,$ and $(B,h)$ is locally isometric to the Fefferman space $\,(\sqrt{F},h_\theta)\,$ of $(M,T_{10},\theta)\,$}.\\[0.3cm] In particular, if $(M^{2n+1},T_{10},\theta)$ is a compact strictly pseudoconvex spin manifold of constant Webster scalar curvature, then the Fefferman space $(\sqrt{F},h_\theta)$ of $(M,T_{10},\theta)$ is a (2n+2)-dimensional non-Einsteinian Lorentzian spin manifold of constant scalar curvature $R$ and the twistor spinor $\phi$ defines eigenspinors of the Dirac operator of $(\sqrt{F},h_\theta)$ to the eigenvalues $\, \,\pm \frac{1}{2}\sqrt{\frac{2n+2}{2n+1}R}\,\,$ with constant length.\\ After some algebraic preliminaries in section 2 we introduce in section 3 the notion of Lorentzian twistor spinors and explain some of their basic properties. In order to define the (modified) Fefferman space we recall in section 4 the basic notions of pseudo-hermitian geometry. In particular, we explain the properties of the Webster connection of a non-degenerate pseudo-hermitian manifold, which are important for the spinor calculus on Fefferman spaces. In section 5 the Fefferman spaces are defined and in section 6 we derive a spinor calculus for Lorentzian metrics on $S^1$-principal bundles with isotropic fibre over strictly pseudoconvex spin manifolds. Finally, section 7 contains the proof of the Theorems 1 and 2 which state the properties of the solutions of the twistor equation on Fefferman spaces of strictly pseudoconvex spin manifolds. \section{Algebraic preliminaries} \label{s2} For concrete calculations we will use the following realization of the spinor representation. Let $\,\mbox{Cliff}_{n,k}\,$ be the Clifford algebra of $\,(\R^n,-\langle \cdot, \cdot \rangle_k)\,$, where $\,\langle \cdot, \cdot \rangle_k\,$ is the scalar product $\,\, \langle x,y\rangle_k:=-x_1y_1-\ldots-x_ky_k+x_{k+1}y_{k+1}+\ldots+x_ny_n\,\,$.
For the canonical basis $(e_1,\ldots,e_n)$ of $\R^n$ one has the following relations in $\,\mbox{Cliff}_{n,k}\,:\,\, e_i\cdot e_j+e_j\cdot e_i=-2\varepsilon_j\delta_{ij},\,\,$ where $ \varepsilon_j=\left\{\begin{array}{rl} -1 & j\le k\\ 1 & j> k \,\end{array}\right. $. Denote $\,\,\tau_j = \left\{\begin{array}{ll} i & j\le k\\ 1 & j>k\end{array}\right.\,\,$ and \begin{displaymath} U = \left(\begin{array}{cc} i & 0\\ 0 & -i\end{array}\right),\quad V=\left(\begin{array}{cc} 0 & i\\ i & 0\end{array}\right),\quad E=\left(\begin{array}{cc} 1 & 0\\ 0 & 1\end{array}\right),\quad T=\left(\begin{array}{cc} 0 & -i\\ i & 0\end{array}\right). \end{displaymath} Then an isomorphism \[ \phi_{2m,k}:\mbox{Cliff}^{\Bbb C}_{2m,k} \longrightarrow M(2^m;\C) \] is given by the Kronecker product \begin{eqnarray}\label{1} \begin{array}{llll} \phi_{2m,k}(e_{2j-1}) & = & \tau_{2j-1} &E\otimes\ldots\otimes E \otimes U \otimes T\otimes\ldots\otimes T\\ \phi_{2m,k}(e_{2j}) & = & \tau_{2j} &E\otimes\ldots\otimes E\otimes V \otimes \underbrace{T\otimes\ldots\otimes T}_{j-1} \end{array}. \end{eqnarray} Let $\mbox{Spin}_0(n,k)\subset \mbox{Cliff}_{n,k}$ be the connected component of the identity of the spin group. The spinor representation is given by \[ x_{n,k}=\phi_{n,k}|_{\mbox{Spin}_0(n,k)}:\mbox{Spin}_0(n,k) \longrightarrow \mbox{GL} (\C^{2^m}). \] We denote this representation by $\,\Delta_{n,k}\,$. If $n=2m$, $\,\Delta_{2m,k}\,$ splits into the sum $\,\Delta_{2m,k}=\Delta^+_{2m,k}\oplus \Delta^-_{2m,k}\,$, where $\,\Delta^\pm_{2m,k}\,$ are the eigenspaces of the endomorphism $\,\phi_{2m,k}(e_1\cdot\ldots\cdot e_{2m})\,$ to the eigenvalue $\pm i^{m+k}$. Let us denote by $u(\delta)\in\C^2$ the vector $\, u(\delta)=\frac{1}{\sqrt{2}}{1\choose -\delta i},\,\,\delta=\pm 1 \,\,$ and let \begin{equation}\label{2} u(\delta_1,\ldots,\delta_m)=u(\delta_1)\otimes\ldots\otimes u(\delta_m)\qquad \delta_j=\pm 1. 
\end{equation} Then $\;(u(\delta_1,\ldots,\delta_m)\,|\,\prod\limits^m_{j=1}\delta_j=\pm 1)\;$ is an orthonormal basis of $\,\Delta^\pm_{2m,k}\,$ with respect to the standard scalar product of $\C^{2^m}$. \section{Lorentzian twistor spinors} \label{s3} Let $(M^{n,1},g)$ be a connected space- and time-oriented Lorentzian spin manifold with a fixed time orientation $\,\xi\in \Gamma(TM)\,$, $g(\xi,\xi)=-1$. We denote by $S$ the spinor bundle of $(M^{n,1},g)$, by $\,\nabla^S:\Gamma(S) \to \Gamma(TM^*\otimes S)\,$ the spinor derivative given by the Levi-Civita connection of $(M^{n,1},g)$ and by $\,D:\Gamma(S)\to\Gamma(S)\,$ the Dirac operator on $S$.\\ On $S$ there exists an indefinite scalar product $\,\langle\cdot,\cdot\rangle\,$ of index $\,\frac{1}{2}\dim S\,$ such that \begin{eqnarray}\label{3} \langle X\cdot\varphi,\psi\rangle &=& \langle\varphi, X\cdot\psi\rangle\\ X\langle\varphi,\psi\rangle &=& \langle\nabla^S_X\varphi,\psi\rangle+ \langle\varphi,\nabla^S_X\psi\rangle \label{4} \end{eqnarray} for all vector fields $X$ and all spinor fields $\varphi,\psi \in\Gamma(S)$. Furthermore, there is a positive definite scalar product $\,(\cdot ,\cdot )_\xi\,$ on $S$ depending on the time orientation $\xi$ such that \begin{equation}\label{5} \langle\varphi,\psi\rangle =(\xi\cdot\varphi,\psi)_\xi \end{equation} for all $\varphi,\psi\in\Gamma(S)$ (see \cite{Baum:81}, chap.1.5, 3.3.1.). Let $\,p:TM\otimes S \longrightarrow \mbox{Ker}\,\mu\,$ denote the orthogonal projection onto the kernel of the Clifford multiplication $\mu$ (with respect to $\langle \cdot , \cdot\rangle$). $p$ is given by \[ p(X \otimes \varphi)=X \otimes \varphi+\frac{1}{n}\sum\limits^n_{k=1}\varepsilon_k s_k \otimes s_k \cdot X \cdot \varphi, \] where $(s_1,\ldots,s_n)$ is an orthonormal basis of $(M,g)$ and $\varepsilon_k=g(s_k, s_k)=\pm 1$.
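The matrix realization (\ref{1}) of Section \ref{s2} can be verified by direct computation. A minimal sketch in Python (assuming numpy; the case $m=2$, $k=1$ shown below corresponds to the Lorentzian signature used in this section) builds the matrices $\phi_{2m,k}(e_j)$ via Kronecker products and checks the Clifford relations $e_i\cdot e_j+e_j\cdot e_i=-2\varepsilon_j\delta_{ij}$:

```python
import numpy as np

E = np.eye(2, dtype=complex)
U = np.array([[1j, 0], [0, -1j]])
V = np.array([[0, 1j], [1j, 0]])
T = np.array([[0, -1j], [1j, 0]])

def clifford_generators(m, k):
    # Matrices phi_{2m,k}(e_1), ..., phi_{2m,k}(e_{2m}) via the Kronecker
    # product formula (1): (m-j) copies of E, then U or V, then (j-1) copies of T
    gens = []
    for j in range(1, m + 1):
        for idx, M in ((2 * j - 1, U), (2 * j, V)):
            tau = 1j if idx <= k else 1.0
            mats = [E] * (m - j) + [M] + [T] * (j - 1)
            prod = mats[0]
            for Mat in mats[1:]:
                prod = np.kron(prod, Mat)
            gens.append(tau * prod)
    return gens

m, k = 2, 1                       # Cliff_{4,1}: the Lorentzian case, eps_1 = -1
e = clifford_generators(m, k)
eps = [-1 if j < k else 1 for j in range(2 * m)]
I = np.eye(2 ** m)
ok = all(np.allclose(e[i] @ e[j] + e[j] @ e[i], -2 * eps[j] * (i == j) * I)
         for i in range(2 * m) for j in range(2 * m))
print(ok)
```

In particular $e_1^2=+I$ (timelike direction) and $e_j^2=-I$ for $j>1$, while distinct generators anticommute.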
\begin{de} The twistor operator ${\cal D}$ of $\,(M^{n,1}, g)\,$ is the operator given by the composition of the spinor derivative with the projection $p$ \[ {\cal D}:\Gamma(S)\stackrel{\nabla^S}{\longrightarrow} \Gamma(T^*M\otimes S) \stackrel{g}{\approx} \Gamma(TM\otimes S)\stackrel{p}{\longrightarrow} \Gamma(\mbox{Ker}\,\mu). \] \end{de} Locally, we have \[ {\cal D}\varphi=\sum\limits^n_{k=1}\varepsilon_k s_k\otimes(\nabla^S_{s_k}\varphi+ \frac{1}{n}s_k\cdot D\varphi). \] \begin{de} A spinor field $\varphi\in\Gamma(S)$ is called a twistor spinor if ${\cal D}\varphi=0$. \end{de} Let us first recall some properties of twistor spinors which are proved in the same way as in the Riemannian case. \begin{pr} \label{pr1} {\em (\cite{Baum/Friedrich/ua:91}, Th.1.2)}\\ For a spinor field $\varphi\in \Gamma(S)$ the following conditions are equivalent:\\[0.1cm] \begin{tabular}{cl} 1. & $\varphi$ is a twistor spinor.\\[0.1cm] 2. & $\varphi$ satisfies the so-called twistor equation \end{tabular} \vspace{-0.05cm}\\ \begin{equation} \label{6} \nabla^S_X\varphi+\frac{1}{n}X\cdot D\varphi=0 \end{equation} \vspace{-0.4cm}\\ \begin{tabular}{cl} & for all vector fields $X$.\\[0.1cm] 3. & For all vector fields $X$ and $Y$ \end{tabular} \vspace{-0.05cm}\\ \begin{equation}\label{7} X\cdot\nabla^S_Y\varphi+Y\cdot\nabla^S_X\varphi=\frac{2}{n}\,g(X,Y)\, D\varphi \end{equation} \vspace{-0.4cm}\\ \begin{tabular}{cl} & holds. \\[0.1cm] 4. & There exists a spinor field $\psi\in\Gamma(S)$ such that \end{tabular} \vspace{-0.05cm}\\ \begin{equation}\label{8} \psi=g(X,X)X\cdot\nabla^S_X\varphi \end{equation} \vspace{-0.4cm}\\ \begin{tabular}{cl} & $\;\;\;$for all vector fields $X$ with $\,|g(X,X)|=1$. \end{tabular} \end{pr} \vspace{0.1cm} \begin{pr} \label{pr2} {\em (\cite{Baum/Friedrich/ua:91}, Th.1.7)}\\ The twistor operator is conformally covariant: Let $\,\tilde{g} = e^{2\sigma} g\,$ be a conformally equivalent metric to $g$ and let $\tilde{\cal D}$ be the twistor operator of $(M,\tilde{g})$.
Then \[ \tilde{\cal D} \tilde{\varphi} = e^{- \frac{1}{2}\sigma} \widetilde{{\cal D} (e^{-\frac{1}{2}\sigma} \cdot \varphi)}, \] where $\,^{\sim} : S \longrightarrow \tilde{S}\,$ denotes the canonical identification of the spinor bundles of $(M,g)$ and $(M,\tilde{g})$. \end{pr} \vspace{0.1cm} \begin{pr} \label{pr3} {\em (\cite{Baum/Friedrich/ua:91} Cor.1.2)}\\ The dimension of the space of twistor spinors is conformally invariant and bounded by \[ \dim \mbox{Ker} {\cal D} \le 2^{[\frac{n}{2}]+1} . \] \end{pr} \vspace{0.1cm} \begin{pr} \label{pr4} {\em (\cite{Baum/Friedrich/ua:91} Cor.1.3)}\\ Let $\varphi \in \Gamma (S)$ be a non-trivial twistor spinor and $x_0\in M$. Then $\,\varphi (x_0)\neq 0\,$ or $\, D \varphi (x_0) \neq 0$. \end{pr} \ \\ Let $R$ be the scalar curvature and Ric the Ricci curvature of $(M^{n,1},g)$. If $\,\dim M = n \ge 3$, $\,K\,$ denotes the (2,0)-Schouten tensor \[ K(X,Y) = \frac{1}{n-2} \left\{ \frac{R}{2(n-1)} g - \mbox{ Ric} \right\}. \] We always identify $TM$ with $TM^*$ using the metric $g$. For a ($2,0$)-tensor field $B$ we denote by the same symbol $B$ the corresponding $(1,1)$-tensor field $B:TM \longrightarrow TM\,$, $\;g(B(X),Y) = B(X,Y).\,$ Let $C$ be the (2,1)-Schouten-Weyl tensor \[ C(X,Y) = (\nabla_X K)(Y) - (\nabla_Y K)(X) . \] Furthermore, let $W$ be the (4,0)-Weyl tensor of $(M,g)$ and denote by the same symbol the corresponding (2,2)-tensor field $\; W : \Lambda^2 M \longrightarrow \Lambda^2 M.\;$ Then we have \begin{pr}\label{pr5} {\em (\cite{Baum/Friedrich/ua:91} Th.1.3, Th.1.5)}\\ Let $\varphi \in \Gamma (S)$ be a twistor spinor and $\eta = Y \wedge Z \in \Lambda^2M\,$ a two-form.
Then \begin{eqnarray} D^2 \varphi & = & \frac{1}{4} \frac{n}{n-1} R \varphi\,, \label{9} \\ \label{10} \nabla^S_X D \varphi & = & \frac{n}{2} K(X) \cdot \varphi\;,\\ \label{11} W (\eta ) \cdot \varphi &=& 0 \;,\\ \label{12} W(\eta)\cdot D\varphi& = & n\, C(Y,Z) \cdot \varphi \;,\\ \label{13} (\nabla_X W) (\eta)\cdot\varphi & = & X\cdot C(Y,Z)\cdot\varphi +\frac{2}{n}(X\;_-\!\rule{0.2mm}{0.2cm}\;\; W(\eta))\cdot D\varphi\;. \hspace{4cm} \end{eqnarray} \end{pr} \ \\ If the scalar curvature $R$ of $\,(M^{n,1},g)\,$ is constant and non-zero, equation (\ref{9}) shows that the spinor fields \[ \psi_{\pm} := \frac{1}{2} \varphi \pm \sqrt{\frac{n-1}{nR}}\,D\varphi \] are formal eigenspinors of the Dirac operator $D$ to the eigenvalue $\,\pm \frac{1}{2} \sqrt{\frac{nR}{n-1}}\,$.\\ A special class of twistor spinors are the so-called {\em Killing spinors} $\,\varphi \in \Gamma(S)\,$ defined by the condition \[ \nabla_X^S \varphi = \lambda \,X \cdot \varphi \qquad \mbox{ for all }\;\; X \in \Gamma(TM),\] where $\lambda\,$ is a constant complex number, called the {\em Killing number} of $\varphi$. Using the twistor equation and the properties (\ref{9}) and (\ref{10}) one obtains that for an Einstein space $\,(M^{n,1},g)\,$ with constant scalar curvature $\,R \not = 0\,$ the spinor fields $\,\psi_{\pm}\,$ are Killing spinors to the Killing number $ \,\lambda = \mp \frac{1}{2} \sqrt{\frac{R}{n(n-1)}}\,$. Hence, on this class of Lorentzian manifolds each twistor spinor is the sum of two Killing spinors. Therefore, we are especially interested in non-Einsteinian Lorentzian manifolds which admit twistor spinors.\\ To each spinor field we associate a vector field in the following way. \begin{de} Let $\varphi \in \Gamma (S)$. The vector field $V_{\varphi}$ defined by \[ g(V_{\varphi}, X):=-\langle X \cdot \varphi, \varphi \rangle \,,\quad\qquad X \in \Gamma(TM) \] is called the canonical vector field of $\varphi$. \end{de} Because of (\ref{3}), $V_{\varphi}$ is a real vector field.
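In the realization of Section \ref{s2} the canonical vector field can be computed explicitly for $n=4$, $k=1$. The following numerical sketch assumes numpy; the concrete formula $\langle\varphi,\psi\rangle=(e_1\cdot\varphi)^{\ast}\psi$ with time orientation $\xi=s_1$ is an assumption consistent with (\ref{5}). It evaluates $V_\varphi$ for a random half spinor and checks that, in this example, $V_\varphi$ is real and isotropic and satisfies $V_\varphi\cdot\varphi=0$:

```python
import numpy as np

# Gamma matrices phi_{4,1}(e_j) from (1) with m = 2, k = 1 (signature -,+,+,+)
E = np.eye(2, dtype=complex)
U = np.array([[1j, 0], [0, -1j]])
V = np.array([[0, 1j], [1j, 0]])
Tm = np.array([[0, -1j], [1j, 0]])
e = [1j * np.kron(E, U), np.kron(E, V), np.kron(U, Tm), np.kron(V, Tm)]
eps = [-1, 1, 1, 1]

# Indefinite scalar product <phi,psi> = (e_1 . phi, psi) with xi = s_1;
# this concrete formula is an assumption consistent with (5)
pair = lambda phi, psi: np.vdot(e[0] @ phi, psi)

# Random half spinor: combination of eigenvectors of e_1.e_2.e_3.e_4
omega = e[0] @ e[1] @ e[2] @ e[3]
w, vecs = np.linalg.eig(omega)
rng = np.random.default_rng(1)
phi = np.zeros(4, dtype=complex)
for i in range(4):
    if abs(w[i] - (-1j)) < 1e-9:     # eigenvalue i^{m+k} = -i, i.e. Delta^+
        phi += (rng.standard_normal() + 1j * rng.standard_normal()) * vecs[:, i]

# Components of the canonical vector field: V_j = -eps_j <s_j . phi, phi>
Vc = np.array([-eps[j] * pair(e[j] @ phi, phi) for j in range(4)])
g_VV = sum(eps[j] * Vc[j] ** 2 for j in range(4))        # g(V_phi, V_phi)
V_phi = sum(Vc[j] * e[j] @ phi for j in range(4))        # V_phi . phi
print(np.max(np.abs(Vc.imag)), abs(g_VV), np.linalg.norm(V_phi))
```

All three printed quantities vanish up to rounding error, while the timelike component $V_1=(\varphi,\varphi)_\xi$ stays strictly positive for $\varphi\neq 0$.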
By Zero$( \varphi )$ and Zero$(X)$ we denote the zero sets of a spinor field $\varphi$ or a vector field $X$. \begin{pr} \label{pr6} \begin{enumerate} \item For each spinor field $\varphi \in \Gamma (S)$, $\; \mbox{Zero}(\varphi ) = \mbox{ Zero}(V_{\varphi}). $ \item If $n$ is even, $n \le 6$ and $\varphi \in \Gamma (S^{\pm} ) $ is a half spinor, then $\; V_{\varphi} \cdot \varphi = 0. \;$ In particular, $V_{\varphi}$ is an isotropic vector field. \end{enumerate} \end{pr} {\bf Proof:} Let $\varphi \in \Gamma (S)$. From (\ref{5}) it follows for the time orientation $\xi$ that \begin{eqnarray*} g(V_{\varphi},\xi)&=& -\langle \xi \cdot \varphi , \varphi \rangle = - (\xi \cdot \xi \cdot \varphi , \varphi )_{\xi} = -(\varphi, \varphi )_{\xi} . \end{eqnarray*} Since the scalar product $\,(\cdot,\cdot)_{\xi}\,$ is positive definite, this shows that Zero$(V_{\varphi}) = $ Zero$(\varphi )$. The second statement is proved by a direct calculation using a basis representation of $\varphi$ and $V_{\varphi}$ and the formulas (\ref{1}) and (\ref{2}).\qed In the Riemannian case the first statement of Proposition \ref{pr6} is not true. There exist non-trivial spinor fields $\varphi$ such that the canonical vector field $V_{\varphi}$ is identically zero (see \cite{Kuehnel/Rademacher:95}). On the other hand, the zero set Zero$(\varphi)$ of a Riemannian twistor spinor is discrete (\cite{Baum/Friedrich/ua:91}, Th.2.1). In the Lorentzian setting this is not the case.\\ We call a subset $A \subset M$ isotropic if each differentiable curve in $A$ is isotropic. \begin{pr} \label{pr7} Let $\varphi \in \Gamma (S)$ be a twistor spinor. Then the zero set of $\varphi$ is isotropic. \end{pr} {\bf Proof:} Let $\gamma : I \longrightarrow \mbox{Zero}(\varphi )$ be a curve in Zero$(\varphi )$. Then $\,\varphi(\gamma(t)) \equiv 0\,$ and therefore $\nabla_{\dot{\gamma}(t)}\varphi\equiv 0$. From the twistor equation (\ref{6}) it follows that $\,\dot{\gamma}(t)\cdot D\varphi(\gamma(t)) \equiv 0\,$.
Since by Proposition 4 $\,D\varphi(\gamma(t)) \not = 0\,$, $\,\dot{\gamma}(t)$ is isotropic for all $t \in I$. \qed \begin{pr} \label{pr8} Let $\varphi \in \Gamma (S)$ be a twistor spinor. Then $V_{\varphi}$ is a conformal vector field and the Lie derivative satisfies \[ L_{V_{\varphi}} g = - \frac{4}{n} \,\mbox{Re} \langle \varphi , D \varphi \rangle\, g \,. \] \end{pr} {\bf Proof:} Let $\,V:=V_{\varphi}\,$. From the definition of $V_{\varphi}$ it follows \begin{eqnarray*} (L_V g ) (X,Y) &=& g ( \nabla_X V, Y ) + g (X, \nabla_Y V ) \\ &=& X (g (V, Y )) - g (V, \nabla_X Y ) + Y (g (X, V) ) - g ( \nabla_Y X, V ) \\ &=&- X \langle Y \cdot \varphi, \varphi \rangle - Y \langle X \cdot \varphi, \varphi \rangle +\langle \nabla_X Y \cdot \varphi , \varphi \rangle + \langle \nabla_Y X \cdot \varphi , \varphi \rangle \\ & \stackrel{(\ref{4})}{=} &-\langle \nabla_X Y \cdot \varphi, \varphi \rangle- \langle Y \cdot \nabla^S_X \varphi, \varphi \rangle -\langle Y \cdot \varphi , \nabla^S_X \varphi \rangle - \langle \nabla_Y X \cdot \varphi , \varphi \rangle \\ &&-\langle X \cdot \nabla^S_Y \varphi, \varphi \rangle - \langle X \cdot \varphi , \nabla^S_Y \varphi \rangle + \langle \nabla_X Y \cdot \varphi , \varphi \rangle + \langle \nabla_Y X \cdot \varphi , \varphi \rangle \\ & \stackrel{(\ref{3})}{=} & - \langle Y \cdot \nabla^S_X \varphi + X \cdot \nabla^S_Y \varphi, \varphi \rangle - \langle \varphi, Y \cdot \nabla^S_X \varphi + X \cdot \nabla^S_Y \varphi \rangle\,. \end{eqnarray*} Using (\ref{7}) we obtain \[ \left( L_V g \right) (X,Y) = - \frac{4}{n} g (X,Y)\;\mbox{Re} \langle \varphi , D \varphi \rangle . \] \qed From Proposition 8 it follows that for each twistor spinor $\varphi\;$ div$(V_{\varphi}) = - 2\, \mbox{Re} \langle \varphi, D \varphi \rangle$. For the imaginary part of $\, \langle \varphi , D \varphi \rangle\,$ we have \begin{pr} \label{pr9} Let $\varphi \in \Gamma (S)$ be a twistor spinor.
Then the function $\,C_{\varphi} := \,\mbox{Im}\, \langle \varphi , D \varphi \rangle\,$ is constant on $M$. \end{pr} {\bf Proof:} Because of (\ref{3}) the function $\,\langle Y \cdot \psi, \psi \rangle\,$ is real for each vector field $Y$ and each spinor field $\psi$. Furthermore, \begin{eqnarray*} X \langle D \varphi, \varphi \rangle & \stackrel{(\ref{4})}{=} & \langle \nabla^S_X D \varphi , \varphi \rangle + \langle D \varphi , \nabla^S_X \varphi \rangle \\ & \stackrel{(\ref{6}),(\ref{10})}{=} & \frac{n}{2}\, \langle K(X) \cdot\varphi,\varphi\rangle-\frac{1}{n}\,\langle D\varphi,X\cdot D\varphi\rangle. \end{eqnarray*} Hence $\,X \langle D \varphi , \varphi \rangle\,$ is a real function. Therefore, $\,C_{\varphi} =\, \mbox{Im} \langle \varphi , D \varphi \rangle$ is constant. \qed Let us denote by $C$ the $(3,0)$-Schouten-Weyl tensor $\; C (X, Y, Z ) = g (X, C (Y,Z) ).$ \\ \begin{pr} \label{pr10} Let $\varphi \in \Gamma (S)$ be a twistor spinor. Then \begin{enumerate} \item $V_{\varphi} \;_-\!\rule{0.2mm}{0.2cm}\;\; C = 0 .$ \item If $n = 4$, then $\, V_{\varphi} \;_-\!\rule{0.2mm}{0.2cm}\;\; W = 0. $ \end{enumerate} \end{pr} {\bf Proof:} From (\ref{11}) and (\ref{12}) we obtain \begin{eqnarray*} C (V_{\varphi} , X, Y) &=& g(V_{\varphi} , C ( X, Y )) = -\langle C ( X, Y) \cdot \varphi , \varphi \rangle \\ &=& - \frac{1}{n}\, \langle W (X \wedge Y ) \cdot \varphi , \varphi \rangle = \frac{1}{n} \,\langle \varphi , W (X \wedge Y ) \cdot \varphi \rangle \;=\; 0 . \end{eqnarray*} Let $\,\varphi = a u (\varepsilon,1)+bu(-\varepsilon,-1)\in\Gamma (S^{\varepsilon})\,$ be a half spinor on a 4-dimensional manifold. Then by a direct calculation using (\ref{1}) and (\ref{2}) we obtain \begin{eqnarray*} V_{\varphi} &=& (|a|^2 + |b|^2 ) s_1 +(|a|^2-|b|^2)s_2 - 2 \mbox{Re} (ia\bar{b})s_3-2\varepsilon\mbox{Re}(a\bar{b})s_4. 
\end{eqnarray*} Hence, \begin{eqnarray}\label{12a} W(V_\varphi,s_i,s_j,s_k) &=& (|a|^2+|b|^2)W_{1ijk}+(|a|^2-|b|^2)W_{2ijk}\nonumber\\ && - 2\mbox{Re}(ia\bar{b})W_{3ijk}-2\varepsilon\mbox{Re}(a\bar{b})W_{4ijk}. \end{eqnarray} On the other hand, from the basis representation of \[ 0 = W(s_j \wedge s_k)\cdot \varphi= \sum\limits_{r<l} \varepsilon_r \varepsilon_l W_{rljk}\, s_r \cdot s_l \cdot \varphi \] we obtain the equations \begin{eqnarray}\label{13a} 0 & =& (W_{12jk}-\varepsilon iW_{34jk}) a + (i W_{13jk} - \varepsilon W_{24jk} -\varepsilon W_{14jk} + i W_{23jk} ) \, b \\ \label{14a} 0 &=& (-W_{12jk} + \varepsilon iW_{34jk}) b + (-iW_{13jk}+\varepsilon W_{24jk} - \varepsilon W_{14jk}+i W_{23jk}) a\,. \end{eqnarray} Then looking at the real and imaginary parts of the equations $\,(\ref{13a}) \bar{a} \pm (\ref{14a}) \bar{b}\,$ and $\,(\ref{13a}) \bar{b} \pm (\ref{14a})\bar{a}\,$ one obtains $\,W( V_{\varphi}, s_i , s_j , s_k) = 0$.\qed \section{Pseudo-hermitian geometry} \label{s4} Before we define the Fefferman spaces we recall some basic facts from pseudo-hermitian geometry in order to fix the notation. The proofs of the following propositions are obtained by easy direct calculations (see \cite{Tanaka:75}, \cite{Baum:97}).\\ Let $M^{2n+1}$ be a smooth connected manifold of odd dimension $2n+1$. A {\em complex CR-structure} on $M$ is a complex subbundle $T_{10}$ of $TM^{\Bbb C}$ such that\\[0.2cm] \begin{tabular}{lll} &1. & $ \dim_{\Bbb C} T_{10}=n,$\\[0.1cm] &2. & $ T_{10}\cap\overline{T_{10}}=\{0\},$\\[0.1cm] &3. & $ [\Gamma(T_{10}),\Gamma(T_{10})]\subset\Gamma(T_{10})\quad$ (integrability condition).
\end{tabular}\\[0.2cm] A {\em real CR-structure} on $M$ is a pair $(H,J)$, where\\[0.2cm] \begin{tabular}{lll} & 1.& $ H\subset TM$ is a real $2n$-dimensional subbundle,\\ & 2.& $ J:H\longrightarrow H$ is an almost complex structure on $H: \;J^2=-\mbox{id}$,\\[0.1cm] & 3.& $\mbox{If }\,X,Y\in\Gamma(H)\,$, then $\,[JX,Y]+[X,JY] \in\Gamma(H)\,$ and\\[0.1cm] & & $ N_J(X,Y) := J([JX,Y]+[X,JY])-[JX,JY]+[X,Y] \equiv 0\,$\\ & & (integrability condition). \end{tabular}\\[0.2cm] Obviously the complex and real CR-structures correspond to each other: If $\,T_{10}\subset TM^{\Bbb C}\,$ is a complex CR-structure, then $\,H:=\mbox{Re }(T_{10} \oplus\overline{T_{10}})\,$, $\,J(U+\bar{U}):= i(U-\bar{U})\,$ defines a real CR-structure. If $(H,J)$ is a real CR-structure, then the eigenspace of the complex extension of $J$ on $H^{\Bbb C}$ for the eigenvalue $i$ is a complex CR-structure. A {\em CR-manifold} is an odd-dimensional manifold equipped with a (real or complex) CR-structure. Let $(M,T_{10})$ be a CR-manifold. The hermitian form on $T_{10}$ \[ L:T_{10}\times T_{10}\longrightarrow E := TM^{\Bbb C}/_{T_{10}\oplus\overline{T_{10}}}\] \[ L(U,V):= i[U,\bar{V}]_E\,, \] where $X_E$ denotes the projection of $X\in TM^{\Bbb C}$ onto $E$, is called the {\em Levi-form} of $(M,T_{10})$. The CR-manifold is called {\em non-degenerate}, if its Levi-form $L$ is non-de\-ge\-ne\-ra\-te. A nowhere vanishing 1-form $\theta \in\Omega^1(M)$ is called a {\em pseudo-hermitian structure} on $(M,T_{10})$, if $\,\theta|_H\equiv 0\,$. $(M,T_{10},\theta)$ is called a {\em pseudo-hermitian manifold}. There exists a pseudo-hermitian structure $\theta$ on $(M,T_{10})$ if and only if $M$ is orientable. Two pseudo-hermitian structures $\theta,\tilde{ \theta}$ differ by a real nowhere vanishing function $\,f \in C^{\infty}(M)\,$: $\,\tilde{\theta}=f\cdot \theta\,$. Let $(M,T_{10},\theta)$ be a pseudo-hermitian manifold.
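The standard example of a pseudo-hermitian manifold is the Heisenberg group (we recall it only for illustration; it is not used in the sequel): on $\,M=\R^{2n+1}\,$ with coordinates $\,(x_1,\ldots,x_n,y_1,\ldots,y_n,t)\,$ consider
\[ \theta \,=\, dt+\sum\limits^n_{j=1}(x_j\,dy_j-y_j\,dx_j) \qquad \mbox{ and } \qquad T_{10}\,=\,\mbox{span}_{\Bbb C}\Big\{\,Z_j=\frac{\partial}{\partial x_j}+y_j\frac{\partial}{\partial t}-i\Big(\frac{\partial}{\partial y_j}-x_j\frac{\partial}{\partial t}\Big)\;\Big|\; j=1,\ldots,n \Big\}. \]
A direct calculation gives $\,\theta(Z_j)=0\,$, $\,[Z_j,Z_k]=0\,$ (hence the integrability condition holds) and $\,d\theta=2\sum_j dx_j\wedge dy_j\,$. With respect to the Levi-form introduced below this pseudo-hermitian manifold is strictly pseudoconvex, and the characteristic vector field of $\theta$ is $\,T=\frac{\partial}{\partial t}\,$.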
The hermitian form $\;L_\theta:T_{10}\times T_{10}\longrightarrow \C\;$ \[ L_\theta(U,V):= -id\theta(U,\bar{V}) \] is called the {\em Levi-form} of $(M,T_{10},\theta)$. Obviously, we have $\,\theta(L(U,V))=L_\theta(U,V)\,$. The pseudo-hermitian manifold $(M,T_{10},\theta)$ is called {\em strictly pseudoconvex}, if the Levi-form $L_\theta$ is positive definite. If the pseudo-hermitian manifold $(M,T_{10},\theta)$ is non-degenerate, then the pseudo-hermitian structure $\theta$ is a contact form. We denote by $T\in \Gamma (TM)\,$ the {\em characteristic vector field} of this contact form, i.e.\ the vector field uniquely defined by \[ \theta(T) \equiv 1 \qquad \mbox{ and } \qquad T \;_-\!\rule{0.2mm}{0.2cm}\;\; d\theta \equiv 0. \] From now on we always suppose that $(M,T_{10},\theta)$ is non-degenerate. If $M$ is oriented, we always choose $\theta$ such that a basis of the form $\,(X_1,JX_1, \ldots , X_n, JX_n, T)\,$ is positively oriented on $M$. We consider the following spaces of forms: \begin{eqnarray*} \Lambda^{q,0}M &:=& \{\omega\in\Lambda^qM^{\Bbb C}\mid V\;_-\!\rule{0.2mm}{0.2cm}\;\; \omega= 0\; \;\;\forall V\in\overline{T_{10}}\}\\ \Lambda^{0,q}M &:=& \{\omega\in\Lambda^qM^{\Bbb C}\mid V\;_-\!\rule{0.2mm}{0.2cm}\;\; \omega=0\;\;\; \forall V\in T_{10}\}\\ \Lambda^{p,q}M &:=& \mbox{span}\{\omega\wedge\sigma\mid\omega\in\Lambda^{p,0} M,\; \sigma\in\Lambda^{0,q}M\}\\ \Lambda^{p,q}_\theta M &:=& \{\omega\in\Lambda^{p,q}M\mid T\;_-\!\rule{0.2mm}{0.2cm}\;\; \omega=0\}. \end{eqnarray*} Now, let us extend the Levi-form of $(M,T_{10},\theta)$ to $TM^{\Bbb C}$ by \\[0.2cm] \hspace*{1cm} $ L_\theta(\bar{U},\bar{V}):= \overline{L_\theta(U,V)}=L_\theta(V,U)\,, \quad L_\theta(U,\bar{V}):=0 \,,\quad$ where $U,V\in T_{10}$,\\ \hspace*{1cm} $ L_\theta(T,\,\cdot\,):= 0. $ \begin{pr} \label{pr11} Let $L_\theta:TM^{\Bbb C}\times TM^{\Bbb C} \longrightarrow \C$ be the Levi-form of $(M,T_{10},\theta)$ and let $T$ be the characteristic vector field of $\theta$.
Then \begin{eqnarray} & & [T,Z]\in\Gamma(T_{10}\oplus \overline{T_{10}})\qquad \; \mbox{ if }\; Z\in\Gamma(T_{10})\, \mbox{ or }\, Z\in \Gamma(\overline{T_{10}})\,, \label{14} \hspace{3.5cm} \\[0.1cm] & & L_\theta([T,U],V)+L_\theta(U,[T,V])=T(L_\theta(U,V)) \qquad \forall \; U,V\in\Gamma (T_{10}) \,, \label{15}\\[0.1cm] & & L_\theta([T,\bar{U}],V)=L_\theta([T,\bar{V}],U) \hspace{3.3cm} \forall \; U,V\in \Gamma(T_{10})\,,\label{16} \\[0.1cm] & & L_\theta([T,U],\bar{V})=L_\theta([T,V],\bar{U}) \hspace{3.3cm} \forall \; U,V\in\Gamma (T_{10})\,,\label{17} \end{eqnarray} \end{pr} \ \\ If we consider the Levi-form $L_\theta$ as a bilinear form on the real tangent bundle, we obtain a symmetric bilinear form on $TM$ which is non-degenerate on $H$. \begin{pr} \label{pr12} Let $(M^{2n+1},T_{10},\theta)$ be a non-degenerate pseudo-hermitian manifold and $(H,J)$ the real CR-structure, defined by $T_{10}$. Let $X$ and $Y$ be two vector fields in $H$. Then the Levi-form $L_\theta:TM\times TM \longrightarrow \R$ satisfies \begin{eqnarray} & & L_\theta(X,Y)= d\theta(X,JY)\,,\label{18}\\[0.1cm] & & L_\theta(JX,JY)=L_\theta(X,Y) \quad \mbox{and} \quad L_\theta(JX,Y)+L_\theta(X,JY)=0\,, \label{19} \hspace{3.5cm} \\[0.1cm] & & L_\theta([T,X],Y)-L_\theta([T,Y],X) \,=\, L_\theta([T,JX],JY)-L_\theta([T,JY],JX)\,.\label{20} \end{eqnarray} \end{pr} \ \\ On non-degenerate pseudo-hermitian manifolds there exists a special covariant derivative, the so-called {\em Webster connection}, which was introduced by Tanaka (\cite{Tanaka:75}) and by Webster (\cite{Webster:78}). \begin{pr} \label{pr13} Let $(M,T_{10},\theta)$ be a non-degenerate pseudo-hermitian manifold and let $T$ be the characteristic vector field of $\theta$. Then there exists a uniquely determined covariant derivative $\; \nabla^W:\Gamma(T_{10})\longrightarrow \Gamma(T^*M^{\Bbb C}\otimes T_{10})\;$ on $\,T_{10}\,$ such that\\[0.2cm] 1.
$\;\nabla^W$ is metric with respect to $L_\theta:$ \vspace{-0.5cm}\\ \begin{eqnarray} X(L_\theta(U,V))=L_\theta(\nabla^W_XU,V)+L_\theta(U,\nabla^W_{\bar{X}} V) \qquad U,V\in\Gamma(T_{10}), \; X\in \Gamma(TM^{\Bbb C}) \label{21} \end{eqnarray} \vspace{-0.8cm}\\ \parbox{13cm}{ 2.$ \quad \nabla^W_TU=\mbox{pr}_{10} [T,U]$, \\[0.2cm] 3.$\quad \nabla^W_{\bar{V}}U =\mbox{pr}_{10}[\bar{V},U]$,} \hfill \parbox{6mm}{\begin{eqnarray} \label{22}\\[0.1cm] \label{23} \end{eqnarray}} \vspace{-0.2cm}\\ where $\mbox{pr}_{10}$ denotes the projection on $T_{10}\;$. Furthermore, $\nabla^W$ satisfies \begin{equation}\label{24} \nabla^W_UV-\nabla^W_VU=[U,V],\qquad U,V\in\Gamma(T_{10}). \end{equation} \end{pr} \ \\ Now, we extend the Webster connection to $TM^{\Bbb C}$ by \begin{eqnarray*} \nabla^W\bar{U} := \overline{\nabla^WU} \qquad \mbox{ and } \qquad \nabla^WT := 0. \end{eqnarray*} \begin{pr} \label{pr14} The torsion $\mbox{Tor}^W$ of the Webster connection $\; \nabla^W: \Gamma(TM^{\Bbb C})\longrightarrow \Gamma(T^*M^{\Bbb C}\otimes TM^{\Bbb C})\;$ satisfies \begin{eqnarray} \label{26} \mbox{Tor}^W(U,V) &=& \mbox{Tor}^W(\bar{U},\bar{V})\,=\,0\,, \hspace{5cm}\\ \label{27} \mbox{Tor}^W(U,\bar{V}) &= &i L_\theta(U,V) \,T \,,\\ \label{28} \mbox{Tor}^W(T,U) &=& -\mbox{pr}_{01}[T,U]\,,\\ \mbox{Tor}^W(T,\bar{U}) &=& -\mbox{pr}_{10}[T,\bar{U}]\,, \end{eqnarray} where $\mbox{pr}_{01}$ denotes the projection onto $\overline{T_{10}}\,$, $\,\mbox{pr}_{10}$ the projection onto $T_{10}\,$ and $U,V \in \Gamma(T_{10})$. \end{pr} \ \\ Let $(M,T_{10},\theta)$ be a non-degenerate pseudo-hermitian manifold and let $(p,q)$ be the signature of $(T_{10},L_\theta)$. Then $\; g_\theta:= L_\theta+\theta\circ \theta \;$ defines a metric of signature $(2p,2q+1)$ on $M$. \begin{pr} \label{pr15} Let $(M,T_{10},\theta)$ be a non-degenerate pseudo-hermitian manifold.
Then the Webster connection $\nabla^W:\Gamma(TM) \longrightarrow \Gamma(T^*M\otimes TM)\,$ considered on the real tangent bundle is metric with respect to $\,g_\theta\,$ and the torsion of $\nabla^W$ is given by \begin{eqnarray} \label{29} \mbox{Tor}^W(X,Y)& = & L_\theta(JX,Y)\cdot T \qquad \qquad \mbox{ for }\; X,Y\in\Gamma(H), \hspace{2cm}\\ \mbox{Tor}^W(T,X) &= & -\frac{1}{2}\{[T,X]+J[T,JX]\} \qquad \mbox{ for } \; X\in\Gamma(H).\label{30} \end{eqnarray} Furthermore, on $\Gamma(H)$ \begin{equation}\label{31} \nabla^W\circ J=J\circ \nabla^W\,. \end{equation} \end{pr} \ \\ Now, let $\;R^{\nabla^W}\in\Gamma (\Lambda^2M^{\Bbb C}\otimes\mbox{End}(TM^{\Bbb C}, TM^{\Bbb C}))\;$ be the curvature operator of $\nabla^W$ \[ R^{\nabla^W}(X,Y) = [\nabla^W_X,\nabla^W_Y]-\nabla^W_{[X,Y]}. \] Then the (4,0)-curvature tensor ${\cal R}^W$ \[ {\cal R}^W(X,Y,Z,V):= g_\theta(R^{\nabla^W}(X,Y)Z,\bar{V}),\qquad X,Y,Z,V\in TM^{\Bbb C} \] has the following symmetry properties \\ \begin{pr} \label{pr16} Let $\,X,Y,Z,V\in TM^{\Bbb C}\,$, $\,A,B,C,D\in T_{10}\,$. Then\\[0.2cm] \hspace*{1cm} ${\cal R}^W(X,Y,Z,V)=-{\cal R}^W(Y,X,Z,V)=-{\cal R}^W(X,Y,V,Z)$\\[0.1cm] \hspace*{1cm} $\overline{{\cal R}^W(X,Y,Z,V)}={\cal R}^W(\bar{X},\bar{Y},\bar{Z}, \bar{V})$\\[0.1cm] \hspace*{1cm} ${\cal R}^W(A,\bar{B},C,\bar{D})={\cal R}^W(C,\bar{B},A,\bar{D})$\\[0.1cm] \hspace*{1cm} ${\cal R}^W(A,B, \cdot , \cdot )=0$ \end{pr} \ \\ Let $\omega\in\Lambda^2M^{\Bbb C}$ be a complex 2-form and $\,\tilde{\omega}: T_{10} \longrightarrow T_{10}\,$ the uniquely determined $\C$-linear map with $\,\omega(U,\bar{V})=L_\theta(\tilde{\omega}U,V)\,$, $\,U,V \in T_{10}\,$. Then the $\theta$-trace of $\omega$ is defined by $\; \mbox{Tr}_\theta\omega:=\mbox{Tr}(\tilde{\omega}).\;$ If $\,(Z_1,\ldots,Z_n)\,$ is a unitary basis of $\,(T_{10},L_\theta)\,$, $\,\varepsilon_k= L_\theta(Z_k,Z_k)\,$, then \[ \mbox{Tr}_\theta\omega=\sum\limits^n_{\alpha =1}\varepsilon_\alpha\;\omega(Z_\alpha, \bar{Z}_\alpha).
\] The (2,0)-tensor field \[ \mbox{Ric}^W:= \mbox{Tr}_\theta^{(3,4)}{\cal R}^W=\sum\limits^n_{\alpha=1}\varepsilon_\alpha {\cal R}^W( \cdot , \cdot , Z_\alpha,\bar{Z}_\alpha) \] is called the {\em Webster-Ricci-tensor}, the function $\,\, R^W:=\mbox{Tr}_\theta\mbox{Ric}^W\,\,$ is the {\em Webster scalar curvature}. Proposition 16 shows that $\,\mbox{Ric}^W\in\Lambda^{1,1}M\,$, $\,\mbox{Ric}^W(X,Y) \in i\R\,$ for all $X,Y\in TM\,$ and that $R^W$ is a real function. \section{Fefferman spaces} \label{s5} Let $\,(M^{2n+1},T_{10})\,$ be a CR-manifold. The complex line bundle $\,K:=\Lambda^{n+1,0}M\,$ of $(n+1,0)$-forms is called the {\em canonical bundle} of $\,(M^{2n+1},T_{10})\,$. $\R^+$ acts on $\,K^*=K\setminus \{0\}\,$ by multiplication. Let $\,F:=K^*/_{\R^+}\,$. Then $(F,\pi,M)$ is the $S^1$-principal bundle over $M$ associated to $K$. We call $(F,\pi,M)$ the {\em canonical $S^1$-bundle} of $(M,T_{10})$. Now, let $(M,T_{10},\theta)$ be a non-degenerate pseudo-hermitian manifold and $\,\nabla^W:\Gamma(T_{10})\longrightarrow \Gamma(T^*M^{\Bbb C}\otimes T_{10})\,$ its Webster-connection. $\nabla^W$ allows us to define a connection $A^W$ on the canonical $S^1$-bundle $F$ in the following way: Let $\,s=(Z_1, \ldots ,Z_n)\,$ be a local unitary basis of $(T_{10},L_\theta)$ over $U \subset M$ and let us denote by $\,\omega_s:=(\omega_{\alpha\beta})\,$ the matrix of connection forms of $\nabla^W$ with respect to $s$ \[ \nabla^WZ_\alpha=\sum\limits_\beta\omega_{\alpha\beta}Z_\beta. \] \vspace{-0.3cm}\\ $(Z_1,\ldots,Z_n,\bar{Z}_1,\ldots,\bar{Z}_n,T)\,$ is a local basis of $TM^{\Bbb C}$ over $U$. Let $(\theta^1,\ldots,\theta^n,\bar{\theta}^1,\ldots, \bar{\theta}^n,\theta)$ be the corresponding dual basis. Then \[ \hat{\tau}_s := \theta\wedge\theta^1\wedge\ldots\wedge\theta^n : U \longrightarrow K \] is a local section in $K$. We denote by $\,\tau_s := [\hat{\tau}_s]\,$ the corresponding local section in $\,F= K^*/_{\R^+}\,$. 
The Webster connection $\,\nabla^W\,$ defines in the standard way a covariant derivative $\,\nabla^K\,$ in the canonical line bundle $K$ such that \[ \nabla^K \hat{\tau}_s = - \sum\limits_{\alpha} \omega_{\alpha\alpha} \cdot \hat{\tau}_s = - \mbox{ Tr}\,\omega_s \cdot \hat{\tau}_s \,.\] \vspace{-0.3cm}\\ Since $\,\nabla^W\,$ is metric with respect to $L_\theta$, the trace Tr$\,\omega_s\,$ is purely imaginary. Hence $\nabla^K\,$ is induced by a connection $A^W\,$ on the associated $S^1$-principal bundle $\,(F,\pi,M;S^1)\,$ with the local connection forms \[ \tau_s^* A^W = - \mbox{ Tr}\, \omega_s \,.\] Let $\Omega^W$ be the curvature form of the connection $A^W$ on $F$. Since $\Omega^W$ is tensorial and right-invariant, it can be considered as a 2-form on $M$ with values in $i\R$. Over $U \subset M$ we have \begin{equation}\label{33} \Omega^W=d(\tau_s^*A^W)=-\mbox{Tr}\,d\omega_s. \end{equation} On the other hand, \begin{eqnarray*} \mbox{Ric}^W(X,Y) &=&\sum\limits_\alpha\varepsilon_\alpha L_\theta(([\nabla^W_X,\nabla^W_Y] -\nabla^W_{[X,Y]})Z_\alpha,\bar{Z}_\alpha)\\ &=&\Big(\sum\limits_\alpha d\omega_{\alpha\alpha}-\sum\limits_{\alpha,\beta}\omega_{\alpha \beta}\wedge\omega_{\beta\alpha}\Big)(X,Y). \end{eqnarray*} \vspace{-0.5cm}\\ Hence, \begin{eqnarray*} \mbox{Ric}^W = \mbox{Tr}\,d\omega_s-\mbox{Tr}\,(\omega_s\wedge\omega_s) = \mbox{Tr}\,d\omega_s\,, \end{eqnarray*} since the trace $\,\mbox{Tr}\,(\omega_s\wedge\omega_s)\,$ vanishes. From (\ref{33}) it follows \begin{equation} \label{32} \Omega^W=-\mbox{Ric}^W\,. \end{equation} The connection $A^W$ on the canonical $S^1$-bundle $(F,\pi,M)$ is called the {\em Webster-con\-nec\-tion on $F$}. Two connections on an $S^1$-principal bundle over $M$ differ by a 1-form on $M$ with values in $i\R$.
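Note that if $\,A'=A+\pi^*\alpha\,$ with $\,\alpha\in\Omega^1(M;i\R)\,$, then the curvature forms, considered as 2-forms on $M$, are related by
\[ \Omega^{A'}\,=\,\Omega^{A}+d\alpha\,, \]
a standard fact from the theory of connections on $S^1$-principal bundles. Hence adding a multiple of $\theta$ to $A^W$ changes the curvature $\,\Omega^W=-\mbox{Ric}^W\,$ only by an exact 2-form on $M$.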
The connection \[ A_\theta := A^W-\frac{i}{2(n+1)} R^W \theta \] on the canonical $S^1$-bundle $(F,\pi,M)$ is called the {\em Fefferman connection on $F$}.\\ Let us consider the following right-invariant metric on $F$: \[ h_\theta:=\pi^*L_\theta-i\frac{4}{n+2}\pi^*\theta\circ A_\theta, \] where $\circ$ denotes the symmetric tensor product. $h_\theta$ is called the {\em Fefferman metric on $F$}. If $(T_{10}, L_\theta)$ is of signature $(p,q)$, then $h_\theta$ has signature $(2p+1,2q+1)$. In particular, if $(M,T_{10},\theta)$ is strictly pseudoconvex, $h_\theta$ is a Lorentzian metric. The semi-Riemannian manifold $(F,h_\theta)$ is called the {\em Fefferman space of $\,(M,T_{10},\theta)\,$}. The fibres of the ca\-no\-nical $S^1$-bundle $F$ are isotropic submanifolds of $(F,h_\theta)$. From the special choice of the Fefferman connection $A_\theta$ in the definition of $h_\theta$ it follows that the conformal class $[h_\theta]$ of the metric $h_\theta$ is an invariant of the oriented CR-manifold $(M,T_{10})$, i.e.\ if $\tilde{\theta}=f\cdot\theta$, $f>0$, is a further pseudo-hermitian structure on $(M,T_{10})$, then $\,h_{\tilde{\theta}}=f\cdot h_\theta\, $ (see \cite{Lee:86}, Th. 5.17.). We remark that Fefferman spaces are never Einstein spaces. \\ \ \\ In the following we always assume that $(M,T_{10},\theta)$ is strictly pseudoconvex. In order to find global solutions of the Lorentzian twistor equation on Fefferman spaces it is necessary to change the topological type of the canonical $S^1$-bundle.\\ \begin{pr} \label{pr17} Let $(M^{2n+1},T_{10},\theta)$ be a strictly pseudoconvex spin manifold. Then each spinor structure of the Riemannian manifold $(M,g_\theta)$ defines a square root $\sqrt{F}$ of the canonical $S^1$-bundle $F$. (i.e.\ $\sqrt{F}$ is an $S^1$-bundle over $M$ such that the associated line bundle $\,L:=\sqrt{F} \times_{S^1}\C\,$ satisfies $\,L\otimes L=K\,$).
\end{pr} {\bf Proof:} Let $\, U(n) \hookrightarrow SO(2n) \hookrightarrow SO(2n+1)\,$ be the canonical embedding of $U(n)$ in $SO(2n+1)$. \[ P_H:=\{(X_1,JX_1,\ldots,X_n,JX_n,T)\mid (X_1,JX_1,\ldots,X_n,JX_n)\; \mbox{orthonormal basis of } (H,L_\theta)\} \] is a $U(n)$-reduction of the bundle $P_M$ of $SO(2n+1)$-frames of $(M,g_\theta)$. Let $(Q_M,f_M)$ be a spinor structure of $(M,g_\theta)$ and let us denote by $(Q_H,f_H)$ the reduced spinor structure \[ Q_H:= f^{-1}_M(P_H),\qquad f_H:= f_M\mid_{Q_H}. \] Now, the proof of Proposition \ref{pr17} is a repetition of Hitchin's proof of the fact that each spinor structure on a K\"ahler manifold defines a square root of the canonical bundle (see \cite{Hitchin:74}). Since we need some notation later on, we repeat the idea of the proof.\\ Let $\ell:U(n) \longrightarrow \mbox{Spin}(2n)^{\Bbb C}=\mbox{Spin}(2n) \times_{\Bbb Z_2}S^1$ be defined by \begin{equation}\label{34} \ell(A)=\prod\limits^n_{k=1} \left(\cos\frac{\theta_k}{2}+\sin\frac{\theta_k}{2} \cdot f_k\cdot J_0(f_k)\right) \times e^{\frac{i}{2}\sum\limits^n_{k=1}\theta_k}\,, \end{equation} where $(f_1,\ldots,f_n)$ is a unitary basis of $\C^n$ such that $Af_k= e^{i\theta_k}f_k$ and $J_0:\C^n \to \C^n$ is the standard complex structure of $\C^n$.
Then we have the following commutative diagram \vspace{-0.5cm}\\ \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(10,3.3) \put(0,0.3){ $S^1$ } \put(3,0.4){\vector(-3,0){1.5}} \put(2,0.5){det} \put(4,0.3){ $U(n)$} \put(6,0.4){\vector(3,0){1.5}} \put(6.7,0.5){$i$} \put(8,0.3){SO($2n$)} \put(0,2.8){ $S^1$ } \put(1.5,2.9){\vector(3,0){1.5}} \put(2,2.6){$j_2$} \put(3.5,2.8){Spin$(2n)^{\Bbb C}$} \put(7.5,2.9){\vector(-3,0){1.5}} \put(6.7,2.6){$j_1$} \put(7.9,2.8){Spin($2n$)} \put(0.3,2.3){\vector(0,-2){1.3}} \put(4.5,1){\vector(0,2){1.3}} \put(8.7,2.3){\vector(0,-2){1.3}} \put(4.2,1.5){$\ell$} \put(8.9,1.5){$\lambda$} \put(0.5,2){$z$}\put(0.5,1.6){$\downarrow$}\put(0.5,1.2){$z^2$} \put(5.7,2.3){\vector(3,-2){2}} \put(5.8,1.4){{\footnotesize $\lambda \circ pr_1$}} \put(0.54,1.8){-} \end{picture} \end{center} where $i,j_1,j_2$ denote the canonical embeddings and $\lambda:\mbox{Spin} (2n)\to SO(2n)$ is the universal covering of $SO(2n)$. Hence, for each $A\in U(n)$ and each square root of $\det(A)$ one has \[ \lambda^{-1}(A):= j_1\lambda^{-1}(i(A))=\pm\ell(A)\,\mbox{Det}(A)^{-\frac{1}{2}}. \] Now, let $\{(U_{\alpha\beta}\,,\, g_{\alpha\beta}:U_{\alpha\beta}\to \lambda^{-1}(U(n)))\}_{\alpha,\beta}$ be the cocycles defining the reduced spinor structure $\,(Q_H,f_H)\,$. Then on $U_{\alpha\beta}$ we choose a square root $\; h_{\alpha\beta}:U_{\alpha\beta}\to S^1 \;$ of the determinant of $\,\lambda(g_{\alpha\beta})^{-1}\,$ such that \begin{equation}\label{35} h^2_{\alpha\beta}=\mbox{Det}(\lambda(g_{\alpha\beta}))^{-1} \qquad \mbox{ and } \qquad g_{\alpha\beta}=\ell(\lambda(g_{\alpha\beta}))\cdot h_{\alpha\beta}. \end{equation} $\{(U_{\alpha\beta},h_{\alpha\beta})\}_{\alpha\beta}$ are cocycles defining a square root $\,(\sqrt{F},\pi,M)\,$ of the canonical $S^1$-bundle $(F,\pi,M)$. \qed \ \\ Let $\,(\sqrt{F},\pi,M)\,$ be the square root of the canonical $S^1$-bundle defined by the spinor structure of $(M,g_\theta)$.
Then the Webster connection $A^W$ on $F$ defines a corresponding connection $A^{\sqrt{W}}$ on $\sqrt{F}$: Let $\,\{\tilde{s}_\alpha:U_\alpha\to Q_H\}\,$ be a covering of $Q_H$ by local sections with the transition functions $\,g_{\alpha\beta}\;$; $\;\tilde{s}_\alpha =\tilde{s}_\beta\cdot g_{\alpha\beta}\;$. Let $\,s_\alpha=f_H(\tilde{s}_\alpha) \in P_H\,$ and denote by $\,\sqrt{\tau_{s_\alpha}}:U_\alpha\to\sqrt{F}\,$ the local sections in $\sqrt{F}$ with transition functions $\,h_{\alpha\beta}\,$ \[ \sqrt{\tau_{s_\alpha}}=\sqrt{\tau_{s_\beta}}\cdot h_{\alpha\beta}, \] defined by (\ref{35}). Then the local connection forms of $A^{\sqrt{W}}$ are given by \begin{equation}\label{37} \sqrt{\tau_{s_\alpha}}^{\,*} A^{\sqrt{W}}=\frac{1}{2}\tau^*_{s_\alpha}A^W=- \frac{1}{2}\,\mbox{Tr}\,\omega_{s_\alpha} \end{equation} and the curvature of $A^{\sqrt{W}}$ is \begin{equation}\label{38} \Omega^{\sqrt{W}}=\frac{1}{2}\Omega^W= - \frac{1}{2}\mbox{Ric}^W. \end{equation} The connection $\,A^{\sqrt{}}_\theta\,$ on $\,\sqrt{F}\,$ defined by \[ A^{\sqrt{}}_\theta:= A^{\sqrt{W}}-\frac{i}{4(n+1)}R^W\cdot\theta \] is called the {\em Fefferman connection on $\sqrt{F}$} and the Lorentzian metric \[ h_\theta:=\pi^*L_\theta- i\frac{8}{n+2}\pi^*\theta\circ A^{\sqrt{}}_\theta \] is the {\em Fefferman metric on $\sqrt{F}$}. As we will see in the next section, the spinor structure $(Q_M,f_M)$ of $\,(M,g_\theta)\,$ defines a canonical spinor structure on $\,(\sqrt{F},h_\theta)\,$. \begin{de} The Lorentzian spin manifold $(\sqrt{F},h_\theta)$ is called the Fefferman space of the strictly pseudoconvex spin manifold $\,(M,T_{10},\theta,(Q_M,f_M))\,$. \end{de} \ \\ \section{Spinor calculus for $S^1$-bundles with isotropic fibre over strictly pseudoconvex spin manifolds} \label{s6} Let $(M^{2n+1},T_{10},\theta)$ be a strictly pseudoconvex manifold and let $(Q_M,f_M)$ be a spinor structure of $(M,g_\theta)$. 
Furthermore, consider an $S^1$-principal bundle $(B,\pi,M;S^1)$ over $M$, a connection $A$ on $B$ and a constant $c\in\R\backslash\{0\}$. Then \[ h:= h_{A,c} :=\pi^*L_\theta-i c\,\pi^*\theta\circ A \] is a Lorentzian metric on $B$. In this section we want to derive a suitable spinor calculus for the Lorentzian manifold $(B,h)$.\\ Let $\,N\in \Gamma(TB)\,$ be the fundamental vector field on $B$ defined by the element $\frac{2}{c}i\in i\R$ of the Lie algebra $i\R$ of $S^1$ \[ N(b)=\widetilde{\frac{2}{c}i}\,(b):=\frac{d}{dt}\left(b\cdot e^{\frac{2}{c}it}\right)|_{ t=0}. \] Denote by $\,T^*\in \Gamma(TB)\,$ the $A$-horizontal lift of the characteristic vector field $T$ of $\theta$. Then $N$ and $T^*$ are global isotropic vector fields on $B$ such that $h(N,T^*)=1$. Consider the global vector fields \begin{equation}\label{39} s_1 = \frac{1}{\sqrt{2}}(N - T^*) \qquad \mbox{ and } \qquad s_2 = \frac{1}{\sqrt{2}} (N+T^*). \end{equation} Then \[ h(s_1,s_1)=-1, \qquad h(s_2,s_2)=1, \qquad h(s_1,s_2)=0. \] Let the time orientation of $(B,h)$ be given by $s_1$ and the space orientation by the vectors $\,(s_2,X^*_1,JX_1^*,\ldots,X^*_n,JX_n^*)\,$, where $\,(X_1,JX_1,\ldots, X_n,JX_n)\in P_H\,,$ and $X^*$ denotes the $A$-horizontal lift of a vector field $X$ on $M$. Now, let $(Q_H,f_H)$ be the reduced spinor structure of $(M,g_\theta)$ defined in the previous section. Denote by \[ S_H:= Q_H \times_{\lambda^{-1}(U(n))}\Delta_{2n,0} \] the corresponding spinor bundle of $(H,L_\theta)$. Obviously, the bundle \[ \hat{P}_B:=\{(s_1,s_2,X^*_1,JX_1^*,\ldots,X^*_n,JX_n^*) \mid (X_1,JX_1,\ldots,X_n, JX_n) \mbox{ orthonormal basis of } (H,L_\theta)\} \] is a $U(n)$-reduction of the frame bundle $P_B$ of $(B,h)$ with respect to the embedding $\,U(n) \hookrightarrow SO_0(2n+2,1)\,$. Since $\,\hat{P}_B\approx\pi^*P_H\,$ we have \[ P_B \,\approx\, \pi^*P_H \times_{U(n)}\,SO_0(2n+2,1).
\] Therefore, \begin{eqnarray*} Q_B := \pi^*Q_H \times_{\lambda^{-1}(U(n))}\,\mbox{Spin}_0(2n+2,1)\,, \qquad f_B := [f_H,\lambda] \end{eqnarray*} is a spinor structure of the Lorentzian manifold $(B,h)$. The corresponding spinor bundle $S$ on $(B,h)$ is given by \begin{equation}\label{41} S \,=\, \pi^*Q_H \times_{\lambda^{-1}(U(n))}\,\Delta_{2n+2,1}. \end{equation} \vspace{0.2cm} \begin{pr} \label{pr19} Let $S_H$ be the spinor bundle of $(H,L_\theta)$ over $M$. Then the spinor bundle $S$ of $(B,h)$ can be identified with the sum \[ S \approx \pi^*S_H\oplus \pi^*S_H, \] where the Clifford multiplication is given by \begin{eqnarray} \label{42} s_1\cdot (\varphi,\psi) &=& (-\psi,-\varphi) \\ \label{43} s_2\cdot (\varphi,\psi) & = & (-\psi,\varphi) \\ \label{44} X^*\cdot(\varphi,\psi) &=& (-X\cdot\varphi,X\cdot\psi),\qquad X\in H. \end{eqnarray} In particular, \begin{eqnarray} \label{45} N\cdot(\varphi,\psi) & =& (-\sqrt{2}\,\psi,0) \\ \label{46} T^*\cdot(\varphi,\psi) & = & (0,\sqrt{2}\,\varphi). \end{eqnarray} Furthermore, the positive and negative parts of $S$ are \begin{equation}\label{47} S^+ = \pi^*S^+_H\oplus\pi^*S^-_H,\qquad S^- = \pi^*S^-_H \oplus \pi^*S^+_H. \end{equation} The indefinite scalar product $\,\langle \cdot , \cdot \rangle\,$ in $S$ is given by \begin{equation}\label{48} \langle(\varphi,\psi), (\hat{\varphi},\hat{\psi})\rangle=-(\psi,\hat{\varphi})_{S_H} -(\varphi,\hat{\psi})_{S_H}, \end{equation} where $(\cdot,\cdot)_{S_H}$ is the usual positive definite scalar product in $S_H$. \end{pr} {\bf Proof:} By definition of the spinor bundle $S$ (see (\ref{41})) we only have to check how the $\,\mbox{Spin}(2n)$-module $\Delta_{2n+2,1}$ decomposes into $\mbox{Spin}(2n)$-representations. Let the embedding $i:\R^{2n}\to\R^{2n+2,1}$ be given by $i(x)=(0,0,x)$ and let $\mbox{Spin}(2n)\hookrightarrow \mbox{Spin}_0(2n+2,1)$ be the corresponding embedding of the spin groups.
Consider the following isomorphism of the representation spaces \begin{eqnarray*} \begin{array}{lccrcl} \chi :& \Delta_{2n+2,1} & \longrightarrow & \,\Delta_{2n,0} &\oplus & \Delta_{2n,0}\, \\[0.2cm] & \, u \otimes u(1) + v \otimes u(-1)\, &\longmapsto &(u&,&v) \end{array} \end{eqnarray*} where we use the notation of section \ref{s2}. Then formula (\ref{1}) shows that \begin{eqnarray*} \begin{array}{lcl} \chi\,(e_1\cdot (u\otimes u(1)+v\otimes u(-1)))& =& (-u,-v) \\[0.1cm] \chi\,(e_2\cdot(u\otimes u(1) + v\otimes u(-1)))&=&(u,-v)\\[0.1cm] \chi\,(e_k\cdot(u\otimes u(1) + v\otimes u(-1)))&=&(-e_{k-2}\cdot u,e_{k-2}\cdot v), \qquad k>2. \end{array} \end{eqnarray*} Therefore, $\chi$ is an isomorphism of the $\mbox{Spin}(2n)$-representations and the formulas (\ref{42})-(\ref{44}) hold; because of (\ref{39}) the formulas (\ref{45}) and (\ref{46}) are valid as well. Let $\omega_{2n+2}=e_1\cdot\ldots\cdot e_{2n+2}$ be the volume element of $\mbox{Cliff}_{2n+2,1}$ and $\omega_{2n}=e_1\cdots e_{2n}$ the volume element of $\mbox{Cliff}_{2n,0}$. Then using the identification $\chi$ we obtain \[ \omega_{2n+2} \cdot (u,v) = (-\omega_{2n} \cdot u\,,\,\omega_{2n}\cdot v). \] According to the definition of $S^\pm$ this shows (\ref{47}). Because of (\ref{5}) the scalar product satisfies \begin{eqnarray*} \langle(\varphi,\psi),(\hat{\varphi},\hat{\psi})\rangle &=& (s_1\cdot ( \varphi,\psi), (\hat{\varphi},\hat{\psi}))_{s_1} \\&=& ((-\psi,-\varphi), (\hat{\varphi},\hat{\psi}))_{s_1}\\ &=& -(\psi,\hat{\varphi})_{S_H}-(\varphi,\hat{\psi})_{S_H}. \end{eqnarray*} \qed \ \\ In order to describe the spinor derivative in the spinor bundle $S$ of $B$ we need the connection forms of the Levi-Civita connection of $(B,h)$. Let $X,Y,Z$ be local vector fields on $(B,h)$ of constant length and constant scalar products with each other. Then the Levi-Civita connection $\nabla$ of $(B,h)$ satisfies \begin{equation}\label{49} h(\nabla_XY,Z)=\frac{1}{2}\{h([X,Y],Z)+h([Z,Y],X)+h([Z,X],Y)\}.
\end{equation} For a vector $Z\in T_bB$ we denote by $Z^h$ the projection on the horizontal tangent space and by $Z^v$ the projection on the vertical tangent space. If $X\in T_{\pi(b)}M$, then $X^*\in T_bB$ denotes the horizontal lift of $X$. Let $\,\Omega^A \in \Omega^2(M;i\R)\,$ be the curvature form of the connection $A$. From the theory of connections in principal bundles it follows that for vector fields $X,Y$ on $M$ \begin{eqnarray} \label{50} [X^*,N]\,\,\, & = & 0 \,,\\ \label{51} [X^*,Y^*]^v &= & i\,\frac{c}{2}\,\Omega^A(X,Y)\cdot N\,,\\ \label{52} [X^*,Y^*]^h &= & [X,Y]^*\,. \end{eqnarray} Now, let $X,Y\in\Gamma(H)$. Since $\;[T,X] \in \Gamma(H)\;$ and \[ [X,Y] = \mbox{pr}_H[X,Y] + \theta([X,Y])\cdot T = \mbox{pr}_H[X,Y]-d\theta(X,Y)\cdot T \] we obtain from (\ref{51}) and (\ref{52}) \begin{eqnarray} \label{53} [T^*,X^*] &=& [T,X]^* + i\,\frac{c}{2}\,\Omega^A(T,X) \cdot N\,, \\ \mbox{}[X^*,Y^*] &=& \mbox{pr}_H [X,Y]^* - d\theta (X,Y) \cdot T^* + i\,\frac{c}{2}\,\Omega^A(X,Y) \cdot N. \label{54} \end{eqnarray} \\ \begin{pr} \label{pr20} Let $X,Y,Z\in\Gamma(H)$ be vector fields of constant length and constant $L_\theta$-scalar products with each other. Then \begin{eqnarray*} h(\nabla_{X^*}Y^*,Z^*) &=& L_\theta(\nabla^W_XY,Z)\\ h(\nabla_NY^*,Z^*) &=& \frac{1}{2} d\theta(Y,Z)\\ h(\nabla_{T^*}Y^*,Z^*) &=& \frac{1}{2}\{L_\theta([T,Y],Z)-L_\theta([T,Z],Y) -i\frac{c}{2}\Omega^A(Y,Z)\}\\ h(\nabla_{X^*}Y^*,N) &=& -\frac{1}{2}d\theta(X,Y)\\ h(\nabla_{X^*}Y^*,T^*) &=& \frac{1}{2}\{L_\theta([T,X],Y)+L_\theta([T,Y],X) + i\frac{c}{2}\Omega^A(X,Y)\}\\ h(\nabla_{T^*}T^*,Z^*) &=& -i\,\frac{c}{2}\,\Omega^A(T,Z)\\ h(\nabla N,T^*) &=& h(\nabla T^*,T^*)\,=\,h(\nabla N,N)\,=\,0 \\ h(\nabla_NN,Z^*) &=& h(\nabla_NT^*,Z^*)\,=\,h(\nabla_{T^*}N,Z^*)\,=\,0.
\end{eqnarray*} \end{pr} {\bf Proof:} From (\ref{49}) and (\ref{54}) it follows \begin{eqnarray*} 2\,h(\nabla_{X^*}Y^*,Z^*) &=& h(\,\mbox{pr}_H[X,Y]^*,Z^*)+h(\,\mbox{pr}_H[Z,Y]^*, X^*) + h(\,\mbox{pr}_H[Z,X]^*,Y^*)\\ &=& L_\theta([X,Y],Z)+L_\theta([Z,Y],X)+L_\theta([Z,X],Y). \end{eqnarray*} According to (\ref{29}) $\,\mbox{Tor}^W(X,Y)=L_\theta(JX,Y)\cdot T\,$. Hence, \begin{eqnarray*} L_\theta([X,Y],Z) &=& L_\theta(\nabla^W_XY-\nabla^W_YX-\mbox{Tor}^W(X,Y),Z)\\ &=& L_\theta(\nabla^W_XY-\nabla^W_YX,Z). \end{eqnarray*} Therefore, using that $\nabla^W$ is metric with respect to $L_\theta$ we obtain \[ h(\nabla_{X^*}Y^*,Z^*)=L_\theta(\nabla_X^WY,Z). \] The other formulas follow immediately from the definition of $h$ and (\ref{49}), (\ref{50}), (\ref{53}) and (\ref{54}). \qed \ \\ By definition the spinor derivative on $S$ is given by the following formula:\\ Let $\,\tilde{s}:U \longrightarrow Q_H\,$ be a local section in $Q_H$ and $\,s=(X_1,\ldots, X_{2n})=f_H(\tilde{s})\in P_H\,$ the corresponding orthonormal basis in $(H,L_\theta)$. Consider a local spinor field $\,\phi=[\,\tilde{s},u\,]\,$ in $S$. Then \begin{eqnarray*} \nabla^{S}\phi &=& [\,\tilde{s},du\,]-\frac{1}{2}\,h(\nabla s_1,s_2)\,s_1\cdot s_2\cdot\phi -\frac{1}{2}\sum\limits^{2n}_{k=1}h(\nabla s_1,X^*_k)\,s_1\cdot X^*_k\cdot\phi\\ && + \frac{1}{2}\sum\limits^{2n}_{k=1}h(\nabla s_2,X_k^*)\,s_2\cdot X^*_k\cdot\phi + \frac{1}{2}\sum\limits_{k<l} h(\nabla X_k^*,X^*_l)\,X^*_k\cdot X^*_l\cdot\phi. \end{eqnarray*} Using the definition of $s_1$ and $s_2$ (see (\ref{39})) and Proposition \ref{pr20} we obtain $\,h(\nabla s_1, s_2)=0\,$. 
Furthermore, if we denote by $a_k(Z)$ the vector field \[ a_k(Z):= h(\nabla_Zs_2,X^*_k)\,s_2-h(\nabla_Zs_1,X^*_k)\,s_1, \] Proposition \ref{pr20} yields \begin{eqnarray*} a_k(N) &=& 0\\ a_k(T^*) &=& -i\,\frac{c}{2}\,\Omega^A(T,X_k) \cdot N\\ a_k(X^*_j) &=& \frac{1}{2}\, d\theta (X_j,X_k)\,T^*-\frac{1}{2} \{L_\theta([T,X_j], X_k) +\\ && + L_\theta([T,X_k],X_j)+i\,\frac{c}{2}\,\Omega^A(X_j,X_k)\}N. \end{eqnarray*} These formulas and Proposition \ref{pr20} give the following formulas for the spinor derivative in the spinor bundle $S$ of $(B,h)$: \\ \begin{pr} \label{pr21} Let $\,\tilde{s}:U \longrightarrow Q_H\,$ be a local section in $Q_H$, $\,s=f_H(\tilde{s})=(X_1,\ldots,X_{2n})\,$ and let $\,\phi=[\,\tilde{s},u\,]\,$ be a local section in $S$. Then the spinor derivative of $\phi$ satisfies: \begin{eqnarray*} \nabla^{S}_N\phi &=& [\,\tilde{s},N(u)\,]+\frac{1}{4} \,d\theta^*\cdot\phi\\ \nabla^{S}_{T^*}\phi &=& [\,\tilde{s},T^*(u)\,]+i\,\frac{c}{2}\,(T\;_-\!\rule{0.2mm}{0.2cm}\;\; \Omega^A)^*\cdot N\cdot\phi-i\,\frac{c}{8}\,(\Omega^A_\theta)^*\cdot\phi\\ && + \frac{1}{4}\sum\limits_{k<l}\{L_\theta([T,X_k],X_l)-L_\theta([T,X_l],X_k) \}X^*_k\cdot X^*_l \cdot\phi\\ \nabla^{S}_{X^*}\phi &=& [\,\tilde{s},X^*(u)\,] - \frac{1}{4}(X \;_-\!\rule{0.2mm}{0.2cm}\;\; d\theta)^* \cdot T^*\cdot \phi + i\,\frac{c}{8}\,(X \;_-\!\rule{0.2mm}{0.2cm}\;\; \Omega^A)_\theta^*\cdot N\cdot\phi\\ && + \frac{1}{4}\sum\limits^n_{k=1}\{L_\theta([T,X],X_k)+L_\theta([T,X_k],X)\}X^*_k\cdot N\cdot\phi\\ && + \frac{1}{2}\sum\limits_{k<l}L_\theta(\nabla^W_X X_k,X_l)\,X^*_k\cdot X^*_l \cdot \phi, \end{eqnarray*} where $\sigma_\theta$ denotes the projection of a form $\sigma\in \Lambda^p M$ onto $\Lambda^p_\theta M$, $\sigma^*_\theta$ is its horizontal lift on $B$ and the vector field $X$ belongs to the set $\{X_1,\ldots,X_{2n}\}$. \end{pr} \ \\ \begin{pr} \label{pr22} Let $(X_1,\ldots,X_{2n})$ be a local orthonormal basis of $(H,L_\theta)$ with $\,X_{2\alpha}=J(X_{2\alpha-1})\,$.
Denote by $\,\sigma^1, \ldots,\sigma^{2n}\,$ the dual basis of $\,(X_1,\ldots,X_{2n})\,$ and by $\,s=(Z_1, \ldots,Z_n)\,$, $\,Z_\alpha=\frac{1}{\sqrt{2}}(X_{2\alpha-1}-iX_{2\alpha})\,$, the corresponding local unitary basis of $(T_{10},L_\theta)$. Consider the 2-forms \begin{eqnarray*} b_s &:= & \sum\limits_{k<l} \,\{L_\theta([T,X_k],X_l) - L_\theta([T,X_l],X_k)\}\, \sigma^k \wedge\sigma^l, \\ d_s(X) &:=& \sum\limits_{k<l}\,L_\theta(\nabla^W_X X_k,X_l)\,\sigma^k\wedge\sigma^l\,,\quad \qquad X\in H. \end{eqnarray*} Then \begin{enumerate} \item[1)] $b_s\in\Lambda^{1,1}_\theta(M)\,$ and $\,\mbox{Tr}_{\,\theta}\, b_s = 2 \mbox{Tr}\, \omega_s(T)$ \item[2)] $d_s(X)\in\Lambda^{1,1}_\theta(M)\,$ and $\,\mbox{Tr}_{\,\theta} \,d_s(X)= \mbox{Tr}\,\omega_s(X)$, \end{enumerate} where $\omega_s$ is the matrix of connection forms of the Webster connection $\nabla^W$ with respect to $s=(Z_1,\ldots,Z_n)$. \end{pr} {\bf Proof:} A 2-form $\sigma$ belongs to $\Lambda^{1,1}M$ iff $\,\sigma(JX,JY)= \sigma(X,Y)\,$ for all $X,Y\in H$. From formula (\ref{20}) of Proposition \ref{pr12} it follows that for $X,Y\in\{X_1,\ldots,X_{2n}\}$ \begin{eqnarray*} b_s(JX,JY) &=& L_\theta([T,JX],JY)-L_\theta([T,JY],JX)\\ & \stackrel{(\ref{20})}{=} & L_\theta([T,X],Y)-L_\theta ([T,Y],X)\\ &=& b_s(X,Y). \end{eqnarray*} Therefore, $b_s\in\Lambda^{1,1}_\theta M$. Furthermore, \begin{eqnarray*} \mbox{Tr}_\theta \,b_s &=& i\sum\limits^n_{\alpha=1}b_s(X_{2\alpha-1},X_{2\alpha})\\ &=& i\sum\limits^n_{\alpha=1}\{L_\theta([T,X_{2\alpha-1}],X_{2\alpha})-L_\theta ([T,X_{2\alpha}],X_{2\alpha-1})\}.
\end{eqnarray*} Inserting \[ X_{2\alpha-1}=\frac{1}{\sqrt{2}}(Z_\alpha+\bar{Z}_\alpha),\quad X_{2\alpha} =\frac{i}{\sqrt{2}}(Z_\alpha-\bar{Z}_\alpha) \] one obtains \begin{eqnarray*} \mbox{Tr}_\theta\, b_s &=& \sum\limits^n_{\alpha=1}\{L_\theta([T,Z_\alpha],Z_\alpha)- L_\theta([T,\bar{Z}_\alpha],\bar{Z}_\alpha)\}\\ &=& 2i \,\sum\limits^n_{\alpha=1}\,\mbox{Im}\, \{L_\theta(\mbox{pr}_{10}[T,Z_\alpha], Z_\alpha)\}\\ &=& 2i\,\sum\limits^n_{\alpha=1}\,\mbox{Im}\,L_\theta(\nabla^W_T Z_\alpha,Z_\alpha)\\ &=& 2i\,\mbox{Im}\,(\,\mbox{Tr}\,\omega_s(T))\,. \end{eqnarray*} Since $\nabla^W$ is metric with respect to $L_\theta$, we have $\,\omega_{\alpha\beta} +\overline{\omega_{\beta\alpha}}=0\,$. Hence, $\,\omega_{\alpha\alpha}(T)\,$ is imaginary. Therefore, $\,\mbox{Tr}_\theta\, b_s=2\,\mbox{Tr}\, \omega_s(T)$.\\ According to formula (\ref{31}) of Proposition \ref{pr15} and formula (\ref{19}) of Proposition \ref{pr12} we have for $Y,Z\in\{X_1,\ldots,X_{2n}\}\,$ and $\,X\in H$ \begin{eqnarray*} d_s(X)(JY,JZ) &=& L_\theta(\nabla^W_XJY,JZ) \,=\, L_\theta(J\nabla^W_XY,JZ)\\ &=& L_\theta(\nabla_X^WY,Z)\, = \, d_s(X)(Y,Z). \end{eqnarray*} This shows that $\,d_s(X)\in\Lambda^{1,1}_\theta(M)\,$. Furthermore, \begin{eqnarray*} \mbox{Tr}_\theta\, d_s(X) &=& i\,\sum\limits^n_{\alpha=1}\,L_\theta(\nabla^W_X X_{2\alpha-1}, X_{2\alpha})\\ &=& \frac{1}{2}\,\sum\limits^n_{\alpha=1}\,\{L_\theta(\nabla_X^WZ_\alpha,Z_\alpha)- L_\theta(\nabla^W_X\bar{Z}_\alpha,\bar{Z}_\alpha)\}\\ &=& i\,\mbox{Im Tr}\, \omega_s(X)\\ &=& \mbox{Tr}\, \omega_s(X). \end{eqnarray*} \qed \ \\ Next we prove a property of the spinor bundle $S_H$ of $(H,L_\theta)$, which is very similar to the properties of the spinor bundle of K\"ahler manifolds (see \cite{Kirchberg:86}).\\ \begin{pr} \label{pr23} Let $\,(M^{2n+1},T_{10},\theta)\,$ be a strictly pseudoconvex spin manifold and $(\sqrt{F},h_\theta)$ its Fefferman space.
Then the spinor bundle $S_H$ of $(H,L_\theta)$ has the following properties: \begin{enumerate} \item $S_H$ decomposes into $\,n+1\,$ subbundles \[ S_H=\bigoplus\limits^n_{r=0} S_{(-n+2r)i}, \] where $S_{ki}$ is the eigenspace of the endomorphism $d\theta\cdot :S_H \to S_H$ to the eigenvalue $ki$. The dimension of $S_{ki}$ is {\footnotesize $\left( \begin{array}{c} n\\ \frac{n+k}{2}\end{array}\right)$}. In particular, there are two 1-dimensional subbundles $\,S_{\varepsilon ni},\, \varepsilon = \pm 1\,$, of $\,S_H\,$ satisfying $\,d\theta\cdot |_{S_{\varepsilon ni}}=\varepsilon ni\cdot\mbox{Id}_{S_{\varepsilon ni}}$. \item If $\,\sigma\in\Lambda^{1,1}_\theta M\,$, then \[ \sigma\cdot|_{S_{\varepsilon ni}}=\varepsilon\cdot\mbox{Tr}_\theta(\sigma)\cdot\mbox{Id}_{S_{ \varepsilon ni}}. \] \item The induced bundles $\pi^*S_{\varepsilon ni}$ on the Fefferman space $\sqrt{F}$ are trivial. A global section $\psi_{\varepsilon}\in \Gamma(\pi^*S_{\varepsilon ni})$ is given in the following way:\\ Let $\,\tilde{s}:U \longrightarrow Q_H\,$ be a local section in $Q_H$, $s$ the local unitary basis in $(T_{10},L_\theta)$, corresponding to $\,f_H(\tilde{s}):U \longrightarrow P_H\,$. Furthermore, let $\,\sqrt{\tau_s}:U \longrightarrow \sqrt{F}\,$ be the local section in $\sqrt{F}$ defined by $\tilde{s}$ and let $\,\varphi_s:\sqrt{F}|_U \longrightarrow S^1\,$ be given by $\,p=\sqrt{\tau_s}(\pi(p))\cdot \varphi_s(p)\,$. Then \[ \psi_{\varepsilon}(p) := [\,\tilde{s}(\pi(p)),\varphi_s(p)^{-\varepsilon}u(\varepsilon,\cdots,\varepsilon)\,] \] defines a global section in $\,\Gamma(\pi^*S_{\varepsilon ni})$. \end{enumerate} \end{pr} {\bf Proof:} $\Delta^\pm_{2n,0}$ is a $U(n)$-representation, where $U(n)$ acts by \[ U(n)\,\stackrel{\ell}{\longrightarrow} \, \mbox{Spin}^{\Bbb C}(2n)\,\, \;\stackrel{\Phi_{2n,0}}{\longrightarrow} \,\,\, \mbox{GL}(\Delta^\pm_{2n,0}).
\] The element $\,\Omega_0=e_1\cdot e_2+\cdots+ e_{2n-1}\cdot e_{2n}\in \mbox{Cliff}\,^{\Bbb C}_{2n,0}\,$ acts on $\Delta^\pm_{2n,0}$ by \[ \Omega_0\cdot u(\varepsilon_1,\ldots, \varepsilon_n) =i\,(\sum\limits^n_{k=1}\varepsilon_k)\,u(\varepsilon_1,\ldots, \varepsilon_n). \] Hence, $\Delta^\pm_{2n,0}$ decomposes into $U(n)$-invariant eigenspaces $E_{\mu_r}(\Omega_0)$ of $\Omega_0$ to the eigenvalues $\mu_r=(-n+2r)i$, $r=0,\ldots,n\,$. In particular, the eigenspace to the eigenvalue $\varepsilon ni$, $\varepsilon=\pm 1\,,$ is 1-dimensional and given by \[ E_{in\varepsilon}(\Omega_0)=\C\cdot u(\varepsilon,\ldots,\varepsilon). \] By definition of $\ell$ (see (\ref{34})) we obtain for $\,A=\mbox{diag}(e^{i\theta_1}, \ldots, e^{i\theta_n})\,$ \begin{eqnarray}\label{55} \ell(A)u(\varepsilon,\ldots,\varepsilon)=\left\{\begin{array}{cl} u(\varepsilon,\ldots,\varepsilon) &\quad \varepsilon=-1\\ \mbox{Det} A\cdot u(\varepsilon,\ldots,\varepsilon) &\quad \varepsilon=1. \end{array}\right. \end{eqnarray} Hence, $E_{-ni}$ is the trivial $U(n)$-representation and $E_{ni}$ is isomorphic to the $U(n)$-representation $\Lambda^n(\C^n)$. Since the subspaces $E_{\mu_r} (\Omega_0)$ of $\Delta^\pm_{2n,0}$ are invariant under the action of $\lambda^{-1}(U(n))$ we obtain the decomposition \[ S_H= \bigoplus\limits^n_{r=0} S_{\mu_r}, \] where $\,S_{\mu_r}:= Q_H\times_{\lambda^{-1}(U(n))} E_{\mu_r}(\Omega_0)$. \\ If $\,\tilde{s}:U \longrightarrow Q_H\,$ is a local section in $Q_H$, $d\theta$ acts on $S_H$ by \[ d\theta\cdot [\,\tilde{s}\,,\,v\,]=[\,\tilde{s}\,,\,\Omega_0\cdot v\,]. \] Therefore, $S_{\mu_r}$ is the eigenspace of $d\theta\cdot$ to the eigenvalue $\mu_r$.\\ Now, let $\,\eta=[\,q\,,\,u(\varepsilon,\ldots,\varepsilon)\,]\in S_{\varepsilon ni}\,$, $\varepsilon=\pm 1$.
Denote $\,f_H(q)=(X_1,\ldots,X_{2n})\in P_H\,$, $\,X_{2\alpha}=JX_{2\alpha-1}\,$ and $\,s=(Z_1,\ldots,Z_n)\,$ the corresponding unitary basis in $(T_{10},L_\theta)$ with $\,Z_\alpha=\frac{1}{\sqrt{2}}(X_{2\alpha-1}-i JX_{2\alpha-1})\,$. Let $\,(\theta^1,\ldots,\theta^n)\,$ be the dual basis of $(Z_1,\ldots,Z_n)$ and $(\sigma^1,\ldots,\sigma^{2n})$ the dual basis of $(X_1,\ldots,X_{2n})$. If $\,\sigma\in\Lambda^{1,1}_\theta M\,$ is a form of type (1,1), then \begin{eqnarray*} \sigma &=& \sum\limits^n_{\alpha,\beta=1}\,\sigma_{\alpha\beta}\,\theta^\alpha\wedge \overline{\theta^\beta}\\ &=& \frac{1}{2} \sum\limits_{\alpha\not=\beta}\sigma_{\alpha\beta}\,(\sigma^{2\alpha-1} \wedge \sigma^{2\beta-1}+\sigma^{2\alpha}\wedge\sigma^{2\beta}) +\frac{i}{2}\sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}\,(\sigma^{2\alpha}\wedge \sigma^{2\beta-1}-\sigma^{2\alpha -1}\wedge\sigma^{2\beta}). \end{eqnarray*} Hence, \begin{eqnarray*} \sigma\cdot\eta \,=\, [\,q\,,&\frac{1}{2}\,\sum\limits_{\alpha\not=\beta}\sigma_{\alpha\beta} \,(e_{2\alpha-1}\cdot e_{2\beta-1}+e_{2\alpha}\cdot e_{2\beta})\cdot u(\varepsilon,\ldots,\varepsilon)\\ & + \, \frac{i}{2}\,\sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}\,(e_{2\alpha} \cdot e_{2\beta-1}-e_{2\alpha-1}\cdot e_{2\beta})\cdot u(\varepsilon,\ldots,\varepsilon)\,] \end{eqnarray*} where $\,\sigma_{\alpha\beta}=\sigma(Z_\alpha,\bar{Z}_\beta)\,$. Using formula (\ref{1}) we obtain \begin{eqnarray*} (e_{2\alpha-1}\cdot e_{2\beta-1}+e_{2\alpha}\cdot e_{2\beta})\cdot u(\varepsilon, \ldots,\varepsilon) &=& 0 \hspace{3cm} \alpha\not=\beta\\ (e_{2\alpha}\cdot e_{2\beta-1}-e_{2\alpha-1}\cdot e_{2\beta}) \cdot u(\varepsilon,\ldots, \varepsilon)&=& 0 \hspace{3cm} \alpha\not=\beta\\ (e_{2\alpha}\cdot e_{2\alpha-1}-e_{2\alpha-1}\cdot e_{2\alpha}) \cdot u(\varepsilon,\ldots, \varepsilon) &=& -2\varepsilon i\, u(\varepsilon,\ldots,\varepsilon). 
\end{eqnarray*} Therefore, \begin{eqnarray*} \sigma\cdot\eta &=& [\,q\,,\,\varepsilon\,\sum\limits_\alpha\, \sigma(Z_\alpha,\bar{Z}_\alpha)\cdot u(\varepsilon,\ldots,\varepsilon)\,]\\ &=& \varepsilon\cdot \mbox{Tr}_\theta\,\sigma\cdot\eta. \end{eqnarray*} Now, let us consider the section $\psi_{\varepsilon}\in\Gamma(\pi^* S_{\varepsilon ni})$ defined by \[ \psi_{\varepsilon}(p) := [\, \tilde{s}(\pi(p))\,,\, \varphi_s(p)^{-\varepsilon}u(\varepsilon,\ldots, \varepsilon)\,]. \] Let $\tilde{s}, \tilde{\hat{s}}:U \longrightarrow Q_H$ be two local sections, $\tilde{s}= \tilde{\hat{s}}\cdot g$ and let $h:U\longrightarrow S^1$ be the function defined by (\ref{35}): \begin{equation}\label{56} \ell(\lambda(g))\cdot h=g,\quad h^2=\mbox{Det}(\lambda(g))^{-1}. \end{equation} Then $\,\varphi_{\hat{s}}(p)=\varphi_s(p)\cdot h(\pi(p))\,$ and \begin{eqnarray*} \psi_{\varepsilon}(p) &=& [\,\tilde{\hat{s}}\cdot g\,,\, \varphi_s(p)^{-\varepsilon}u(\varepsilon,\ldots, \varepsilon)\,]\\ &=& [\,\tilde{\hat{s}}\,,\,\varphi_s(p)^{-\varepsilon}\, g\cdot u(\varepsilon,\ldots,\varepsilon)\,]\\ &=& [\,\tilde{\hat{s}}\,,\,\varphi_{\hat{s}}(p)^{-\varepsilon}\, h^{\varepsilon}\, g\cdot u(\varepsilon,\ldots,\varepsilon)\,]\\ &\stackrel{(\ref{56})}{=} & [\,\tilde{\hat{s}}\,,\, \varphi_{\hat{s}} (p)^{-\varepsilon}h^{\varepsilon+1}\ell(\lambda(g))\,u(\varepsilon,\ldots,\varepsilon)\,]\\ &\stackrel{(\ref{55}),(\ref{56})}{=}& [\,\tilde{\hat{s}}\,,\,\varphi_{ \hat{s}}(p)^{-\varepsilon}\, u(\varepsilon,\ldots,\varepsilon)\,]. \end{eqnarray*} Hence, $\,\psi_{\varepsilon}\,$ is a global section in the bundle $\,\pi^*S_{\varepsilon ni}\,$ on $\sqrt{F}$. \qed \ \\ \section{Twistor spinors on Fefferman spaces} \label{s7} Let $\,(M^{2n+1},T_{10},\theta)\,$ be a strictly pseudoconvex spin manifold, $(\sqrt{F},\pi,M)$ the square root of the canonical $S^1$-bundle corresponding to the spinor structure of $(M,g_\theta)$ and $h_\theta$ the Fefferman metric on $\sqrt{F}$. 
Denote by $\psi_{\varepsilon}\in\Gamma(\pi^*S_H)$ the global sections in the bundles $\pi^* S_{\varepsilon ni}$ over $\sqrt{F}$ defined in Proposition \ref{pr23}. Now, we are able to solve the twistor equation on the Lorentzian spin manifold $\,(\sqrt{F},h_\theta)\,$.\\ \begin{th} \label{t1} Let $\,S:=\pi^*S_H\oplus\pi^*S_H\,$ be the spinor bundle of $\,(\sqrt{F},h_\theta)\,$. Then the spinor fields $\,\phi_{\varepsilon}:=(\psi_{\varepsilon},0) \in\Gamma(S)\,$, $\varepsilon=\pm 1\,$, are solutions of the twistor equation on $\,(\sqrt{F},h_\theta)\,$ with the following properties: \begin{enumerate} \item The canonical vector field $V_{\phi_{\varepsilon}}$ of $\phi_{\varepsilon}$ is a regular isotropic Killing vector field. \item $V_{\phi_{\varepsilon}}\cdot \phi_{\varepsilon}=0\,.$ \item $\nabla^S_{V_{\phi_{\varepsilon}}}\phi_{\varepsilon}=-\frac{1}{\sqrt{2}}\, \varepsilon\, i\,\phi_{\varepsilon}\,.$ \item $\|\phi_{\varepsilon}\|_\xi\equiv 1$. \end{enumerate} \end{th} {\bf Remark:} If $n$ is even, then $\phi_1$ and $\phi_{-1}$ are linearly independent spinor fields in $S^+$. If $n$ is odd then $\phi_1\in \Gamma(S^+)$ and $\phi_{-1}\in\Gamma(S^-)$ (see Proposition \ref{pr19}). The second property of Theorem \ref{t1} shows that $\phi_{\varepsilon}$ is a pure or partially pure spinor field (see \cite{Trautman:94}). A vector field is called {\em regular}, if all of its integral curves are closed and of the same shortest period.\\ \ \\ {\bf Proof of Theorem 1:} We use the formulas for the spinor derivative in $S$ given in Proposition \ref{pr21} for the Fefferman connection $\,A=A^{\sqrt{}}_\theta\,$ and the constant $\,c=\frac{8}{n+2}\,$. Let $\tilde{s}:U\longrightarrow Q$ be a local section and $\,\varphi_s:\sqrt{F}|_U \longrightarrow S^1\,$ the corresponding transition function in $\sqrt{F}$ (see Proposition \ref{pr23}). Then for the fundamental vector field $N$ on $\,\sqrt{F}\,$ \begin{equation}\label{57} N(\varphi_s)=\frac{n+2}{4}i \, \varphi_s \end{equation} holds. 
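Formula (\ref{57}) can be read off from the definition of $\varphi_s$: since $\,p=\sqrt{\tau_s}(\pi(p))\cdot\varphi_s(p)\,$, the function $\varphi_s$ is $S^1$-equivariant, $\,\varphi_s(p\cdot z)=\varphi_s(p)\cdot z\,$. Assuming, as the normalization in (\ref{51}) suggests, that $N$ is the fundamental vector field of the Lie algebra element $\,\frac{2}{c}\,i\in i\R\,$ (a sketch of the computation, not taken verbatim from the text):

```latex
\[
  N(\varphi_s)(p)
  = \frac{d}{dt}\Big|_{t=0}\,\varphi_s\bigl(p\cdot e^{\frac{2}{c}\,it}\bigr)
  = \frac{2}{c}\,i\,\varphi_s(p)
  = \frac{n+2}{4}\,i\,\varphi_s(p),
  \qquad c=\frac{8}{n+2}.
\]
```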
If $\,Y^*\,$ is an $\,A^{\sqrt{}}_\theta$-horizontal lift of a vector field $Y$ on $M$, we obtain using standard formulas from connection theory \begin{eqnarray}\label{58} Y^*(\varphi_s) &=& -\varphi_s\cdot \sqrt{\tau_s}^*A^{\sqrt{}}_\theta(Y)\nonumber\\ &=& \frac{1}{2}\,\varphi_s\,\{\,\mbox{Tr}\, \omega_s(Y)+\frac{i}{2(n+1)}R^W \theta(Y)\}, \end{eqnarray} where $\omega_s$ is the matrix of connection forms of the Webster connection with respect to the unitary basis $s$ in $(T_{10},L_\theta)$ corresponding to $f_H(\tilde{s})$. According to Proposition \ref{pr19} we have $\,N\cdot \phi_{\varepsilon}=0\,$. Therefore, from Proposition \ref{pr21} together with (\ref{57}) and (\ref{58}) we obtain \begin{eqnarray*} \nabla^S_N\,\phi_{\varepsilon}&=&\left(-\varepsilon\,\frac{n+2}{4} \,i\,\psi_{\varepsilon}+ \frac{1}{4}\,d\theta\cdot \psi_{\varepsilon}\,,\,0\,\right)\\ \nabla^S_{T^*}\phi_{\varepsilon} &=&\left(-\frac{1}{2}\,\varepsilon\,\{\mbox{Tr}\,\omega_s(T)+\frac{i}{ 2(n+1)}R^W\}\psi_{\varepsilon}+\frac{1}{4}\,b_s\cdot \psi_{\varepsilon} - i\,\frac{1}{n+2}\,\Omega^{A^{\sqrt{}}_\theta}_\theta \cdot \psi_{\varepsilon}\,,\,0\,\right)\\ \nabla^S_{X^*} \phi_{\varepsilon}&=& \left(-\frac{1}{2}\,\varepsilon\,\mbox{Tr}\, \omega_s(X)\,\psi_{\varepsilon} + \frac{1}{2}\,d_s(X)\cdot\psi_{\varepsilon}\,,\,0\,\right) -\frac{1}{4}(X \;_-\!\rule{0.2mm}{0.2cm}\;\; d\theta)^*\cdot T^*\cdot\phi_{\varepsilon}\,, \end{eqnarray*} where $\,b_s\,$ and $\,d_s(X)\,$ are the $\Lambda^{1,1}$-forms defined in Proposition \ref{pr22}. Since $\psi_{\varepsilon}$ is a section in $\pi^*S_{\varepsilon ni}$, $b_s$ and $d_s(X)$ act on $\psi_{\varepsilon}$ by multiplication with $\,\varepsilon\mbox{Tr}_\theta\, b_s\,$ and $\,\varepsilon\,\mbox{Tr}_\theta\, d_s(X)\,$, respectively (Proposition \ref{pr23}).
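In the $N$-direction the two terms combine directly, since $\,d\theta\cdot\psi_{\varepsilon}=\varepsilon ni\,\psi_{\varepsilon}\,$ by Proposition \ref{pr23}; spelling out the arithmetic behind (\ref{59}):

```latex
\[
  -\varepsilon\,\frac{n+2}{4}\,i\,\psi_{\varepsilon}
  + \frac{1}{4}\,d\theta\cdot\psi_{\varepsilon}
  = \Bigl(-\frac{n+2}{4}+\frac{n}{4}\Bigr)\varepsilon\,i\,\psi_{\varepsilon}
  = -\frac{\varepsilon}{2}\,i\,\psi_{\varepsilon}.
\]
```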
Hence, according to Proposition \ref{pr22}, \begin{eqnarray*} \nabla^S_{T^*}\phi_{\varepsilon} &=& \left(\,-i\frac{1}{n+2}\,\Omega^{A^{\sqrt{}}_\theta}_\theta \cdot\psi_{\varepsilon} - \varepsilon\,\frac{i}{4(n+1)}\,R^W \psi_{\varepsilon}\,,\,0\,\right)\\ \nabla^S_{X^*}\phi_{\varepsilon} &=& -\frac{1}{4}(X\;_-\!\rule{0.2mm}{0.2cm}\;\; d\theta)^*\cdot T^*\cdot \phi_{\varepsilon}. \end{eqnarray*} Furthermore, $\psi_{\varepsilon}$ is an eigenspinor of the action of $d\theta$ on $S_H$ to the eigenvalue $\varepsilon ni$. Therefore, \begin{equation}\label{59} \nabla^S_N\phi_{\varepsilon} = -\frac{\varepsilon}{2}\,i\,\phi_{\varepsilon}\,. \end{equation} Because of \begin{eqnarray*} \Omega^{A^{\sqrt{}}_\theta}_\theta \,=\, -\frac{1}{2}\,\mbox{Ric}^W_\theta - \frac{i}{4(n+1)} \,d(R^W\theta)_\theta\,=\, -\frac{1}{2}\,\mbox{Ric}^W_\theta-\frac{i}{4(n+1)}\,R^Wd\theta, \end{eqnarray*} the curvature $\,\Omega^{A^{\sqrt{}}_\theta}_\theta\,$ of the Fefferman connection is a form of type (1,1). Hence, \begin{eqnarray*} \Omega^{A^{\sqrt{}}_\theta}_\theta\cdot\psi_{\varepsilon} &=& \varepsilon \,\, \mbox{Tr}_\theta (\Omega^{A^{\sqrt{}}_\theta}) \, \psi_{\varepsilon}\\ &=& (-\frac{1}{2}\,\varepsilon \, R^W - \frac{i\varepsilon}{4(n+1)} \, R^W \, in)\,\psi_{\varepsilon}\\ &=& -\varepsilon\frac{n+2}{4(n+1)} R^W \,\psi_{\varepsilon}. \end{eqnarray*} Therefore, we obtain \begin{equation}\label{60} \nabla^S_{T^*}\phi_{\varepsilon}=0. \end{equation} According to Proposition \ref{pr19}, $\,T^*\cdot\phi_{\varepsilon}=(\,0,\sqrt{2}\psi_{\varepsilon}\,)\,$. If $X\in\{X_1,\ldots,X_{2n}\}$, the 1-form $X\;_-\!\rule{0.2mm}{0.2cm}\;\; d\theta$ acts on the spinor bundle by Clifford multiplication with $J(X)$. Hence, we have \begin{equation}\label{61} \nabla^S_{X^*}\phi_{\varepsilon}\,=\,\left(\,0\,,\,-\frac{\sqrt{2}}{4}J(X)\cdot \psi_{\varepsilon}\,\right). 
\end{equation} Now, using $\,s_1= \frac{1}{\sqrt{2}}(N-T^*)\,$, $\, s_2=\frac{1}{\sqrt{2}}(N+T^*)\,$, we obtain \begin{eqnarray*} -s_1\cdot\nabla^S_{s_1}\phi_{\varepsilon} \,=\, s_2\cdot\nabla_{s_2}^S \phi_{\varepsilon} \,=\, X^* \cdot\nabla_{X^*}^S \phi_{\varepsilon} \,=\, \left(\,0,-\frac{1}{2\sqrt{2}}\, \varepsilon\,i\,\psi_{\varepsilon}\,\right), \end{eqnarray*} where $X\in\{X_1,\ldots,X_{2n}\}\,$. This shows that $\,\phi_{\varepsilon}\,$ is a twistor spinor (see Proposition \ref{pr1}).\\ From Proposition \ref{pr19} it follows \begin{eqnarray*} (\phi_{\varepsilon},\phi_{\varepsilon})_\xi \,=\, \langle s_1 \cdot \phi_{\varepsilon}, \phi_{\varepsilon} \rangle \,=\, \langle \,(0,-\psi_{\varepsilon}),(\psi_{\varepsilon},0)\,\rangle \,=\, (\psi_\varepsilon,\psi_\varepsilon)_{S_H} \,=\,1\,. \end{eqnarray*} Furthermore, we obtain for the canonical vector field $\,V_{\phi_{\varepsilon}}\,$ \begin{eqnarray*} V_{\phi_{\varepsilon}} &=& \langle s_1\cdot\phi_{\varepsilon},\phi_{\varepsilon}\rangle\, s_1- \langle s_2\cdot\phi_{\varepsilon},\phi_{\varepsilon}\rangle\, s_2 - \sum\limits^{2n}_{k=1} \,\langle X^*_k\cdot\phi_{\varepsilon},\phi_{\varepsilon}\rangle X^*_k\\ &=& s_1+s_2 \,=\, \sqrt{2}\, N. \end{eqnarray*} Therefore, $V_{\phi_{\varepsilon}}$ is regular and isotropic and satisfies $\,V_{\phi_{\varepsilon}} \cdot\phi_{\varepsilon}=0\,$. Because of (\ref{59}) we have \[ \nabla^S_{V_{\phi_{\varepsilon}}}\phi_{\varepsilon}=-\frac{1}{\sqrt{2}}\,\varepsilon\, i \, \phi_{\varepsilon}. \] It remains to show that the vertical vector field $N$ is a Killing vector field. This follows directly from the formulas of Proposition \ref{pr20}: \[ L_Nh_\theta(Y,Z)=h_\theta(\nabla_YN,Z)+h_\theta(Y,\nabla_ZN)=0 \] for all vector fields $Y$ and $Z$ on $\,\sqrt{F}\,$.
\qed \ \\ Conversely, we have\\ \begin{th} \label{t2} Let $(B^{2n+2},h)$ be a Lorentzian spin manifold and let $\varphi\in\Gamma(S)$ be a nontrivial twistor spinor on $(B,h)$ such that \begin{enumerate} \item The canonical vector field $V_\varphi$ of $\varphi$ is a regular isotropic Killing vector field. \item $V_\varphi\cdot\varphi=0\,.$ \item $\nabla^S_{V_\varphi}\varphi=i \,c\,\varphi\,, \qquad c=\mbox{const} \in\R\backslash \{0\}$. \end{enumerate} Then $\,B\,$ is an $\,S^1$-principal bundle over a strictly pseudoconvex spin manifold\\ $(M^{2n+1},T_{10},\theta)\,$ and $\,(B,h)\,$ is locally isometric to the Fefferman space $\,(\sqrt{F},h_\theta)$ of $(M,T_{10},\theta)$. \end{th} {\bf Proof:} Since $\,V_\varphi\,$ is regular, it defines an $S^1$-action on $B$ \begin{eqnarray*} B\times S^1 & \longrightarrow & B\\ (p,e^{it}) &\longmapsto & \gamma^V_{t\cdot\frac{L}{2\pi}} (p) \end{eqnarray*} where $\,\gamma^V_t(p)\,$ is the integral curve of $\,V=V_\varphi\,$ through $p$ and $L$ is the period of the integral curves. Then $\,M:=B/_{S^1}\,$ is a $\,(2n+1)$-dimensional manifold and $V$ is the fundamental vector field defined by the element $\frac{2\pi}{L}i$ of the Lie algebra $i\R$ of $S^1$ in the $S^1$-principal bundle $(B,\pi,M;S^1)$. Now we use Sparling's characterization of Fefferman spaces, proved by Graham in \cite{Graham:87}. Let $W$ denote the (4,0)-Weyl tensor, $C$ the (3,0)-Schouten-Weyl tensor and $K$ the (2,0)-Schouten tensor of $(B,h)$. Graham proved:\\ If $V$ is an isotropic Killing vector field such that \begin{eqnarray} V\;_-\!\rule{0.2mm}{0.2cm}\;\; W &=& 0 \label{62} \\ V\;_-\!\rule{0.2mm}{0.2cm}\;\; C &=& 0 \label{63}\\ K(V,V) &=& \mbox{const} <0, \label{64} \end{eqnarray} then there exists a pseudo-hermitian structure $(T_{10},\theta)$ on $M$ such that $(B,h)$ is locally isometric to the Fefferman space $(F,h_\theta)$ of $(M,T_{10},\theta)$.
The local isometry is given by $S^1$-equivariant bundle maps $\,\phi_U:B|_U\longrightarrow F|_U\,$.\\ We first prove that $\,V=V_\varphi\,$ satisfies (\ref{62})-(\ref{64}). Property (\ref{63}) is valid for each twistor spinor (see Proposition \ref{pr10}). Using $\,W(X\wedge Y)\cdot\varphi=0\,$ (see (\ref{11}) of Proposition \ref{pr5}) and the assumption $\,V_\varphi\cdot\varphi=0\,$ we obtain \begin{eqnarray*} 0 &=& \{W(X\wedge Y)\cdot V-V\cdot W(X\wedge Y)\}\cdot\varphi\\ &=& 2\,\{ V\;_-\!\rule{0.2mm}{0.2cm}\;\; \,W(X\wedge Y)\}\cdot \varphi\\ &=& 2\, W(X,Y,V)\cdot\varphi \end{eqnarray*} for all vector fields $X$ and $Y$ on $B$. Since $V_\varphi$ is a nontrivial isotropic Killing field, it has no zeros. Hence, by Proposition \ref{pr6}, the twistor spinor $\varphi$ has no zeros and therefore, the vector field $W(X,Y,V)$ must be isotropic for all vector fields $X,Y$ on $B$. Because of \[ W(X,Y,V,V)=h(W(X,Y,V),V)=0\,, \] $W(X,Y,V)$ is orthogonal to the isotropic vector field $V$. Since $(B,h)$ has Lorentzian signature, it follows that there is a 2-form $\lambda$ on $B$ such that \begin{equation}\label{65} W(X,Y,V)=\lambda(X,Y)\, V \qquad\mbox{for all } X,Y\in \Gamma(TB). \end{equation} Now, we use formula (\ref{12}) of Proposition \ref{pr5} to obtain \begin{eqnarray*} 0 &=& V\cdot W(X\wedge Y)\cdot D\varphi-n \,\{V\cdot C(X,Y)+C(X,Y)\cdot V \}\cdot \varphi\\ &=& V\cdot W(X\wedge Y)\cdot D\varphi + 2n\,C(V,X,Y)\,\varphi. \end{eqnarray*} Since $V\;_-\!\rule{0.2mm}{0.2cm}\;\; C=0$, we obtain \begin{equation}\label{66} V\cdot W(X\wedge Y)\cdot D\varphi=0. \end{equation} From the twistor equation (\ref{6}) and the assumption $\,\nabla^S_V\varphi= i\,c\,\varphi\,$ it follows \begin{eqnarray}\label{67} W(X\wedge Y)\cdot V\cdot D\varphi &=& -n\,\, W(X\wedge Y)\cdot \nabla^S_V\varphi\nonumber\\ &=& - n i c\,\, W(X\wedge Y)\cdot\varphi\nonumber\\ &\stackrel{(\ref{11})}{=}& 0\,.
\end{eqnarray} Then (\ref{65}), (\ref{66}) and (\ref{67}) give \begin{eqnarray*} 0 &=& W(X\wedge Y)\cdot V\cdot D\varphi-V\cdot W(X\wedge Y)\cdot D\varphi\\ &=& 2\,\,W(X,Y,V)\cdot D\varphi\\ &=& 2\lambda(X,Y)\, V\cdot D\varphi\\ &\stackrel{(\ref{6})}{=}& -2n\,\,\lambda(X,Y) \, \nabla_V^S \varphi\\ &=& -2nci\,\lambda (X,Y)\,\varphi. \end{eqnarray*} Therefore, $\lambda\equiv 0$ and $V\;_-\!\rule{0.2mm}{0.2cm}\;\; W=0$. Using formula (\ref{10}) of Proposition \ref{pr5} we obtain \begin{eqnarray*} V\cdot \nabla_V^SD\varphi \,=\, \frac{n}{2}\,\{V\cdot K(V)+K(V)\cdot V\}\cdot\varphi \,=\, -n\,K(V,V)\varphi. \end{eqnarray*} Since $V$ is an isotropic Killing field, it satisfies $\nabla_VV=0$. It follows \begin{eqnarray*} \nabla_V^S(V\cdot D\varphi) \,=\, \nabla_VV\cdot D\varphi+V\cdot\nabla_V^SD\varphi \,=\, -n\,K(V,V)\,\varphi\, \end{eqnarray*} and from the twistor equation \[ \nabla^S_V\nabla^S_V\varphi = K(V,V)\,\varphi. \] Using $\,\nabla_V^S\varphi=ic\varphi\,$ we obtain $\,K(V,V)=-c^2\,$. Therefore, the canonical vector field $V_\varphi$ of the twistor spinor $\varphi$ satisfies the conditions of Sparling's characterization theorem for Fefferman metrics. Now, we proceed as in Graham's proof of that theorem. Since $\,V_{\alpha\varphi} =|\alpha|^2V_\varphi\,$ we can normalize $\varphi$ in such a way that $\,K(V_\varphi,V_\varphi)=-\frac{1}{4}\,$. Then, let $\tilde{T}$ be the vector field on $B$ defined by \[ h(\tilde{T},X)=-4\,K(X,V_\varphi) \,,\qquad X\in \Gamma(TB). \] $\tilde{T}$ is isotropic and $\,h(\tilde{T},V_\varphi)=1\,$. Then we can use $V_\varphi$ and $\tilde{T}$ to reduce the spinor structure of the Lorentzian manifold $(B,h)$ to the group $\,\mbox{Spin}(2n)\,$. 
This reduced spinor structure projects to a spinor structure of $(H,L_\theta)$, where $\theta$ is the projection of the 1-form $\,\tilde{\theta}\in\Omega^1(B)\,$ dual to $V_\varphi$ and $H\subset TM$ is the projection of the subbundle $\,\tilde{H}=\mbox{span}(\tilde{T}, V_\varphi)^\bot\subset TB\,$ onto $M$. $J:H\to H$ is given by projection of the map \begin{eqnarray*} \tilde{J}: TB &\longrightarrow & TB\\ X &\longmapsto & 2\,\nabla_XV_\varphi\,, \end{eqnarray*} which acts on $\tilde{H}$ with $\tilde{J}^2=-id$. In \cite{Graham:87} it is then proved that $\,(M,H,J,\theta)\,$ is in fact a strictly pseudoconvex manifold, which we equip with the spinor structure arising from that of $(H,L_\theta)$ by enlarging the structure group. In the same way as in \cite{Graham:87} it follows that $(B,h)$ is locally isometric to the Fefferman space $\,(\sqrt{F},h_\theta)\,$, where the isometries are given by $S^1$-bundle maps $\,\sqrt{F}|_U \longrightarrow B|_U$. \qed \ \\ {\bf Remark:} Jerison and Lee studied the Yamabe problem on CR-manifolds (see \cite{Jerison/Lee:87}). They proved that there is a numerical CR-invariant $\lambda(M)$ associated with every compact oriented strictly pseudoconvex manifold $M^{2n+1}$, which is always less than or equal to the value corresponding to the sphere $S^{2n+1}$ in $\C^{n+1}$ with its standard CR-structure. If $\lambda(M)$ is strictly less than $\lambda(S^{2n+1})$, then $M$ admits a pseudo-hermitian structure $\theta$ with constant Webster scalar curvature $\,R^W = \lambda(M)\,$. Furthermore, one knows that the scalar curvature $R$ of the Fefferman metric $h_\theta$ is a constant positive multiple of the lift of the Webster scalar curvature $R^W$ to the Fefferman space (see \cite{Lee:86}). Now, let $(M^{2n+1},T_{10})$ be a compact strictly pseudoconvex spin manifold with $\,0 \not = \lambda(M) < \lambda(S^{2n+1})\,$.
Choose a pseudo-hermitian structure $\theta$ on $(M,T_{10})$ such that the Webster scalar curvature $R^W$ is constant (and non-zero since $\lambda(M) \not = 0$).\\ Let $\,\phi_{\varepsilon}$, $\varepsilon=\pm 1\,$, be the twistor spinors on $(\sqrt{F},h_\theta)$, defined in Theorem \ref{t1}. Then according to the remark following Proposition \ref{pr5} the spinor fields \[ \eta_{\varepsilon,\pm} \,:=\, \frac{1}{2}\, \phi_{\varepsilon}\, \pm \, \sqrt{\frac{2n+1}{(2n+2)R}}\,\,\,D\phi_{\varepsilon} \] are eigenspinors of the Dirac operator of the Lorentzian spin manifold $\,(\sqrt{F},h_\theta)\,$ to the eigenvalue $\,\pm \frac{1}{2} \sqrt{\frac{2n+2}{(2n+1)}R}\,$. The length of the spinor fields $\,\eta_{\varepsilon,\pm}\,$ is constant with respect to the indefinite scalar product $\,\langle \cdot, \cdot \rangle\,$ as well as to the positive definite scalar product $( \cdot, \cdot )_{\xi}$. \bibliographystyle{alpha}
\section{\@startsection{section}{1}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}} \renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\bf}} \usepackage{enumerate} \usepackage{graphicx} \usepackage[round]{natbib} \usepackage{amssymb,amsmath, amscd, amsthm, mathrsfs} \renewcommand{\arraystretch}{1.5} \usepackage{ bbold } \vfuzz2pt \hfuzz2pt \theoremstyle{definition} \newtheorem{thm}{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{claim}[thm]{Claim} \newtheorem{prop}[thm]{Proposition} \newtheorem{defn}[thm]{Definition} \newtheorem{con}[thm]{Conjecture} \newtheorem*{criterion}{Criterion} \newtheorem{note}[thm]{Notation} \newtheorem{ex}[thm]{Example} \theoremstyle{remark} \newtheorem{rem}[thm]{Remark} \numberwithin{equation}{section} \newcommand{\norm}[1]{\left\Vert#1\right\Vert} \newcommand{\abs}[1]{\left\vert#1\right\vert} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\inpar}[1]{\left( #1 \right)} \newcommand{\mathbb R}{\mathbb R} \newcommand{\mathbb C}{\mathbb C} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\text{Cl}}{\text{Cl}} \newcommand{\inbrac}[1]{\left[ #1 \right]} \newcommand{\mathbf{Hol}}{\mathbf{Hol}} \newcommand{\mathbf{Wil}}{\mathbf{Wil}} \newcommand{\mathbf{PC}}{\mathbf{PC}} \newcommand{\PC_{\text{sec}}}{\mathbf{PC}_{\text{sec}}} \newcommand{\PC_{\text{sf}}}{\mathbf{PC}_{\text{sf}}} \newcommand{\PC_{\text{b}}}{\mathbf{PC}_{\text{b}}} \newcommand{\PC_{\text{bsec}}}{\mathbf{PC}_{\text{bsec}}} \newcommand{\PC_{\text{triv}}}{\mathbf{PC}_{\text{triv}}} \newcommand{\text{GL}(n, \R)}{\text{GL}(n, \mathbb R)} \newcommand{\text{Ad}(G)}{\text{Ad}(G)} \newcommand{\text{Tr}}{\text{Tr}} \newcommand{\cong}{\cong} \newcommand{\text{Hom}}{\text{Hom}} \makeatother \begin{document} \begin{frontmatter} 
\title{On Representational Redundancy, Surplus Structure, and the Hole Argument} \author{Clara Bradley} \address{Department of Philosophy \\ University of Bristol} \author{James Owen Weatherall}\ead{[email protected]} \address{Department of Logic and Philosophy of Science \\ University of California, Irvine} \date{ } \begin{abstract} We address a recent proposal concerning `surplus structure' due to Nguyen et al. [ `Why Surplus Structure is Not Superfluous.' \emph{Br. J. Phi. Sci} Forthcoming.] We argue that the sense of `surplus structure' captured by their formal criterion is importantly different from---and in a sense, opposite to---another sense of `surplus structure' used by philosophers. We argue that minimizing structure in one sense is generally incompatible with minimizing structure in the other sense. We then show how these distinctions bear on Nguyen et al.'s arguments about Yang-Mills theory and on the hole argument.\end{abstract} \end{frontmatter} \doublespacing \section{Introduction}\label{sec:intro} There are several interrelated themes that arise in contemporary discussions of Einstein's `hole argument'. One theme is largely historical: it is now widely recognized that the hole argument played a significant role in Einstein's thinking as he developed general relativity during the period from 1913 to 1915 \citep{Norton1984,Stachel}. A second theme is essentially metaphysical. On the classic treatment by \citet{Earman+Norton}, for instance, the argument shows that a certain kind of \emph{substantivalist}---namely, one who considers spacetime `points' to have a special ontological status, independent of or prior to the events that occur or field values that obtain there---is committed to a certain kind of indeterminism. To avoid this dismal conclusion, they argue, one must endorse a doctrine known as `Leibniz equivalence', which is meant to be a hallmark of \emph{relationism} (and thus a rejection of substantivalism). 
A third theme, though rarely disentangled from metaphysical questions related to substantivalism and relationism, is arguably of greater importance to physics. This theme concerns whether the hole argument reveals an infelicity in the standard formalism of general relativity, in the form of \emph{surplus structure} or \emph{gauge freedom}.\footnote{For a discussion of the role of this theme in Earman's thinking on the hole argument during the 1970s and 1980s, see \citet{WeatherallStein} and references therein.} The idea here is that the manifold substantivalist is committed to some structure---roughly, `spacetime points', though care is needed in interpreting this assertion---that the hole argument reveals is not only unnecessary for physics, but which also has undesirable consequences in the form of indeterminism. Manifold substantivalism, meanwhile, is often taken to be suggested, or perhaps even implied, by the standard formalism of relativity. That is, the standard formalism apparently invokes or, on a natural reading, attributes to the world, precisely the surplus structure that the hole argument exposes. The moral is then taken to be that one either needs to adopt an alternative understanding of this standard formalism---say, by adopting what is sometimes called `sophisticated substantivalism'---or else move to a different formalism altogether that excises this surplus structure.\footnote{Earman, for instance, proposed moving to Einstein algebras \citep{GerochEA} as a suitably `relationist' alternative to standard formulations of general relativity \citep{EarmanPD,Earman1986,Earman1989,EarmanWEST,Rynasiewicz1992,Bain,Rosenstock+etal}. Similar issues are at stake when, for instance, \citet[p. 31]{RovelliDisappearance} argues that the manifold is `a gauge artifact' in general relativity or \citet[p. 5]{SmolinThreeRoads} argues that there are no points in physical spacetime.
We take these arguments to assume, often implicitly, that something like manifold substantivalism is the `default' interpretation of the standard formalism, and to avoid that interpretation, one needs a formalism with a different, weaker metaphysics as its `default' interpretation. (Other authors, such as \citet{Friedman} and \citet{Field}, offer more direct arguments for positions similar to manifold substantivalism on the basis of the standard formalism of general relativity.)} We say this third theme is of greater importance to physics than the others because there is a connection between the search for alternative formulations of general relativity that avoid this `gauge freedom' and some approaches to quantum gravity. Briefly, in constructing a quantum theory one generally wishes to identify (and quantize) only those degrees of freedom with physical significance. Hence, if the standard formalism of general relativity implicitly includes surplus structure, a first step towards developing future theories might be to develop a new theory with less structure. It is in connection with this third theme, then, that the hole argument has been of lingering significance in the development of a quantum theory of gravity. In a pair of recent papers, \citet{WeatherallHoleArg,WeatherallUG} has argued against the view that the hole argument reveals that the standard formalism of general relativity has surplus structure. 
To the contrary, he argues, on a certain precise understanding of `surplus structure', general relativity should not be taken to have surplus structure at all.\footnote{Weatherall uses the expression `excess structure'; nothing turns on the difference between `excess' and `surplus' here.} \citet{Nguyen+etal} have replied by questioning whether the notion of `surplus structure' that Weatherall proposes accurately captures what physicists have in mind when they argue that some physical theories exhibit such structure.\footnote{\citet{Nguyen+etal} focus on Yang-Mills theory, but if Weatherall's argument fails there, it will fail in general relativity too; indeed, \citet{WeatherallUG,WeatherallFBYMGR} has argued that, at least in this connection, Yang-Mills theory and general relativity are strongly analogous, and neither has surplus structure.} Instead, they suggest, the relevant notion of `surplus structure' is one on which a theory exhibits `representational redundancy,' in the sense that a single situation can be represented in many equally good ways.\footnote{Apparently bolstering their case, Prop. 2 of \citet{WeatherallUG} is false as stated \citep{WeatherallErratum}, leading to the surprising conclusion that, on Weatherall's account, theories that are often taken to have `surplus structure' actually have \emph{less} structure than ones that are said \emph{not} to have surplus structure (as opposed to being equivalent, as Weatherall originally claimed).} Nguyen et al. go on to argue that what they characterize as `surplus structure' is not necessarily superfluous, in the sense of being freely eliminable. We strongly agree with the substance of this moral, but think that their arguments are better characterized as establishing a somewhat different thesis than what they appear to state. 
The reason is that they present their argument as if it is in conflict with another view, which is that one should minimize the structure of one's theories.\footnote{See, for instance, the Abstract and Introduction of their paper---and, indeed, the title, which makes sense only insofar as one might have initially thought surplus structure were superfluous. We emphasize this point because an anonymous referee suggests that \citet{Nguyen+etal} may not have intended to reject the maxim that one should always minimize structure, but we think the plain meaning of their texts suggests otherwise.} But we do not think there is any conflict, because the sense of `surplus structure' that they consider---that is, `representational redundancy'---is importantly different from what philosophers have generally thought of as surplus structure in a theory.\footnote{To be sure, we are not in the business of policing language: Nguyen et al. are clear and precise about what they mean by `surplus structure', and we think that, on their understanding of the expression, their argument is compelling and insightful. Our point, rather, is to clearly distinguish two different, nearly opposite, meanings of an expression both of which seem to be in use in the literature, and to emphasize that showing that surplus structure in one sense is not eliminable does not imply that surplus structure in other, very much distinct, senses is also not eliminable.} In fact, their notion generally pulls in the opposite direction, in the sense that a theory admitting representational redundancy has \emph{less} structure than one without that redundancy. This suggests that exhibiting `representational redundancy' should not be taken as a theoretical vice---at least not on grounds of structural parsimony. To the contrary, we will argue, the sorts of considerations that have led philosophers to wish to excise surplus structure from theories should motivate one to \emph{increase} representational redundancy. 
Our goal in the present paper is to defend the perspective just stated, and then argue that distinguishing representational redundancy from surplus structure provides insight into the third strand of literature on the hole argument described above. In particular, we will argue, general relativity does not have surplus structure. But it does, in several senses, admit of representational redundancy. Keeping these separate helps clarify what the hole argument accomplishes: ultimately, we will argue, the hole argument is best seen as an argument \emph{against} the recommendation to eliminate representational redundancy from general relativity rather than an argument \emph{for} the claim that general relativity, as standardly presented, has surplus structure. The remainder of the paper will be structured as follows. We begin by reviewing the arguments of \citet{Nguyen+etal} regarding `surplus structure' and `representational redundancy'. We will then argue that in simple and intuitive examples, the precise notion of `surplus structure' that they propose should be associated with a theory having \emph{less} structure, not more. We will then disambiguate the sense of `representational redundancy' captured by Nguyen et al.'s precise criterion from two other intuitive senses of `representational redundancy', one of which, we argue, does correspond to surplus structure. We then bring this machinery to bear on the arguments Nguyen et al. provide concerning Yang-Mills theory, offering a different perspective on what their argument accomplishes. Finally, we return to the hole argument in light of the foregoing discussion. We conclude with some brief remarks about what we take the paper to have done. \section{The Argument}\label{sec:NTW} In this section, we review the proposal for understanding `surplus structure' given by \citet{Nguyen+etal}, and discuss how it differs from the arguments in \citet{WeatherallUG}.
Since both use category theory to represent physical theories in similar ways, we will first describe the shared framework used by both approaches.\footnote{These ideas were introduced to philosophy of science by \citet{WeatherallTheoreticalEquiv} as a means of comparing different physical theories, following a suggestion by \citet{Halvorson}---though similar ideas have long been used in mathematics. A review of applications of this approach---termed `Theories as Categories of Models' by \citet{Rosenstock}---is given in \citep{WeatherallCategories}. For background on category theory, see \citep{MacLane}; for a gentler introduction, see \citep{Leinster}.} In this framework, physical theories are represented as categories, whose objects consist of models of the theory, and arrows between the objects represent relations between the models. For the purpose at hand, we consider categories whose arrows are isomorphisms of the models, since isomorphisms are transformations that preserve structure and thus preserve representational capacity.\footnote{\citet{BarrettSS} and \citet{Rosenstock} both give reasons why, in some applications, it is important to consider more than just isomorphisms; for present purposes, little turns on whether one considers categories with a broader notion of arrow, as long as one does so consistently across all theories under discussion.} This is motivated by the idea that the structure of a model captures its representational content, and so isomorphic models are able to represent the same physical situations. Relations between theories are described by functors between the categories representing those theories. These functors may be classified by what they `forget', using a scheme developed by \citet{Baezetal}. To understand the classification, we first need some terminology. 
A functor $F: \mathcal{C} \rightarrow \mathcal{D}$ is said to be \textit{full} if for every pair of objects $A,B$ of $\mathcal{C}$ the map $F:$ hom$(A,B)$ $\rightarrow$ hom$(F(A),F(B))$ induced by $F$ is surjective, where hom$(A,B)$ is the collection of arrows from $A$ to $B$. Similarly, $F$ is said to be \textit{faithful} if for every pair of objects the induced map on arrows is injective. Finally, $F$ is said to be \textit{essentially surjective} if for every object $X$ of $\mathcal{D}$, there is some object $A$ of $\mathcal{C}$ such that $F(A)$ is isomorphic to $X$. Using this terminology, we say that a functor $F: \mathcal{C} \rightarrow \mathcal{D}$ forgets \textit{structure} if it is not full; it forgets \textit{stuff} if it is not faithful; and it forgets \textit{properties} if it is not essentially surjective. If $F$ is full, faithful and essentially surjective then it forgets \textit{nothing}. In this case, $F$ is said to realize an \emph{equivalence of categories}.\footnote{We observe that there is an $n$-categorical perspective on this classification, where each of these three notions of `forgetting' correspond to forgetting structure at different `levels': forgetting properties means forgetting 0-structure; forgetting structure means forgetting 1-structure; forgetting stuff means forgetting 2-structure; and so on, where one extends these notions to a hierarchy of `essentially $k$-surjective' functors between $n$-categories \citep{Baez+Shulman}. This alternative perspective may make it seem as if all of these notions of `forgetting' correspond to different kinds of `structure' that may be forgotten.
But what is important to emphasize is that, as we discuss below, it is 1-structure that most naturally corresponds to what is usually meant by the structure of a mathematical object or model of a physical theory.} \citet{WeatherallUG} argues that a theory has surplus structure relative to another if there is a functor from the first theory to the second, represented as categories, that forgets structure while preserving empirical significance. On the other hand, \citet{Nguyen+etal} argue that there is another notion of surplus structure, operative in the physics literature, which they call surplus* structure. One theory has surplus* structure relative to another if there exists a functor from the first theory to the second, represented as categories, that forgets stuff while preserving empirical significance. In this case, they argue, what is `forgotten' are extra arrows in the categories. They argue that a theory with surplus* structure has what they call `representational redundancy'. We will say much more about the sense of `representational redundancy' at issue in the next section. But first, let us say why Nguyen et al. make this proposal. The starting point for their argument is largely sociological. They claim that physicists and philosophers often attribute to certain theories---particularly, Yang-Mills theory---some sort of surplus or redundant structure `over and above [their] ... equivalence classes' of representationally equivalent models \citep[p. 10]{Nguyen+etal}. That is, one observes that there are some theories wherein models related by certain transformations (`gauge transformations') are taken to have the same representational capacities, and one observes that these theories are often discussed as having something `extra'. Nguyen et al. 
balk at the idea that this `something more' arises only if one neglects the transformations realizing these equivalences (which is what Weatherall's proposal amounts to) because physicists are well aware of these transformations, and do not explicitly neglect them when they make claims about the redundancy associated with these theories. And so they want to find some other sense in which a formulation of Yang-Mills theory might be said to have `surplus' over a formulation involving only equivalence classes---one on which the transformations relating equivalent models are never neglected. The next step of their argument is to observe that functors that forget stuff do, in fact, forget \emph{something}: namely, ways in which models are equivalent to one another. And so, surplus* structure is a candidate for something that some formulations of a theory might have that other formulations do not have. More, it is a candidate that meets the desideratum just stated, because, as we elaborate in section \ref{sec:YM}, it turns out that there is a formulation of Yang-Mills theory that \emph{has} surplus* structure relative to a formulation using only equivalence classes, and moreover, this formulation includes all gauge transformations as arrows. 
Thus, they conclude, surplus* structure is an attractive candidate for a precise way of characterizing what some formulations of theories have `over and above' formulations invoking only equivalence classes.\footnote{We remark that, although this is fair to say, adopting this prescription for what should be meant by `surplus' does not recover common claims that gauge theories exhibit `surplus' anything---because, as \citet{Nguyen+etal} go on to argue, what would putatively be `surplus' in such cases is ineliminable.} \section{Representational Redundancy and Surplus Structure}\label{sec:W} To evaluate Nguyen et al.'s proposal, we will now consider the sense of `representational redundancy' associated with surplus* structure (i.e., stuff). It will be helpful to do so in the context of a simple example. Suppose we are given a map of the Earth. Now fix a two dimensional (real) vector space $V$. (It is essential that we are dealing with just a vector space---not an inner product space, a normed space, or anything else.) Imagine we are interested in representing directions on the surface of the Earth at some fixed location---say, Irvine, California---using the vectors in $V$. One way to proceed is to choose some non-zero vector $v\in V$ and stipulate that this vector represents `North'---that is, it points in the direction of the longitudinal line passing through Irvine, towards the northern pole of the Earth. It makes no difference at all which vector one chooses: any non-zero vector is as good as any other. In fact, at this stage there is nothing to distinguish one non-zero vector from any other.\footnote{One way of thinking about what we have done here is to choose a particular (partial) reference relation. We have not added any structure to $V$; we have just made a choice of mapping from $V$ to the world.} Now suppose we would like to also represent `East'. We once again choose some non-zero vector---$u\in V$ this time---and stipulate its meaning.
As before, we have a lot of freedom in which vector we choose, though not quite as much freedom as in choosing `North'. This is because we have already fixed `North', and whatever else is the case, it is essential to what we mean by `North' and `East' that they are linearly independent. So we require that $u$ be such that $u\neq \alpha v$ for any $\alpha\in\mathbb{R}$. Aside from this, $u$ can be any vector we like.\footnote{Observe, however, that we do not have the same freedom for choosing `South', once we have chosen `North'. In fact, a choice for `South' is (essentially) fixed by our choice of `North': `South' must be represented by (some positive multiple of) $-v$, since it is essential to `South' that it be the opposite direction of `North'. We can drop the parentheticals if we adopt the convention, as we will in what follows, that `directions' are all represented by vectors of the same length. But nothing that has been said thus far forces us to do this.} Now that representatives of `North' and `East' have been chosen, however, there are no further choices to make for which vectors represent which directions, as long as we want the relations between the vectors in $V$ to reflect the spatial relations between directions as we usually understand them.\footnote{We acknowledge that the expression `as we usually understand them' is doing a fair amount of work, here. In particular, we have fixed a meaning for `orthogonal' in both the mathematical context and in the world, and we are insisting that whatever reference relations we adopt regarding which vectors represent which direction respect those meanings.} There is, in particular, a unique inner product $\langle\cdot,\cdot\rangle$ on $V$, up to a constant scalar factor, with the property that $u$ and $v$ are orthogonal, i.e., $\langle u,v\rangle=0$; the scale factor can be fixed as well by requiring that $u$ and $v$ both have the same length (which we will conventionally set to 1).
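A quick calculation makes the uniqueness claim explicit. Writing arbitrary vectors in terms of the chosen pair as $x = a u + b v$ and $y = c u + d v$, bilinearity and symmetry give
\[
\langle x, y\rangle = ac\,\langle u,u\rangle + (ad+bc)\,\langle u,v\rangle + bd\,\langle v,v\rangle = ac + bd
\]
once we impose $\langle u,v\rangle = 0$ and $\langle u,u\rangle = \langle v,v\rangle = 1$. So the two stipulations---orthogonality and equal (unit) length---leave no further freedom in the inner product.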
This inner product uniquely fixes angles between all vectors. Similarly, we can define an orientation by taking the ordered pair $(u,v)$, which, with standard sign conventions, captures the fact that the sense of rotation from `East' to `North' is counterclockwise. In particular, given any further direction, there is a unique (unit) vector $x$ that has the correct inner products with $u$ and with $v$ to represent that direction. We have thus ended up with a vector space $V$, along with a lot more: we now have an inner product, an orientation, and an ordered orthonormal basis $(u,v)$. We got all of this by making (arbitrary) choices for $u$ and $v$. Had we made any other choices---$u'$ and $v'$, say---we would have ended up in the same place, up to unique isomorphism. This fact makes precise the sense in which the original choices were `arbitrary'. We could have started in a different way. Noting, for instance, that we have a natural notion of `angle' between directions at a point on the surface of the Earth, we might have begun with an inner product space, $(V,\langle\cdot,\cdot\rangle)$. Had we done so, assuming we wished that inner product to represent the spatial relations between directions as we generally understand them, there would have been less freedom in how we chose `North' and `East': the vector $v$ representing `North' would have been required to be an arbitrary \emph{unit} vector, and, once `North' was chosen, $u$, representing `East', would have been required to be one of the two unit vectors orthogonal to $v$.\footnote{Again, we note that the sense of `requirement' here turns on the prior assumption that any suitable reference relation for vectors in the present context should respect the plain meaning of terms such as `orthogonal'.} If we had started with a preferred ordered orthonormal basis $(u,v)$, we would have had fewer choices to make, still. 
What we are seeing here is a certain trade-off between, on the one hand, freedom in making representational choices; and on the other, mathematical structure. When we use a vector space to represent directions in space, we have a great deal of freedom in choosing which vectors represent which directions; once we have fixed some of these choices, however, we have considerably less freedom in how we make subsequent choices. If we begin with a vector space in which we have already defined additional relations between the vectors---one, that is, in which we have more structure defined---we do not have as much freedom in how we make these choices. Generally speaking, \emph{more freedom} is afforded in cases where we have \emph{less structure}, and vice versa.\footnote{Assuming fixed conventions, shared across members of the comparison class, concerning what reference relations are acceptable.} The `freedom' we have been discussing is a kind of `representational redundancy'---though it is probably better termed `representational freedom'. In fact, as we will presently argue, this is precisely the sense of `representational redundancy' considered by \citet{Nguyen+etal}. The `redundancy' at issue arises because the structure $V$ can represent the directions on a map in infinitely many equally good ways. One way to see what sort of freedom we have in making representational choices is by studying the automorphisms---that is, the symmetries, or the isomorphisms from an object to itself---of a mathematical structure. The reason is that isomorphisms, generically, map objects to other objects that have the same structure. We infer from this that they have the same representational capacities.\footnote{Here we are implicitly invoking a certain ideology about mathematical representation, defended by \citet{WeatherallHoleArg}.
These ideas are explored and substantially developed by \citet{Fletcher}.} A (non-trivial) automorphism, then, can be thought of as revealing a way in which a single mathematical structure can do its representational work in multiple ways: that is, given some way in which the structure might be used to represent some situation, we can find another way in which the same structure might be used to represent the same situation by observing how the automorphism acts. We can see this clearly in the vector space example already discussed. If we begin with a vector space $V$, any non-zero vector $u\in V$ is related to any other non-zero vector $u'\in V$ by an automorphism, which in this case is a bijective linear transformation from $V$ to itself. This captures the sense in which \emph{any} vector in $V$ is equally good at representing the direction `North'. But now consider a vector space $V$ with a preferred vector $u$ already fixed. There are now no bijective linear maps from $V$ to itself that take $u$ to $u$ and also take any other vector to $u$; but given any two vectors $v$ and $v'$, both not equal to $\alpha u$ for any $\alpha$, there always exists a bijective linear transformation on $V$ that keeps $u$ fixed and maps $v$ to $v'$. Thus we capture the sense in which once we pick a vector to represent `North', one can still choose \emph{any} other vector, not proportional to $u$, to represent `East'. That we have `less' freedom in this second case is captured by the fact that the linear isomorphisms that preserve $u$ are naturally understood as a (proper) subgroup of the automorphisms of $V$. Thus we see a sense in which if structure $A$ has `more' symmetries than structure $B$, in the sense of there being a natural or implicit (or, in some cases, explicit) proper embedding of the group of automorphisms of $B$ into the group of automorphisms of $A$, then $A$ has more representational redundancy than $B$.
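This subgroup relationship can be made explicit in matrices. Relative to any basis whose first element is $u$, the automorphisms of the bare vector space $V$ are all of $\mathrm{GL}(2,\mathbb{R})$, while the automorphisms preserving $u$ are exactly those of the form
\[
\begin{pmatrix} 1 & b \\ 0 & d \end{pmatrix}, \qquad b \in \mathbb{R},\ d \neq 0,
\]
since such a map must send $u$ to $u$ and may send a second basis vector to any vector linearly independent of $u$. This is a proper subgroup of $\mathrm{GL}(2,\mathbb{R})$, in line with the claim that fixing $u$ reduces representational redundancy.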
Conversely, there is a natural sense in which if $A$ has more symmetries than $B$, we should say that $B$ has `more structure' than $A$.\footnote{This view is defended by \citet{BarrettCM1}.} This is because the maps that preserve all the structure of $B$ also preserve all the structure of $A$, but there are further maps that preserve the structure of $A$ but do not preserve the structure of $B$---suggesting that there is something `more' to $B$ that changes even when everything about $A$ is preserved.\footnote{This intuition can be made precise in various contexts, including the first order case, where adding further relations to a theory (for instance) reduces the number of symmetries of its models. See, for instance, \citet{BarrettSS}.} This is precisely what happens in the vector space case already discussed: a vector space endowed with an inner product has more structure---namely, the inner product---than a bare vector space, which is reflected in the fact that it has fewer symmetries.\footnote{It is perhaps worth noting that the same intuitions play out in standard discussions of classical space-time structure: Newtonian space-time has more structure than Galilean space-time, which has more structure than Leibnizian space-time; this is all reflected by the fact that Leibnizian space-time has more automorphisms than Galilean space-time, which has more automorphisms than Newtonian space-time. These relationships are described in somewhat more detail by \citet{BarrettSTS} and \citet{WeatherallSTG} in a way that connects directly to how we discuss them here, though they were already recognized and well understood by, for instance, \citet{SteinNST} and \citet{EarmanWEST}.} So we see that representational redundancy runs in the opposite direction to the amount of structure that a mathematical object has: more structure means less redundancy, and vice versa. 
These relationships are naturally captured in the language of category theory as described in the previous section, and indeed, it is precisely these ideas that are meant to be captured by the comparisons of `structure', `stuff', and `properties' given by the classification of forgetful functors already described. In particular, forgetting `stuff' corresponds to removing representational redundancy in the sense we have been discussing; whereas forgetting structure corresponds to removing structure. To see how this works, consider the simplest of the examples above. Define, for instance, a category \textbf{Vect}$_2$ whose objects are two dimensional vector spaces and whose arrows are linear transformations; and a category \textbf{OBVect}$_2$ of two dimensional vector spaces with fixed, ordered basis, with basis preserving maps as arrows. There is a natural functor from \textbf{OBVect}$_2$ to \textbf{Vect}$_2$ that takes every vector space with basis to its underlying vector space, and arrows of \textbf{OBVect}$_2$ to their underlying linear transformations. This functor is faithful and essentially surjective, but not full. Thus, it forgets (only) structure. We can also go in the opposite direction, trivially: choose any object $C$ of \textbf{OBVect}$_2$, and map all objects of \textbf{Vect}$_2$ to $C$, and all arrows to $1_C$.\footnote{This sort of `opposite direction' functor can be complicated to define (it generally, as here, involves the axiom of choice), and it does not always exist---this is why we have limited attention to a highly simplified case, to avoid complicated constructions that obscure the basic conceptual point. In a sense, this is the core of Nguyen et al.'s argument, as we discuss in section \ref{sec:YM}.} This functor is essentially surjective and full, but it is not faithful. And so this functor forgets (only) stuff.
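Both failures can be exhibited concretely. For the forgetful functor, fix an object $(V, (e_1, e_2))$ of \textbf{OBVect}$_2$: the only basis preserving map from this object to itself is the identity, while the underlying space has many linear automorphisms---for instance, the map swapping $e_1$ and $e_2$---so the induced map on arrows is not surjective, and the functor is not full. For the functor in the opposite direction, the identity on a vector space and the swap map just mentioned are distinct arrows of \textbf{Vect}$_2$, but both are sent to $1_C$; the induced map on arrows is not injective, and the functor is not faithful.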
There are two morals to draw from this example, which, we claim, are generic, at least among the sorts of structures used in physics. The first is that forgetting structure and forgetting stuff, on this formal account, really do pull in opposite directions, as promised. This is precisely because one of them involves a certain map failing to be surjective, and the other involves a certain map failing to be injective. The second moral is that the direction in which we `forget structure' on this formal account corresponds to the direction in which we drop structure in the intuitive sense described above; whereas the direction in which we `forget stuff' corresponds to the one in which we remove representational redundancy. It is in this sense that we claim the formal machinery recovers the more intuitive claims made above. \section{Kinds of Representational Redundancy}\label{sec:Ladyman} We have just argued that representational redundancy, of the sort Nguyen et al. associate with surplus* structure, has an almost inverse relationship with structure in another, intuitive sense of the term. Increasing structure generally reduces representational redundancy; and conversely, increasing redundancy means eliminating structure. But this claim, taken out of context, invites misunderstanding. As we hope we have made clear, the expressions `representational redundancy' and `surplus* structure' used above have precise meanings, proposed by Nguyen et al.; we have taken on these meanings in our discussions thus far. But there are other senses in which one might use the expression `representational redundancy'. We now will introduce two other possible senses of `representational redundancy' and discuss how they relate to the foregoing. To see the first of these, consider again the example discussed in the previous section. 
As we remarked then, given a two dimensional vector space, any choice of two (linearly independent) vectors to represent North and East is (uniquely) isomorphic to any other choice. One might be tempted to say that this isomorphism indicates a certain kind of representational redundancy in the vector space with ordered basis, since after all, what we see is that there are many two dimensional vector spaces with ordered bases that provide equally good representations of the cardinal directions. But one has to be careful, because this sort of representational redundancy is not associated with surplus* structure---or surplus structure, as proposed by Weatherall. The reason is that two categories differing only with regard to `how many' isomorphic objects are in each isomorphism class are, in general, categorically equivalent, and so one would not expect empirical-content-preserving functors between such categories to forget \emph{anything}. Indeed, category theory aside, this sort of representational redundancy will \emph{always} be present, at least for any theory formulated in modern mathematics with a set theoretic semantics. This is because given a model of any theory, one can always generate new models of that theory by either applying some permutation to the domain of the model or else by choosing some other, equinumerous set, and fixing a bijection to that set. That a theory has representational redundancy in this sense is uninteresting, at least from the perspective of how much structure the models of a theory have. What the existence of these isomorphic copies \emph{does} indicate is that some underlying structure---in this case, the vector space---has surplus* structure, since it is the freedom associated with choosing which vectors represent North and East that gives rise to the different, but isomorphic, vector spaces with bases. 
To preview what will come later, this distinction will be relevant to the hole argument, since the hole argument concerns the fact that there are distinct but isomorphic models of GR. As in the case of isomorphic representations of cardinal directions, this is not a source of surplus* structure, but we will argue that it arises because of the surplus* structure of an underlying structure, namely bare manifolds. We now turn to a third possible notion of representational redundancy. Consider the following example (due to James Ladyman). One wishes to model a collection of colourless gas particles using interacting billiard balls. Now consider adding colour to the billiard balls. This adds structure to the theory, namely a `colour structure'. However, this also seems to add representational redundancy because we can use different colours to represent the same collection of gas particles: it does not matter what choice we make because we suppose that the gas particles we are trying to model do not actually have colour.\footnote{If the gas particles really did have colour, then one would want the models to be non-isomorphic. But then there would be no representational redundancy because one would think that there was in fact a correct colour attribution.} Therefore, this is an example where representational redundancy seems to correspond to surplus structure. However, the kind of representational redundancy at issue in this case is different from what \citet{Nguyen+etal} discuss. This is because the models of the gas particles related by a change in colour are not isomorphic. In other words, if the models were isomorphic, then nothing corresponding to colour would have been added to the theory because there would be no way to distinguish models with different colours. Models differing only in colour structure would be equivalent according to the theory.
The representational redundancy comes from the fact that the non-isomorphic models---those related by a change in colour---represent observationally equivalent states of affairs, and therefore any colour can be used to represent the same situation. This sort of representational redundancy in fact corresponds to the notion of surplus structure proposed by Weatherall. If one were to introduce categories of models of these two theories---one with histories of colourless billiard balls as objects and some suitable choice of arrows; and the other with histories of coloured billiard balls as objects, with arrows that, in addition to preserving whatever the arrows of the first category do, also preserve colour---then one would expect there to be a functor from the category of the theory with the colour structure to the one without that colour structure that preserves empirical significance and forgets structure (not stuff). It is simple to see why: the arrows of the theory \emph{with} colour structure need to preserve colour structure, in addition to whatever is preserved by the arrows of the other theory. And so one would expect a functor that preserved empirical significance to fail to be full. What this discussion highlights is that ambiguity can arise in the use of the term `representational redundancy'. To summarize, we have the following three, distinct and not necessarily mutually exclusive, senses of `representational redundancy'. It can refer to: \begin{enumerate} \item (Surplus* structure / stuff) Situations in which a single model / mathematical structure can represent a given situation in many equally good ways; such cases are generally signaled by symmetries (automorphisms) of the models, and correspond to `surplus stuff' in the discussion above.
\item (Set theoretic semantics) Situations in which distinct but isomorphic models / mathematical structures can represent a given situation in many equally good ways; such cases are pervasive in modern applied mathematics using set theoretic semantics, and arise when there is surplus* structure of some underlying structure (including, for instance, an underlying set). \item (Surplus structure) Situations in which distinct and \emph{non-isomorphic} models / mathematical structures can represent a given situation equally well; such cases are generally signaled by distinctions between mathematical structures that do not appear to have any physical or empirical significance, and correspond to `surplus structure' in the discussion above. \end{enumerate} The first of these is the sense used by \citet{Nguyen+etal}, and the third is the sense highlighted by Ladyman's example and used by Weatherall. The crucial difference between the third sense of representational redundancy and the other two is that in the first two, the models that can play the same representational roles are equivalent according to the theory, whereas in the third, they are not equivalent: the models are distinguished from one another (in the Ladyman example, by the presence of the colour structure) even though, \emph{ex fiat}, there is no corresponding physical difference in the systems that they represent. In what follows, when ambiguity may arise, we will endeavor to refer to this list to specify the sense of `representational redundancy' at issue. \section{Yang-Mills Theory Revisited}\label{sec:YM} With this conceptual machinery in hand, we now return to Nguyen et al.'s arguments concerning Yang-Mills theory.
For the sake of simplicity, and following others in the literature, they focus on the case of electromagnetism, which is an (Abelian) Yang-Mills theory with structure group $U(1)$.\footnote{We remark that, although this is hardly a slight against Nguyen et al., it is not at all clear that the plausible positions in the non-Abelian case look very much like those in the $U(1)$ case. (See, for instance, \citep{Healey}, \citep{WeatherallFBYMGR}, and \citep{Gilton} for discussions of some of the ways in which non-Abelian Yang-Mills theory resists interpretations that seem natural in electromagnetism---among which is the fact that field strength [curvature] is not a gauge-invariant quantity in non-Abelian theories.)} They make two basic arguments that are relevant to the issues now under discussion. The first argument considers various ways of representing Yang-Mills fields on a contractible manifold $M$. The second argument considers what happens when we relax the assumption that $M$ is contractible.\footnote{In fact, they go somewhat further than this, and make a proposal concerning how to think of the spaces of possible field configurations over all manifolds $M$ at once. They conclude that to treat this problem adequately, one should move from thinking about theories as categories of models to thinking of theories as functors---in this case, as a functor from a category of manifolds to a category of groupoids. This proposal has many virtues, but it does not bear directly on the issues we discuss here.} We begin with their first argument, which will concern us for the bulk of the section. Fix a smooth, contractible manifold $M$, which we assume to be four dimensional.
By a $U(1)$ \emph{gauge field} on $M$, we mean a smooth one-form $A_a$; following the notation of \citep[\S 3.1]{Nguyen+etal}, a \emph{gauge transformation} is a map from gauge fields to gauge fields of the form \[ A_a\mapsto A_a + g^{-1}d_a g \] where $g:M\rightarrow U(1)$ is a smooth map.\footnote{To unpack this equation: by $d_a g$, we mean the pushforward map along $g$ defined at each point, which, at each $p\in M$, is a map from $T_pM$ to $T_{g(p)}U(1)$. Then $g^{-1}$ is the pushforward along the translation on $U(1)$ determined by the inverse of the group element $g(p)$, yielding an element of the tangent space at the identity of $U(1)$, i.e., an element of the Lie algebra of $U(1)$ (which happens to be $\mathbb{R}$). Thus, $g^{-1}d_a g$ is a (closed) one-form on $M$.} Observe that on this definition, since gauge transformations are parameterized by maps $g$, all gauge fields are related to themselves by gauge transformations that are constant maps from $M$ to $U(1)$---that is, there are gauge transformations that are non-trivial `automorphisms' of gauge fields. Nguyen et al. are concerned with the relationship between several different categories that one might define to characterize the structure of such gauge fields.\footnote{In all of these categories, following Nguyen et al., we `fix' $M$. For some purposes, one might wish to include diffeomorphisms acting on $M$ among the morphisms of the categories, but nothing is lost for present purposes by neglecting them.} \begin{itemize} \item $\mathcal{C}_A$: objects are gauge fields $A_a$; morphisms are gauge transformations; \item $\mathcal{S}_{[A]}$: objects are equivalence classes $[A]$ of gauge fields under gauge transformations; morphisms are identity maps; \item $\mathcal{S}_{A}$: objects are gauge fields; morphisms are identity maps; \item $\mathcal{E}_A$: objects are gauge fields; morphisms are equivalence classes of gauge transformations, where $g\sim h $ if $g^{-1}d_a g - h^{-1}d_a h=\mathbf{0}$. 
\end{itemize} The category $\mathcal{S}_A$ is what one gets if one takes each gauge field on $M$ to represent a distinct possible situation; any non-identical, gauge-related gauge fields are inequivalent by the lights of this category. The category $\mathcal{C}_A$ is what one gets if one takes gauge fields to represent possible situations, but where every gauge transformation represents an `equivalence' of gauge fields. The categories $\mathcal{S}_{[A]}$ and $\mathcal{E}_A$ are categorically equivalent. Both express the idea that any two gauge fields related by gauge transformations are equivalent to one another, such that it is only the equivalence classes of gauge fields that have physical significance. They are also both equivalent to yet another category, $\mathcal{S}_F$, whose objects are smooth two-forms $F_{ab}$ on $M$ satisfying $d_aF_{bc}=\mathbf{0}$. In the context of electromagnetism, such tensors represent the electromagnetic field; they are related to gauge fields by the equation $F_{ab}=d_a A_b$, with any two gauge fields related by a gauge transformation giving rise to the same electromagnetic field. So one can think of $\mathcal{S}_{[A]}$ and $\mathcal{E}_A$ as representing the theory that says it is the electromagnetic fields on $M$ that represent distinct possible situations, with different gauge fields representing different situations only if they give rise to different electromagnetic fields. Thus, $\mathcal{S}_{[A]}$ and $\mathcal{E}_A$ correspond to a widespread view that it is the electromagnetic fields, and not the gauge fields directly, that have physical significance in electromagnetism.
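Both claims in this paragraph can be checked by direct computation. As a sketch: writing $g = e^{i\chi}$ for a smooth real-valued function $\chi$ (which is always possible when $M$ is contractible), one recovers, up to the convention for identifying the Lie algebra of $U(1)$ with $\mathbb{R}$, the textbook form of a gauge transformation; and the gauge invariance of $F_{ab}$ follows because the added term is closed:

```latex
\[
  g^{-1} d_a g \;=\; i\, d_a \chi ,
  \qquad\qquad
  F'_{ab} \;=\; d_a \bigl( A_b + g^{-1} d_b g \bigr)
          \;=\; d_a A_b
          \;=\; F_{ab} ,
\]
```

where the second equality in the expression for $F'_{ab}$ uses the fact, noted in the footnote above, that $g^{-1}d_b g$ is a closed one-form.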
The category $\mathcal{S}_A$, meanwhile, has more structure than either of these, in the sense defined above: there is a functor from $\mathcal{S}_A$ to $\mathcal{S}_{[A]}$ that preserves empirical content, and which is not full.\footnote{This is because there are (gauge-equivalent) objects $A_a$ and $A'_a$ of $\mathcal{S}_A$ that are mapped to the same object $[A]$ of $\mathcal{S}_{[A]}$, but which have no arrows between them that could map to the identity on $[A]$.} (It is faithful and essentially surjective.) This functor takes gauge fields $A_a$ to their equivalence classes under gauge transformations, and takes all arrows to identities. It is this relationship that is emphasized in \citep{WeatherallUG}, to capture the idea that a theory in which one takes different gauge-related gauge fields to be inequivalent has, in a precise sense, more structure than a theory in which one takes gauge-related gauge fields to be equivalent---or, in light of the equivalence to $\mathcal{S}_F$ already noted, that a theory that distinguishes gauge fields has more structure than one that distinguishes (only) electromagnetic fields. It was in this sense that Weatherall claimed to give a precise characterization of the claim that electromagnetism formulated using gauge fields has `surplus structure': it is because there is another formulation available with less structure, but with the same empirical consequences.\footnote{We remark that there is also a functor going in the opposite direction that (using Choice) chooses, from each equivalence class $[A]$, a representative $A_a$. It is interesting to note that this functor is full and faithful, because every arrow is mapped to an identity arrow, and no two objects have more than one arrow between them; but not essentially surjective, because each equivalence class is identified with a single representative. So this functor forgets property---not structure \emph{or} stuff.
To see what is going on here, note that what this functor is doing is associating with each equivalence class a single, preferred representative. But from the point of view of $\mathcal{S}_A$, there are many other fields around that do not get mapped to, representing physical possibilities that are inequivalent to those in the image of the functor, but which do not correspond to any possibility represented in $\mathcal{S}_{[A]}$, according to that functor. We can think of the property that is forgotten as the property of being the (privileged) representative of an equivalence class (or the `one true gauge').} But what about $\mathcal{C}_A$? As Nguyen et al. show, $\mathcal{C}_A$ has surplus* structure relative to $\mathcal{E}_{A}$---that is, there is a functor $\tau: \mathcal{C}_A\rightarrow \mathcal{E}_{A}$ that forgets stuff and preserves empirical content.\footnote{It was essentially this functor that \citet{WeatherallUG} mistakenly claimed forgot nothing; see \citet{WeatherallErratum}.} It is this functor, they argue, that supports their claim that there is a sense in which Yang-Mills theory has something `surplus', even after one takes gauge transformations to be equivalences. They write, `This ... precisification of `surplus' structure allows us to define \emph{surplus* structure} as the \emph{stuff} that is forgotten by the functor $\tau$', and then go on to say: `A (gauge) theory contains the \emph{stuff} that is forgotten by $\tau$ (namely the non-trivial automorphisms of the gauge fields and the result of concatenating them with the morphisms already contained in $\mathcal{E}_A$)' (p. 11). They then proceed to argue that it is really this relationship between $\mathcal{C}_A$ and $\mathcal{E}_A$, and not that between $\mathcal{E}_A$ and $\mathcal{S}_A$, that captures the physically salient sense in which a gauge theory has surplus structure.
As they write, \begin{quote}\singlespacing [W]e are interested in a notion of `surplus' that is possessed by theories which take gauge fields to be representationally equivalent (and which represent this by means of gauge transformations between gauge fields); thus $\mathcal{C}_A$ is our candidate for such a theory and `surplus' is characterised by the \emph{stuff}-forgetting functor $\tau:\mathcal{C}_A\rightarrow S_{[A]}$.... By contrast, Weatherall's notion of `surplus' applies to a theory that does not represent gauge fields as representationally equivalent, namely $\mathcal{S}_A$....\end{quote} Thus, even \emph{after} one has taken all of the gauge fields related by gauge transformations to be equivalent, one \emph{still} has a theory with some surplus---namely, the surplus gauge transformations! There are a few remarks to make here. First, as one might expect given our discussion in section \ref{sec:W}, $\mathcal{C}_A$ and $\mathcal{E}_A$ are also related in another salient way: there is also a functor $K:\mathcal{E}_A\rightarrow \mathcal{C}_A$ that forgets structure (and preserves empirical content). Thus, the difference between Weatherall's account and Nguyen et al.'s account is not just that they are comparing $\mathcal{E}_A$ to different categories and getting different accounts. In fact, these two criteria for when one theory has more structure than another yield precisely \emph{opposite} verdicts: one says that $\mathcal{C}_A$ has more structure; the other says $\mathcal{E}_A$ does. If one adopts the view we have defended here, then moving to $\mathcal{C}_A$ involves forgetting further structure, even relative to $\mathcal{E}_A$. So it would seem we have two different criteria, giving opposite verdicts. Which, if either, is right? As things stand, it is difficult to see what is at stake in the disagreement.
The reason is that, as things have been set up so far, both $\mathcal{C}_A$ and $\mathcal{E}_A$ take precisely the same gauge fields to be equivalent: namely, the ones related by gauge transformations. So if one were pressed to say what structure each of these categories attributes to the world, it would be tempting to say `equivalence classes of gauge fields under gauge transformation.' And yet, on both criteria of structural comparison under consideration, these theories are not equivalent. The difference between the categories---a difference that \emph{both} criteria are tracking---concerns the additional morphisms, such as the non-trivial automorphisms, of $\mathcal{C}_A$. But from the perspective of the structure we attribute to the world in the models of these theories, it is difficult to see what these additional transformations reveal. After all, they are (all) maps of the form $A_a\mapsto A_a + \mathbf{0}$.\footnote{One might even worry about the following proposal: suppose we have a theory, and we would like another theory with `less structure'. We could simply stipulate that every model of the theory is equivalent to itself in more ways, by introducing trivial maps. For instance, in a model of general relativity, consider the new metric automorphisms which are maps of the form $g_{ab}\mapsto g_{ab} + n\mathbf{0}$, for all $n$. 
Suddenly metrics have a new automorphism group!} To see what is going on here, we need to think of these categories---or really, the theory of electromagnetism---from a different perspective.\footnote{For a detailed overview of this perspective, written for philosophers, see \citep{WeatherallFBYMGR}; see also \citep{Bleecker} and \citep{Palais} for excellent mathematical treatments of the subject.} On this alternate approach, a gauge field is not conceived as a one-form on $M$; instead, it is a principal connection $\omega_{\alpha}$ on a $U(1)$ principal bundle $P\xrightarrow{\pi} M$ over $M$.\footnote{A principal $G$ bundle, for some Lie group $G$, is a smooth surjective map $P\xrightarrow{\pi} M$, where $M$ and $P$ are smooth manifolds with the following property: there is a smooth, free, fiber-preserving right action of $G$ on $P$ such that given any point $p\in M$, there exists a neighborhood $U$ of $p$ and a diffeomorphism $\zeta:U\times G\rightarrow \pi^{-1}[U]$ such that for any $q\in U$ and any $g,g'\in G$, $\zeta(q,g)g'=\zeta(q,gg')$. A (global) \emph{section} of a principal bundle is a smooth map $\sigma:M\rightarrow P$ satisfying $\pi\circ\sigma=1_M$. A \emph{principal connection} on $\pi$ is a smooth Lie-algebra-valued one-form $\omega^{\mathfrak{A}}{}_{\alpha}$ on $P$ satisfying certain further conditions, including that $\omega^{\mathfrak{A}}{}_{\alpha}$ be surjective on the Lie algebra. (Here the lowered Greek index indicates action on tangent vectors to $P$ and the raised capital fraktur index indicates membership in the Lie algebra of $G$, $\mathfrak{g}$. Since the Lie algebra of $U(1)$ is $\mathbb{R}$, we drop the fraktur index when discussing principal connections on $U(1)$ bundles.)} A gauge field in the earlier sense, $A_a$, arises as a representation of $\omega_{\alpha}$ on $M$, relative to a choice of (global) section, $\sigma:M\rightarrow P$, by $A_a=\sigma^*(\omega_{\alpha})$.
Gauge transformations, meanwhile, can be identified with changes of sections. Since $M$ is contractible, there is a unique principal $U(1)$ bundle over $M$, and so all of the gauge fields under consideration are principal connections on that unique principal bundle, represented relative to different sections. From this perspective, the category $\mathcal{C}_A$ is (isomorphic to) the category whose objects are principal connections $\omega_{\alpha}$ on $P$ and whose morphisms are vertical principal bundle (auto)morphisms carrying one connection to the other.\footnote{A \emph{vertical principal bundle automorphism} on $\pi$ is a diffeomorphism $\Psi:P\rightarrow P$ such that (a) $\pi\circ\Psi = \pi$ and (b) for any $x\in P$ and $g\in G$, $\Psi(xg)=\Psi(x)g$. We remark that although these maps are automorphisms on the principal bundle, they are not necessarily automorphisms once one fixes a connection.} The `extra' morphisms in $\mathcal{C}_A$ can then be seen as maps that do not act trivially after all: they are symmetries of the connection $\omega_{\alpha}$ under (non-trivial) transformations of the bundle $\pi$. The category $\mathcal{E}_A$, on the other hand, is (isomorphic to) what results if one adds, to this principal bundle, some further `rigidifying' structure, which breaks these symmetries. There are several equivalent ways to characterize the sort of structure that would do this, but one natural candidate is a fixed, global trivialization of the bundle, i.e., a fixed diffeomorphism $\zeta:M\times G\rightarrow P$ such that for any $q\in M$ and any $g,g'\in G$, $\zeta(q,g)g'=\zeta(q,gg')$. (The fact that such a global trivialization exists is a consequence of the contractibility of $M$.) Another way of thinking about this structure is as fixing a choice of identity in the `fiber' $\pi^{-1}[p]$ over each point of $M$, thus fixing the way in which the fiber realizes the Lie group structure.
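The identification of gauge transformations with changes of section, mentioned at the start of this paragraph, can be made explicit; the following is a standard computation in the Abelian case. If $\sigma, \sigma' : M \rightarrow P$ are two global sections, there is a unique smooth map $g : M \rightarrow U(1)$ such that $\sigma'(p) = \sigma(p)\,g(p)$ for every $p \in M$, and the corresponding representations of a given connection $\omega_{\alpha}$ are related by

```latex
\[
  A'_a \;=\; \sigma'^{\,*}(\omega_{\alpha})
       \;=\; \sigma^{*}(\omega_{\alpha}) + g^{-1} d_a g
       \;=\; A_a + g^{-1} d_a g ,
\]
```

which is precisely a gauge transformation in the sense defined earlier.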
It is perhaps worth emphasizing two related points about principal bundles at this stage, to clarify the remarks just made. First, although principal bundles, like all fiber bundles, are required to be `locally trivial' in the sense that they admit local trivializations, these have a status similar to coordinate charts on manifolds, and there are no `privileged' trivializations. Fixing a global trivialization is analogous to choosing a global coordinate system on a manifold. The second point is that in general, the fibers of a principal bundle are `$G$ torsors', which are manifolds diffeomorphic to $G$ on which $G$ acts, but which are not themselves Lie groups because they do not have a group structure (and do not act on themselves). So fixing an origin for each fiber can be thought of as endowing the $U(1)$-torsors of $\pi$ with Lie group structure. Both of these remarks are (complementary and compatible) ways of pinning down what, exactly, changes when one moves from $\mathcal{C}_A$ to $\mathcal{E}_A$: the latter is the category of principal connections on principal bundles endowed with something that, intuitively, seems like further structure: a global trivialization, akin to a global coordinate system; or Lie group structure on each fiber. On the other hand, as the discussion in section \ref{sec:W} would lead one to expect, $\mathcal{C}_A$, which has less structure, has more stuff---reflecting the representational redundancy (freedom) afforded by the many ways of endowing the bundle with this structure. The upshot of this discussion is that we can see the lessons of the previous sections in practice. We have a sense in which the theory characterized by $\mathcal{C}_A$ has less structure than that of $\mathcal{E}_A$, and correspondingly more representational redundancy, in the first sense discussed above---namely, that captured by surplus* structure (or, surplus stuff). 
Conversely, we can think of $\mathcal{E}_A$ as also having representational redundancy in the third sense, i.e., that of surplus structure, because, by fixing a global trivialization, this theory treats models that differ only in the choice of trivialization as non-isomorphic, even though they are observationally equivalent. Finally, we remark that \emph{both} theories have representational redundancy in the second sense, because in both cases there are isomorphic models that may all represent a given situation equally well; but this sense of representational redundancy is irrelevant to which theory has more structure. There might be a lingering dissatisfaction at this stage. Above, following Nguyen et al., we described the categories $\mathcal{E}_A$ and $\mathcal{S}_{[A]}$ without ever mentioning any principal bundles---much less the extra `structure' of a global trivialization. So in what sense should we think of these categories as representing theories that invoke such structure? The answer is subtle. When we defined gauge fields above, we introduced them as one-forms on $M$. One-forms form a vector space at each point, with the zero covector as the origin. Once we recognize that these gauge fields are really principal connections, however, we can see a sense in which they should form an affine space at each point, rather than a vector space. One can see this by observing, for instance, that there is no `zero connection', because every principal connection must be a surjective linear transformation; and the addition of two principal connections is not necessarily a principal connection.
There do, however, exist connections $\omega_{\alpha}$ and sections $\sigma:M\rightarrow P$ such that $\sigma^*(\omega_{\alpha})=\mathbf{0}$; and relative to such a choice of connection, the space of connections at each point takes on the structure of a vector space (because $\omega_{\alpha}$ fixes an origin), elements of which can be put into one-to-one correspondence with one-forms on $M$, via $\sigma$. Thus we can see a certain sense in which the way we set up the theory in the first place, associating gauge fields with one-forms on $M$, already relied on a choice of background structure; from this perspective, the `extra' arrows of $\mathcal{C}_A$ are a way of washing out this vector space structure on connections. We now turn to the second argument that Nguyen et al. give, which we treat much more briefly because we do not object to its substance. The second argument is that in fact, the surplus* structure that $\mathcal{C}_A$ has over $\mathcal{S}_{[A]}$ is an essential feature of electromagnetism. The reason---restated in the terms of the discussion above---is that a formulation of electromagnetism that can treat the full range of possible gauge field configurations on topologically non-trivial (i.e., non-contractible) spacetime manifolds $M$ in a suitably local way must be able to represent the principal connections that one can define on principal bundles for which no global trivialization exists---that is, non-trivial principal bundles. If no global trivialization exists, it is not possible to fix a global trivialization and identify the space of principal connections on the bundle with the one-forms on the base space. This problem does not arise for contractible manifolds, because the only principal bundles over such manifolds do admit global trivializations. 
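For readers who want a concrete case, a standard illustration (not one that Nguyen et al. discuss in these terms) is the two-sphere: principal $U(1)$ bundles over $S^2$ are classified, via clutching functions on the equator, by

```latex
\[
  \pi_1\bigl(U(1)\bigr) \;\cong\; \mathbb{Z} .
\]
```

The class $n = 0$ is the trivial bundle $S^2 \times U(1)$; the class $n = 1$ is the Hopf bundle $S^3 \rightarrow S^2$, familiar from the Dirac monopole, which admits no global section and hence no global trivialization. Connections on the non-trivial bundles are exactly the gauge field configurations that cannot be represented by a single one-form on the base space.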
This makes precise, in a somewhat different language from that used by Nguyen et al., the sense in which equivalence classes of one-forms on $M$ under gauge transformation cannot adequately represent the full richness of Yang-Mills theory. Thus, we agree that the surplus* structure captured by $\mathcal{C}_A$---that is, the representational redundancy afforded by a principal bundle formulation of Yang-Mills theory---is necessary to capture the full richness of the theory. What we disagree about is the interpretation of this conclusion. The moral that \citet{Nguyen+etal} draw is that the morphisms of a category, as well as the objects (the models), can feature in the representational content of a theory; in the case of $\mathcal{C}_A$, they represent the ways in which local fields (given by the models of $\mathcal{C}_A$) can be composed to give global systems. On this view, the reason $\mathcal{C}_A$ is superior to $\mathcal{E}_A$ is that the \emph{prima facie} `surplus', namely the morphisms of $\mathcal{C}_A$ that contribute to the representational redundancy of the theory, have representational significance. Hence their title claim: surplus structure is not superfluous. To the contrary, on the view we have defended here, the reason $\mathcal{C}_A$ is adequate, but $\mathcal{E}_A$ is not, is precisely that the former has \emph{less} structure than the latter, which affords it greater representational capacities; put another way, the structure invoked to get from $\mathcal{C}_A$ to $\mathcal{E}_A$ cannot be consistently imposed in all cases of physical interest. From this perspective, the problem with $\mathcal{E}_A$ is analogous to the problem faced by someone who insists that there should be a preferred global coordinate system in special relativity.
One can define such a thing in that context; but when one moves to general relativity, one cannot recover the full richness of the theory if one insists on always having global coordinate systems, because some models of general relativity do not admit such systems. The upshot is that surplus \emph{stuff} may well not be superfluous, but surplus \emph{structure} is not only superfluous, but in some cases it is a barrier to capturing the full richness of a theory. This conclusion is in many ways irenic: in the end, we agree with Nguyen et al. on the principal conclusion of their paper, that one should take $\mathcal{C}_A$ to be the best categorical representation of the structure of electromagnetism, even in the contractible case. We disagree only on our route to the conclusion, namely whether it goes via a comparison of `stuff' or of `structure'. But the difference in perspective matters to the rhetorical posture in \citep{Nguyen+etal}. Their argument is motivated by an alleged puzzle, which is: `how can `surplus' structure be an essential feature of a theory?' (p. 12).\footnote{See also p. 2: `How can `redundancy' be an essential feature of a theory?' This formulation is more congenial to our perspective here.} But if $\mathcal{C}_A$ has \emph{less} structure than $\mathcal{E}_A$, then this puzzle never arises, for it is easy to see how eliminating structure can be essential for a theory, particularly when that structure can only be defined for a small subset of the possible models of the theory. \section{The Hole Argument}\label{sec:holes} We now return to the hole argument in light of the distinctions regarding surplus structure and representational redundancy drawn above. As we noted in the Introduction, one of the issues that the hole argument is sometimes taken to highlight is that the standard formulation of general relativity, using tensor fields on a smooth manifold, has `too much structure'. 
The hole argument, recall, uses the fact that given a model of general relativity, $(M, g_{ab})$,\footnote{Here $M$ is a smooth, four-dimensional manifold, which we assume to be Hausdorff and paracompact; and $g_{ab}$ is a smooth, Lorentz-signature metric defined on $M$. For further details on the mathematical background of general relativity, see \citet{Wald} or \citet{MalamentGR}. Our discussion in what follows depends only on the sorts of mathematical facts that are usually at issue in the literature on the hole argument.} one can construct another model $(M,g_{ab}')$ through a diffeomorphism $\psi: M \rightarrow M$ on the manifold, where $g_{ab}'$ is defined by the pushforward map determined by the diffeomorphism, $g_{ab}' = \psi_*(g_{ab})$. If the diffeomorphism does not act as the identity everywhere, then these models agree on all observable structure and yet disagree at certain points on the value of the metric. And so---the argument goes---there must be some `surplus structure' in the standard formulation that is physically insignificant. However, as we have now seen, there are different ways one might understand `surplus structure'. One sense is that a (formulation of a) theory has surplus structure if there is another formulation of the theory and a functor from the first theory to the second, represented as categories, that is not full (and preserves empirical content). As \citet{WeatherallUG} argues, the hole argument does not reveal that general relativity has surplus structure in this sense. For it to do so, it would need to be the case that the hole argument generated models of general relativity that were empirically indistinguishable but \emph{not isomorphic} by the lights of the ambient mathematical theory (i.e., the theory of Lorentzian manifolds). If this were to occur, one might hope to move to another formulation of the theory on which these models \emph{were} isomorphic.
But this is precisely what the hole argument does not do.\footnote{Of course, some authors have taken the hole argument to do something like what is described here. But as \citet{WeatherallHoleArg} argues, this is chimerical: to get to the conclusion that the hole argument generates empirically equivalent but non-isomorphic possibilities, one uses the identity map on the manifold to compare particular points on the manifold. Under such a comparison, the models are not equivalent, either representationally (because it does not give rise to an isomorphism) or observationally.} Therefore, the standard formulation does not have surplus structure in this sense, and, \emph{a fortiori}, the hole argument does not reveal otherwise. But what about the other sense of `surplus structure'---viz., surplus* structure or representational redundancy in the first sense of section \ref{sec:Ladyman}, as described by \citet{Nguyen+etal}? First, there is a sense in which general relativity, as ordinarily formulated, has surplus* structure. Of course, just as with surplus structure, surplus* structure is a relative notion: one theory has surplus* structure relative to another if there is a functor from the first to the second that forgets stuff (and preserves empirical content). But we can develop a heuristic for identifying when this is likely to happen, similar to the one we gave above for surplus structure: we will say that a theory, represented as a category, has surplus* structure if there exist models $A$ and $B$ of the theory and isomorphisms $f,g:A\rightarrow B$ such that $f\neq g$. In particular, it suffices for a theory to have surplus* structure in this sense if any of its models has a non-trivial automorphism group. This heuristic captures the idea that the theory has `stuff' to forget, in the form of `extra' arrows; it also captures the idea of the theory having representational redundancy, since as we saw, that is signaled by symmetries of the models of the theory. 
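The vector-space example invoked earlier provides a concrete instance of this heuristic (the explicit maps below are our own gloss on that example). Take the `theory' whose sole model is the two-dimensional vector space $\mathbb{R}^2$, used to represent the cardinal directions; two equally good assignments are
\[
f \colon \mathrm{North} \mapsto (0,1),\ \mathrm{East} \mapsto (1,0), \qquad\qquad g \colon \mathrm{North} \mapsto (1,0),\ \mathrm{East} \mapsto (0,1).
\]
Nothing in the vector space structure privileges $f$ over $g$: the linear map exchanging $(1,0)$ and $(0,1)$ is a non-trivial automorphism of $\mathbb{R}^2$ carrying one assignment to the other, and so we have distinct isomorphisms between the resulting (isomorphic) representations---exactly the signature of representational redundancy that the heuristic tracks.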
And we can also see immediately that general relativity has surplus* structure by this criterion: simply observe that Minkowski spacetime---a model of general relativity---has non-trivial symmetries (such as translations or Lorentz boosts). This argument shows a sense in which general relativity does have surplus* structure---but it does not involve the hole argument. In fact, the maps involved in the hole argument---that is, isometries generated by diffeomorphisms from a manifold to itself---in general are \emph{not} automorphisms of any spacetime, in the sense just described, because in general $\psi_*(g_{ab})\neq g_{ab}$. Nor is it the case that, in general, isometric pairs of spacetimes $(M,g_{ab})$ and $(M,\psi_*(g_{ab}))$ generated by the hole argument are related by any further isometries. So not only does the hole argument play no role in the argument just given that general relativity has surplus* structure, it also does not generate the sort of mappings that, we just argued, should be taken to signal surplus* structure. Nonetheless, there are connections between the hole argument and surplus* structure. To see this connection, observe that the hole argument \emph{does} reveal that general relativity has some representational redundancy, in the sense that, given a physical situation that can be modeled by general relativity at all, the hole argument shows that there always exist nondenumerably many isometric spacetimes that one can choose between to model that situation. This is not representational redundancy in the sense captured by surplus* structure, but rather representational redundancy in the \emph{second} sense of section \ref{sec:Ladyman}. As we suggested there, this sort of representational redundancy may be seen to arise from surplus* structure at a different level. In particular, what the hole argument exploits is the fact that (bare) \emph{manifolds} have surplus* structure, as representations of spacetime. 
Consider the category \textbf{4Man} whose objects are smooth four-dimensional manifolds and whose arrows are diffeomorphisms. This category has a rich structure of automorphism groups, signaling, by the heuristic above, that it has surplus* structure. And indeed, given, say, the manifold $\mathbb{R}^4$, there are many ways in which one could use that manifold to represent the events of, say, our own universe (assuming that our universe is topologically simple). Indeed, here we find ourselves in a situation strikingly similar to that of the person trying to use a two-dimensional vector space (with no further structure) to represent the cardinal directions. Any point at all of $\mathbb{R}^4$ can be used, equally well, to represent `here-now'; likewise, any distinct point can be used to represent `over there ten minutes ago'; and so on. One way of understanding what the hole argument is doing, then, is taking different ways of exercising this freedom to represent events in space and time with a given manifold $M$, and showing that different choices of how to represent (say) `here-now' give rise to distinct, though isometric, spacetimes once we include metrical structure.\footnote{The argument in \citep{WeatherallHoleArg} concerns just this issue---or rather, a possible misunderstanding concerning how to understand this `freedom'.} This is just as in the vector space case above, where different choices of vectors to represent `North' and `East' give rise to distinct but isomorphic representations of the cardinal directions. One should be reluctant to infer too much from these reflections, however.
The reason is that---as we remarked in section \ref{sec:Ladyman}---this sort of representational redundancy will \emph{always} be present for any theory formulated with a set theoretic semantics.\footnote{\citet{Rynasiewicz} draws a very similar moral regarding the hole argument, relating it to Putnam's famous permutation argument.} The fact that bare manifolds have surplus* structure that Lorentzian manifolds lack does not reflect anything deep about the theory; instead, it merely reflects the fact that we tend to build mathematical objects out of other mathematical objects with less structure. What these reflections \emph{do} serve to illustrate is rather how surplus* structure relates to ideas that have a long provenance in the philosophy of physics literature. The hole argument does make use of surplus* structure, and many philosophers have taken the existence of this surplus* structure, or at least consequences of its existence, to suggest strong morals regarding the adequacy of the standard formalism of general relativity. But as we hope we have shown here, the fact that manifolds have surplus* structure, of the sort exploited in the hole argument, should not be taken to imply that manifolds, or spacetimes, or general relativity more generally, have surplus structure. To the contrary, that manifolds have surplus* structure suggests that they have \emph{less} structure than, say, Lorentzian manifolds.\footnote{Or, we hasten to add, Einstein algebras, since the latter are, in a precise sense, equivalent to relativistic spacetimes \citep{Rosenstock+etal}.} There is a certain irony to all of this. 
Once we are clear about the difference between surplus structure and surplus* structure, and we see that the hole argument exploits the surplus* structure of manifolds, rather than any surplus structure of manifolds or of relativistic spacetimes, we can now ask what it would mean to adopt a methodological dictum to `minimize' either surplus structure or surplus* structure. Since it is not clear that general relativity \emph{has} surplus structure---and indeed, as we have argued, there are good reasons to think it does not---the dictum that says `minimize surplus structure' would recommend keeping the standard formalism. As we have seen, however, general relativity \emph{does} have surplus* structure. And this surplus* structure can be eliminated, basically by adding further structure to relativistic spacetimes to `rigidify' them, in the sense of removing any non-trivial automorphisms. One way of doing this would be to fix a global labeling system, so that each point is given a unique label.\footnote{Note the echoes of the move from $\mathcal{C}_A$ to $\mathcal{E}_A$ as described in the previous section. Note that one cannot generally introduce a global \emph{coordinate system}, in the sense of a smooth map from a generic four-dimensional manifold to $\mathbb{R}^4$, but one can always assign unique labels to points, for instance by fixing some (non-smooth) map to $\mathbb{R}^4$.} And this brings us to the irony, which is that this sort of model of spacetime, where one has a Lorentzian manifold endowed with an `individuating field', has been proposed, for instance by \citet{StachelSS} and \citet{WeatherallHoleArg,WeatherallStein}, as a way of capturing the structure that the `manifold substantivalist' wishes to express, namely, that locations, or points, of spacetime have some ontological status independent of or prior to the events that occur there.
And so, minimizing surplus* structure, far from \emph{eliminating} the structure that the substantivalist wished to endorse, leads us to \emph{add} that structure to spacetime. And there are good reasons not to do this---not least of which is that the hole argument shows that a theory that posited \emph{this} structure would not be deterministic, or at least, the evolution of the world would not be uniquely determined by Einstein's equation up to isomorphism. And so we find another connection between surplus* structure and the hole argument: in the context of general relativity, the hole argument provides strong reasons \emph{not} to minimize surplus* structure. \section{Conclusion} We have studied two proposals for how to use category theory to make precise the idea of a physical theory having `surplus structure': those of \citet{WeatherallUG} and \citet{Nguyen+etal}. We have argued, by looking at simple examples, that surplus* structure in the sense of Nguyen et al. does not generally correspond to `surplus structure' in at least one intuitive sense common in mathematics and philosophy of physics. To the contrary, we argued, having surplus* structure is generally associated with a kind of representational redundancy that signals having \emph{less} structure---and to remove representational redundancy in this sense requires adding structure to a theory. Thus, although we agree with Nguyen et al. that Yang-Mills theory has surplus* structure, and more, that it is essential for Yang-Mills theory to have surplus* structure to accommodate the full range of physical situations to which one would like to apply it, we do not think this conclusion is in tension with the idea that one should wish to minimize structure in physical theories. To the contrary, we argue, the reason Yang-Mills theory is able to do the work that Nguyen et al. 
highlight is that formulations of the theory with less surplus* structure require one to fix structure globally in a way that cannot be done consistently in all cases. Hence it is the fact that removing surplus* structure amounts to \emph{adding} structure, in the sense of \citep{WeatherallUG}, that vitiates these alternative formulations of Yang-Mills theory that Nguyen et al. (correctly) argue are inadequate. We then applied these morals to the hole argument, arguing that the difference between surplus structure and surplus* structure helps clarify both why the hole argument does not reveal that the standard formalism of general relativity is inadequate, and also why one might have thought it did. The reason for the latter is that the hole argument does invoke the surplus* structure, or representational redundancy, of manifolds as representations of events in space and time. But this should not be taken to signal a problem with the standard formalism---nor should it motivate moving to a different formulation with less surplus* structure than manifolds, because to do so would amount to fixing \emph{extra} structure, along the lines of what the manifold substantivalist endorses. This helps clarify the competing intuitions in the literature on the hole argument, but also leads to the ironic situation that the sort of representational redundancy that some philosophers have taken to signal a problem with general relativity is precisely what one gets when one adopts a formalism that avoids surplus* structure. \section*{Acknowledgments} This material is partially based upon work produced for the project ``New Directions in Philosophy of Cosmology'', funded by the John Templeton Foundation under grant number 61048. We are grateful to Thomas Barrett, James Ladyman, and Nic Teh for helpful discussions and suggestions as we prepared this paper. \bibliographystyle{elsarticle-harv}
\section{Introduction} One of the central challenges in graph theory is to determine the extremal and typical properties of the family of $H$-free graphs on $n$ vertices. For non-bipartite graphs, an enormous amount of progress has been made on this problem; for bipartite graphs, on the other hand, surprisingly little is known. For example, the extremal number $\textup{ex}(n,H)$ (the maximum number of edges in an $H$-free graph on $n$ vertices) was determined asymptotically for all non-bipartite $H$ over 60 years ago, but, despite much effort, even its order of magnitude is known for only a handful of bipartite graphs. A significantly harder question asks: how many $H$-free graphs are there with $n$ vertices? In particular, Erd\H{o}s asked more than thirty years ago (see, e.g.,~\cite{KW82}) whether or not the number of such graphs is at most $2^{O(\textup{ex}(n,H))}$ for every bipartite graph $H$, but the answer is known in only a few special cases, see~\cite{BSmm,BSst,KW96,KW82}. In this paper we prove that the number of $C_{2\ell}$-free graphs is at most $2^{O(n^{1 + 1/\ell})}$, confirming a longstanding conjecture of Erd\H{o}s. Our method is very general, and is likely to apply to various other classes of bipartite graphs; in particular, we show that a similar bound holds for any bipartite graph which has a certain `refined supersaturation' property. We also essentially resolve the Tur\'an problem on the Erd\H{o}s-R\'enyi random graph $G(n,p)$ for both even cycles and complete bipartite graphs, obtaining close to best possible bounds for all values of $p$. Finally, we show that the natural conjecture (often attributed to Erd\H{o}s) that the number of $H$-free graphs on $n$ vertices is $2^{(1 + o(1))\textup{ex}(n,H)}$ fails for $H = C_6$. 
\subsection{History and background} The study of extremal graph theory was initiated roughly 70 years ago by Tur\'an~\cite{T41}, who determined exactly the extremal number of the complete graph, by Erd\H{o}s and Stone~\cite{ES46}, who determined $\textup{ex}(n,H)$ asymptotically\footnote{More precisely, they obtained bounds on the extremal number of the complete $r$-partite graph $K_r(t)$. It was noticed by Erd\H{o}s and Simonovits~\cite{ES66} that these are sufficient to determine $\textup{ex}(n,H)$ asymptotically.} for every non-bipartite graph~$H$, and by K\"ov\'ari, S\'os and Tur\'an~\cite{KST} who showed that $\textup{ex}(n,K_{s,t}) = O(n^{2 - 1/s})$, where $K_{s,t}$ denotes the complete bipartite graph with part sizes~$s$ and~$t$. (The case $K_{2,2} = C_4$ was solved some years earlier by Erd\H{o}s~\cite{E38} during his study of multiplicative Sidon sets.) Over the following decades, a huge amount of effort was put into determining more precise bounds for specific families of graphs (see, e.g.,~\cite{MGT,FuS}), and a great deal of progress has been made. Nevertheless, the order of magnitude of $\textup{ex}(n,H)$ for most bipartite graphs, including simple examples such as $K_{4,4}$ and $C_8$, remains unknown. \enlargethispage{\baselineskip} In the 1970s, the problem of determining the number of $H$-free graphs on $n$ vertices was introduced by Erd\H{o}s, Kleitman and Rothschild~\cite{EKR}, who proved that there are $2^{(1 + o(1))\textup{ex}(n,K_r)}$ $K_r$-free graphs, and moreover that almost all triangle-free graphs are bipartite. This latter result was extended to all cliques by Kolaitis, Pr\"omel and Rothschild~\cite{KPR} and to more general graphs by Pr\"omel and Steger~\cite{PS}, and the former to all non-bipartite graphs by Erd\H{o}s, Frankl and R\"odl~\cite{EFR}, using the Szemer\'edi regularity lemma. 
The corresponding result for $k$-uniform hypergraphs was proved by Nagle, R\"odl and Schacht~\cite{NRS} using hypergraph regularity, and reproved by Balogh, Morris and Samotij~\cite{BMS} and Saxton and Thomason~\cite{ST} using the hypergraph container method (see below). Much more precise results for graphs were obtained by Balogh, Bollob\'as and Simonovits~\cite{BBS1,BBS2,BBS3}. For bipartite $H$ the problem seems to be significantly harder, and much less is known. The first progress was made by Kleitman and Winston~\cite{KW82} in 1982, who showed that there are at most $2^{(1+c)\textup{ex}(n,C_4)}$ $C_4$-free graphs on $n$ vertices, where $c \approx 1.17$, improving the trivial upper bound of $n^{\textup{ex}(n,C_4)}$, and getting within striking distance of the trivial lower bound $2^{\textup{ex}(n,C_4)}$. Their result moreover resolved a longstanding open question posed by Erd\H{o}s (see~\cite{KW82}). However, it was not until almost 30 years later that their theorem was extended to other complete bipartite graphs. This important breakthrough was achieved by Balogh and Samotij~\cite{BSmm,BSst}, who proved, for every $2 \leqslant s \leqslant t$, that there are at most $2^{O(n^{2 - 1/s})}$ $K_{s,t}$-free graphs on $n$ vertices. Their bound is conjectured to be sharp up to the constant implicit in the $O(\cdot)$; however, constructions giving a matching lower bound are known only when either $s \in \{2,3\}$ or $t > (s-1)!$, see~\cite{ARS,Brown,ERS,F96,KRS}. For other (i.e., non-complete) bipartite graphs, the only known bounds of this form are for forests, where the problem is much easier, and for even cycles of length six and eight. Recall that $\textup{ex}(n,C_{2\ell}) = O(n^{1 + 1/\ell})$ for every $\ell \geqslant 2$.\footnote{The first published proof of this bound was given by Bondy and Simonovits~\cite{BS74}, but they attribute the result to Erd\H{o}s, see also~\cite{E64} and~\cite[Theorem~4.6]{FuS}. 
For more recent improvements, see~\cite{P12} and~\cite{V00}.} Erd\H{o}s and Simonovits conjectured (see~\cite{E64} or~\cite{FuS}) that this bound is sharp up to the implied constant factor, but matching lower bounds are known only for $C_4 = K_{2,2}$ (see above), $C_6$ and $C_{10}$ (see~\cite{Benson,FNV,LUW,W91}). It was therefore natural for Erd\H{o}s to conjecture that, for every $\ell \geqslant 2$, the number of $C_{2\ell}$-free graphs is at most $2^{O(n^{1+1/\ell})}$, and indeed Kleitman and Wilson~\cite{KW96} proved this in the cases $\ell = 3$ and $\ell = 4$, using a clever colouring argument to reduce the problem to that solved in~\cite{KW82}. They (and independently Kreuter~\cite{Kreuter}, see also~\cite{KKS}) moreover proved that there are $2^{O(n^{1+1/\ell})}$ graphs with no even cycles of length \emph{at most} $2\ell$. However, they were unable to resolve the case of a single forbidden long even cycle, and no further progress has been made in the decade and a half since. \subsection{Main results} In this paper we resolve this longstanding open problem for all even cycles, using a very general method, which we expect to give similar bounds for many other bipartite graphs. More precisely, we shall prove the following theorem. \begin{thm} \label{thm:main} For every $\ell \in \mathbb{N}$, there are at most $2^{O(n^{1+1/\ell})}$ $C_{2\ell}$-free graphs on $n$ vertices. \end{thm} As noted above, it is generally believed that the bound in Theorem~\ref{thm:main} is sharp up to the constant implicit in the $O(\cdot)$, but this is only known in the cases $\ell \in \{ 2, 3, 5 \}$. Theorem~\ref{thm:main} is an immediate consequence of the following result, which is our main theorem and gives a rough structural description of $C_{2\ell}$-free graphs. \begin{thm}\label{thm:cycle:containers} For each $\ell \in \mathbb{N}$ and $\delta > 0$, there exists a constant $C = C(\delta,\ell)$ such that the following holds for every sufficiently large $n \in \mathbb{N}$. 
There exists a collection~$\mathcal{G}$ of at most $$2^{\delta n^{1 + 1/\ell}}$$ graphs on vertex set~$[n]$ such that $$e(G) \leqslant C n^{1+1/\ell}$$ for every $G \in \mathcal{G}$, and every $C_{2\ell}$-free graph is a subgraph of some~$G \in \mathcal{G}$. \end{thm} We remark that we shall in fact prove a substantial generalization of Theorem~\ref{thm:cycle:containers}, which will provide us with a close to optimal family of `containers' of any given size, see Theorem~\ref{thm:cycle:containers:turan}. A closely related structural question asks: how many edges does a typical $H$-free graph on $n$ vertices have? Balogh, Bollob\'as and Simonovits~\cite{BBS3} conjectured that there exists a constant $c > 0$ such that, if $H$ contains a cycle, then almost every $H$-free graph on $n$ vertices has between $c \cdot \textup{ex}(n,H)$ and $(1 - c) \textup{ex}(n,H)$ edges. This is only known for some complete bipartite graphs~\cite{BSC4,BSst,KW82} and for $C_6$~\cite{KW96} (as usual, much more is known for non-bipartite graphs). An immediate consequence of Theorem~\ref{thm:cycle:containers} is the following corollary. \begin{cor}\label{cor:fewwithfewedges} There are $2^{o(n^{1 + 1/\ell})}$ $C_{2\ell}$-free graphs on $n$ vertices with $o\big( n^{1 + 1/\ell} \big)$ edges. \end{cor} \enlargethispage{\baselineskip} Under the additional assumption that $\textup{ex}(n,C_{2\ell}) = \Omega( n^{1 + 1/\ell} )$, this implies that almost all $C_{2\ell}$-free graphs have $\Omega( n^{1 + 1/\ell} )$ edges. Corollary~\ref{cor:fewwithfewedges} may therefore be seen as evidence in favour of the conjecture of Balogh, Bollob\'as and Simonovits.
Another very strong conjecture, often attributed to Erd\H{o}s (see,~e.g.,~\cite{BBS3}), and mentioned explicitly\footnote{More precisely, they said that it ``seems likely" that this holds for every bipartite graph $H$.} by Erd\H{o}s, Frankl and R\"odl~\cite{EFR}, states that the number of $H$-free graphs on $n$ vertices is $2^{(1 + o(1))\textup{ex}(n,H)}$ for every graph $H$ which contains a cycle. Recall that it was proved in~\cite{EFR} that this holds for all non-bipartite $H$. \pagebreak We shall prove the following proposition, which disproves this conjecture for $C_6$. \begin{prop}\label{prop:counterexample} There exists a constant $c > 0$ such that there are at least $$2^{(1 + c) \textup{ex}(n,C_6)}$$ $C_6$-free graphs on $n$ vertices for infinitely many values of $n \in \mathbb{N}$. \end{prop} We will prove Proposition~\ref{prop:counterexample} using bounds on $\textup{ex}(n,C_6)$ due to F\"uredi, Naor and Verstra\"ete~\cite{FNV}. However, we will also prove a similar result for various forbidden families of short cycles, using only the Erd\H{o}s-Bondy-Simonovits bound on the extremal number $\textup{ex}(n,C_{2\ell})$. For example, we will give an extremely simple proof that if $\mathcal{F} = \{K_3,C_6\}$, then there are at least $2^{(1+c) \textup{ex}(n,\mathcal{F})}$ $\mathcal{F}$-free graphs on $n$ vertices for infinitely many values of $n$. We conjecture that a similar result holds for all even cycles of length at least six, but our method fails (though not by much!) for $C_4$, and so we are not sure whether or not to expect the conjecture to hold for this and other complete bipartite graphs. It is not inconceivable that the number of $H$-free graphs on $n$ vertices is $2^{(1 + o(1))\textup{ex}(n,H)}$ for every $H$ such that $\textup{ex}(n,H) = \Omega(n^{3/2})$. 
\subsection{Refined supersaturation for even cycles, and hypergraph containers} \enlargethispage{\baselineskip} \enlargethispage{\baselineskip} We shall prove Theorem~\ref{thm:cycle:containers} using the hypergraph container method, which was introduced recently by Balogh, Morris and Samotij~\cite{BMS} and by Saxton and Thomason~\cite{ST}. This technique, which allows one to find a relatively small family of sets (`containers') which cover the independent sets in an $r$-uniform hypergraph, has already found many applications; for example, it implies deterministic analogues of the recent breakthrough results of Conlon and Gowers~\cite{CG} and Schacht~\cite{Schacht} on extremal results in sparse random sets, and in many cases proves stronger `counting' versions of those theorems. In order to apply this method, we need to bound, for every graph $G$ with $\gg \textup{ex}(n,H)$ edges, a particular parameter (see Section~\ref{sec:containers}) of the hypergraph which encodes copies of our forbidden graph $H$ in $G$. Bounding this parameter in the case $H = C_{2\ell}$ is the main technical challenge of this paper. The following `refined supersaturation theorem for even cycles' is our second main result. The existence of a hypergraph satisfying part~$(a)$ was proved\footnote{The proof of this `supersaturation result for even cycles' was not published at the time, but will appear shortly in a paper of Faudree and Simonovits~\cite{FS}.} by Simonovits (see~\cite{ES84}), and was conjectured by Erd\H{o}s and Simonovits~\cite{ES84} to exist for every bipartite graph $H$. The condition in part~$(b)$ is new, and is crucial to our application of the container method. \begin{thm}\label{thm:cycle:hypergraph} For every $\ell \geqslant 2$, there exist constants $C > 0$, $\delta > 0$ and $k_0 \in \mathbb{N}$ such that the following holds for every $k \geqslant k_0$ and every $n \in \mathbb{N}$. 
Given a graph~$G$ with~$n$ vertices and $k n^{1+1/\ell}$ edges, there exists a $2\ell$-uniform hypergraph~$\mathcal{H}$ on vertex set~$E(G)$, satisfying: \begin{itemize} \item[$(a)$] $\mathcal{H}$ has at least $\delta k^{2\ell} n^2$ edges, and\smallskip \item[$(b)$] $d_\mathcal{H}(\sigma) \leqslant C \cdot k^{2\ell - |\sigma| - \frac{|\sigma| - 1}{\ell-1}} n^{1 - 1/\ell}$ for every $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| \leqslant 2 \ell-1$, \end{itemize} such that each of the edges of $\mathcal{H}$ corresponds to a copy of~$C_{2\ell}$ in~$G$. \end{thm} In words, the theorem above states that if $e(G) \gg n^{1 + 1/\ell}$, then $G$ contains at least as many copies of $C_{2\ell}$ (up to a constant factor) as a (typical) random graph of the same density, and moreover these copies of $C_{2\ell}$ are relatively `uniformly distributed' over the edges of $G$. We emphasize that $\mathcal{H}$ does not need to encode every copy of $C_{2\ell}$ in $G$, but only a subset. We make the following conjecture for a general bipartite graph $H$, which follows from Theorem~\ref{thm:cycle:hypergraph} in the case $H = C_{2\ell}$. As noted above, the existence in $G$ of as many copies of $H$ as the Erd\H{o}s-R\'enyi random graph with the same number of edges was conjectured by Erd\H{o}s and Simonovits~\cite{ES84}. Since we ask that these copies of $H$ are moreover reasonably uniformly distributed, we shall refer to it as the `refined Erd\H{o}s-Simonovits conjecture'. \begin{conj}[Refined Erd\H{o}s-Simonovits conjecture for general bipartite $H$]\label{conj:refinedES} Given a bipartite graph $H$, there exist constants $C > 0$, $\varepsilon > 0$ and $k_0 \in \mathbb{N}$ such that the following holds. Let $k \geqslant k_0$, and suppose that~$G$ is a graph on~$n$ vertices with $k \cdot \textup{ex}(n,H)$ edges. 
Then there exists a non-empty $e(H)$-uniform hypergraph~$\mathcal{H}$ on vertex set~$E(G)$, satisfying \begin{equation}\label{eq:conjES} d_\mathcal{H}(\sigma) \leqslant \displaystyle\frac{C \cdot e(\mathcal{H})}{k^{(1 + \varepsilon)(|\sigma| - 1)} e(G)} \quad \text{for every $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| \leqslant e(H)$,} \end{equation} such that each of the edges of $\mathcal{H}$ corresponds to a copy of~$H$ in~$G$. \end{conj} Our motivation in making this conjecture is the following proposition, see also~\cite{Sax}. \begin{prop}\label{prop:conj:implies:thm} Let $H$ be a bipartite graph. If Conjecture~\ref{conj:refinedES} holds for $H$, then there are at most $2^{O(\textup{ex}(n,H))}$ $H$-free graphs on $n$ vertices. \end{prop} In fact we shall prove a slightly more general result (see Section~\ref{sec:proof}), which does not require a lower bound on the extremal number of $H$. We remark that, although we do not demand a lower bound on the number of edges of the hypergraph $\mathcal{H}$ in Conjecture~\ref{conj:refinedES}, we expect that it can be chosen to have (up to a constant factor) as many copies of $H$ as the random graph $G(n,m)$, where $m = k \cdot \textup{ex}(n,H)$. We also remark that the conjecture holds for the complete bipartite graph $H = K_{s,t}$, under the additional assumption that $\textup{ex}(n,K_{s,t}) = \Omega( n ^{2 - 1/s})$. We refer the reader to Section~\ref{sec:Kst} for the precise statement.
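As a consistency check (this short computation is ours), note that for $H = C_{2\ell}$ the bound in part~$(b)$ of Theorem~\ref{thm:cycle:hypergraph} has exactly the form~\eqref{eq:conjES} with $\varepsilon = 1/(\ell-1)$, assuming $\textup{ex}(n,C_{2\ell}) = \Theta( n^{1+1/\ell})$. Indeed, since
\[
2\ell - |\sigma| - \frac{|\sigma| - 1}{\ell-1} \, = \, 2\ell - 1 - \big( |\sigma| - 1 \big) \bigg( 1 + \frac{1}{\ell-1} \bigg),
\]
and part~$(a)$ gives $k^{2\ell} n^2 \leqslant e(\mathcal{H}) / \delta$, we have
\[
C \cdot k^{2\ell - |\sigma| - \frac{|\sigma| - 1}{\ell-1}} n^{1 - 1/\ell} \, = \, \frac{C \cdot k^{2\ell} n^2}{k^{(1 + \varepsilon)(|\sigma| - 1)} \cdot k n^{1+1/\ell}} \, \leqslant \, \frac{(C/\delta) \cdot e(\mathcal{H})}{k^{(1 + \varepsilon)(|\sigma| - 1)} \, e(G)},
\]
where we used $e(G) = k n^{1+1/\ell}$. This is the sense in which Conjecture~\ref{conj:refinedES} follows from Theorem~\ref{thm:cycle:hypergraph} in the case $H = C_{2\ell}$.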
\subsection{The Tur\'an problem for random graphs} \enlargethispage{\baselineskip} Another consequence of Theorem~\ref{thm:cycle:hypergraph} (which also follows via the hypergraph container method) relates to the so-called `Tur\'an problem on $G(n,p)$', that is, the problem of determining\footnote{More precisely, determining bounds on $\textup{ex} \big( G(n,p), H \big)$ which hold with high probability as $n \to \infty$.} the function $$\textup{ex} \big( G(n,p), H \big) \, := \, \max\Big\{ e(G) \,:\, G \subset G(n,p) \textup{ and $G$ is $H$-free} \Big\}.$$ This question has received an enormous amount of attention in recent years (see the excellent survey~\cite{RS}, and the recent breakthroughs in~\cite{BMS,CG,ST,Schacht}). In the case $H = C_{2\ell}$ it was solved over fifteen years ago by Haxell, Kohayakawa and \L uczak~\cite{HKL}, in the following sense: they proved that if $p \gg n^{-1 + 1/(2\ell-1)}$ then $\textup{ex} \big( G(n,p), C_{2\ell} \big) \ll e\big( G(n,p) \big)$, whereas if $p \ll n^{-1 + 1/(2\ell-1)}$ then $\textup{ex} \big( G(n,p), C_{2\ell} \big) = \big( 1 + o(1) \big) e\big( G(n,p) \big)$. Much more precise bounds for a certain range of $p$ were obtained by Kohayakawa, Kreuter and Steger~\cite{KKS}, who showed that \begin{equation}\label{eq:KKSthm} \textup{ex} \big( G(n,p), C_{2\ell} \big) \, = \, \Theta\Big( n^{1 + 1/(2\ell-1)} (\log \alpha)^{1/(2\ell-1)} \Big) \end{equation} if $p = \alpha n^{-1 + 1/(2\ell-1)}$ and $2 \leqslant \alpha \leqslant n^{1/(2\ell-1)^2}$. However, no non-trivial upper bounds appear to have been obtained for much larger values of $p$. Using the hypergraph container method, together with Theorem~\ref{thm:cycle:hypergraph}, we will prove the following upper bounds on $\textup{ex} \big( G(n,p), C_{2\ell} \big)$. Both are simple consequences of a structural result (i.e., generalization of Theorem~\ref{thm:cycle:containers}) which we will state and prove in Section~\ref{sec:Turan}. 
Moreover, by~\eqref{eq:KKSthm} the first bound is sharp up to a polylog-factor, and we will show that, modulo a well-known (and widely believed) conjecture of Erd\H{o}s and Simonovits, the second bound is sharp\footnote{This would also imply that the bound in part~$(b)$ of Theorem~\ref{thm:cycle:hypergraph} is essentially best possible.} up to the value of the constant $C$. \begin{thm}\label{thm:randomturan} For every $\ell \geqslant 2$, there exists a constant $C = C(\ell) > 0$ such that $$\textup{ex} \big( G(n,p), C_{2\ell} \big) \, \leqslant \, \left\{ \begin{array} {c@{\quad}l} C n^{1 + 1/(2\ell-1)} (\log n)^2 & \textup{if } \; p \leqslant n^{-(\ell - 1) / (2\ell-1)} (\log n)^{2\ell} \\[+1ex] C p^{1/\ell} n^{1+1/\ell} & \textup{otherwise} \end{array}\right.$$ with high probability as $n \to \infty$. \end{thm} In Section~\ref{sec:Kst} we shall prove a similar (and also probably close to best possible) theorem for $K_{s,t}$-free graphs. For similar results in a closely related context, we refer the reader to the recent work of Kohayakawa, Lee, R\"odl and Samotij~\cite{KLRS} on Sidon sets. The rest of the paper is organised as follows. First, in Section~\ref{sec:outline}, we give an outline of the proofs of Theorems~\ref{thm:main} and~\ref{thm:cycle:hypergraph} and prove Proposition~\ref{prop:counterexample}. Next, in Section~\ref{Sec:ES}, the most substantial part of the paper, we prove Theorem~\ref{thm:cycle:hypergraph}. In Section~\ref{sec:containers} we formally introduce the hypergraph container method, and in Section~\ref{sec:proof} we will use it to prove Theorems~\ref{thm:main} and~\ref{thm:cycle:containers}, Corollary~\ref{cor:fewwithfewedges} and Proposition~\ref{prop:conj:implies:thm}. We will also give a relatively simple proof of a slightly weaker version of Theorem~\ref{thm:randomturan} (with an extra $\log$-factor), and then, in Section~\ref{sec:Turan}, a more involved proof of the precise statement. 
Finally, in Section~\ref{sec:Kst}, we will sketch the proof of some similar results with `even cycle' replaced by `complete bipartite graph'. \section{Preliminaries}\label{sec:outline} In this section we will prepare the reader for the proofs of the main theorems, and prove the lower bounds claimed in the Introduction. First, in Sections~\ref{sec:containers:outline} and~\ref{sec:superset:sketch}, we will describe the hypergraph container method, and give a sketch of the proof of Theorem~\ref{thm:cycle:hypergraph}. Next, in Section~\ref{sec:lowerbounds}, we will prove Proposition~\ref{prop:counterexample}, as well as similar lower bounds for other families of cycles, and the (conditional) lower bound in Theorem~\ref{thm:randomturan}. Lastly, in Sections~\ref{sec:definitions} and~\ref{sec:notation}, we will introduce some of the basic concepts which will be used in the proof of Theorem~\ref{thm:cycle:hypergraph}. \subsection{The hypergraph container method}\label{sec:containers:outline} \enlargethispage{\baselineskip} One of the main results of~\cite{BMS,ST} (see Theorem~\ref{thm:coveroff}, below) states that, given any $r$-uniform hypergraph $\mathcal{H}$, there exists a relatively small collection of vertex sets (containers), which cover the independent sets of $\mathcal{H}$, and each of which contains fewer than $(1 - \delta) e(\mathcal{H})$ edges of $\mathcal{H}$. The number of containers depends on the `uniformity' of the hypergraph; more precisely, the stronger our upper bounds on the degrees of sets in $\mathcal{H}$, the smaller the family of containers is guaranteed to be. We will repeatedly apply this surprisingly powerful result to the hypergraph produced by Theorem~\ref{thm:cycle:hypergraph}, which encodes (a highly uniform sub-family of) the copies of $C_{2\ell}$ in a graph $G$. 
The container theorem produces a family of subgraphs of $G$ which form a cover of the $C_{2\ell}$-free subgraphs of $G$; we then apply the container theorem to each of these graphs, and so on. By this method, we obtain a rooted tree $T$ of subgraphs of $K_n$, such that every $C_{2\ell}$-free graph is contained in some leaf of $T$, and each leaf has $O(n^{1 + 1/\ell})$ edges. (To be slightly more precise, each vertex of this tree corresponds to a graph, the root is $K_n$, and the out-neighbours of each vertex are given by the container theorem described above.) To guarantee that each leaf has $O(n^{1 + 1/\ell})$ edges, we simply apply the container theorem sufficiently many times, noting that Theorem~\ref{thm:cycle:hypergraph} is valid as long as $G$ has more than this many edges. It remains to count the leaves of $T$; in order to do so, we need to bound the number of containers formed in each application of Theorem~\ref{thm:coveroff}. This is controlled by a parameter~$\tau$ which (roughly speaking) measures the uniformity of $\mathcal{H}$, and it turns out that (to deduce Theorem~\ref{thm:main}, for example) we need to be able to apply the theorem with $\tau \approx k^{-(1+\varepsilon)}$, for some $\varepsilon > 0$, when $e(G) = k n^{1 + 1/\ell}$. In order to do so, we shall use properties~$(a)$ and~$(b)$ of Theorem~\ref{thm:cycle:hypergraph}, in particular the fact that our upper bound on $d_\mathcal{H}(\sigma)$ improves by a factor of more than $k^{1+\varepsilon}$ each time $|\sigma|$ increases by one. In order to deduce Theorem~\ref{thm:randomturan}, on the other hand, we will need the full strength of Theorem~\ref{thm:cycle:hypergraph}, see Sections~\ref{sec:proof} and~\ref{sec:Turan}. \enlargethispage{\baselineskip} \subsection{The proof of Theorem~\ref{thm:cycle:hypergraph}}\label{sec:superset:sketch} The most technical part of this paper is the proof of Theorem~\ref{thm:cycle:hypergraph}, in Section~\ref{Sec:ES}. 
Here we will attempt to give an outline of the key ideas in the proof, and thereby hopefully make it easier for the reader to follow the details of the calculation. The basic idea, motivated by the proof of Bondy and Simonovits~\cite{BS74}, is as follows: we will find a vertex $x \in V(G)$ and a $t \in \{2,\ldots,\ell\}$ such that the $t$th neighbourhood $A_t$ of $x$ is no larger than it `should' be. By the conditions $e(G) = k n^{1 + 1/\ell}$ and $k \geqslant k_0$, such a pair $(x,t)$ must exist (see Lemma~\ref{lem:concentrated:exists}); we will choose a pair with $t$ as small as possible. One can find many cycles in $G$ formed by two paths from $x$ to $A_t$, plus a path of length $2\ell - 2t$ which alternates between the sets $A_{t-1}$ and $A_t$. Repeating this process for a positive proportion of the vertices of $G$, we find at least $\delta k^{2\ell} n^2$ copies of $C_{2\ell}$ in $G$. The proof of part~$(a)$, outlined above, is already highly non-trivial, and the requirement that no set of edges of $G$ be contained in too many edges of $\mathcal{H}$ (i.e., copies of $C_{2\ell}$) introduces significant extra complications. We overcome these by substantially modifying our strategy. First, we shall find the cycles which form the edges of $\mathcal{H}$ one at a time, selecting carefully from the available choices, instead of simply taking every cycle in $G$ which passes through the vertex $x$. In order to do so, we shall construct (in each step of the process) a sufficiently large sub-family $\mathcal{C}$ of the cycles through $x$ with the following property: no cycle in $\mathcal{C}$ contains any (`saturated') set of edges which are already contained in the maximum allowed number of edges of $\mathcal{H}$. Since $\mathcal{C}$ is sufficiently large, we will be able to deduce that not all of these cycles are already in $\mathcal{H}$, and so we can add one of them to our collection. 
In order to construct $\mathcal{C}$, we will need to introduce two further types of neighbourhood, which we term the \emph{balanced} and \emph{refined $t$-neighbourhoods} of a vertex $x \in V(G)$. These both consist of a family of sets $\mathcal{A} = (A_1,\ldots,A_t)$ and a family of paths $\mathcal{P}$ from $x$ to $A_t$, whose $j$th edge ends in $A_j$, which satisfy several further `uniformity' conditions. In particular, for a pair $(\mathcal{A},\mathcal{P})$ to be balanced we will require there to be `not too many' sub-paths of $\mathcal{P}$ between any two vertices of $\mathcal{A}$, and for it to be refined we will require that every vertex of $A_t$ receives `many' paths from $x$. Using the minimality of $t$ (in the $t$-neighbourhood of $x$ chosen above) we will show (see Lemma~\ref{prop:RNF:exists}) that $x$ has a balanced neighbourhood with almost as many paths as one would expect, which avoids all saturated sets of edges, and (see Lemma~\ref{lem:balanced_to_refined}) that every balanced $t$-neighbourhood $(\mathcal{A},\mathcal{P})$ contains a refined $t$-neighbourhood $(\mathcal{B},\mathcal{Q})$. We now perform the following algorithm. Let $(\mathcal{A},\mathcal{P})$ be a balanced $t$-neighbourhood of the vertex $x$, and use the lemma mentioned above to find a refined $t$-neighbourhood $(\mathcal{B},\mathcal{Q})$. We form cycles by choosing a path from $x$ to $B_t$, a zig-zag path of length $2\ell - 2t$ between $B_t$ and $B_{t-1}$, and then a path in $\mathcal{Q}$ back to $x$. We repeat this process sufficiently many times, adding to $\mathcal{C}$ only those cycles which avoid all saturated sets of edges. This part of the proof is surprisingly intricate; in particular, one of the key difficulties will be in ensuring we (typically) have many `legal' choices for the path back to $x$. 
Once we have shown that the family $\mathcal{C}$ thus constructed is sufficiently large, we simply note that each of the cycles passes through one of the edges between $x$ and $B_1$. Assuming that the hypergraph $\mathcal{H}$ constructed so far does not already have sufficiently many edges, it follows by the pigeonhole principle that one of the cycles of $\mathcal{C}$ is not already an edge of $\mathcal{H}$, as required. \subsection{Lower bounds, and a proof of Proposition~\ref{prop:counterexample}}\label{sec:lowerbounds} In this section we will describe an extremely simple method of producing many $\mathcal{F}$-free graphs on $n$ vertices, for certain forbidden families $\mathcal{F}$ consisting of cycles. The proof is based on that of a similar result of Saxton and Thomason~\cite{ST} for Sidon sets. As well as Proposition~\ref{prop:counterexample}, we shall prove the following result, which generalizes the bound stated earlier for the family $\mathcal{F} = \{K_3,C_6\}$. \begin{prop}\label{prop:general:counter} There exists a constant $c > 0$ such that the following holds. Let $\ell \geqslant 3$, and set $\mathcal{F} = \{C_3,\ldots,C_\ell\} \cup \{C_{2\ell}\}$. There are at least $$2^{(1 + c) \textup{ex}(n,\mathcal{F})}$$ $\mathcal{F}$-free graphs on $n$ vertices for infinitely many values of $n \in \mathbb{N}$. \end{prop} \begin{proof} Let $n \in \mathbb{N}$ be a multiple of three such that $\textup{ex}(n,\mathcal{F}) \leqslant 3^{1 + 1/\ell+o(1)} \textup{ex}(n/3,\mathcal{F})$, and note that since $\textup{ex}(n,\mathcal{F}) \leqslant \textup{ex}(n,C_{2\ell}) = O(n^{1 + 1/\ell})$, there exist infinitely many such values of $n$. Let $G$ be an extremal graph for $\mathcal{F}$ on $n/3$ vertices, and let $\mathcal{G}$ be the collection of graphs obtained from $G$ by blowing up each vertex to size three, and replacing each edge by a (not necessarily perfect) matching. 
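To count the members of $\mathcal{G}$, note that a matching in $K_{3,3}$ is determined by its size $j \in \{0,1,2,3\}$, a choice of $j$ vertices on each side, and a bijection between the chosen vertices, so the number of (possibly empty) matchings in $K_{3,3}$ is $$\sum_{j=0}^{3} \binom{3}{j}^2 j! \, = \, 1 + 9 + 18 + 6 \, = \, 34.$$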
Since there are 34 matchings in $K_{3,3}$, it follows that $$|\mathcal{G}| \, = \, 34^{\textup{ex}(n/3,\mathcal{F})} \, > \, 2^{(1 + c) 3^{4/3+o(1)} \textup{ex}(n/3,\mathcal{F})} \, \geqslant \, 2^{(1 + c)\textup{ex}(n,\mathcal{F})}$$ if $c > 0$ is sufficiently small.\footnote{In fact, any $c < 3^{-4/3} \log_2(34) - 1 \approx 0.1758$ will suffice.} It therefore suffices to show that $\mathcal{G}$ is a family of $\mathcal{F}$-free graphs on $n$ vertices. To do so, simply note that a cycle of length $k$ in $G' \in \mathcal{G}$ corresponds to a non-backtracking walk in $G$ of length $k$, which must either be a cycle, or contain a cycle of length at most $\lfloor k/2 \rfloor$. This proves that $\mathcal{G}$ is $\mathcal{F}$-free, as required. \end{proof} It is easy to see that the proof above can be applied to any family $\mathcal{F}$ consisting of a graph $H$ with $\textup{ex}(n,H) = O(n^{4/3})$ together with all graphs obtained by (recursively) identifying pairs of non-adjacent vertices of $H$. On the other hand, in order to prove Proposition~\ref{prop:counterexample} we will need to apply the following theorem of F\"uredi, Naor and Verstra\"ete~\cite{FNV}. \begin{thm}[F\"uredi, Naor and Verstra\"ete, 2005]\label{thm:FNV} For all sufficiently large $n \in \mathbb{N}$, $$\textup{ex}(n,C_6) < 0.6272 \cdot n^{4/3},$$ and for infinitely many values of $n$ there exists a $\{K_3,C_6\}$-free graph $G$ on $n$ vertices with $$e(G) > 0.5338 \cdot n^{4/3}.$$ \end{thm} \enlargethispage{\baselineskip} Since $\frac{0.6272}{0.5338} \approx 1.17497 < 1.1758$, we may apply the same argument as in the proof above. We remark that it seems to be an incredible coincidence that the bounds obtained in~\cite{FNV} are almost exactly what we need in order to deduce the proposition. 
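Let us make this coincidence precise: since $\log_2 34 \approx 5.0875$ and $3^{4/3} \approx 4.3267$, the argument above requires $$1 + c \, < \, 3^{-4/3} \log_2 34 \, \approx \, 1.1758,$$ while the ratio of the two bounds in Theorem~\ref{thm:FNV} is $0.6272 / 0.5338 \approx 1.1750$; the margin available to us is therefore less than $10^{-3}$.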
\begin{proof}[Proof of Proposition~\ref{prop:counterexample}] Let $G$ be the $\{K_3,C_6\}$-free graph constructed in~\cite{FNV}, with $n/3$ vertices and more than $0.5338 \cdot (n/3)^{4/3}$ edges, where $n \in \mathbb{N}$ is chosen arbitrarily among those integers for which this graph exists. We remark that although in~\cite{FNV} it is not proven that $G$ is triangle-free, this follows easily from their construction via a little case analysis. Now, letting $\mathcal{G}$ be the collection of graphs obtained by blowing up each vertex to size three, and replacing each edge by a matching, it follows that $\mathcal{G}$ is a $C_6$-free family, exactly as in the proof above. Moreover, by Theorem~\ref{thm:FNV}, we have\footnote{Note that $2^{\frac{0.6272}{0.5338} \cdot 3^{4/3}} \approx 33.91$, so we need to take $c < 0.0007$.} $$|\mathcal{G}| \, > \, 34^{0.5338 (n/3)^{4/3}} \, > \, 2^{(1 + c) 0.6272 n^{4/3}} \, > \, 2^{(1 + c)\textup{ex}(n,C_6)},$$ for some $c > 0$, as required. \end{proof} Since it follows from a similar construction, let us also take this opportunity to show that the bounds in Theorem~\ref{thm:randomturan} are essentially best possible, conditional on the following conjecture of Erd\H{o}s and Simonovits~\cite{ES82}. \begin{conj}[Erd\H{o}s and Simonovits, 1982] $$\textup{ex}\big( n, \{ C_3,C_4,\ldots,C_{2\ell} \} \big) \, = \, \Theta\big( n^{1 + 1/\ell} \big).$$ \end{conj} If the conjecture is true, then there exists (for infinitely many values of $N$) a graph $G$ with~$N$ vertices and $\varepsilon N^{1+1/\ell}$ edges, and girth at least $2\ell + 1$. Choose $p \in (0,1)$ arbitrarily small, set $a = \varepsilon / p$, and blow up each vertex of $G$ into a set of $a$ vertices. Set $n = aN$, and place a copy of the Erd\H{o}s-R\'enyi random graph $G(n,p)$ on these $n$ vertices; that is, choose edges independently at random with probability $p$. We discard all edges inside classes, and between pairs of classes corresponding to non-edges of $G$. 
For each pair of classes corresponding to an edge of $G$, we retain an arbitrary maximal matching. To see that this graph has, with high probability, at least $\varepsilon^3 p^{1/\ell} n^{1 + 1/\ell}$ edges, simply observe that for each edge of $G$, we expect to obtain at least $p a^2 / 2 = \varepsilon^2 / 2p$ edges. (Indeed, we can search for an edge incident to each vertex in turn, ignoring those which have already been used.) Thus the expected number of edges in our $C_{2\ell}$-free subgraph of $G(n,p)$ is at least $$\frac{\varepsilon^2}{2p} \cdot \varepsilon N^{1+1/\ell} \, = \, \frac{\varepsilon^2}{2p} \cdot \varepsilon \bigg( \frac{p n}{\varepsilon} \bigg)^{1+1/\ell} \, \geqslant \, \varepsilon^2 \cdot p^{1/\ell} n^{1+1/\ell}.$$ Since the events are independent for different edges of $G$, a standard concentration argument shows that the claimed bound holds with high probability. \subsection{Saturated sets in good hypergraphs}\label{sec:definitions} In order to simplify slightly the presentation of the proof of Theorem~\ref{thm:cycle:hypergraph} in Section~\ref{Sec:ES}, we shall prepare the ground in this section by giving a couple of key definitions, and proving a simple lemma. Let us fix $\ell \geqslant 2$ throughout the rest of the paper. We will need various constants in the proof below; we define them here for convenience. They will satisfy $$0 \, < \, \delta \, \ll \, \varepsilon(1) \, \ll \, \varepsilon(2) \, \ll \, \cdots \, \ll \, \varepsilon(\ell) \, \ll \, 1 \, \ll \, C \, \ll \, k_0.$$ More precisely, we can set $C = 10\ell$, $\varepsilon(\ell) = 1/C^2$, $\varepsilon(t-1) = \varepsilon(t)^{t}$ for each $2 \leqslant t \leqslant \ell$, $\delta = \varepsilon(1)^{2\ell}$ and take $k_0 = k_0(\delta)$ to be sufficiently large. We emphasize that this value of $C$, which is fixed throughout the proof of Theorem~\ref{thm:cycle:hypergraph}, is not the same as the (various different) constants $C$ which appear in the statements in the Introduction. 
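To give the reader a feeling for the scale of these constants, observe that in the case $\ell = 2$ the definitions above give $$C = 20, \qquad \varepsilon(2) = \frac{1}{400}, \qquad \varepsilon(1) = \varepsilon(2)^2 = \frac{1}{400^2} \qquad \text{and} \qquad \delta = \varepsilon(1)^4 = \frac{1}{400^8}.$$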
For each $n,k \in \mathbb{N}$ and $j \in [2\ell-1]$, set \begin{equation}\label{def:Delta} \Delta^{(j)}(k,n) \, = \, \frac{k^{2\ell-1} n^{1 - 1 / \ell}}{ \big( \delta k^{\ell / (\ell-1)} \big)^{j-1}}, \end{equation} and note that $\Delta^{(j)}(k,n) \geqslant 1$ if $k \leqslant n^{(\ell-1)/\ell}$. This will represent the maximum degree of a set of~$j$ vertices in the $2\ell$-uniform hypergraph we will construct, whose edges will represent (a subset of the) copies of~$C_{2\ell}$ in a given graph $G$ with~$n$ vertices and~$kn^{1+1/\ell}$ edges. \begin{defn}[Good hypergraphs] We will say that a hypergraph $\mathcal{H}$ is \emph{good} with respect to $(\delta, k,\ell,n)$ (or simply good), if $d_\mathcal{H}(\sigma) \leqslant \Delta^{(|\sigma|)}(k,n)$ for every $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| < 2\ell$. \end{defn} Next, we define~$\mathcal{F}(\mathcal{H})$ to be the collection of subsets of~$V(\mathcal{H})$ that are at their maximum degree. \begin{defn}[Saturated sets] Given a $2\ell$-uniform hypergraph $\mathcal{H}$ and a set $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| < 2 \ell$, we say that~$\sigma$ is \emph{saturated} if $d_\mathcal{H}(\sigma) \geqslant \lfloor \Delta^{(|\sigma|)}(k,n) \rfloor$. Set \[ \mathcal{F}(\mathcal{H}) = \big\{ \sigma \subset V(\mathcal{H}) \,:\, \sigma \mbox{ is saturated} \big\}. \] \end{defn} We will sometimes need to ensure that a copy of $C_{2\ell}$ contains no saturated set of edges. In avoiding these sets, the following concept will be useful. For each $S \subset V(\mathcal{H})$ and $j \in \mathbb{N}$, define the \emph{$j$-link} of~$S$ in~$\mathcal{F} = \mathcal{F}(\mathcal{H})$ to be \[ L_\mathcal{F}^{(j)}(S) \,=\, \Big\{ \sigma \in \big( V(\mathcal{H}) \setminus S \big)^{(j)} \,:\, \sigma \cup \tau \in \mathcal{F} \mbox{ for some non-empty } \tau \subset S \Big\}, \] and set $L_\mathcal{F}(S) = \bigcup_{j \geqslant 1} L_\mathcal{F}^{(j)}(S)$. 
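Observe that, by~\eqref{def:Delta}, consecutive maximum degrees are related by the identity $$\frac{\Delta^{(j)}(k,n)}{\Delta^{(j+1)}(k,n)} \, = \, \delta k^{\ell/(\ell-1)} \qquad \text{for every } j \in [2\ell-2];$$ it is precisely this ratio which appears in the bound of Lemma~\ref{lem:size_of_link} below.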
In order to give the reader some practice with the various notions just introduced, let us prove the following easy (and useful) bound on $|L_\mathcal{F}^{(j)}(S)|$. \begin{lemma}\label{lem:size_of_link} Let $\mathcal{H}$ be a good $2\ell$-uniform hypergraph and let~$\mathcal{F} = \mathcal{F}(\mathcal{H})$. Then \[ \big| L_\mathcal{F}^{(j)}(S) \big| \, \leqslant \, 2^{2\ell+|S|+1} \cdot \big( \delta k^{\ell/(\ell-1)} \big)^j \] for every $j \in \mathbb{N}$ and every $S \subset V(\mathcal{H})$. \end{lemma} \begin{proof} For each non-empty set $\tau \subset S$, set \[ \mathcal{J}(\tau) \, = \, \Big\{ \sigma \in \big( V(\mathcal{H}) \setminus S \big)^{(j)} \,:\, \sigma \cup \tau \in \mathcal{F} \Big\}. \] Now, by the handshaking lemma and the definition of goodness, we have $$2^{-2\ell} \sum_{\sigma \in \mathcal{J}(\tau)} d_\mathcal{H}(\sigma \cup \tau) \, \leqslant \, d_\mathcal{H}(\tau) \, \leqslant \, \Delta^{(|\tau|)}(k,n),$$ since each edge of~$\mathcal{H}$ is counted at most $2^{2\ell}$ times in the sum (once for each~$\sigma \in \mathcal{J}(\tau)$ which is a subset of that edge). Moreover $$\sum_{\sigma \in \mathcal{J}(\tau)} d_\mathcal{H}(\sigma \cup \tau) \, \geqslant \, |\mathcal{J}(\tau)| \cdot \lfloor \Delta^{(|\tau|+j)}(k,n) \rfloor,$$ by the definitions of~$\mathcal{J}$ and~$\mathcal{F}$. Thus \begin{equation}\label{eq:Deltaratio} |\mathcal{J}(\tau)| \, \leqslant \, 2^{2\ell} \cdot \frac{\Delta^{(|\tau|)}(k,n)}{\lfloor \Delta^{(|\tau|+j)}(k,n) \rfloor} \, \leqslant \, 2^{2\ell+1} \cdot \big( \delta k^{\ell/(\ell-1)} \big)^j, \end{equation} where in the second inequality we used the fact that $\lfloor \Delta^{(|\tau|+j)}(k,n) \rfloor \geqslant \Delta^{(|\tau|+j)}(k,n) / 2$, since $\Delta^{(|\tau|+j)}(k,n) \geqslant 1$. Finally, since the sets $\mathcal{J}(\tau)$ cover $L_\mathcal{F}^{(j)}(S)$, it follows that \[ \big| L_\mathcal{F}^{(j)}(S) \big| \, \leqslant \sum_{\emptyset \,\neq\, \tau \,\subset\, S} |\mathcal{J}(\tau)| \, \leqslant \, 2^{2\ell+|S|+1} \cdot \big( \delta k^{\ell/(\ell-1)} \big)^j, \] as required. 
\end{proof} \enlargethispage{\baselineskip} \subsection{Notation}\label{sec:notation} Let us finish this section by introducing a few more pieces of notation which will prove useful in the proof of Theorem~\ref{thm:cycle:hypergraph}. First, a \emph{$t$-neighbourhood} of a vertex $x \in V(G)$ is simply a collection $\mathcal{A} = (A_1,\ldots,A_t)$ of (not necessarily disjoint) sets of vertices such that $A_i \subset N(A_{i-1})$ for each $i \in [t]$ (here, and throughout, we set $A_0 = \{x\}$). In the following definitions, let $\mathcal{P}$ be a collection of paths $(x,u_1,\ldots,u_t)$ of length $t$ in $G$, where $u_i \in A_i$. We denote by $$\mathcal{P}_{i,j} \, = \, \big\{ (u_i,\ldots,u_j) \,:\, (x, u_1,\ldots, u_t) \in \mathcal{P} \big\}$$ the set of paths which travel between the $i$th and the $j$th vertices of a path in~$\mathcal{P}$. Moreover, given $u,v \in V(G)$ and a collection of paths $\mathcal{Q}$ (for example, $\mathcal{Q} = \mathcal{P}_{i,j}$), we write $$\mathcal{Q}[u \to v] \, = \, \big\{ (x_1,\ldots,x_s) \in \mathcal{Q} \,:\, x_1 = u \text{ and } x_s = v \big\}$$ for the set of paths in $\mathcal{Q}$ which begin at $u$ and end at $v$. Similarly, for $S \subset V(G)$ we write $\mathcal{Q}[u \to S] = \bigcup_{v \in S} \mathcal{Q}[u\to v]$ for the set of paths in~$\mathcal{Q}$ that start at~$u$ and end in~$S$. Given a family of edge-sets $\mathcal{F}$, we will say that a path $P \in \mathcal{P}$ \emph{avoids} $\mathcal{F}$ if $E(P)$ contains no member of $\mathcal{F}$ as a subset, and similarly that $\mathcal{P}$ avoids $\mathcal{F}$ if every $P \in \mathcal{P}$ avoids $\mathcal{F}$. Given a vertex $v \in V(G)$ and $r \in [t]$, the \emph{$r$-branching factor} of $v$ in $\mathcal{P}$ is the maximum $d$ such that there exist $d$ paths in $\mathcal{P}$ whose $r$th vertex is $v$ and whose $(r+1)$st vertices are pairwise distinct. The \emph{branching factor} of $\mathcal{P}$ is the maximum $r$-branching factor over all vertices $v \in V(G)$ and all $r \in [t]$. 
We shall write $X^{(s)}$ and $X^{(\leqslant \, s)}$ to denote the collection of subsets of $X$ of size (at most)~$s$. Finally, if $\mathcal{A} = (A_1,\ldots,A_t)$ and $\mathcal{B} = (B_1,\ldots,B_t)$ are sequences of sets of vertices of the same length, then we write $\mathcal{A} \prec \mathcal{B}$ to denote the fact that $A_i \subset B_i$ for every $i \in [t]$. \section{The refined Erd\texorpdfstring{\H{o}}{o}s-Simonovits conjecture for even cycles}\label{Sec:ES} In this section we shall prove Theorem~\ref{thm:cycle:hypergraph}, using the following key proposition, which allows us to build up the good hypergraph~$\mathcal{H}$ representing cycles in~$G$ one cycle at a time. Recall that the integer $\ell \geqslant 2$ and the constants $\delta > 0$ and $k_0 \in \mathbb{N}$ were fixed above. \begin{prop}\label{prop:finding:cycles} Let $n,k \in \mathbb{N}$, with $k \geqslant k_0$, let $G$ be a graph with $n$ vertices and $kn^{1+1/\ell}$ edges, and let $\mathcal{H}$ be a good $2\ell$-uniform hypergraph with respect to $(\delta,k,\ell,n)$. If $e(\mathcal{H}) \leqslant \delta k^{2\ell} n^2$, then there exists a $2\ell$-cycle $C \subset E(G)$ with $C \not\in \mathcal{H}$, such that $\mathcal{H} \cup \{C\}$ is good. \end{prop} We remark that Theorem~\ref{thm:cycle:hypergraph} follows easily by repeatedly applying Proposition~\ref{prop:finding:cycles}, see Section~\ref{ESconj:proofSec} for the details. \subsection{A sketch of the proof} The proof of Proposition~\ref{prop:finding:cycles} is quite long and technical, and so we shall begin with a brief outline of the proof, see also Section~\ref{sec:superset:sketch}. We shall introduce three different types of `$t$-neighbourhood' of a vertex $x \in V(G)$, which we term \emph{concentrated}, \emph{balanced} and \emph{refined}, respectively, each of which is more restrictive than the previous one. 
The first step in the proof is to choose a vertex $x \in V(G)$ with a concentrated $t$-neighbourhood (see Definition~\ref{defn:concentrated_nbd}), with $t$ as small as possible. Next, we show that $x$ moreover has a balanced $t$-neighbourhood $(\mathcal{A},\mathcal{P})$ (see Definition~\ref{def:balanced:nbhd}) containing roughly as many paths as one would expect, and avoiding $\mathcal{F} = \mathcal{F}(\mathcal{H})$ (see Lemma~\ref{prop:RNF:exists}). We then show that $x$ has a refined $t$-neighbourhood $(\mathcal{B},\mathcal{Q})$ (see Definition~\ref{def:refined:nbhd}) with $\mathcal{B} \prec \mathcal{A}$ and $\mathcal{Q} \subset \mathcal{P}$ (see Lemma~\ref{lem:balanced_to_refined}). The minimality of $t$ is crucial in the proofs of Lemmas~\ref{prop:RNF:exists} and~\ref{lem:balanced_to_refined}. Having completed these preliminaries, we then construct $\mathcal{C}$ by choosing a path from $x$ to $B_t$, a zigzag path of length $2\ell - 2t$ between $B_t$ and $B_{t-1}$, and a path in $\mathcal{Q}$ back to $x$. We use the properties of a refined $t$-neighbourhood to show that there are many such cycles, and to control the number of cycles which contain a saturated set of edges. Finally, we use our assumption that $\mathcal{H}$ is good, together with the pigeonhole principle, to show that the collection $\mathcal{C}$ is not contained in $\mathcal{H}$, as required. \subsection{Balanced \texorpdfstring{$t$}{t}-neighbourhoods}\label{sec:balanced} In this section we will lay the groundwork for the proof of Proposition~\ref{prop:finding:cycles} by finding a vertex in $V(G)$ with a particularly `well-behaved' collection of paths of (some well-chosen) length $t$ leaving it. 
To avoid repetition, let us fix integers $n,k \in \mathbb{N}$ with $k \geqslant k_0$, a graph $G$ with at most $n$ vertices and at least $k n^{1 + 1/\ell}$ edges, and a $2\ell$-uniform hypergraph $\mathcal{H}$ on $E(G)$ which is good with respect to $(\delta,k,\ell,n)$, and has fewer than $\delta k^{2\ell} n^2$ edges. By choosing a subgraph of~$G$ if necessary (and weakening the bound on $e(G)$ slightly), we may assume that \begin{equation} \label{eq:Delta1_bounded} d_\mathcal{H}(e) \,<\, 4\ell \delta \cdot k^{2\ell-1} n^{1-1/\ell} \quad\mbox{ for every $e\in E(G)$,} \end{equation} since $\sum_{e \in E(G)} d_\mathcal{H}(e) = 2\ell \cdot e(\mathcal{H}) \leqslant 2\ell \delta k^{2\ell} n^2$, and so at most $kn^{1+1/\ell}/2$ edges of $G$ can violate~\eqref{eq:Delta1_bounded}. We may similarly assume that~$G$ has minimum degree at least $C \varepsilon(\ell) k n^{1/\ell}$, since repeatedly deleting vertices of smaller degree removes fewer than $C \varepsilon(\ell) k n^{1+1/\ell}$ edges in total. The first step is to define, for each vertex $x \in V(G)$, a distance $t(x) \in [2,\ell]$ at which the neighbourhood of $x$ becomes sufficiently concentrated. \begin{defn}\label{defn:concentrated_nbd} Let $x \in V(G)$, and let $\mathcal{A} = ( A_1, \ldots, A_t )$ be a collection of (not necessarily disjoint) subsets of $V(G)$. We say that $\mathcal{A}$ is a {\em concentrated $t$-neighbourhood of~$x$} if \begin{equation}\label{eq:concentrated} |A_t| \leqslant k^{(\ell-t)/(\ell-1)} n^{t/\ell}, \end{equation} but $|N(v) \cap A_i| \geqslant C\varepsilon(t) k n^{1/\ell}$ for every $v \in A_{i-1}$ and every $i \in [t]$, where $A_0 = \{x\}$. Define $t(x)$ to be the minimal $t \geqslant 2$ such that there exists a concentrated $t$-neighbourhood of~$x$ in $G$ (set $t(x) = \infty$ if none exists), and define $t(G) = \min\{ t(x) : x \in V(G) \}$. \end{defn} The following easy observation uses the fact that $\delta(G) \geqslant C \varepsilon(\ell) k n^{1/\ell}$. \begin{lemma}\label{lem:concentrated:exists} $t(x) \leqslant \ell$ for every $x \in V(G)$. 
\end{lemma} \begin{proof} The proof is an easy consequence of the definition: simply set $A_i = N(A_{i-1})$ for each $1 \leqslant i \leqslant \ell$. Then $|N(v) \cap A_i| = |N(v)| \geqslant C \varepsilon(\ell) k n^{1/\ell}$ for every $v \in A_{i-1}$ and every $i \in [\ell]$, by our assumption on the minimum degree of $G$, and the condition~\eqref{eq:concentrated} holds trivially for $t = \ell$, since $|A_\ell| \leqslant n$. \end{proof} As noted above, the cycles in $\mathcal{C}$ will all pass through some vertex $x$ with $t(x) = t(G)$. The next step is to show that, given such a vertex $x$, there exists a `well-behaved' collection of paths in $G$, each starting at $x$ and of length $t(G)$. The following definition gathers the properties which we will require this collection to possess. \begin{defn}[Balanced $t$-neighbourhoods]\label{def:balanced:nbhd} Let $x \in V(G)$ and $2 \leqslant t \in \mathbb{N}$. Set $A_0 = \{x\}$, and suppose that \begin{itemize} \item $\mathcal{A} = ( A_1, \ldots, A_t )$ is a collection of (not necessarily disjoint) sets of vertices of $G$, \smallskip \item $\mathcal{P}$ is a collection of paths in~$G$ of the form $(x,u_1, \ldots, u_t)$, with $u_i \in A_i$ for each $i \in [t]$. \end{itemize} We will say that the pair $(\mathcal{A},\mathcal{P})$ forms a {\em balanced $t$-neighbourhood of $x$} if the following conditions hold: \begin{enumerate} \item[$(i)$] The first and last levels are not too large, that is: \begin{equation} |A_1| \, \leqslant \, kn^{1/\ell} \qquad \text{and} \qquad |A_t| \, \leqslant \, k^{(\ell-t)/(\ell-1)}n^{t/\ell}. \end{equation} \item[$(ii)$] For every $i,j$ with $0 \leqslant i < j \leqslant t$ and $(i,j) \ne (0,t)$, and every pair $u\in A_i$, $v \in A_j$, \begin{equation}\label{eq:def:prop5} |\mathcal{P}_{i,j}[u \to v]| \, \leqslant \, k^{(j-i-1)\ell/(\ell-1)}. \end{equation} \item[$(iii)$] The branching factor of~$\mathcal{P}$ is at most $C\varepsilon(t)kn^{1/\ell}$. 
\end{enumerate} \end{defn} Note that in a balanced $t$-neighbourhood $\mathcal{P}$ could be empty, so we will always additionally require a lower bound (of order $(k n^{1/\ell})^t$) on the number of paths; however, the exact value of this lower bound will vary depending on the situation. The purpose of condition~$(i)$ is to allow us to apply the pigeonhole principle later on; conditions~$(ii)$ and~$(iii)$ are designed to prevent too many paths from being removed when we `refine' the neighbourhood in Section~\ref{sec:finding:cycles}. The main objective of this subsection is to prove the following lemma. Recall that the graph $G$ and the hypergraph $\mathcal{H}$ were fixed earlier, and set $t := t(G)$. \enlargethispage{\baselineskip} \begin{lemma}\label{prop:RNF:exists} Let $x \in V(G)$. If $t(x) = t$, then $G$ contains a balanced $t$-neighbourhood $(\mathcal{A},\mathcal{P})$ of $x$, with \begin{equation}\label{eq:number:of:paths} |\mathcal{P}| \, \geqslant \, \frac{C^t}{2} \big( \varepsilon(t) kn^{1/\ell} \big)^t, \end{equation} such that $\mathcal{P}$ avoids $\mathcal{F} = \mathcal{F}(\mathcal{H})$. \end{lemma} In order to find a collection of paths as in Lemma~\ref{prop:RNF:exists}, we simply take a large collection of paths of length $t$ from $x$ (which is guaranteed to exist by Definition~\ref{defn:concentrated_nbd}), and then remove paths until condition~$(ii)$ holds. The key point is that, if too many paths are removed in this second step, then we will be able to show that $t(x) < t$, which is a contradiction. The following easy lemma is the key tool in this deduction. \begin{lemma}\label{lem:paths_to_concentrated_nbd} Let $\mathcal{R}$ be a collection of paths of length $s$ in $G$ from a vertex $u \in V(G)$ to a set $B \subset V(G)$. If $|B| \leqslant k^{(\ell-s)/(\ell-1)}n^{s/\ell}$, $$|\mathcal{R}| \, \geqslant \, 2Cs \cdot \varepsilon(s) \big( kn^{1/\ell} \big)^s$$ and $\mathcal{R}$ has branching factor at most $kn^{1/\ell}$, then $t(u) \leqslant s$. 
\end{lemma} \begin{proof} For each $i \in [s]$, set $A_i$ equal to the collection of $i$th vertices of the paths in a collection $\mathcal{R}' \subset \mathcal{R}$ obtained as follows: simply remove (repeatedly) from each $A_i$, any vertex whose $i$-branching factor (in the remaining paths) is less than $C \varepsilon(s) k n^{1/\ell}$, and remove from $\mathcal{R}$ all paths with this as their $i$th vertex. If $\mathcal{R}'$ is non-empty, then it follows by construction that $\mathcal{A} = ( A_1, \ldots, A_s )$ is a concentrated $s$-neighbourhood of~$u$, and thus $t(u) \leqslant s$, as claimed. It therefore suffices to show that not all paths are destroyed in the process described above. To see this, simply note that the number of paths destroyed is at most $$Cs \cdot \varepsilon(s) \big( k n^{1/\ell} \big)^s \, \leqslant \, |\mathcal{R}| / 2,$$ since each destroyed path passes through a vertex of branching factor at most $C \varepsilon(s) k n^{1/\ell}$, and the branching factor of every other vertex is at most $k n^{1/\ell}$. \end{proof} We are now ready to prove Lemma~\ref{prop:RNF:exists}. \begin{proof}[Proof of Lemma~\ref{prop:RNF:exists}] Let $\mathcal{A}$ be a concentrated $t$-neighbourhood of $x$, where $t(x) = t(G) = t$. We may and will assume that $|A_1| \leqslant kn^{1/\ell}$, since if~$A_1$ is larger than this, then we can remove vertices from~$A_1$ without destroying the concentrated neighbourhood property. Furthermore since $\mathcal{A}$ is a concentrated $t$-neighbourhood, we have that $|A_t| \leqslant k^{(\ell-t)/(\ell-1)}n^{t/\ell}$. For every $i \in [t]$ and every $v \in A_{i-1}$, where $A_0 = \{x\}$, select an arbitrary subset \[ Q(v) \subset N(v) \cap A_i \] of size $|Q(v)| = C \varepsilon(t) k n^{1/\ell}$. 
Let~$\mathcal{Q}$ be the set of all paths of the form $(x,u_1, \ldots, u_t)$, generated as follows: For each $i=1,2, \ldots, t$, select $u_i \in Q(u_{i-1})$ satisfying\footnote{Recall that $L^{(1)}_\mathcal{F}\big( \cdot \big)$ is a collection of edges of $G$.} $$u_i \not\in V(S_i) \qquad\mbox{ and }\qquad \{u_{i-1}, u_i\} \not\in L^{(1)}_\mathcal{F}\big( E(S_i) \big),$$ where $S_i$ is the path $(x,u_1, \ldots, u_{i-1})$. We claim that \begin{equation} \label{eq:initial_Q_is_large} |\mathcal{Q}| \geqslant \frac{3}{4} \big( C\varepsilon(t) kn^{1/\ell} \big)^t. \end{equation} To see this, simply recall that $\big| L^{(1)}_\mathcal{F}\big( E(S_i) \big) \big| \leqslant 2^{5\ell} \delta k^{\ell/(\ell-1)}$, by Lemma~\ref{lem:size_of_link}, and so the number of choices for the vertex~$u_i$ is at least \[ C \varepsilon(t) k n^{1/\ell} - 2\ell - 2^{5\ell} \delta k^{\ell/(\ell-1)} \, \geqslant \, \frac{C \varepsilon(t)}{(4/3)^{1/t}} \cdot k n^{1/\ell}, \] where we used the inequalities $\delta \ll \varepsilon(t)$ and $k \leqslant n^{(\ell-1)/\ell}$ (the latter implies that $k^{\ell/(\ell-1)} = k \cdot k^{1/(\ell-1)} \leqslant k n^{1/\ell}$). Now we refine the collection~$\mathcal{Q}$ to produce the collection~$\mathcal{P}$. If there exists $0 \leqslant i < j \leqslant t$ with $(i,j) \ne (0,t)$, and a path $Q = (x,u_1,\ldots,u_t) \in \mathcal{Q}$ such that \begin{equation}\label{eq:pathalg1} |\mathcal{Q}_{i,j}[u_i \to u_j]| > k^{(j-i-1)\ell/(\ell-1)}, \end{equation} then choose such a path $Q$, and remove $Q$ from $\mathcal{Q}$. Repeat this until there are no such paths remaining in~$\mathcal{Q}$, and let $\mathcal{P}$ be the set of paths remaining at the end. By construction, $\mathcal{P}$ satisfies conditions~$(ii)$ and~$(iii)$ of Definition~\ref{def:balanced:nbhd}, and moreover it avoids~$\mathcal{F}$, since none of the edges of $G$ are saturated (by~\eqref{eq:Delta1_bounded}), and since we chose $u_i$ so that $\{u_{i-1}, u_i\} \not\in L^{(1)}_\mathcal{F}\big( E(S_i) \big)$. 
It only remains to prove~\eqref{eq:number:of:paths}; to do so, we will use the fact that $t(x) = t(G)$ to bound the number of paths removed from $\mathcal{Q}$. For each $0 \leqslant i < j \leqslant t$, let us say that an ordered pair of vertices $(u_i,u_j)$ is {\em unbalanced} if ~\eqref{eq:pathalg1} holds (in the original family $\mathcal{Q}$), and set $$\mathcal{R}(i,j) \,=\, \big\{ Q = (x,u_1,\ldots,u_j) \in \mathcal{Q}_{0,j} \,:\, (u_i, u_j) \textup{ is unbalanced} \big\}.$$ We claim that \begin{equation} \label{eq:size_of_Rij} |\mathcal{R}(i,j)| \leqslant \displaystyle\frac{\big( C\varepsilon(t)kn^{1/\ell} \big)^j}{4t^2} \end{equation} for every $0 \leqslant i < j \leqslant t$ with $(i,j) \ne (0,t)$. To prove~\eqref{eq:size_of_Rij}, observe first that $$|\mathcal{R}(i,j)| \leqslant \sum_{u \in A_i} \big| \mathcal{R}(i,j)_{0,i}[x \to u] \big| \cdot \big| \mathcal{R}(i,j)_{i,j}[u \to A_j] \big|,$$ and that $\sum_{u \in A_i} |\mathcal{R}(i,j)_{0,i}[x \to u]| \leqslant \big( C \varepsilon(t)kn^{1/\ell} \big)^i$, since $\mathcal{R}(i,j)_{0,i} \subset \mathcal{Q}_{0,i}$, and so we have at most $C \varepsilon(t)kn^{1/\ell}$ choices at each step. Hence, if $|\mathcal{R}(i,j)| \geqslant (1/4t^2) \big( C \varepsilon(t)kn^{1/\ell} \big)^j$, then there must exist a vertex $u \in A_i$ such that \begin{equation}\label{eq:Rpaths:utoBj} \big| \mathcal{R}(i,j)_{i,j}[u \to A_j] \big| \, \geqslant \, \frac{1}{4t^2} \cdot \big( C \varepsilon(t) kn^{1/\ell} \big)^{j-i} \, \geqslant \, 2Ct \cdot \varepsilon(j-i) \big( kn^{1/\ell} \big)^{j-i} \end{equation} since $j - i \leqslant t - 1$, and therefore $\varepsilon(t)^{j-i} \gg t^3 \cdot \varepsilon(j-i)$. We will show that $t(u) < t(x)$. To do so, we will apply Lemma~\ref{lem:paths_to_concentrated_nbd} to the collection $\mathcal{R}(i,j)_{i,j}[u \to A_j]$. 
The inequality~\eqref{eq:Rpaths:utoBj} gives us a lower bound on the number of paths, and the branching factor follows from the definition of $\mathcal{Q}$; however, we also require an upper bound on the size of the set $$R_j(u) \, = \, \big\{ u_j \in A_j \,:\, \textup{there exists a path } (x,u_1,\ldots,u_j) \in \mathcal{R}(i,j) \textup{ with } u_i = u \big\},$$ that is, the set of end-vertices of the paths in $\mathcal{R}(i,j)$ whose $i$th vertex is $u$. Observe that, since each pair $(u,u_j)$ is unbalanced, we have \begin{equation}\label{eq:boundingRju} |R_j(u)| \, \leqslant \, \frac{ \big( C \varepsilon(t) k n^{1/\ell} \big)^{j-i} }{k^{(j-i-1)\ell/(\ell-1)} } \, \leqslant \, k^{(\ell-j+i)/(\ell-1)}n^{(j-i)/\ell}. \end{equation} By Lemma~\ref{lem:paths_to_concentrated_nbd}, it follows from~\eqref{eq:Rpaths:utoBj} and~\eqref{eq:boundingRju} that $t(u) \leqslant j - i < t(x)$, as claimed. This contradicts the minimality of $t(x)$, and hence proves~\eqref{eq:size_of_Rij}. Finally, it follows from~\eqref{eq:size_of_Rij}, and the branching factor of $\mathcal{Q}$, that at most $$\sum_{i,j} \big| \mathcal{R}(i,j) \big| \cdot \big( C \varepsilon(t) k n^{1/\ell} \big)^{t-j} \, \leqslant \, \frac{1}{4} \cdot \big( C\varepsilon(t)kn^{1/\ell} \big)^t$$ paths are removed. Combining this with~\eqref{eq:initial_Q_is_large} gives~\eqref{eq:number:of:paths}, as required. \end{proof} Before moving on to the meat of the proof -- the construction of the family of cycles $\mathcal{C}$ -- let us note two simple but key properties of balanced neighbourhoods. \begin{lemma} \label{lem:counting_paths_through_vertices} Let $x \in V(G)$, and let $(\mathcal{A},\mathcal{P})$ be a balanced $t$-neighbourhood of $x$. Let $w \in A_t$, and let $v \in V(G) \setminus \{x,w\}$. There are at most \[ \ell\cdot k^{(t-2)\ell/(\ell-1)} \] paths $P \in \mathcal{P}[x \to w]$ with $v \in V(P)$. 
\end{lemma} \begin{proof} For each $1 \leqslant j \leqslant t - 1$, we count the number of paths $(x,u_1,\ldots,u_{t-1},w)$ in $\mathcal{P}[x \to w]$ such that $u_j = v$. By property~$(ii)$ of Definition~\ref{def:balanced:nbhd}, we obtain a bound of $$\sum_{j = 1}^{t-1} |\mathcal{P}_{0,j}[x \to v]| \cdot |\mathcal{P}_{j,t}[v \to w]| \, \leqslant \, (t-1) \cdot k^{(t-2)\ell/(\ell-1)} \, \leqslant \, \ell \cdot k^{(t-2)\ell/(\ell-1)},$$ as claimed. \end{proof} \begin{lemma} \label{lem:counting_paths_through_sets} Let $x \in V(G)$, and let $(\mathcal{A},\mathcal{P})$ be a balanced $t$-neighbourhood of $x$. Let $\sigma \subset E(G)$ with $1 \leqslant |\sigma| \leqslant t - 1$. For each $w \in A_t$, there are at most \[ t^t \cdot k^{(t-|\sigma|-1)\ell/(\ell-1)} \] paths $P \in \mathcal{P}[x \to w]$ with $\sigma \subset E(P)$. \end{lemma} \begin{proof} When $|\sigma| = t-1$ the result is trivial, since there is at most one path $P \in \mathcal{P}[x \to w]$ with $\sigma \subset E(P)$. Therefore, let us assume that $1 \leqslant |\sigma| \leqslant t - 2$. We will count the number of paths in $\mathcal{P}[x \to w]$ with $\sigma \subset E(P)$ as follows. Observe that, if $\sigma \subset E(P)$ for some path $P \in \mathcal{P}$, then $\sigma$ may be decomposed into a sequence of paths between disjoint collections of sets in $\mathcal{A}$. Since property~$(ii)$ of Definition~\ref{def:balanced:nbhd} gives us a bound on the number of ways of travelling between a given vertex of $A_i$ and a given vertex of $A_j$ along a path of $\mathcal{P}$, it is easy to deduce the claimed bound. To spell out the details, let $\sigma = \sigma_1 \cup \ldots \cup \sigma_r$, where $\sigma_q$ is a path between $u_q \in A_{i(q)}$ and $v_q \in A_{j(q)}$, and $i(q) < j(q) < i(q')$ for each $1 \leqslant q < q' \leqslant r$. 
Note that the sequence $\big( i(1), j(1), \ldots, i(r), j(r) \big)$ might not be unique, since the sets of $\mathcal{A}$ may not be disjoint, but in any case there are at most $t^t$ possible sequences for each set $\sigma$. By~property~$(ii)$ of Definition~\ref{def:balanced:nbhd}, and setting $v_0 = x$, $u_{r+1} = w$, $j(0) = 0$ and $i(r+1) = t$, it follows that the number of paths in~$\mathcal{P}[x \to w]$ containing~$\sigma$ is at most \[ t^t \prod_{q=0}^r \big| \mathcal{P}_{j(q),i(q+1)}[v_q \to u_{q+1}] \big| \, \leqslant \, t^t \big( k^{\ell / (\ell-1)} \big)^{\sum_{q=0}^r \max\{ i(q+1) - j(q) - 1, 0\}}, \] which is at most the bound stated in the lemma, since $\sum_{q=0}^r \big( i(q+1) - j(q) \big) = t - |\sigma|$. \end{proof} \subsection{Refined \texorpdfstring{$t$}{t}-neighbourhoods} In this section we will show how to `refine' a balanced $t$-neighbourhood so that each vertex has a sufficiently large neighbourhood, and each vertex of the final set receives many paths, without destroying too many of the paths of $\mathcal{P}$. \begin{defn}[Refined $t$-neighbourhood]\label{def:refined:nbhd} Let $(\mathcal{B}, \mathcal{Q})$ be a balanced $t$-neighbourhood of $x$. We say that $(\mathcal{B}, \mathcal{Q})$ form a {\em refined $t$-neighbourhood} of $x$ if the following conditions also hold: \begin{enumerate} \item[$(i)$] For every $i \in \{0,1,\ldots,t-1\}$ and every $u \in B_i$, $$|N(u)\cap B_{i+1}| \, \geqslant \, \varepsilon(t) k n^{1/\ell}.$$ \item[$(ii)$] For every $v \in B_t$, $$|N(v)\cap B_{t-1}| \, \geqslant \, \varepsilon(t) k^{\ell/(\ell-1)}.$$ \item[$(iii)$] For every $v \in B_t$, $$|\mathcal{Q}[x \to v]| \, \geqslant \, \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}.$$ \end{enumerate} \end{defn} The properties above will allow us to find many cycles through $x$ of the form described earlier (a path out from $x$, a zig-zag path, and a path back to $x$). The following lemma allows us to find a refined neighbourhood inside a balanced neighbourhood. 
Recall that $t = t(G)$. \begin{lemma}\label{lem:balanced_to_refined} If $(\mathcal{A}, \mathcal{P})$ is a balanced $t$-neighbourhood of $x$, and $$|\mathcal{P}| \, \geqslant \, \frac{C^t}{2} \big( \varepsilon(t) kn^{1/\ell} \big)^t,$$ then there exists a refined $t$-neighbourhood $(\mathcal{B}, \mathcal{Q})$ of $x$, with $\mathcal{B} \prec \mathcal{A}$ and $\mathcal{Q} \subset \mathcal{P}$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:balanced_to_refined}] Repeatedly perform the following three steps until no further vertices are removed. \begin{enumerate} \item[1.] If there exists $i \in \{1,\ldots,t-1\}$ and a vertex $v \in A_i$ with $$|N(v)\cap A_{i+1}| < \varepsilon(t) kn^{1/\ell},$$ then remove $v$ from $A_i$, and remove all paths $P = (x,u_1,\ldots,u_t) \in \mathcal{P}$ with $u_i = v$. \item[2.] If there exists a vertex $v \in A_t$ with $$|N(v)\cap A_{t-1}| < \varepsilon(t) k^{\ell/(\ell-1)},$$ then remove $v$ from $A_t$, and remove all paths $P = (x,u_1,\ldots,u_t) \in \mathcal{P}$ with $u_t = v$. \item[3.] If there exists a vertex $v \in A_t$ with $$|\mathcal{P}[x \to v]| < \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)},$$ then remove $v$ from $A_t$, and remove all paths $P = (x,u_1,\ldots,u_t) \in \mathcal{P}$ with $u_t = v$. \end{enumerate} Let~$\mathcal{B} \prec \mathcal{A}$ and $\mathcal{Q} \subset \mathcal{P}$ be the collection of sets and paths at the end of this process. We claim that $(\mathcal{B},\mathcal{Q})$ is a refined $t$-neighbourhood of $x$. Indeed, properties~$(ii)$ and~$(iii)$ of Definition~\ref{def:refined:nbhd}, and also property~$(i)$ for $i \neq 0$, hold by construction, since we would have removed any offending vertex or path. It therefore only remains to prove that property~$(i)$ holds for $i=0$, i.e., that $|N(x) \cap B_1| \geqslant \varepsilon(t) k n^{1/\ell}$. 
In order to prove this, we will in fact show that \begin{equation}\label{eq:ref:numberofpathsinP:repeat} |\mathcal{Q}| \, \geqslant \, \frac{C^t}{4} \big( \varepsilon(t) kn^{1/\ell} \big)^t. \end{equation} Since, by property~$(iii)$ of Definition~\ref{def:balanced:nbhd}, the branching factor of~$\mathcal{P}$ is at most $C\varepsilon(t)kn^{1/\ell}$, the required bound on the degree of $x$ follows immediately. In order to prove~\eqref{eq:ref:numberofpathsinP:repeat}, we shall consider each of the three steps, showing for each that not too many paths are destroyed. We claim first that few paths are removed in Steps~1 and~3. Indeed, in Step~1 we remove at most \[ t \cdot \varepsilon(t) k n^{1/\ell} \cdot \big( C \varepsilon(t) k n^{1/\ell} \big)^{t-1} \, = \, t \cdot C^{t-1} \big( \varepsilon(t) k n^{1/\ell} \big)^t \] paths from~$\mathcal{P}$, and in Step~3 we remove at most \[ \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)} |A_t| \, \leqslant \, \big( \varepsilon(t) k n^{1/\ell} \big)^t \] paths from~$\mathcal{P}$, since $|A_t| \leqslant k^{(\ell-t)/(\ell-1)}n^{t/\ell}$, by property~$(i)$ of Definition~\ref{def:balanced:nbhd}. Finally, we claim that fewer than $2 C^{t-1} \big( \varepsilon(t)kn^{1/\ell} \big)^t$ paths are destroyed in Step~2; we will again use the minimality of $t = t(x)$. Indeed, let $Z \subset A_t$ and $\mathcal{P}(Z)$ denote the collection of vertices and paths (respectively) removed in Step~2, and consider the set $$Y \, = \, \big\{ v \in A_{t-1} \,:\, v \textup{ has $(t-1)$-branching factor at least $\varepsilon(t) k n^{1/\ell}$ in } \mathcal{P}(Z) \big\}.$$ \noindent By counting edges between $Y$ and $Z$, and recalling that $|Z| \leqslant |A_t| \leqslant k^{(\ell-t)/(\ell-1)} n^{t/\ell}$, we have \begin{equation}\label{eq:Ybound} |Y| \, \leqslant \, \frac{|Z| \cdot \varepsilon(t) k^{\ell/(\ell-1)}}{\varepsilon(t) k n^{1/\ell}} \, \leqslant \, k^{(\ell - t + 1)/(\ell-1)} n^{(t-1)/\ell}. 
\end{equation} Moreover, by the definition of $Y$ and our bound on the branching factor of $\mathcal{P}$, at most $C^{t-1} (\varepsilon(t) k n^{1/\ell})^t$ paths in~$\mathcal{P}(Z)$ have their penultimate vertex in~$A_{t-1} \setminus Y$ and final vertex in~$Z$. Hence, if more than $2 C^{t-1} \big( \varepsilon(t)kn^{1/\ell} \big)^t$ paths were destroyed via the removal of vertices in $Z$, then there exists a set of $C^{t-1} \big( \varepsilon(t) k n^{1/\ell} \big)^{t}$ paths in $\mathcal{P}(Z)$ whose penultimate vertex is in $Y$, and hence (again using our bound on the branching factor) there exists a set of at least \begin{equation}\label{eq:numberofpathsthroughY} \frac{C^{t-1} \big( \varepsilon(t) k n^{1/\ell} \big)^{t}}{C \varepsilon(t) k n^{1/\ell} } \, \geqslant \, 2C t \cdot \varepsilon(t-1) (k n^{1/\ell})^{t-1} \end{equation} paths of length $t-1$ in~$G$ from $x$ to~$Y$, since $\varepsilon(t-1) \ll \varepsilon(t)^t$. Combining~\eqref{eq:Ybound} and~\eqref{eq:numberofpathsthroughY} with Lemma~\ref{lem:paths_to_concentrated_nbd}, it follows that $t(x) \leqslant t - 1$, which contradicts our assumption that $t(x) = t$. Putting the pieces together, and recalling that $C \geqslant 4(\ell+3)$, it follows that at most $$\Big( (t + 2) C^{t-1} + 1 \Big) \big( \varepsilon(t) k n^{1/\ell} \big)^t \, \leqslant \, \frac{C^t}{4} \big( \varepsilon(t) k n^{1/\ell} \big)^t$$ paths were removed in Steps~1,~2 and~3 combined, and hence~\eqref{eq:ref:numberofpathsinP:repeat} holds. By the comments above, it follows that $(\mathcal{B},\mathcal{Q})$ is a refined $t$-neighbourhood of~$x$, as required. \end{proof} \subsection{Finding cycles in refined \texorpdfstring{$t$}{t}-neighbourhoods}\label{sec:finding:cycles} We are now ready to complete the proof of the key proposition. \begin{proof}[Proof of Proposition~\ref{prop:finding:cycles}] Fix an arbitrary $x \in V(G)$ with $t(x) = t(G) = t$.
By Lemma~\ref{prop:RNF:exists}, there exists a balanced $t$-neighbourhood $(\mathcal{A},\mathcal{P})$ of~$x \in V(G)$, avoiding $\mathcal{F}=\mathcal{F}(\mathcal{H})$. Apply Lemma~\ref{lem:balanced_to_refined} to obtain a refined $t$-neighbourhood $(\mathcal{B}, \mathcal{Q})$ with $\mathcal{B} \prec \mathcal{A}$ and $\mathcal{Q} \subset \mathcal{P}$. To find a cycle $C$ in~$G$ that is not already in~$\mathcal{H}$ and with $\mathcal{H} \cup \{C\}$ good, we will find a large collection of cycles, each cycle containing an edge between~$x$ and~$B_1$ and avoiding all saturated sets of edges. Note that, since $|B_1| \leqslant kn^{1/\ell}$, by property~$(i)$ of Definition~\ref{def:balanced:nbhd}, it is sufficient to find such a collection of cycles~$\mathcal{C}$ with \begin{equation} \label{eq:size_of_C} |\mathcal{C}| \geqslant 4\ell \delta \cdot k^{2\ell-1} n^{1-1/\ell} \cdot kn^{1/\ell} = 4\ell \delta \cdot k^{2\ell} n. \end{equation} Indeed, by the pigeonhole principle and our bound~\eqref{eq:Delta1_bounded} on the degree of a single edge in $\mathcal{H}$, it follows from~\eqref{eq:size_of_C} that at least one of these cycles will not already be in~$\mathcal{H}$. Since no cycle in $\mathcal{C}$ contains a saturated set of edges, we can add this cycle to~$\mathcal{H}$ and obtain a hypergraph that is still good. As noted above, each cycle in $\mathcal{C}$ will be formed by paths~$P$ and~$Q$, constructed as follows: $P$, of length $2\ell - t$, goes from~$x$ to~$B_t$, and then alternates between $B_t$ and $B_{t-1}$, finishing at some vertex $v_{2\ell-t} \in B_t$, and $Q \in \mathcal{Q}[x \to v_{2\ell-t}]$ is chosen so that $P \cup Q$ does not contain any forbidden set. For technical reasons (see Claim~\ref{claim:m_of_j}, below), it is important that there are not too many choices for the path $P$. 
Therefore, let us first choose sets $$X_i(u) \subset N(u) \cap B_{i+1} \quad \text{with} \quad |X_i(u)| = \varepsilon(t) k n^{1/\ell}$$ for each $i \in \{0,\ldots,t-1\}$ and each $u \in B_i$, and $$X_t(u) \subset N(u) \cap B_{t-1}\quad \text{with} \quad |X_t(u)| = \varepsilon(t) k^{\ell/(\ell-1)}$$ for each $u \in B_t$. (These exist by properties~$(i)$ and~$(ii)$ of Definition~\ref{def:refined:nbhd}.) We will choose the vertices of the path $P$ from the sets $X_i(u)$. \enlargethispage{\baselineskip} To be precise, we perform the following algorithm: \begin{Alg} Let~$\mathcal{C}$ be the set of cycles obtained via the following process. \begin{enumerate} \item[0.] Set $\mathcal{C} = \emptyset$. Now repeat the following two steps until STOP.\smallskip \item[1.] If possible, generate a path $(v_0, v_1, \ldots, v_{2\ell-t})$ not previously generated in this step as follows. Let $v_0 = x$, and define \[ s(i) = \begin{cases} i &\mbox{if } i \in \{0, 1, \ldots, t\}, \\ t-1 &\mbox{if } i \in \{t+1, t+3, \ldots, 2\ell-t-1\}, \\ t &\mbox{if } i \in \{t+2, t+4, \ldots, 2\ell-t\}. \end{cases} \] Now, for $i = 0,\ldots, 2\ell-t-1$, select $v_{i+1}$ from $X_{s(i)}(v_{i}) \subset N(v_{i}) \cap B_{s(i+1)}$ such that \begin{equation} \label{eq:path_extn_condn} v_{i+1} \not \in \big\{ v_0, \ldots, v_i \big\} \qquad \mbox{and} \qquad \{ v_i, v_{i+1} \} \not\in L^{(1)}_\mathcal{F}\big( E(P_i) \big), \end{equation} where $P_i$ is the path $(v_0, \ldots, v_i)$. Set $P = P_{2\ell-t}$. \\ Otherwise (that is, if no such path~$P$ exists that has not already been generated), then STOP.\smallskip \item[2.] 
Let $\mathcal{Q}(P) \subset \mathcal{Q}[x \to v_{2\ell-t}]$ be an arbitrary set of exactly $\varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}$ paths ending at $v_{2\ell-t}$, and let $\mathcal{Q}'(P) \subset \mathcal{Q}(P)$ denote the collection of paths $Q \in \mathcal{Q}(P)$ which use no vertex of $\{ v_1, \ldots, v_{2\ell-t-1} \}$ and avoid $L_\mathcal{F} ( E(P) )$.\\ \noindent For each path $Q \in \mathcal{Q}'(P)$, join the paths~$P$ and~$Q$ to form a cycle and add this to~$\mathcal{C}$. \end{enumerate} \end{Alg} As noted above, in order to complete the proof it is sufficient to show that~\eqref{eq:size_of_C} holds. \newcounter{ClaimsInCycleFinding} \medskip \refstepcounter{ClaimsInCycleFinding} \noindent \textbf{Claim \arabic{ClaimsInCycleFinding}:}\label{claim:many_choices_for_P} There are at least $$\frac{1}{2} \cdot \big( \varepsilon(t)k n^{1/\ell} \big)^\ell \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t}$$ choices for the path~$P$ in Step~1. \begin{proof}[Proof of Claim~\ref{claim:many_choices_for_P}] We will show that at most $2\ell + 2^{5\ell} \delta k^{\ell/(\ell-1)}$ choices are excluded at each step. To see this, note first that at most $2\ell$ choices are excluded by the condition that $v_{i+1} \not \in \big\{ v_0, \ldots, v_i \big\}$. Moreover, by Lemma~\ref{lem:size_of_link} we have \[ \big| L_\mathcal{F}^{(1)}\big( E(P_i) \big) \big| \, \leqslant \, 2^{2\ell + e(P_i) + 1} \cdot \delta k^{\ell/(\ell-1)} \, \leqslant \, 2^{5\ell} \delta k^{\ell/(\ell-1)}, \] as required. Hence, if $i \in \{1,\ldots,t-1\} \cup \{t+1, t+3, \ldots, 2\ell-t-1\}$, then there are at least \[ \varepsilon(t) kn^{1/\ell} - \Big( 2\ell + 2^{5\ell} \delta k^{\ell/(\ell-1)} \Big) \, \geqslant \, \frac{1}{2^{1/2\ell}} \cdot \varepsilon(t) kn^{1/\ell} \] choices for $v_{i+1}$, where the last inequality follows since $k \leqslant n^{(\ell-1)/\ell}$ and $\delta \ll \varepsilon(t)$. 
Similarly, if $i \in \{t, t+2, \ldots, 2\ell-t-2\}$, then there are at least \[ \varepsilon(t) k^{\ell/(\ell-1)} - \Big( 2\ell + 2^{5\ell} \delta k^{\ell/(\ell-1)} \Big) \, \geqslant \, \frac{1}{2^{1/2\ell}} \cdot \varepsilon(t) k^{\ell/(\ell-1)} \] choices for $v_{i+1}$, again using the fact that~$\delta \ll \varepsilon(t)$. It follows immediately that the total number of choices for~$P$ is at least $$\frac{1}{2} \cdot \big( \varepsilon(t)k n^{1/\ell} \big)^\ell \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t}$$ as claimed. \end{proof} Let $\mathcal{D}$ denote the collection of paths $P$ generated in Step~1 for which there are at least $$\frac{1}{4} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}$$ paths $Q \in \mathcal{Q}(P)$ with $E(Q) \in L_\mathcal{F}(E(P))$. We will show later that $|\mathcal{D}|$ is not too large. The next claim shows that if the path chosen in Step~1 satisfies $P \not\in \mathcal{D}$, then we have many choices for the path $Q$ in Step~2. \pagebreak \refstepcounter{ClaimsInCycleFinding} \label{claim:Q_prime_is_large} \noindent \textbf{Claim \arabic{ClaimsInCycleFinding}:} Let $P$ be a path chosen in Step~1. If $P \not\in \mathcal{D}$, then $$|\mathcal{Q}'(P)| \, \geqslant \, \frac{1}{2} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}.$$ \begin{proof}[Proof of Claim~\ref{claim:Q_prime_is_large}] Since $|\mathcal{Q}(P)| = \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}$, we are required to bound $|\mathcal{Q}(P) \setminus \mathcal{Q}'(P)|$ from above, that is, to bound the number of paths in $\mathcal{Q}(P)$ that either contain a vertex of $P$, or fail to avoid $L_\mathcal{F}(E(P))$. To do so, note first that, by Lemma~\ref{lem:counting_paths_through_vertices}, the number of paths in $\mathcal{Q}(P)$ which contain some vertex of $P$ is at most $$2\ell^2 \cdot k^{(t-2)\ell/(\ell-1)} \, \ll \, \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)},$$ where the last inequality follows since $k \geqslant k_0 \geqslant \varepsilon(t)^{-2\ell}$. 
So let $\sigma \in L_\mathcal{F}(E(P))$, and consider the paths in $\mathcal{Q}(P)$ which contain $\sigma$. Suppose first that $1 \leqslant | \sigma | \leqslant t - 1$. Then, by Lemma~\ref{lem:counting_paths_through_sets}, the number of paths in~$\mathcal{Q}(P)$ containing~$\sigma$ is at most \[ t^t k^{(t-|\sigma|-1)\ell/(\ell-1)}, \] and recall that, by Lemma~\ref{lem:size_of_link}, we have \[ \big| L_\mathcal{F}^{(|\sigma|)}( E(P) ) \big| \, \leqslant \, 2^{5\ell} \big( \delta k^{\ell/(\ell-1)} \big)^{|\sigma|}. \] Therefore, multiplying the above expressions, it follows that the number of paths in $\mathcal{Q}(P)$ which contain some $\sigma \in L_\mathcal{F}(E(P))$ with $1 \leqslant |\sigma| \leqslant t - 1$ is at most $$(2\ell)^{5\ell} \cdot \delta k^{(t-1)\ell/(\ell-1)} \, \ll \, \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)},$$ where the last inequality holds since $\delta \leqslant \varepsilon(t)^{2\ell}$. On the other hand, since $P \not\in \mathcal{D}$, there are at most $$\frac{1}{4} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}$$ paths in $\mathcal{Q}(P)$ which contain (and are therefore equal to) some $\sigma \in L_\mathcal{F}(E(P))$ with $|\sigma| = t$. Summing these three bounds, it follows that $$|\mathcal{Q}(P) \setminus \mathcal{Q}'(P)| \, \leqslant \, \frac{1}{2} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)},$$ as claimed. \end{proof} Finally, we need to show that $\mathcal{D}$ is not too large. In order to do so, we will need the following slightly technical bounds. 
\medskip \refstepcounter{ClaimsInCycleFinding} \label{claim:m_of_j} \noindent \textbf{Claim \arabic{ClaimsInCycleFinding}:} Given a set of edges $J \subset E(G)$ of size $|J| = j \geqslant 1$, and a vertex $v \in B_t$, there are at most \begin{equation*} m(j) \,:=\, \begin{cases} (2\ell)^{2\ell} \cdot \big( \varepsilon(t) kn^{1/\ell} \big)^{\ell-1} \cdot \big( \varepsilon(t)k^{\ell/(\ell-1)} \big)^{\ell-t-j} &\mbox{if } 1 \leqslant j \leqslant \ell-t, \\ (2\ell)^{2\ell} \cdot \big( \varepsilon(t) k n^{1/\ell} \big)^{2\ell-t-j-1} &\mbox{if } \ell - t < j < 2\ell - t \end{cases} \end{equation*} paths $P$ ending at $v$ with $J \subset E(P)$. \begin{proof}[Proof of Claim~\ref{claim:m_of_j}] Note that we have at most $(2\ell)^{2\ell}$ choices for the positions of the edges of $J$ in the path $P$. Let us fix such a choice, and count the corresponding paths. To do so, it suffices to observe that at least $j+2$ of the vertices of $P$ are fixed (these are the endpoints of edges in $J$, and the endpoints $x$ and $v$ of $P$), and that $kn^{1/\ell} \geqslant k^{\ell/(\ell-1)}$. The bound in the case $j > \ell - t$ now follows immediately, since we have at most $\varepsilon(t) k n^{1/\ell}$ choices for each not-yet-chosen vertex $v_i$. Moreover, the bound in the case $j \leqslant \ell - t$ also follows easily: we have $\varepsilon(t) k n^{1/\ell}$ choices for (at most) $\ell - 1$ of the not-yet-chosen vertices (since $j \geqslant 1$), and at most $\varepsilon(t)k^{\ell/(\ell-1)}$ choices for each of the remaining vertices. \end{proof} The key property of the bound $m(j)$ is that it satisfies \begin{equation}\label{eq:mj:bound} \delta \cdot m(j) \cdot k^{j\ell / (\ell - 1)} \, \leqslant \, \big( \varepsilon(t) kn^{1/\ell} \big)^{\ell-1} \cdot \big( \varepsilon(t)k^{\ell/(\ell-1)} \big)^{\ell-t} \end{equation} for every $j \geqslant 1$. Indeed, in either case of the definition of $m(j)$, using the inequality $k^{\ell/(\ell-1)} \leqslant kn^{1/\ell}$, one obtains $$\delta \cdot m(j) \cdot k^{j\ell / (\ell - 1)} \, \leqslant \, (2\ell)^{2\ell} \delta \varepsilon(t)^{-j} \cdot \big( \varepsilon(t) kn^{1/\ell} \big)^{\ell-1} \cdot \big( \varepsilon(t)k^{\ell/(\ell-1)} \big)^{\ell-t},$$ and $(2\ell)^{2\ell} \delta \varepsilon(t)^{-j} \leqslant (2\ell)^{2\ell} \varepsilon(t)^{2\ell - j} \leqslant 1$, since $\delta \leqslant \varepsilon(t)^{2\ell}$, $j \leqslant 2\ell - t - 1$ and $\varepsilon(t)$ is sufficiently small. We shall use the inequality~\eqref{eq:mj:bound} in the proof of Claim~\ref{claim:D_is_small}, below. We can now bound $|\mathcal{D}|$.
We shall do so by showing that if $\mathcal{D}$ were too large, then some edge of~$\mathcal{H}$ would be contained in too many cycles, contradicting our assumption that~$\mathcal{H}$ is good. In order to find such an edge, we will count disjoint pairs of edge sets $(J,Q)$, where $Q \in \mathcal{Q}$ and $E(Q) \cup J \in \mathcal{F}$. \medskip \refstepcounter{ClaimsInCycleFinding} \label{claim:D_is_small} \noindent \textbf{Claim \arabic{ClaimsInCycleFinding}:} $|\mathcal{D}| \leqslant \displaystyle\frac{1}{4} \cdot \big( \varepsilon(t)k n^{1/\ell} \big)^\ell \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t}$. \begin{proof}[Proof of Claim~\ref{claim:D_is_small}] Consider the collection of pairs $(J,Q)$ such that $Q \in \mathcal{Q}$, $J$ is a set of edges disjoint from $E(Q)$, and $E(Q) \cup J \in \mathcal{F}$. We first claim that, for some $1 \leqslant j \leqslant 2\ell - t - 1$, there are at least \begin{equation}\label{eq:countingRJ:lower} 2^{-3\ell} \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)} \cdot \frac{|\mathcal{D}|}{2\ell \cdot m(j)} \end{equation} distinct such pairs with $|J| = j$. To see this, first recall that if $P \in \mathcal{D}$, then (by definition) there are at least $\frac{1}{4} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)}$ paths $Q \in \mathcal{Q}(P)$ with $E(Q) \in L_\mathcal{F}(E(P))$. By the pigeonhole principle, it follows that for each such path $P$, there exists a set $\emptyset \ne f(P) \subset E(P)$ such that there are at least \begin{equation}\label{eq:countingRs} 2^{-3\ell} \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)} \end{equation} paths $Q \in \mathcal{Q}(P)$, each of which is disjoint from $f(P)$ and such that $E(Q) \cup f(P) \in \mathcal{F}$. By another application of the pigeonhole principle, there exists $1 \leqslant j \leqslant 2\ell - t - 1$ for which there are at least $|\mathcal{D}|/2\ell$ paths $P \in \mathcal{D}$ with $|f(P)| = j$. Now, by Claim~\ref{claim:m_of_j}, for each $v \in B_t$, each $j$-set is contained in at most $m(j)$ paths ending at~$v$.
Hence there are at least $|\mathcal{D}| / (2\ell \cdot m(j))$ distinct pairs $(J,v)$, where $v \in B_t$ and $f(P) = J$ for some path $P \in \mathcal{D}$ which ends at $v$. Since, for each such pair, there are at least~\eqref{eq:countingRs} paths $Q \in \mathcal{Q}[x \to v]$ with $E(Q) \cup J \in \mathcal{F}$, the bound~\eqref{eq:countingRJ:lower} follows. Now, to deduce that some edge from $x$ to $B_1$ is forbidden, recall first that if $E(Q) \cap J = \emptyset$ and $E(Q) \cup J \in \mathcal{F}$ then, by definition, \pagebreak \[ d_\mathcal{H}\big( E(Q) \cup J \big) \, \geqslant \, \lfloor \Delta^{(t+j)}(k, n) \rfloor \, \geqslant \, \frac{1}{2} \cdot \frac{\Delta^{(1)}(k, n)}{\big( \delta k^{\ell / (\ell-1)} \big)^{t+j-1}}. \] On the other hand, every path~$Q$ must contain an edge of~$G$ between~$x$ and $B_1$. Thus, noting that each edge of~$\mathcal{H}$ contains $E(Q) \cup J$ for at most $2^{4\ell}$ pairs $(J,Q)$, it follows that $\mathcal{H}$ contains at least \[ 2^{-3\ell} \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)} \cdot \frac{|\mathcal{D}|}{2\ell \cdot m(j)} \cdot \frac{\Delta^{(1)}(k, n)}{2^{4\ell+1} \big( \delta k^{\ell / (\ell-1)} \big)^{t+j-1}} \] cycles, each containing some edge between $x$ and $B_1$.
Since $|B_1| \leqslant kn^{1/\ell}$, by another application of the pigeonhole principle there is a vertex $u \in B_1$ with $$d_\mathcal{H}\big( \{x, u\} \big) \, \geqslant \, 2^{-9\ell} \varepsilon(t)^t \cdot \frac{k^{(t-1)\ell/(\ell-1)} \cdot |\mathcal{D}| \cdot \Delta^{(1)}(k, n)}{kn^{1/\ell} \cdot m(j) \cdot \big( \delta k^{\ell / (\ell-1)} \big)^{t+j-1}}.$$ Using~\eqref{eq:mj:bound}, and the fact that $\delta \leqslant \varepsilon(t)^{2\ell}$, it follows that $$d_\mathcal{H}\big( \{x, u\} \big) \, > \, 4 \cdot \frac{|\mathcal{D}| \cdot \Delta^{(1)}(k, n)}{\big( \varepsilon(t) kn^{1/\ell} \big)^{\ell} \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t}} \, > \, \Delta^{(1)}(k, n)$$ if $|\mathcal{D}| > \frac{1}{4} \cdot \big( \varepsilon(t)k n^{1/\ell} \big)^\ell \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t}$, which contradicts the fact that $\mathcal{H}$ is good. \end{proof} It is now easy to deduce~\eqref{eq:size_of_C}. Indeed, multiplying the number of choices in Step~1 for a path~$P \not\in \mathcal{D}$, by the number of choices in Step~2 for the return path $Q$, assuming $P \not\in \mathcal{D}$, it follows from Claims~\ref{claim:many_choices_for_P},~\ref{claim:Q_prime_is_large} and~\ref{claim:D_is_small} that the number of cycles formed is at least \[ \frac{1}{2} \cdot \bigg( \frac{1}{4} \cdot \big( \varepsilon(t)k n^{1/\ell} \big)^\ell \cdot \big( \varepsilon(t) k^{\ell/(\ell-1)} \big)^{\ell-t} \bigg) \bigg( \frac{1}{2} \cdot \varepsilon(t)^t k^{(t-1)\ell/(\ell-1)} \bigg) \, \geqslant \, \frac{\big( \varepsilon(t) k \big)^{2\ell} n}{16}, \] giving~\eqref{eq:size_of_C}, where the initial factor of $1/2$ is because we may have counted each cycle twice. This completes the proof of Proposition~\ref{prop:finding:cycles}. \end{proof} \subsection{Proof of Theorem~\ref{thm:cycle:hypergraph}}\label{ESconj:proofSec} Finally, let us deduce Theorem~\ref{thm:cycle:hypergraph} from Proposition~\ref{prop:finding:cycles}. 
\begin{proof}[Proof of Theorem~\ref{thm:cycle:hypergraph}] Initialize~$\mathcal{H}$ to be the empty hypergraph with vertex set $V(\mathcal{H}) = E(G)$. Repeatedly apply Proposition~\ref{prop:finding:cycles} to find a cycle $C \subset E(G)$ not already in~$\mathcal{H}$, such that $\mathcal{H} \cup \{C\}$ is good, and add this to~$\mathcal{H}$. We may continue in this fashion until $e(\mathcal{H}) \geqslant \delta k^{2\ell} n^2$, giving a hypergraph that satisfies condition~$(a)$ of the theorem. To complete the proof, simply recall that $\mathcal{H}$ is good with respect to $(\delta,k,\ell,n)$, and note that, since $\delta < 1$, $$\Delta^{(j)}(k,n) \, = \, \frac{k^{2\ell-1} n^{1 - 1 / \ell}}{ \big( \delta k^{\ell / (\ell-1)} \big)^{j-1}} \, \leqslant \, \frac{1}{\delta^{2\ell}} \cdot k^{2\ell - j - \frac{j - 1}{\ell-1}} n^{1 - 1/\ell}$$ for every $j \in [2\ell-1]$ and $k \leqslant n^{1 - 1/\ell}$. It follows immediately that the hypergraph~$\mathcal{H}$ also satisfies condition~$(b)$ of the theorem, as required. \end{proof} \section{Hypergraph containers}\label{sec:containers} In this section we recall the basic definitions relating to hypergraph containers, and state the theorem from~\cite{BMS,ST} which we will use in the proof of Theorems~\ref{thm:main} and~\ref{thm:cycle:containers}. In order to do so, we will need to define a parameter $\delta(\mathcal{H},\tau)$ which (roughly speaking) measures the `uniformity' of the hypergraph $\mathcal{H}$. \begin{defn}\label{def:tau} Given an $r$-uniform hypergraph $\mathcal{H}$, define the \emph{co-degree function} of $\mathcal{H}$ by $$\delta(\mathcal{H},\tau) \, = \, \frac{1}{e(\mathcal{H})} \,\sum_{j=2}^r \,\frac{1}{\tau^{j-1}} \sum_{v \in V(\mathcal{H})} d^{(j)}(v),$$ where $$d^{(j)}(v) \, = \, \max\big\{ d_\mathcal{H}(\sigma) \, : \, v \in \sigma \subset V(\mathcal{H}) \textup{ and }|\sigma| = j \big\}$$ denotes the maximum degree in $\mathcal{H}$ of a $j$-set containing $v$.
\end{defn} We remark that we have removed some extraneous constants from the definition in~\cite{ST}, since these do not affect the formulation of the theorem below. The following theorem is an easy consequence of~\cite[Theorem~5.2]{ST}; see also~\cite[Proposition~3.1]{BMS}. We remark that we may take $\delta_0(r)$ to be roughly $r^{-2r}$. \begin{thm}\label{thm:coveroff} Let $r \geqslant 2$ and let $0 < \delta < \delta_0(r)$ be sufficiently small. Let $\mathcal{H}$ be an $r$-graph with~$N$ vertices, and suppose that $\delta(\mathcal{H},\tau) \leqslant \delta$ for some $0 < \tau < \delta$. Then there exists a collection $\mathcal{C}$ of at most $$\exp\bigg( \frac{\tau \log( 1/\tau ) N}{\delta} \bigg)$$ subsets of $V(\mathcal{H})$ such that \begin{itemize} \item[$(a)$] for every independent set $I$ there exists $C \in \mathcal{C}$ with $I \subset C$,\smallskip \item[$(b)$] $e\big( \mathcal{H}[C] \big) \leqslant \big(1 - \delta \big) e(\mathcal{H})$ for every $C \in \mathcal{C}$. \end{itemize} \end{thm} We will refer to the collection $\mathcal{C}$ as the \emph{containers} of $\mathcal{H}$, since, by~$(a)$, every independent set is contained in some member of $\mathcal{C}$. The reader should think of $V(\mathcal{H})$ as being the \emph{edge} set of some underlying graph $G$, and $E(\mathcal{H})$ as encoding (some subset of) the copies of a forbidden graph $H$ in $G$. Thus every $H$-free subgraph of $G$ is an independent set of $\mathcal{H}$. \subsection{How to apply the container theorem} In order to motivate the (slightly technical) details in the proof below, let us first give a brief outline of how Theorem~\ref{thm:coveroff} can be used, in conjunction with a refined supersaturation theorem like Theorem~\ref{thm:cycle:hypergraph}, to count $H$-free graphs on $n$ vertices.
There are two steps: first we use the refined supersaturation theorem to bound the co-degree function $\delta(\mathcal{H},\tau)$; then we repeatedly apply Theorem~\ref{thm:coveroff} to each container (which is just a graph on $n$ vertices) produced by the previous application of the theorem, until all containers have $O(n^{1 + 1/\ell})$ edges. To be slightly more precise, we will show, using Theorem~\ref{thm:cycle:hypergraph}, that for every graph $G$ with $n$ vertices and $k n^{1 + 1/\ell}$ edges, there exists a $2\ell$-uniform hypergraph $\mathcal{H}$ with the following properties: each edge of $\mathcal{H}$ corresponds to a copy of $C_{2\ell}$ in $G$ and, if $\tau = k^{- (1 + \varepsilon)}$, then $\delta(\mathcal{H},\tau) = o(1)$ as $k \to \infty$. Thus, by Theorem~\ref{thm:coveroff}, there exists a collection $\mathcal{C} = \mathcal{C}(G)$ of at most $\exp\big( O(1) \cdot \tau \log( 1/\tau ) N \big)$ containers for the $C_{2\ell}$-free subgraphs of $G$, each with slightly fewer $\mathcal{H}$-edges, and hence (using the uniformity of $\mathcal{H}$), with slightly fewer edges of $G$. A simple calculation then allows us to bound the number of containers with $O(n^{1 + 1/\ell})$ edges produced via this process, and hence to prove Theorem~\ref{thm:cycle:containers}. \section{Counting \texorpdfstring{$H$}{H}-free graphs}\label{sec:proof} In this section we will prove Theorem~\ref{thm:cycle:containers}, deduce Theorem~\ref{thm:main} and Corollary~\ref{cor:fewwithfewedges}, and prove Proposition~\ref{prop:conj:implies:thm}. In fact, we will prove the following generalization of Theorem~\ref{thm:cycle:containers}, which will also prove useful in studying the Tur\'an problem on the random graph. \begin{thm}\label{thm:cycle:containers:turan} For each $\ell \in \mathbb{N}$, there exists a constant $C = C(\ell)$ such that the following holds for all sufficiently large $n,k \in \mathbb{N}$ with $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{\ell-1}$. 
There exists a collection~$\mathcal{G}_\ell(k)$ of at most $$\exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \log k \Big)$$ graphs on vertex set~$[n]$ such that $$e(G) \leqslant k n^{1+1/\ell}$$ for every $G \in \mathcal{G}_\ell(k)$, and every $C_{2\ell}$-free graph is a subgraph of some~$G \in \mathcal{G}_\ell(k)$. \end{thm} Note that Theorem~\ref{thm:cycle:containers} follows immediately from Theorem~\ref{thm:cycle:containers:turan} by setting $k$ equal to a sufficiently large constant. Moreover, as we will see below, the bounds on $k$ and $|\mathcal{G}_\ell(k)|$ in the statement above are both probably close to best possible. We begin the proof of Theorem~\ref{thm:cycle:containers:turan} by using Theorems~\ref{thm:cycle:hypergraph} and~\ref{thm:coveroff}, as outlined in the previous section, to prove the following container result for $C_{2\ell}$-free graphs. \begin{prop} \label{prop:containers_for_graphs} For every $\ell \geqslant 2$, there exist $k_0 \in \mathbb{N}$ and $\varepsilon > 0$ such that the following holds for every $k \geqslant k_0$ and every $n \in \mathbb{N}$. Given a graph~$G$ with~$n$ vertices and $k n^{1+1/\ell}$ edges, there exists a collection~$\mathcal{C}$ of at most $$\exp\bigg( \frac{n^{1 + 1/\ell}}{\varepsilon} \cdot \max \Big\{ k^{-1/(\ell-1)} \log k, \, n^{-(\ell-1) / \ell(2\ell-1)} \log n \Big\} \bigg)$$ subgraphs of~$G$ such that: \begin{itemize} \item[$(a)$] Every $C_{2\ell}$-free subgraph of~$G$ is a subgraph of some $C \in \mathcal{C}$, and\smallskip \item[$(b)$] $e(C) \leqslant (1 - \varepsilon) e(G)$ for every $C \in \mathcal{C}$. \end{itemize} \end{prop} \begin{proof} Given $\ell \geqslant 2$, let $\delta > 0$ and $k_0 \in \mathbb{N}$ be constants such that Theorems~\ref{thm:cycle:hypergraph} and~\ref{thm:coveroff} hold (in the latter case with $r = 2\ell$), and let us assume that $k_0 = k_0(\delta)$ is sufficiently large. Let $G$ be a graph as described in the proposition, and apply Theorem~\ref{thm:cycle:hypergraph} to~$G$. 
We obtain a $2\ell$-uniform hypergraph~$\mathcal{H}$ on vertex set~$E(G)$, satisfying\footnote{We recall that the function $\Delta^{(j)}(k,n)$ was defined in~\eqref{def:Delta}.}: \pagebreak \begin{itemize} \item[$(a)$] $e(\mathcal{H}) \geqslant \delta k^{2\ell} n^2$,\smallskip \item[$(b)$] $d_\mathcal{H}(\sigma) \leqslant \Delta^{(j)}(k,n)$ for every $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| \leqslant 2 \ell-1$, \end{itemize} such that each of the edges of $\mathcal{H}$ corresponds to a copy of~$C_{2\ell}$ in~$G$. We will show that if $$\frac{1}{\tau} = \delta^4 k \cdot \min\big\{ k^{1/(\ell-1)}, n^{(\ell-1) / \ell ( 2\ell-1)} \big\},$$ then it follows from~$(a)$ and~$(b)$ that $\delta(\mathcal{H},\tau) \leqslant \delta$. Indeed, since $v(\mathcal{H}) = e(G) = kn^{1+1/\ell}$, it follows that \begin{align*} \delta(\mathcal{H},\tau) & \, = \, \frac{1}{e(\mathcal{H})} \,\sum_{j=2}^{2\ell} \,\frac{1}{\tau^{j-1}} \sum_{v \in V(\mathcal{H})} d^{(j)}(v) \\ & \, \leqslant \, \frac{v(\mathcal{H})}{e(\mathcal{H})} \Bigg[ \sum_{j=2}^{2\ell-1} \Big( \delta^4 k^{\ell/(\ell-1)} \Big)^{j-1} \Delta^{(j)}(k,n) + \Big( \delta^4 k \cdot n^{(\ell-1) / \ell ( 2\ell-1)} \Big)^{2\ell-1} \Bigg]\\ & \, \leqslant \, \frac{1}{\delta k^{2\ell-1} n^{(\ell - 1)/\ell}} \Bigg[ \sum_{j=2}^{2\ell-1} \frac{\big( \delta^4 k^{\ell/(\ell-1)} \big)^{j-1} \cdot k^{2\ell-1} n^{(\ell - 1)/\ell}}{\big( \delta k^{\ell/(\ell-1)}\big)^{j-1}} + \big( \delta^4 k \big)^{2\ell-1} n^{(\ell-1) / \ell} \Bigg]\\ & \, \leqslant \, \sum_{j=2}^{2\ell-1} \delta^{3j-4} + \delta^3 \, \leqslant \, \delta, \end{align*} as required, where we used the bounds~$d_\mathcal{H}(\sigma) \leqslant \Delta^{(j)}(k,n)$ and $1/\tau \leqslant \delta^4 k^{\ell/(\ell-1)}$ for each $j \in \{1,\ldots,2\ell-1\}$, and the bounds $d_\mathcal{H}(\sigma) \leqslant 1$ and $1/\tau \leqslant \delta^4 k n^{(\ell-1) / \ell ( 2\ell-1)}$ when $j = 2\ell$. 
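For orientation, it may help to record the case $\ell = 2$ (that is, counting $C_4$-free subgraphs) explicitly; the following values are obtained by direct substitution and are not needed in what follows. Here $\mathcal{H}$ is a $4$-uniform hypergraph whose edges encode copies of $C_4$ in $G$, and $$v(\mathcal{H}) \, = \, k n^{3/2}, \qquad e(\mathcal{H}) \, \geqslant \, \delta k^4 n^2, \qquad \Delta^{(j)}(k,n) \, = \, \frac{k^3 n^{1/2}}{(\delta k^2)^{j-1}} \quad \text{for } j \in [3],$$ while the choice of $\tau$ above reads $1/\tau = \delta^4 k \cdot \min\big\{ k, n^{1/6} \big\}$.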
Thus, applying Theorem~\ref{thm:coveroff}, and setting $\varepsilon = \delta^6$, we obtain a collection~$\mathcal{C}$ of at most $$\exp\bigg( \frac{\tau \log( 1/\tau ) N}{\delta} \bigg) \, \leqslant \, \exp\bigg( \frac{n^{1 + 1/\ell}}{\varepsilon} \cdot \max \Big\{ k^{-1/(\ell-1)} \log k, \, n^{-(\ell-1) / \ell(2\ell-1)} \log n \Big\} \bigg)$$ subsets of $V(\mathcal{H}) = E(G)$ such that: \begin{itemize} \item[$(i)$] Every $C_{2\ell}$-free subgraph of~$G$ is a subgraph of some $C \in \mathcal{C}$, and \smallskip \item[$(ii)$] $e\big( \mathcal{H}[C] \big) \leqslant \big(1 - \delta \big) e(\mathcal{H})$ for all $C \in \mathcal{C}$. \end{itemize} It only remains to show that condition~$(ii)$ implies that $e(C) \leqslant (1-\varepsilon) e(G)$ for every $C \in \mathcal{C}$. To prove this, for each $C \in \mathcal{C}$ set \[ \mathcal{D}(C) \, = \, E(\mathcal{H}) \setminus E(\mathcal{H}[C]) \, = \, \big\{ e \in E(\mathcal{H}) \,:\, v \in e \mbox{ for some } v \in V(\mathcal{H}) \setminus C \big\}, \] and recall that $d_\mathcal{H}(v) \leqslant e(\mathcal{H}) / \big( \delta k n^{1 + 1/\ell} \big)$ for every $v \in V(\mathcal{H})$, by~$(a)$ and~$(b)$. Therefore, \[ |\mathcal{D}(C)| \, \leqslant \, \frac{e(\mathcal{H})}{\delta k n^{1 + 1/\ell}} \cdot |E(G) \setminus C|. \] On the other hand, we have $|\mathcal{D}(C)| = e(\mathcal{H}) - e(\mathcal{H}[C]) \geqslant \delta e(\mathcal{H})$, by condition~$(ii)$, and so $$ |E(G) \setminus C| \, \geqslant \, \delta^2 k n^{1+1/\ell},$$ as required. Hence the proposition follows with $\varepsilon = \delta^6$, as claimed. \end{proof} \pagebreak It is straightforward to deduce Theorem~\ref{thm:cycle:containers:turan} from Proposition~\ref{prop:containers_for_graphs}. \begin{proof}[Proof of Theorem~\ref{thm:cycle:containers:turan}] We apply Proposition~\ref{prop:containers_for_graphs} repeatedly, each time refining the set of containers obtained at the previous step. 
More precisely, suppose that after $t$ steps we have constructed a family $\mathcal{C}_t$ such that $$|\mathcal{C}_t| \, \leqslant \, \exp\bigg( \frac{n^{1 + 1/\ell}}{\varepsilon} \sum_{i=1}^t \max \Big\{ k(i)^{-1/(\ell-1)} \log k(i), \, n^{-(\ell-1) / \ell(2\ell-1)} \log n \Big\} \bigg),$$ $e(G) \leqslant k(t) n^{1+1/\ell}$ for every $G \in \mathcal{C}_t$, and every $C_{2\ell}$-free graph is a subgraph of some $G \in \mathcal{C}_t$, where $$k(i) = \max\big\{ (1 - \varepsilon)^i n^{1 - 1/\ell}, k_0 \big\}$$ and $k_0$ and $\varepsilon$ are the constants given by Proposition~\ref{prop:containers_for_graphs}. We will construct a family $\mathcal{C}_{t+1}$ by applying Proposition~\ref{prop:containers_for_graphs} to each graph $G \in \mathcal{C}_t$ with at least $k(t+1) n^{1+1/\ell}$ edges. Finally, we will show that the family $\mathcal{G}_\ell(k) := \mathcal{C}_m$ obtained after $m$ iterations of this process has the required properties, for some suitably chosen $m \in \mathbb{N}$. Set $\mathcal{C}_0 = \{K_n\}$, and observe it satisfies the conditions above. Now, given such a family $\mathcal{C}_t$, for each $G \in \mathcal{C}_t$ we will define a collection of containers $\mathcal{C}(G)$ as follows: if $e(G) \leqslant k(t+1) n^{1+1/\ell}$ then set $\mathcal{C}(G) = \{G\}$; otherwise apply Proposition~\ref{prop:containers_for_graphs} to $G$. We obtain a collection~$\mathcal{C}(G)$ of at most $$\exp\bigg( \frac{n^{1 + 1/\ell}}{\varepsilon} \max \Big\{ k(t+1)^{-1/(\ell-1)} \log k(t+1), \, n^{-(\ell-1) / \ell(2\ell-1)} \log n \Big\} \bigg)$$ subgraphs of~$G$ such that: \begin{itemize} \item[$(a)$] Every $C_{2\ell}$-free subgraph of~$G$ is a subgraph of some $C \in \mathcal{C}(G)$, and\smallskip \item[$(b)$] $e(C) \leqslant (1 - \varepsilon) e(G) \leqslant k(t+1) n^{1+1/\ell}$ for every $C \in \mathcal{C}(G)$. 
\end{itemize} Now simply set $\mathcal{C}_{t+1} = \bigcup_{G \in \mathcal{C}_t} \mathcal{C}(G)$, and observe that $\mathcal{C}_{t+1}$ satisfies the required conditions. Finally, let us show that if $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{\ell-1}$ and $m$ is chosen to be minimal so that $k(m) \leqslant k$, then $$|\mathcal{C}_m| \, \leqslant \, \exp\Big( O(1) \cdot k^{-1/(\ell-1)} n^{1 + 1/\ell} \log k \Big)$$ as required. To see this, note first that $m = O(\log n)$, and that $$k^{-1/(\ell-1)} \log k \, \geqslant \, n^{- (\ell-1) / \ell(2\ell-1)} (\log n)^2.$$ It follows immediately that $$\sum_{i=1}^m \max \Big\{ k(i)^{-1/(\ell-1)} \log k(i), \, n^{-(\ell-1) / \ell(2\ell-1)} \log n \Big\} \, = \, O\Big( k^{-1/(\ell-1)} \log k \Big),$$ as claimed, and so the theorem follows. \end{proof} As noted above, Theorem~\ref{thm:cycle:containers} follows immediately from Theorem~\ref{thm:cycle:containers:turan} by choosing $k$ to be a suitably large constant. Moreover, Theorem~\ref{thm:main} and Corollary~\ref{cor:fewwithfewedges} are immediate consequences of Theorem~\ref{thm:cycle:containers}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let~$\mathcal{G}$ be the collection given by Theorem~\ref{thm:cycle:containers}. Since every $C_{2\ell}$-free graph is a subgraph of some~$G \in \mathcal{G}$, it follows that the number of $C_{2\ell}$-free graphs on $n$ vertices is at most $$\sum_{G \in \mathcal{G}} 2^{e(G)} \, \leqslant \, 2^{\delta n^{1 + 1/\ell}} \cdot 2^{C n^{1+1/\ell}} \, = \, 2^{O(n^{1+1/\ell})},$$ as required. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:fewwithfewedges}] Given $\varepsilon > 0$, let~$\mathcal{G}$ be the collection of graphs given by Theorem~\ref{thm:cycle:containers}, applied with $\delta = \varepsilon/2$. 
Now, for any function $m = m(n)$ with $m = o\big( n^{1 + 1/\ell} \big)$, the number of $C_{2\ell}$-free graphs with $n$ vertices and at most $m$ edges is at most $$\sum_{G \in \mathcal{G}} \sum_{s = 0}^{m} \binom{e(G)}{s} \, \leqslant \, n^{1 + 1/\ell} \cdot 2^{\delta n^{1 + 1/\ell}} \binom{C n^{1+1/\ell}}{m} \, \leqslant \, 2^{\varepsilon n^{1+1/\ell}}$$ if $n$ is sufficiently large. Since $\varepsilon > 0$ was arbitrary, the claimed bound follows. \end{proof} Moreover, it is easy to deduce the following theorem, which is only slightly weaker than Theorem~\ref{thm:randomturan}, from Theorem~\ref{thm:cycle:containers:turan} and Markov's inequality. \begin{thm}\label{thm:randomturan:weak} For every $\ell \geqslant 2$, and every function $p = p(n) \gg n^{-(\ell-1) / (2\ell-1)} (\log n)^{\ell+1}$, $$\textup{ex} \big( G(n,p), C_{2\ell} \big) \, \leqslant \, p^{1/\ell} n^{1+1/\ell} \log n$$ with high probability as $n \to \infty$. \end{thm} \begin{proof} Choose $k$ so that $p = k^{-\ell/(\ell-1)} \log k$, and let~$\mathcal{G}_\ell(k)$ be the collection of graphs given by Theorem~\ref{thm:cycle:containers:turan}. Observe that, if there exists a $C_{2\ell}$-free subgraph of $G(n,p)$ with $m$ edges, then some graph in $\mathcal{G}_\ell(k)$ must contain at least $m$ edges of $G(n,p)$. By Theorem~\ref{thm:cycle:containers:turan}, the expected number of such graphs is at most $$\exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \log k \Big) \cdot {k n^{1+1/\ell} \choose m} \cdot p^m \, \leqslant \, \left( \frac{O(1) \cdot p k n^{1+1/\ell}}{m} \right)^m \, \ll \, 1$$ if $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{\ell-1}$ and $$m \, \gg \, \max\Big\{ pk n^{1+1/\ell} , \, k^{-1/(\ell-1)} n^{1 + 1/\ell} \log k \Big\}.$$ Since one can check that these inequalities hold if $$p \, \gg \, n^{-(\ell-1) / (2\ell-1)} (\log n)^{\ell+1} \qquad \text{and} \qquad m \, \geqslant \, p^{1/\ell} n^{1+1/\ell} \log n,$$ the result follows. 
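Indeed, let us spell out this final check, all steps holding up to constant factors. With $p = k^{-\ell/(\ell-1)} \log k$ we have $$p k n^{1+1/\ell} \, = \, k^{-1/(\ell-1)} n^{1+1/\ell} \log k,$$ so the two terms in the maximum coincide, and since $\log k = O(\log n)$, $$m \, \geqslant \, p^{1/\ell} n^{1+1/\ell} \log n \, = \, k^{-1/(\ell-1)} (\log k)^{1/\ell} n^{1+1/\ell} \log n \, \gg \, k^{-1/(\ell-1)} n^{1+1/\ell} \log k.$$ Moreover, the assumption $p \gg n^{-(\ell-1)/(2\ell-1)} (\log n)^{\ell+1}$ gives $k^{\ell/(\ell-1)} \ll n^{(\ell-1)/(2\ell-1)} / (\log n)^{\ell}$, and hence $k \leqslant n^{(\ell-1)^2/\ell(2\ell-1)} / (\log n)^{\ell-1}$, as required in Theorem~\ref{thm:cycle:containers:turan}.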
\end{proof} \begin{rmk} Note that, since $\textup{ex} \big( G(n,p), C_{2\ell} \big)$ is increasing in $p$, the bound in Theorem~\ref{thm:randomturan:weak} implies that, for every $p(n) \leqslant n^{-(\ell-1) / (2\ell-1)} (\log n)^{2\ell}$, we have $$\textup{ex} \big( G(n,p), C_{2\ell} \big) \, \leqslant \, n^{1+1/(2\ell-1)} (\log n)^3$$ with high probability as $n \to \infty$. \end{rmk} To finish this section, let us prove Proposition~\ref{prop:conj:implies:thm}. In fact, since the lower bound on $\textup{ex}(n,H)$ implicit in that statement is irrelevant to our counting argument (and moreover is usually unknown), we shall prove the following rephrased version of the proposition. \begin{defn}\label{def:ESgood} Let us say that a bipartite graph $H$ is \emph{Erd\H{o}s-Simonovits good for a function $m = m(n)$} if there exist constants $C > 0$, $\varepsilon > 0$ and $k_0 \in \mathbb{N}$ such that the following holds. Let $k \geqslant k_0$, and suppose that~$G$ is a graph with~$n$ vertices and~$k \cdot m(n)$ edges. Then there exists a non-empty $e(H)$-uniform hypergraph~$\mathcal{H}$ on vertex set~$E(G)$, satisfying: $$d_\mathcal{H}(\sigma) \leqslant \displaystyle\frac{C \cdot e(\mathcal{H})}{k^{(1 + \varepsilon)(|\sigma| - 1)} e(G)} \quad \textup{for every $\sigma \subset V(\mathcal{H})$ with $1 \leqslant |\sigma| \leqslant e(H)$,}$$ such that each of the edges of $\mathcal{H}$ corresponds to a copy of~$H$ in~$G$. \end{defn} In this language, Conjecture~\ref{conj:refinedES} states that every bipartite graph $H$ is Erd\H{o}s-Simonovits good for the function $\textup{ex}(n,H)$. \begin{prop}\label{prop:generalH} Let $H$ be a bipartite graph and let $m \colon \mathbb{N} \to \mathbb{N}$ be a function. If $H$ is Erd\H{o}s-Simonovits good for $m$, then there are at most $2^{O(m(n))}$ $H$-free graphs on $n$ vertices. 
\end{prop} \begin{proof}[Sketch proof] The proof is almost identical to (and actually slightly simpler than) that of Theorem~\ref{thm:main}, so let us emphasize only the differences in the general case. We first prove a statement analogous to Proposition~\ref{prop:containers_for_graphs}, except that the collection $\mathcal{C}$ consists of at most $$\exp\Big( k^{-\alpha} m(n) \Big)$$ subgraphs of~$G$ which cover the $H$-free subgraphs of~$G$, where $\alpha = \varepsilon^2$, say. To do so, set $1/\tau = \delta^2 k^{1+\varepsilon}$ and observe that, if $\delta < 1/C^2$, then \begin{align*} \delta(\mathcal{H},\tau) & \, = \, \frac{1}{e(\mathcal{H})} \,\sum_{j=2}^r \,\frac{1}{\tau^{j-1}} \sum_{v \in V(\mathcal{H})} d^{(j)}(v) \\ & \, \leqslant \, \frac{1}{e(\mathcal{H})} \sum_{j=2}^r \delta^{2(j-1)} k^{(1+\varepsilon)(j-1)} \sum_{e \in E(G)} \frac{C \cdot e(\mathcal{H})}{k^{(1 + \varepsilon)(j - 1)} e(G)} \, \leqslant \, \delta. \end{align*} The rest of the proof is exactly the same as above, except that our family of containers (as in Theorem~\ref{thm:cycle:containers:turan}) might consist of as many as $\exp\big( k^{-\alpha/2} m(n) \big)$ graphs, each with at most $k \cdot m(n)$ edges. We leave the details to the reader. \end{proof} \section{The Tur\'an problem on the Erd\H{o}s-R\'enyi random graph}\label{sec:Turan} In this section we will show how to use the hypergraph container method in a slightly more complicated way in order to remove the unwanted factor of $\log n$ from the bound in Theorem~\ref{thm:randomturan:weak}, and hence to deduce Theorem~\ref{thm:randomturan}. Let us introduce some notation to simplify the statements which follow. First, let $\mathcal{I} = \mathcal{I}(n)$ denote the collection of $C_{2\ell}$-free graphs with $n$ vertices, and let $\mathcal{G} = \mathcal{G}(n,k)$ denote the collection of all graphs with $n$ vertices and at most $k n^{1+1/\ell}$ edges. 
By a \emph{coloured graph}, we mean a graph together with an arbitrary labelled partition of its edge set. The following structural result turns out to be exactly what we need. \begin{thm}\label{thm:cycle:containers:turan:refined} For each $\ell \in \mathbb{N}$, there exists a constant $C = C(\ell)$ such that the following holds for all sufficiently large $n,k \in \mathbb{N}$ with $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{2\ell-2}$. There exists a collection $\mathcal{S}$ of coloured graphs with $n$ vertices and at most $C k^{-1/(\ell-1)} n^{1 + 1/\ell}$ edges, and functions $$g \colon \mathcal{I} \to \mathcal{S} \qquad \text{and} \qquad h \colon \mathcal{S} \to \mathcal{G}$$ with the following properties: \begin{itemize} \item[$(a)$] For every $s \geqslant 0$, the number of coloured graphs in $\mathcal{S}$ with $s$ edges is at most $$\bigg( \frac{C n^{1+1/\ell}}{s} \bigg)^{\ell s} \cdot \exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big).$$ \item[$(b)$] $g(I) \subset I \subset g(I) \cup h(g(I))$ for every $I \in \mathcal{I}$. \end{itemize} \end{thm} Note that Theorem~\ref{thm:cycle:containers:turan:refined} implies Theorem~\ref{thm:cycle:containers:turan}. In order to prove Theorem~\ref{thm:cycle:containers:turan:refined}, we will need the following slight improvement of Theorem~\ref{thm:coveroff}, which was also proved by Balogh, Morris and Samotij~\cite[Proposition~3.1]{BMS} and by Saxton and Thomason~\cite[Theorem~5.2]{ST}\footnote{To be precise, Theorem~5.2 in~\cite{ST} is stated with~$T$ a tuple of vertex sets rather than a single vertex set, but it is straightforward to deduce this form from the methods of~\cite{ST}.}. \begin{thm}\label{thm:containers:turan} Let $r \geqslant 2$ and let $0 < \delta < \delta_0(r)$ be sufficiently small. Let $\mathcal{H}$ be an $r$-graph with~$N$ vertices, and suppose that $\delta(\mathcal{H},\tau) \leqslant \delta$ for some $0 < \tau < \delta$. 
Then there exists a collection $\mathcal{C}$ of subsets of $V(\mathcal{H})$, and a function $f \colon V(\mathcal{H})^{(\leqslant \tau N / \delta)} \to \mathcal{C}$ such that: \begin{itemize} \item[$(a)$] for every independent set $I$ there exists $T \subset I$ with $|T| \leqslant \tau N / \delta$ and $I \subset f(T)$,\smallskip \item[$(b)$] $e\big( \mathcal{H}[C] \big) \leqslant \big(1 - \delta \big) e(\mathcal{H})$ for every $C \in \mathcal{C}$. \end{itemize} \end{thm} Note that Theorem~\ref{thm:containers:turan} implies Theorem~\ref{thm:coveroff}. The first step in the proof of Theorem~\ref{thm:cycle:containers:turan:refined} is the following strengthened version of Proposition~\ref{prop:containers_for_graphs}. \begin{prop} \label{prop:refined_containers_for_graph} For every $\ell \geqslant 2$, there exist $k_0 \in \mathbb{N}$ and $\varepsilon > 0$ such that the following holds for every $k \geqslant k_0$ and every $n \in \mathbb{N}$. Set \begin{equation}\label{def:mu} \mu \, = \, \frac{1}{\varepsilon} \cdot \max\Big\{ k^{-1/(\ell-1)}, \, n^{-(\ell-1) / \ell(2\ell-1)} \Big\}. \end{equation} Given a graph~$G$ with~$n$ vertices and $k n^{1+1/\ell}$ edges, there exists a collection~$\mathcal{C}$ of subgraphs of~$G$, and a function $f_G \colon E(G)^{(\leqslant \mu n^{1+1/\ell})} \to \mathcal{C}$ such that, for every $C_{2\ell}$-free subgraph~$I \subset G$, \begin{itemize} \item[$(a)$] There exists a subgraph $T=T(I) \subset I$ with $e(T) \leqslant \mu n^{1+1/\ell}$ and $I \subset f_G(T)$, and\smallskip \item[$(b)$] $e\big( f_G(T(I)) \big) \leqslant (1 - \varepsilon) e(G)$. \end{itemize} \end{prop} The deduction of Proposition~\ref{prop:refined_containers_for_graph} from Theorem~\ref{thm:containers:turan} is identical to that of Proposition~\ref{prop:containers_for_graphs} from Theorem~\ref{thm:coveroff}, and so we leave the details to the reader. 
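As a routine elaboration of the details left to the reader: with $\tau$ chosen as in the proof of Proposition~\ref{prop:containers_for_graphs} and $\varepsilon = \delta^6$, condition~$(a)$ of Theorem~\ref{thm:containers:turan} provides, for each $C_{2\ell}$-free subgraph $I \subset G$, a set $T \subset I$ of edges with $$e(T) \, \leqslant \, \frac{\tau N}{\delta} \, = \, \frac{n^{1+1/\ell}}{\delta^5} \cdot \max\Big\{ k^{-1/(\ell-1)}, \, n^{-(\ell-1)/\ell(2\ell-1)} \Big\} \, \leqslant \, \mu n^{1+1/\ell},$$ which is exactly the bound required in Proposition~\ref{prop:refined_containers_for_graph}; the rest of the argument is unchanged.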
Before proving Theorem~\ref{thm:cycle:containers:turan:refined}, let us make the following observation, whose proof is a straightforward but tedious optimization argument. \begin{obs}\label{obs:optimize} Let $a_1,\ldots,a_m \in \mathbb{N}$ satisfy $a_j \leqslant \frac{1}{\varepsilon} \cdot (1 - \varepsilon)^{j/(\ell-1)} k^{-1/(\ell - 1)} n^{1 + 1/\ell}$ for each $j \in [m]$, and set $s = \sum_j a_j$. Then $$\prod_{j=1}^m {(1 - \varepsilon)^{-j} k n^{1+1/\ell} \choose a_j} \, \leqslant \, \bigg( \frac{C n^{1+1/\ell}}{s} \bigg)^{\ell s} \cdot \exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big).$$ \end{obs} We can now deduce Theorem~\ref{thm:cycle:containers:turan:refined}. \begin{proof}[Proof of Theorem~\ref{thm:cycle:containers:turan:refined}] We construct the functions $g$ and $h$ and the family $\mathcal{S}$ as follows. Given a $C_{2\ell}$-free graph $I \in \mathcal{I}$, we repeatedly apply Proposition~\ref{prop:refined_containers_for_graph}, first to the complete graph $G_0 = K_n$, then to the graph $G_1 = f_{G_0}(T_1) \setminus T_1$, where $T_1 \subset I$ is the set guaranteed to exist by part~$(a)$, then to the graph $G_2 = f_{G_1}(T_2) \setminus T_2$, where $T_2 \subset I \cap G_1 = I \setminus T_1$, and so on. We continue until we arrive at a graph $G_m$ with at most $k n^{1+1/\ell}$ edges, and set $$g(I) = (T_1, \ldots, T_m) \qquad \text{and} \qquad h\big( g(I) \big) = G_m.$$ Since $G_m$ depends only on the sequence $(T_1,\ldots,T_m)$, the function $h$ is well-defined. It remains to bound the number of coloured graphs in $\mathcal{S}$ with $s$ edges. To do so, it suffices to count the number of choices for the sequence of graphs $(T_1,\ldots,T_m)$ with $\sum_j e(T_j) = s$. 
For each $j \geqslant 0$, set $$k(j) = (1 - \varepsilon)^{-j} k \quad \text{and} \quad \mu(j) = \frac{1}{\varepsilon} \cdot \max\Big\{ k(j)^{-1/(\ell-1)}, \, n^{-(\ell-1) / \ell(2\ell-1)} \Big\},$$ and note that $$e\big( G_{m-j} \big) \leqslant k(j) n^{1+1/\ell}, \qquad T_{j+1} \subset G_j \qquad \text{and} \qquad e(T_{m-j}) \leqslant \mu(j) n^{1+1/\ell}.$$ Thus, writing $$\mathcal{A}(s) \, = \, \Big\{ \mathbf{a} = (a_1,\ldots,a_m) \, : \, a_j \leqslant \mu(j) n^{1 + 1/\ell} \text{ and } \sum_j a_j = s \Big\},$$ it follows that the number of coloured graphs in $\mathcal{S}$ with $s$ edges is at most $$\sum_{\mathbf{a} \in \mathcal{A}(s)} \prod_{j=1}^m { k(j) n^{1+1/\ell} \choose a_j}.$$ Consider first the product over $j$ such that $\mu(j) = \frac{1}{\varepsilon} \cdot n^{-(\ell-1) / \ell(2\ell-1)}$. Since $m = O(\log n)$, this is at most $$\big( n^2 \big)^{\sum_j a_j} \, \leqslant \, \exp\Big( O(1) \cdot n^{1 + 1/\ell - (\ell-1) / \ell(2\ell-1)} (\log n)^2 \Big) \, \leqslant \, \exp\Big( O(1) \cdot k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big),$$ since $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{2\ell-2}$. On the other hand, by Observation~\ref{obs:optimize}, the product over the remaining $j$ is at most $$\bigg( \frac{C n^{1+1/\ell}}{s} \bigg)^{\ell s} \cdot \exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big).$$ Multiplying these two bounds together, and noting that $|\mathcal{A}(s)| \leqslant {s+m \choose m}$, the theorem follows. \end{proof} We can now easily deduce Theorem~\ref{thm:randomturan}. \begin{proof}[Proof of Theorem~\ref{thm:randomturan}] Suppose that there exists a $C_{2\ell}$-free subgraph $I \subset G(n,p)$ with $m$ edges. Then $g(I) \subset G(n,p)$ and $G(n,p)$ contains at least $m - e\big( g(I) \big)$ elements of $h(g(I))$. 
The probability that such a subgraph exists is therefore at most \begin{align*} \sum_{S \in \mathcal{S}} {k n^{1+1/\ell} \choose m - e(S)} p^m & \, \leqslant \sum_{s = 0}^{C k^{-1/(\ell-1)} n^{1 + 1/\ell}} \bigg( \frac{C p^{1/\ell} n^{1+1/\ell}}{s} \bigg)^{\ell s} \exp\Big( C k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big) \bigg( \frac{3pk n^{1+1/\ell}}{m - s} \bigg)^{m - s}\\ & \, \leqslant \, \exp\bigg[ O(1) \cdot \Big( p^{1/\ell} n^{1+1/\ell} + k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big) \bigg] \bigg( \frac{4pk n^{1+1/\ell}}{m} \bigg)^{m/2} \, \ll \, 1 \end{align*} for every $k \leqslant n^{(\ell-1)^2 / \ell(2\ell-1)} / (\log n)^{2\ell-2}$, if we set $p = k^{-\ell/(\ell-1)}$ and assume that $$m \, \gg \, \max\Big\{ pk n^{1+1/\ell} , \, k^{-1/(\ell-1)} n^{1 + 1/\ell} \Big\}.$$ Since one can check that these inequalities hold if $$p \, \geqslant \, n^{-(\ell-1) / (2\ell-1)} (\log n)^{2\ell} \qquad \text{and} \qquad m \, \gg \, p^{1/\ell} n^{1+1/\ell},$$ the result follows. \end{proof} \section{Complete bipartite graphs}\label{sec:Kst} In this section we will prove Conjecture~\ref{conj:refinedES} for the complete bipartite graph $H = K_{s,t}$, under the assumption that $\textup{ex}(n, K_{s,t}) = \Omega(n^{2-1/s})$, which is known to be the case when $t > (s-1)!$ (see~\cite{ARS,KRS}). The bound is generally believed to hold for every $2 \leqslant s \leqslant t$, and was conjectured already in 1954 by K\"ov\'ari, S\'os and Tur\'an~\cite{KST}. \begin{thm}\label{thm:Kst:ESgood} For every $2 \leqslant s \leqslant t$, the graph $K_{s,t}$ is Erd\H{o}s-Simonovits good for $n^{2-1/s}$. \end{thm} Combining Theorem~\ref{thm:Kst:ESgood} with Proposition~\ref{prop:generalH}, we obtain a second proof\footnote{The proof of Corollary~\ref{cor:Kst} by Balogh and Samotij~\cite{BSmm,BSst} played an important role in the development of the hypergraph container method in~\cite{BMS}, so it is perhaps unsurprising that the method of this paper can be applied to $K_{s,t}$-free graphs. 
Nevertheless, the proof in~\cite{BSmm,BSst} is somewhat different to that presented here.} of the following breakthrough result of Balogh and Samotij~\cite{BSmm,BSst}. \begin{cor}\label{cor:Kst} For every $2 \leqslant s \leqslant t$, there are $2^{O(n^{2-1/s})}$ $K_{s,t}$-free graphs on $n$ vertices. \end{cor} Moreover, repeating the argument of Sections~\ref{sec:proof} and~\ref{sec:Turan}, we also obtain the following bounds for the Tur\'an problem on $G(n,p)$, which are likely to be close to best possible. \begin{thm}\label{thm:Kst:Turan} For every $2 \leqslant s \leqslant t$, there exists a constant $C = C(s,t) > 0$ such that $$\textup{ex} \big( G(n,p), K_{s,t} \big) \, \leqslant \, \left\{ \begin{array} {c@{\quad}l} C n^{2 - (s+t-2) /(st-1)} (\log n)^2 & \textup{if } \; p \leqslant n^{-(s - 1) / (st-1)} (\log n)^{2s/(s-1)} \\[+1ex] C p^{(s-1)/s} n^{2-1/s} & \textup{otherwise} \end{array}\right.$$ with high probability as $n \to \infty$. \end{thm} Using the construction described in Section~\ref{sec:outline} (taking a blow-up and intersecting it with $G(n,p)$), one can easily show that, for each $2 \leqslant s \leqslant t$ such that $\textup{ex}(n, K_{s,t}) = \Omega(n^{2-1/s})$, we have $\textup{ex} \big( G(n,p), K_{s,t} \big) = \Omega\big( p^{(s-1)/s} n^{2-1/s} \big)$ with high probability as $n \to \infty$. Moreover, another standard construction (remove one edge from each copy of $K_{s,t}$) shows that $\textup{ex} \big( G(n,p), K_{s,t} \big) = \Omega\big( n^{2 - (s+t-2) /(st-1)} \big)$ for every $p \geqslant n^{-(s+t-2)/(st-1)}$. We remark also that somewhat weaker upper bounds were obtained in~\cite{BSst}. Let us fix $2 \leqslant s \leqslant t$. 
In order to prove Theorem~\ref{thm:Kst:ESgood} and Corollary~\ref{cor:Kst}, it is enough to prove a relatively weak refined supersaturation theorem; however, to obtain the bounds in Theorem~\ref{thm:Kst:Turan} we need close to best possible bounds on $d_\mathcal{H}(\sigma)$ for every $\sigma \subset E(G)$ with $\sigma \subset K_{s,t}$, where $G$ is a graph with $n$ vertices and $k n^{2 - 1/s}$ edges. In order to do so, it will be useful to enrich the hypergraph~$\mathcal{H}$ (of Conjecture~\ref{conj:refinedES}) slightly. The edge set of~$\mathcal{H}$ will still represent copies of~$K_{s,t}$ in~$G$, but we will also remember the vertex partition of each copy of~$K_{s,t}$. Thus the edge set of the `hypergraph' will be a collection of ordered pairs $(S,T)$ with $S \in V(G)^{(s)}$, $T \in V(G)^{(t)}$, and with $G[S,T]$ a complete bipartite graph. Of course, when $\sigma \subset E(G)$ is an unlabelled set of edges, then $d_\mathcal{H}(\sigma)$ retains the usual definition of the number of copies of~$K_{s,t}$ whose edge set contains~$\sigma$. However in the algorithm below, we will also need to define $d_\mathcal{H}(A,B)$ where $A, B \subset V(G)$ with $1 \leqslant |A| \leqslant s$, $1 \leqslant |B| \leqslant t$, and $G[A,B]$ is a complete bipartite graph, to be the number of copies of $K_{s,t}$ in~$\mathcal{H}$ for which the left vertex class contains~$A$ and the right vertex class contains~$B$, that is, \[ d_\mathcal{H}(A,B) \,=\, \big| \big\{ (S,T) \in E(\mathcal{H}) : A \subset S \mbox{ and } B \subset T \big\} \big|. \] Moreover, for each $1 \leqslant i \leqslant s$ and $1 \leqslant j \leqslant t$, define $$D^{(i,j)}(k,n) \, := \, \big( \delta k n^{(s-1)/s} \big)^{s-i} (\delta k^s)^{t-j}.$$ We will prove the following refined supersaturation theorem. 
\begin{thm}\label{thm:Kst:hypergraph} For every $2 \leqslant s \leqslant t$, there exist constants $C > 0$, $\delta > 0$ and $k_0 \in \mathbb{N}$ such that the following holds for every $k \geqslant k_0$ and every $n \in \mathbb{N}$. Given a graph~$G$ with~$n$ vertices and $k n^{2-1/s}$ edges, there exists~$\mathcal{H}$ encoding copies of~$K_{s,t}$ in~$G$, satisfying: \begin{itemize} \item[$(a)$] $e(\mathcal{H}) = \Omega\big( k^{st} n^s \big)$, and\smallskip \item[$(b)$] $d_\mathcal{H}(A,B) \leqslant D^{(|A|,|B|)}(k,n)$ for every $A, B \subset V(G)$. \end{itemize} \end{thm} Let us fix $2 \leqslant s \leqslant t$, constants $C$, $\delta$ and $k_0$, and a graph $G$ as in the theorem above. For simplicity, we will say that~$\mathcal{H}$ is \emph{good} if $d_\mathcal{H}(A,B) \leqslant D^{(|A|,|B|)}(k,n)$ for every pair $\emptyset\ne A, B \subset V(G)$, that such a pair $(A,B)$ is \emph{saturated} if $d_\mathcal{H}(A,B) \geqslant \lfloor D^{(|A|,|B|)}(k,n) \rfloor$, and moreover that $(A,B)$ is \emph{good} if it contains no saturated pair $(A', B')$ with $A' \subset A$ and $B' \subset B$. The main step in the proof of Theorem~\ref{thm:Kst:hypergraph} is the following proposition, cf.~Proposition~\ref{prop:finding:cycles}. \begin{prop}\label{prop:finding:kst} Let $\mathcal{H}$ be a good hypergraph, with $e(\mathcal{H}) = o\big( k^{st} n^s \big)$. There exist sets $S, T \subset V(G)$ with $G[S, T] = K_{s,t}$ and $(S, T) \not\in E(\mathcal{H})$, such that $\mathcal{H} \cup \{(S,T)\}$ is good. \end{prop} In order to deduce Theorem~\ref{thm:Kst:hypergraph} from Proposition~\ref{prop:finding:kst}, simply build up the hypergraph $\mathcal{H}$ edge by edge, until it has at least $\Omega( k^{st} n^s )$ edges. 
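As a consistency check on these thresholds (an observation recorded here for orientation only): when $|A| = s$ and $|B| = t$ we have $D^{(s,t)}(k,n) = 1$, so condition~$(b)$ simply asserts that each copy of $K_{s,t}$ appears in $\mathcal{H}$ at most once, while for $|A| = |B| = 1$ we have $$D^{(1,1)}(k,n) \, = \, \delta^{s+t-2} \, k^{st-1} n^{(s-1)^2/s},$$ which, since $(s-1)^2/s = s - 2 + 1/s$, matches up to constant factors the average number $\Theta\big( e(\mathcal{H})/e(G) \big) = \Theta\big( k^{st-1} n^{s-2+1/s} \big)$ of edges of $\mathcal{H}$ containing a fixed edge of $G$.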
\begin{proof}[Sketch proof of Proposition~\ref{prop:finding:kst}] Let~$\mathcal{F}$ be the collection of saturated pairs, \[ \mathcal{F} = \big\{ (A,B) \,:\, \emptyset \ne A, B \subset V(G) \mbox{ and } d_\mathcal{H}(A, B) \geqslant \lfloor D^{(|A|, |B|)}(k,n)\rfloor \big\}. \] Provided that we do not pick~$S, T \subset V(G)$ with $A \subset S$ and $B \subset T$ for some $(A,B) \in \mathcal{F}$, the hypergraph $\mathcal{H} \cup \{(S,T)\}$ will be good. By choosing a subgraph of~$G$ if necessary, we will assume that~$\mathcal{F}$ does not contain $(\{u\},\{v\})$ for any $\{u, v\} \in E(G)$ (observe that only $o(e(G))$ of the edges of~$G$ correspond to saturated pairs, since $e(\mathcal{H}) = o(k^{st} n^s)$). For $A, B \subset V(G)$, define (cf.~the definition of $L_\mathcal{F}^{(1)}$ for cycles) \begin{align*} X(A,B) &= \{u \in V(G) : (A' \cup \{u\}, B') \in \mathcal{F} \mbox{ for some non-empty }A' \subset A, B' \subset B \}, \\ Y(A,B) &= \{v \in V(G) : (A', B' \cup \{v\}) \in \mathcal{F} \mbox{ for some non-empty }A' \subset A, B' \subset B \}. \end{align*} Following the proof of Lemma~\ref{lem:size_of_link}, it can easily be checked that \begin{equation} \label{eq:size_of_link} |X(A,B)| = O(\delta kn^{1-1/s}) \qquad\mbox{and}\qquad |Y(A,B)| = O(\delta k^s). \end{equation} Let $\mathcal{M}$ be the collection of all pairs $(S,v)$ with $v \in V(G)$, $S \in N_G(v)^{(s)}$, and such that $(S, \{v\})$ is good. We claim that \begin{equation} \label{eq:size_of_M} |\mathcal{M}| = \Omega(k^s n^s). \end{equation} Indeed, for each $v \in V(G)$, observe that~$\mathcal{M}$ contains all pairs $(S,v)$ where~$S$ is generated as follows. Select an arbitrary $u_1 \in N_G(v)$. Now for $i=2, \ldots, s$, select \[ u_i \in N_G(v) \setminus \Big( \{u_1, \ldots, u_{i-1}\} \cup X\big(\{u_1, \ldots, u_{i-1}\}, \{v\}\big) \Big) \] and set $S = \{u_1, \ldots, u_s\}$. 
Since we choose $u_i \not\in X\big( \{u_1, \ldots, u_{i-1}\}, \{v\} \big)$, and using the fact that~$\mathcal{F}$ does not contain any saturated pairs $(\{u_i\}, \{v\})$, it can be checked that the pair $(\{u_1, \ldots, u_i\}, \{v\})$ is good for every~$i \in [s]$, and hence $(S, \{v\})$ is also good. By~\eqref{eq:size_of_link}, the number of choices for each~$u_i$ is at least \[ |N_G(v)| - \big( s + O(\delta kn^{1-1/s}) \big). \] Thus for~$v$ whose degree is comparable with the average degree of~$G$, that is, $d_G(v) = \Omega(kn^{1-1/s})$, the total number of choices for~$S$ is $\Omega(d_G(v)^s)$. Summing over all $v \in V(G)$ and using convexity, one obtains the bound~\eqref{eq:size_of_M}. We now claim that there are $\Omega(k^{st}n^s)$ good pairs $(S,T)$ with $G[S,T] = K_{s,t}$. From this, the proposition follows immediately, since at least one of these is not in~$\mathcal{H}$. Set $$\mathcal{M}(S) = \big\{ v \in V(G) : (S,v) \in \mathcal{M} \big\}$$ for each $S \in V(G)^{(s)}$, and consider $S \in V(G)^{(s)}$ with $|\mathcal{M}(S)| = \Omega(k^s)$. We claim that there are $\Omega\big(|\mathcal{M}(S)|^t\big)$ sets $T \in \mathcal{M}(S)^{(t)}$ such that $(S,T)$ is good (and, since $T \subset \mathcal{M}(S)$, $G[S,T]$ is a complete bipartite graph). Indeed, for $i=1, \ldots, t$, we can pick an arbitrary vertex \[ v_i \in \mathcal{M}(S) \setminus \Big( \{v_1, \ldots, v_{i-1}\} \cup Y\big(S, \{v_1, \ldots, v_{i-1}\} \big) \Big), \] and set $T = \{v_1, \ldots, v_t\}$. Since we chose $v_i \not\in Y\big(S, \{v_1, \ldots, v_{i-1}\} \big)$, it can be checked that $(S, \{v_1, \ldots, v_i\})$ is good for every~$i$ and hence $(S,T)$ is good. By~\eqref{eq:size_of_link}, the number of choices for each~$v_i$ is at least \[ |\mathcal{M}(S)| - \big( t + O(\delta k^s) \big) = \Omega\big( |\mathcal{M}(S)| \big). \] Thus the total number of choices is $\Theta\big(|\mathcal{M}(S)|^t\big)$, as claimed. 
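The convexity step above can be spelled out as follows (a sketch): by Jensen's inequality applied to the convex function $x \mapsto x^s$,
\[
\sum_{v \in V(G)} d_G(v)^s \, \geqslant \, n \bigg( \frac{1}{n} \sum_{v \in V(G)} d_G(v) \bigg)^{s} \, = \, n \bigg( \frac{2 e(G)}{n} \bigg)^{s} \, = \, 2^s k^s n^s,
\]
while the vertices with $d_G(v) \leqslant 2\varepsilon k n^{1-1/s}$ contribute at most $\varepsilon^s 2^s k^s n^s$ to the left-hand side. Hence, for a sufficiently small constant $\varepsilon > 0$, the vertices with $d_G(v) = \Omega(kn^{1-1/s})$ carry a constant proportion of the sum, and together contribute $\Omega(k^s n^s)$ choices, which gives~\eqref{eq:size_of_M}.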
Finally, observe that $\sum_{S \in V(G)^{(s)}} |\mathcal{M}(S)| = |\mathcal{M}|$, and thus for a typical $s$-set~$S$ we have $|\mathcal{M}(S)| \sim \binom{n}{s}^{-1} |\mathcal{M}| = \Omega(k^s)$. Summing over all $S \in V(G)^{(s)}$ with $|\mathcal{M}(S)| = \Omega(k^s)$ and using convexity, it follows easily that the total number of $K_{s,t}$ in~$G$ that do not contain a saturated set of vertices is at least $\Omega(k^{st} n^s)$. Thus there exists a good $K_{s,t}$ not already in~$\mathcal{H}$, as required. \end{proof} Theorems~\ref{thm:Kst:ESgood} and~\ref{thm:Kst:Turan} and Corollary~\ref{cor:Kst} follow easily from Theorem~\ref{thm:Kst:hypergraph} using the method of Sections~\ref{sec:proof} and~\ref{sec:Turan}, and so we shall give only a very rough outline of the proof. Observe first that for $\sigma \subset V(\mathcal{H}) = E(G)$, $$d_\mathcal{H}(\sigma) \, = \, O(1) \cdot \max_{ij \,\geqslant\, |\sigma|} D^{(i,j)}(k,n),$$ and therefore the value of $\tau$ we require in order to apply Theorems~\ref{thm:coveroff} and~\ref{thm:containers:turan} is roughly $$\max_{2 \,\leqslant\, a \,\leqslant\, st} \bigg( \frac{e(G)}{e(\mathcal{H})} \max_{ij \,\geqslant\, a} \big( k n^{(s-1)/s} \big)^{s-i} k^{s(t-j)} \bigg)^{1/(a-1)} \approx\, \max\Big\{ k^{-s}, \, k^{-1} n^{-(s-1)^2/s(st-1)} \Big\},$$ where the approximation (which indicates equality up to a constant factor) follows from a short calculation\footnote{More precisely, first show that the maximum occurs when $a = ij$ and $j = t$, then note that all remaining values are equal when $k = n^{(s-1)/s(st-1)}$.}. We thus obtain the following analogue of Theorem~\ref{thm:cycle:containers:turan}. \begin{thm}\label{thm:Kst:containers:turan} For each $2 \leqslant s \leqslant t$, there exists a constant $C = C(s,t)$ such that the following holds for all sufficiently large $n,k \in \mathbb{N}$ with $k \leqslant n^{(s-1) / s(st-1)} / (\log n)^{1/(s-1)}$. 
There exists a collection~$\mathcal{G}_{s,t}(k)$ of at most $$\exp\Big( C k^{-(s-1)} n^{2-1/s} \log k \Big)$$ graphs on vertex set~$[n]$ such that $$e(G) \leqslant k n^{2 - 1/s}$$ for every $G \in \mathcal{G}_{s,t}(k)$, and every $K_{s,t}$-free graph is a subgraph of some~$G \in \mathcal{G}_{s,t}(k)$. \end{thm} Corollary~\ref{cor:Kst} follows easily from Theorem~\ref{thm:Kst:containers:turan}. To prove Theorem~\ref{thm:Kst:Turan}, we use a similar analogue of Theorem~\ref{thm:cycle:containers:turan:refined}, and repeat the argument of Section~\ref{sec:Turan}. \section*{Acknowledgements} The authors would like to thank J\'ozsef Balogh, Wojciech Samotij and Andrew Thomason for many interesting discussions on independent sets in hypergraphs over the past few years. As the reader might imagine, these have had a significant bearing on the present work. They would also like to thank Wojciech Samotij, Xiaochuan Liu and Mikl\'os Simonovits for helpful comments on the manuscript, and David Conlon for interesting discussions on $C_{2\ell}$-free graphs.
\section{Introduction} \setcounter{equation}{0} Irreversible deposition processes of colloidal particles or macromolecules on solid surfaces have received considerable attention in recent years. Both the adsorption (or deposition) kinetics and the structure of the assembly of deposited particles have been analyzed from an experimental and theoretical point of view. By irreversible we mean processes in which, once the particle has interacted with the surface, it can neither desorb from the surface nor diffuse on it. Moreover, we will focus on situations in which, due to the interactions between the spherical particles and the plane, only one adsorbed layer is formed. Despite their apparent simplicity, the deposition processes are determined by the interplay of various factors: the Brownian motion of the adhering particle, the gravitational force, the hydrodynamic interactions (HI) and other kinds of interactions between adhering particles and the adsorbed ones. Due to the non-additivity of the hydrodynamic interactions, most of the models which have been developed up to now to describe irreversible adhesion processes have neglected them, and have focused primarily on the geometric aspects related to surface exclusion effects. Among all the models which have been developed, two have captured most of the attention: (i) On the one hand, the Random Sequential Adsorption (RSA) model, which has been shown to reproduce some of the properties of the irreversible adsorption of proteins and small colloidal particles\cite{feder1980}-\cite{wojtaszczyk1995} on solid surfaces. By {\it small} it is meant that the motion of the particles in solution is controlled by Brownian motion, and that therefore the influence of the deterministic forces on their motion can be neglected. In the RSA model particles are placed randomly and sequentially on the surface. 
If an incoming particle overlaps an already adsorbed one, it is rejected and a new position is chosen randomly over the surface. (ii) The Ballistic Model (BM), on the other hand, has been introduced to account for the deposition of large particles on solid surfaces\cite{wojtaszczyk1995}-\cite{thompson1992}. In this case, the deposition process is dominated by gravity and the diffusion of the particles in the bulk is neglected. In the BM, again the position of each incoming particle is randomly selected, but if it touches an already deposited one, it rolls over this particle according to the deterministic laws of mechanics. It continues its motion until it reaches the adhesion plane or is trapped by at least three adsorbed particles. Both models have been compared to experimental observations and the results can be summarized as follows:\\ \noindent (1) The RSA model has been tested in three different ways: \begin{enumerate} \item[a)] Experimental adsorption kinetics of particles\cite{adamczyk1992} and proteins\cite{ramsden1993} have been compared to their theoretical expectations. These systems seem to follow RSA-like adsorption kinetics at low and intermediate coverages. It has also been reported\cite{ramsden1993} that the jamming limit is reached according to the power law: \begin{equation} \theta(\infty)-\theta(t)\sim t^{-1/2} \label{assrsa} \end{equation} \noindent where $\theta(t)$ represents the coverage after a time $t$ of adsorption and $\theta(\infty)$ is the coverage obtained at saturation. However, due to the great experimental difficulty of precisely determining the evolution of the adsorbed amount in the asymptotic regime, one must be cautious with these results, and more experimental investigations should be performed to validate them. \item[b)] The statistical properties of surfaces covered by latex particles have been investigated. 
For small particles depositing under a process largely governed by diffusion, the radial distribution function $g(r)$ found experimentally is in good agreement with the $g(r)$ determined by computer simulations according to the RSA rules\cite{feder1980}. \item[c)] Finally, the density fluctuations of adsorbed particles have also been determined. Special attention was paid to the reduced variance, $\sigma^2/<n>$, where $\sigma^2$ corresponds to the variance of the number of adsorbed particles in sub-systems of a given area taken out of the adsorption plane, and $<n>$ represents the mean number of adsorbed particles in these sub-systems. It can be shown that $\sigma^2/<n>$ is directly related to the radial distribution function $g(r)$ through the relation: \begin{equation} \frac{\sigma^2}{<n>}=1+\rho\int_0^{\infty}2\pi r \left[ g(r)-1 \right] dr \label{siggr} \end{equation} \noindent where $\rho=<n>/S$, $S$ being the area of the adsorption plane\cite{landau1959}. Surprisingly, it has been found that, for the systems which have been investigated, $\sigma^2/<n>$ does not follow the RSA behavior as a function of $\rho$ (or $\theta$) but lies much closer to the BM predictions\cite{schaaf1995,mann1995}. This result can be explained as follows: if one expands $\sigma^2/<n>$ as a function of the coverage: \begin{equation} \frac{\sigma^2}{<n>}=1-B_n\theta^n+O(\theta^{n+1}), \label{sigexp} \end{equation} \noindent the order $n$ of the first non-vanishing term $B_n\theta^n$ represents the smallest number of particles which are required on the surface in order to hinder the adhesion of a new particle. For example, in the RSA case $n=1$, and in the BM case $n=3$. Moreover, $B_n$ is related to the mean exclusion area of $n$ deposited particles. In the case experimentally investigated, which led to an RSA-like radial distribution function, the Brownian motion of the particles in the bulk largely dominates over the gravitational effects, even though the latter are not totally absent. 
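For illustration, eq.~(\ref{siggr}) can be evaluated numerically from a sampled radial distribution function. The following sketch (the grid, density and test $g(r)$ are placeholders, not experimental data) truncates the integral at the largest sampled distance:

```python
import numpy as np

def reduced_variance(r, g, rho):
    """Reduced variance sigma^2/<n> from the g(r) relation above:
    1 + rho * int 2*pi*r*(g(r)-1) dr, integral truncated at the
    largest sampled r and evaluated by the trapezoid rule."""
    integrand = 2.0 * np.pi * r * (g - 1.0)
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0)
    return 1.0 + rho * integral

# Ideal-gas check: g(r) = 1 everywhere gives Poissonian fluctuations,
# i.e. sigma^2/<n> = 1, independently of the density.
r = np.linspace(0.0, 10.0, 1001)
ratio = reduced_variance(r, np.ones_like(r), rho=0.3)
```

A negative correlation hole in $g(r)$ at contact (as for hard spheres) makes the integral negative and pushes the reduced variance below 1, which is the sub-Poissonian behavior discussed in the text.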
Hence, a particle diffusing in from the bulk that touches an already adsorbed particle will diffuse around it over distances which are large compared to the radius of the particles. However, due to the slight gravitational effect, it will finally reach the adsorbing surface. Thus, in this case, $n$ must exceed 1: one deposited particle cannot hinder another particle from adhering to the surface. On the other hand, the presence of HI favors the diffusion of the particle parallel to the plane, implying that the incoming particle can explore much larger distances along the surface than perpendicularly to it before adhering\cite{bafaluy1993}. As a consequence, to a first approximation, the incoming particle can be assumed to adsorb randomly on the surface. This implies that the radial distribution function becomes very close to the $g(r)$ predicted by the RSA model. The slight remaining differences between the RSA-like $g(r)$ and the experimental radial distribution function are responsible for the totally different behavior of the experimental $\sigma^2/<n>$ as compared to the RSA one. This observation shows how subtle the deposition process can be, and the importance of a detailed description of the transport process from the bulk to the interface in order to account properly for the statistical properties of such assemblies of spheres. \end{enumerate} \vspace{0.5cm} \noindent (2) For the deposition of large particles on surfaces in the absence of shear at the interface, only the statistical properties of the assembly of spheres have been experimentally determined. A good agreement has been found in this case between the evolution of $\sigma^2/<n>$ as predicted by the BM and the experimental data. This result is expected if one refers to our discussion of the RSA case\cite{schaaf1995}. The radial distribution function $g(r)$ has also been determined as a function of the coverage\cite{wojtaszczyk1993}. 
Despite the fairly good agreement between experimental data and simulations, some discrepancies remain, especially at distances slightly larger than one particle diameter. Such discrepancies were first attributed to some polydispersity in the particle sample. However, even if the introduction of polydispersity leads to a better agreement between experiment and theory, it cannot account for all of the differences\cite{wojtaszczyk1993,wojtaszczykth}. It has thus been proposed that these differences are due to the hydrodynamic interactions between the incoming particle, the adsorbed particles and the adsorbing surface. Indeed, whatever the radii of the particles, the HI are always present during the deposition process, and their importance relative to the gravitational forces does not decrease when the radii of the particles increase. Moreover, the HI are of long range and can thus have significant effects on the distribution of the particles on the deposition plane. It is the purpose of this article to address the problem of the influence of the HI during the irreversible deposition of large colloidal particles on the statistical properties of the assembly of the deposited particles. The study will be performed by means of computer simulations, extending to the 3-D case the study of the effect of HI on deposition on a one-dimensional substrate\cite{ignacio1994}, and the results will be compared to experimental data. We will first present the theoretical background necessary to compute the friction tensor which determines the motion of the depositing particles. We will, in particular, describe the approximation which will be made in our approach. In the next section we will present the simulation model. We will then show the results related to the radial distribution function of the particles on the surface. 
The $g(r)$ will be compared to the radial distribution function corresponding to the BM model, in which HI are neglected, and to experimental results. The available surface function, $\Phi$, which represents the adhesion probability of a particle on the surface, will also be discussed and compared to its counterpart in the BM case. Particular attention will be paid to the third virial coefficient, $B_3$, in the development of $\Phi$ in a power series of $\theta$. The coefficient $B_3$ is related to the area associated with the triangles of preadsorbed spheres which trap incoming particles, preventing them from being adsorbed. A new simulation method aimed at determining the value of $B_3$ will be presented. Finally, we will summarize our results and suggest some lines along which further investigations should be performed. In the appendix, explicit expressions for the components of the friction tensor for two spheres suspended in an unbounded fluid are given for the sake of completeness. \section{Hydrodynamic interactions in the adsorption process} \label{hydro} The slow motion of a particle suspended in a fluid at rest is governed by the frictional force and torque that the fluid exerts on it. These are linear functions of the center of mass velocity, $\vec{u}$, and angular velocity, $\vec{\omega}$, of the particle, the proportionality coefficients being the so-called friction tensors. One can therefore write down: \begin{eqnarray} \vec{F}&=&-\vec{\vec{\xi}}_{tt}\cdot\vec{u} - \vec{\vec{\xi}}_{tr}\cdot\vec{\omega} \nonumber\\ \vec{T}&=&-\vec{\vec{\xi}}_{rt}\cdot\vec{u}- \vec{\vec{\xi}}_{rr}\cdot\vec{\omega} \label{1.1} \end{eqnarray} \noindent where $\vec{F}$ and $\vec{T}$ account for the total force and total torque acting on the particle. $\vec{\vec{\xi}}_{tt}$ and $\vec{\vec{\xi}}_{rr}$ correspond to the translational and the rotational friction tensors, respectively, while $\vec{\vec{\xi}}_{rt}$ and $\vec{\vec{\xi}}_{tr}$ are the coupling friction tensors. 
Due to the Onsager reciprocal relations, they satisfy $\vec{\vec{\xi}}_{rt}=\vec{\vec{\xi}}_{tr}^{ \dag}$, where the superscript $\dag$ indicates the transpose of the matrix. We are interested in the motion of a free spherical particle in the presence of the gravitational field. Due to the symmetry and homogeneity of the particles, there will be no net external torque acting on them. Therefore $\vec{T}=0$, and from eqs.(\ref{1.1}) one can deduce the appropriate expression for the center of mass velocity, which will completely determine the motion of the particles. One then obtains \begin{equation} \vec{F}=-\left(\vec{\vec{\xi}}_{tt} - \vec{\vec{\xi}}_{tr} \cdot \vec{\vec{\mu}}_{rr} \cdot \vec{\vec{\xi}}_{tr}^{ \dag}\right) \cdot \vec{u}= -\vec{\vec{\xi}}_{eff}\cdot \vec{u} \label{1.2} \end{equation} \noindent where $\vec{\vec{\mu}}_{rr}$ is the rotational mobility matrix, defined as the inverse of the corresponding friction tensor, $\vec{\vec{\mu}}_{rr} \cdot \vec{\vec{\xi}}_{rr} = \vec{\vec{1}}$. The tensor between brackets in eq.(\ref{1.2}) defines the effective friction tensor, $\vec{\vec{\xi}}_{eff}$. The relaxation times for the velocity of colloidal particles are very small, which means that the inertial terms in their motion can be neglected. This implies that the hydrodynamic force $\vec{F}$ that the fluid exerts on the particle is exactly opposite to the external force, which in our case is the gravity force acting on the suspended particles. Moreover, since we are interested in the deposition of heavy colloidal particles, the effect of Brownian diffusion on their motion can be neglected. 
Therefore, eq.(\ref{1.2}) constitutes the equation of motion of the suspended particles, with $\vec{F}=-\frac{4}{3}\pi a^3 \Delta\rho g \hat{z}$ the gravitational force, where $a$ is the radius of the sphere, $g$ the gravitational acceleration, and $\Delta\rho$ the density difference between the colloidal particle and the solvent, assumed to be a positive number. Eq.(\ref{1.2}) will describe the motion of a suspended heavy particle both in the case in which it is alone in an unbounded fluid and in the presence of other objects. The HI between the different particles, which are mediated by the host fluid, appear through the specific expressions for the friction coefficients. These depend both on the geometry of the particles and on their relative distribution in the fluid. For example, for an isolated solid sphere, the friction coefficients are constants, and using the standard stick boundary condition for the velocity field on its surface one has $\vec{\vec{\xi}}_{tt}=6\pi\eta a\vec{\vec{1}}\equiv \vec{\vec{\xi}}_0$ and $\vec{\vec{\xi}}_{rr}=8\pi\eta a^3\vec{\vec{1}}$, with $\eta$ being the viscosity of the solvent. In this case, there exists no coupling between translational and rotational motion, $\vec{\vec{\xi}}_{tr}=0$. We will be interested in geometries where one sphere approaches a planar surface covered by preadsorbed spheres. In this case, there exist no general analytic expressions for the friction coefficients since exact solutions for many-body hydrodynamic problems are not available. The difficulty in the derivation of an exact solution lies partially in the non-additive character of the HI, which arises as a consequence of the long-range decay of the Oseen propagator\cite{happel}. However, in order to get analytic expressions, additivity of the HI will be assumed. 
There are two natural ways to introduce such an approximation: one can assume additivity of the friction tensors, which is equivalent to assuming that the hydrodynamic force acting on one particle due to the presence of the others is equal to the sum of the forces due to each one of them as if the others were not present. One can also assume additivity of the mobilities, according to which the velocity of a given particle is equal to the sum of the velocities induced on it by the other particles independently. Bossis and Brady\cite{bossis1984} have shown that the additivity of the friction coefficients properly takes into account the lubrication forces, which act when objects are close together. The latter property is not fulfilled under the mobility additivity assumption. Within the friction additivity assumption, the effective friction tensor for a suspended particle at a height $z_v$ from a planar surface in the presence of $N$ previously adsorbed spheres will be written as \begin{equation} \vec{\vec{\xi}}_{eff}(\vec{r})=\vec{\vec{\xi}}_{sp}(z_v)+\sum_{i=1}^N \left( \vec{\vec{\xi}}_{ss}(\vec{r}_i)-\vec{\vec{\xi}}_0\right) \label{1.3} \end{equation} \noindent where $\vec{r}$ represents the position vector of the incoming particle with respect to a given reference frame (we will take the origin of the reference frame at the center of the closest preadsorbed particle), $\vec{r}_i$ is the vector joining the centers of the incoming particle and the $i$th sphere on the plane, $\vec{\vec{\xi}}_{sp}$ is the friction tensor of a spherical particle alone in the presence of a plane, $\vec{\vec{\xi}}_{ss}$ the effective friction tensor of two spheres in an unbounded fluid at relative position $\vec{r}_i$, and $\vec{\vec{\xi}}_0$ is the Stokes tensor, introduced in the preceding paragraph. The previous expression leads to the correct behavior for the friction tensor when the sphere is far from the surface, where one recovers Stokes' law. 
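The additive approximation of eq.~(\ref{1.3}) is simple to assemble once the two pair tensors are available; a minimal sketch follows (the isotropic dummy tensors are placeholders for $\vec{\vec{\xi}}_{sp}$ and $\vec{\vec{\xi}}_{ss}$, in units of $6\pi\eta a$, and are not the physical coefficients):

```python
import numpy as np

XI0 = np.eye(3)  # Stokes tensor, normalized by 6*pi*eta*a

def effective_friction(z, neighbors, xi_sp, xi_ss, xi0=XI0):
    """Pairwise-additive effective friction tensor, eq. (1.3):
    xi_eff = xi_sp(z) + sum_i [ xi_ss(r_i) - xi_0 ],
    where each r_i joins the incoming sphere to a preadsorbed one."""
    xi = np.array(xi_sp(z), dtype=float)
    for r_i in neighbors:
        xi = xi + (np.asarray(xi_ss(r_i), dtype=float) - xi0)
    return xi

# Dummy isotropic tensors, for illustration only: each neighbor adds
# 0.5 on the diagonal to the sphere-plane value of 2.
xi_sp = lambda z: 2.0 * np.eye(3)
xi_ss = lambda r: 1.5 * np.eye(3)
neighbors = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
xi_eff = effective_friction(1.2, neighbors, xi_sp, xi_ss)
```

Far from all obstacles, every term in the sum vanishes and the expression correctly reduces to the isolated-sphere tensor, as noted above.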
On the other hand, when the adsorbing particle comes into the vicinity of an already adsorbed one, lubrication forces become dominant. Then, only the small region of the fluid between the particles is responsible for the forces, which can thus be considered additive. Therefore, eq.(\ref{1.3}) also provides the right behavior at short distances. In addition, the advantage of introducing approximation (\ref{1.3}) is that the expressions for the friction coefficients appearing in it are known. Indeed, the friction tensor for a sphere at a height $z$ in the presence of a plane, $\vec{\vec{\xi}}_{sp}$, has been shown to be diagonal and to have one component parallel to the plane, $\xi_{||}$, and one perpendicular to it, $\xi_{\bot}$ \cite{brenner1961,brenner1967}: \begin{eqnarray} \xi_{\bot}&=& \frac{4}{3}\sinh \alpha \sum_{k=1}^\infty \frac{k (k+1)}{(2 k-1) (2 k+3)} \left( \frac{2 \sinh ((2 k+1)\alpha)+(2 k+1) \sinh 2\alpha}{4 \sinh^2 ((k+1/2)\alpha)-(2 k+1)^2 \sinh^2 \alpha} -1\right) \label{1.4} \\ \xi_{||}&=&\left\{\begin{array}{lll} \left(1-\frac{9 a}{16 z}+\frac{a^3}{8 z^3}- \frac{45 a^4}{256 z^4}- \frac{a^5}{16 z^5}+\ldots\right)^{-1}& & \;\;\;\;\;\;\frac{z-a}{a}>0.1 \\ & & \\ -\frac{8}{15}\ln \left(\frac{z}{a}-1\right)+0.95888+\ldots& & \;\;\;\;\;\; \frac{z-a}{a}\ll 0.1 \end{array} \right. \label{1.5} \end{eqnarray} \noindent with $\alpha=\mathrm{arccosh}(z/a)$. It is worth noting that at small particle-plane distances, $\xi_{||}$ diverges much more slowly than $\xi_{\bot}$. Since the diffusion coefficient behaves as the inverse of the friction tensor, this fact is responsible for the ``{\it randomization}'' of the final position of an adhering sphere whose motion is controlled by Brownian motion, as discussed in the introduction\cite{bafaluy1993}. Regarding the expression for $\xi_{ss}(\vec{r}_i)$, we will use the friction coefficients of two spheres in an unbounded fluid as worked out at all distances by Jeffrey and Onishi \cite{jeffrey1984}. 
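The sphere--plane coefficients can be evaluated directly; the following sketch (the truncation tolerance and the overflow guard are our implementation choices, not part of the original formulas) implements the exact series of eq.~(\ref{1.4}) and the far-field expansion of eq.~(\ref{1.5}), both normalized by $6\pi\eta a$:

```python
import math

def xi_perp(z_over_a, tol=1e-12, k_max=10_000):
    """Perpendicular sphere-plane friction, Brenner's series eq. (1.4),
    truncated when the terms fall below tol (guarding sinh overflow)."""
    alpha = math.acosh(z_over_a)
    total = 0.0
    for k in range(1, k_max + 1):
        if (2 * k + 1) * alpha > 700.0:   # sinh would overflow beyond this
            break
        num = 2.0 * math.sinh((2 * k + 1) * alpha) + (2 * k + 1) * math.sinh(2 * alpha)
        den = 4.0 * math.sinh((k + 0.5) * alpha) ** 2 \
            - (2 * k + 1) ** 2 * math.sinh(alpha) ** 2
        term = k * (k + 1) / ((2 * k - 1) * (2 * k + 3)) * (num / den - 1.0)
        total += term
        if abs(term) < tol:
            break
    return (4.0 / 3.0) * math.sinh(alpha) * total

def xi_par_far(z_over_a):
    """Far-field parallel friction, eq. (1.5), valid for (z - a)/a > 0.1."""
    s = 1.0 / z_over_a
    return 1.0 / (1.0 - 9.0 * s / 16.0 + s ** 3 / 8.0
                  - 45.0 * s ** 4 / 256.0 - s ** 5 / 16.0)
```

Both coefficients tend to the Stokes value 1 far from the wall and grow monotonically as the gap closes, with $\xi_{\bot}$ diverging much faster than $\xi_{||}$, as stated above.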
According to these authors, the effective friction tensor for two spheres at a distance $\vec{r}$ in an unbounded fluid is given by: \begin{equation} \vec{\vec{\xi}}_{ss}(\vec{r})=X_{11}^A(\vec{r}) \hat{e}\hat{e}+\left( Y_{11}^A(\vec{r})-\frac{\left(Y_{11}^B(\vec{r})\right)^2}{3 Y_{11}^C(\vec{r})}\right) (\vec{\vec{1}}-\hat{e}\hat{e}) \label{1.6} \end{equation} \noindent where $\hat{e}=\vec{r}/r$ stands for a unit vector along the line of centers of the two spheres, and the expressions for the functions $X_{11}^A$, $Y_{11}^A$, $Y_{11}^B$, and $Y_{11}^C$ are given in the appendix for the sake of completeness. Expression (\ref{1.3}), when inserted in (\ref{1.2}), provides us with the equation of motion of a suspended particle, \begin{equation} \frac{d \vec{r}}{d t}= \vec{\vec{\mu}}_{eff}\cdot\vec{F}_g \label{1.7} \end{equation} \noindent where the effective mobility $\vec{\vec{\mu}}_{eff}$ is the inverse of the effective friction tensor given in eq.(\ref{1.3}). The non-linear dependence of the friction tensors on the distance between the particles, together with the fact that the friction tensors associated with the sphere-sphere configuration have a spherical symmetry different from the cylindrical one characteristic of the sphere-plane friction tensors, makes it impossible to find an analytic solution of eq.(\ref{1.7}) even in the simplest case in which a single sphere is adsorbed on the plane. In the general situation of adsorption kinetics, one has to take into account that, aside from this problem, the number of particles on the surface increases with time, which implies that only a numerical simulation study of the process can be carried out. \section{Simulation model} \label{numerical} We have performed numerical studies simulating the trajectories of the colloidal particles from the bulk to the surface, taking into account that as soon as a moving particle touches the surface, it is irreversibly fixed at that position. 
The model that we will develop constitutes a natural extension of the Ballistic Model. We will study the arrival of particles from the bulk to the surface in a sequential way. This corresponds to the physical situation in which the bulk concentration is low, so that the interactions between the particles in the bulk can be neglected. This situation is indeed encountered in the experimental systems to which we will refer later on. In our simulation algorithm, a position is randomly chosen at a height of 10 particle diameters above the plane. This ensures that initially the effects of the interactions between the incoming sphere and the preadsorbed ones can be neglected. In all the simulation studies, we have rescaled the distances so that the diameter of the spheres is taken as unity, and the time is rescaled by the characteristic sedimentation time $9 \eta /(2 a \Delta \rho g)$. With this nondimensionalization, when the spheres are far from the plane, their mobility is equal to 1. This scaling, which is possible due to the structure of eq.(\ref{1.7}), implies that our results are of general validity and will not depend on the particular system considered, provided that both inertial and diffusion effects are negligible. We have evaluated the trajectories of the incoming particles by numerically integrating eq.(\ref{1.7}), with the expression for the friction tensor given by eq.(\ref{1.3}), until they reach either the surface or one or a set of preadsorbed particles. The integration is performed using a 4th-order Runge-Kutta algorithm \cite{abramowitz1972} with variable time step. Since the friction tensor depends on the position of the adsorbing particle relative to the preadsorbed ones, at each time step it is necessary to calculate the appropriate value of the tensor. Moreover, each time a particle adheres, it has to be taken into account in the evaluation of the friction tensor of subsequent adhering spheres. 
In the numerical integration it is important to take into account that the behavior of the friction changes qualitatively along the trajectory of the particle. When the sphere is far from any other object, the mobility is of order unity and changes slowly. As the particle comes close to an adsorbed sphere, the mobility becomes anisotropic. Then, while the component associated with the displacement along their line of centers vanishes linearly with the clearance between the spheres (see eq.(\ref{1a.4})), the component related to the displacement at constant separation goes to zero as the inverse of the logarithm of the clearance (see eqs.(\ref{1a.5})-(\ref{1a.7})). Thus, in this region, the mobility changes rapidly, and in a different manner depending on the direction. In fact, the motion will consist basically of angular displacements at practically constant distance between the spheres. Therefore, the variable time step is chosen to ensure that the displacement is never larger than a tenth of the clearance (see fig. 1). Moreover, in order to take the anisotropic behavior of the mobility into account, eq.(\ref{1.7}) is solved in spherical coordinates centered on the adsorbed sphere closest to the incoming one. The fact that the mobility goes to zero when the spheres come into contact, due to the stick boundary conditions, introduces an additional computational difficulty. Indeed, the velocity in that region can become so small that the computer time needed to describe the trajectory becomes exceedingly large. We have decided to stop the trajectory when the clearance between the incoming and an adsorbed sphere becomes smaller than $10^{-4}$ particle diameters. For particles of diameter $2\,\mu$m, this corresponds to a minimum clearance of 200~\AA. At this point, we impose that the particle will follow the steepest descent path towards the surface. 
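The adaptive-step rule (each displacement at most a tenth of the current clearance) and the small-gap cutoff can be illustrated on a toy one-dimensional settling problem; the mobility $\mu(h)=h/(h+1)$ below is a placeholder that merely vanishes linearly at contact, like the true normal mobility, and is not the tensor of eq.~(\ref{1.3}):

```python
def rk4_step(f, y, dt):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(y)."""
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def settle(h0=1.0, h_min=1e-4, mu=lambda h: h / (h + 1.0)):
    """Integrate dh/dt = -mu(h) (unit gravity, nondimensional units),
    choosing dt so that each displacement is about a tenth of the
    clearance h, and stopping once h drops below h_min."""
    h, t, steps = h0, 0.0, 0
    while h > h_min and steps < 10_000:
        dt = 0.1 * h / mu(h)          # |dh| ~ mu(h) * dt = h / 10
        h = rk4_step(lambda x: -mu(x), h, dt)
        t += dt
        steps += 1
    return h, t, steps

h, t, steps = settle()
```

Because each step removes roughly a tenth of the remaining clearance, the gap decreases geometrically and the cutoff is reached in a modest number of steps, while without the cutoff the vanishing mobility would make the approach to contact arbitrarily slow.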
Accordingly, we have implemented the BM algorithm\cite{thompson1992,choi1993} to calculate the final position of the sphere on the plane, if it is not hindered from reaching the plane by a group of preadsorbed particles. This assumption seems reasonable because at that point the trajectory is basically controlled by the geometrical constraint that spheres cannot overlap, while the external force drives the particle to the surface. If the particle cannot reach the plane, it is rejected. We thus neglect multilayer effects. When the sphere comes close to the plane, the mobility becomes anisotropic and exhibits the same qualitative behavior as explained in the previous paragraph. Again, the mobility tensor goes to zero when the clearance vanishes. Therefore, we also change the time step in order to ensure that the displacement is smaller than a tenth of the clearance between the incoming sphere and the plane. Moreover, in this case we also have to stop the numerical integration of the trajectory. We have considered that when the gap between the sphere and the plane is smaller than $10^{-2}$ diameters, the particle is deposited on the surface at that position. This truncation procedure can be seen as an effective way to account for the attractive short-range particle-surface potential, which binds the sphere to the substrate. The situation would be completely different for trajectories controlled by Brownian motion. The sphere would then have a large tendency to diffuse parallel to the plane, leading to a randomization of its final position on the substrate\cite{bafaluy1993}. A second feature which should be taken into account in the numerical algorithm is the long-range character of the HI. In fact, in order to obtain the expression for the effective friction tensor, eq.(\ref{1.3}), a sum over all previously adsorbed spheres should be carried out. However, from the computational point of view such a procedure is time consuming. 
Therefore, a compromise should be reached between the number of preadsorbed particles considered in the computation of the friction tensor and the computer time needed. To this end, we have used the results obtained from the study of the deposition of particles on a one-dimensional substrate in the presence of HI\cite{ignacio1994}. In that case, it has been shown that if the in-plane initial separation from a preadsorbed sphere is of the order of 10 particle diameters, the effect of HI on the final location of the incoming sphere is negligible. This fact suggests that we can restrict the interaction of the incoming spheres to all the preadsorbed ones lying in a cylinder of radius 10 diameters centered on the incoming particle (see fig. 2). Moreover, as the particle approaches the surface, the value of the friction tensor will be dominated by the adsorbed particles closest to the adhering one. We have therefore further restricted the range of interaction by considering that the incoming sphere will only be affected by those particles whose distance, $d$, to the projection of the incoming particle on the plane is smaller than the height, $h$, at which this incoming sphere is located. This restriction procedure, however, is only applied when the height of the incoming particle is larger than 5. At lower heights, the radius of the {\it interaction cylinder} is assumed constant and equal to 5 (see fig. 2). This further conjecture is based on the observation that the deviation from the straight trajectory determined by the gravity field takes place basically at heights of 2 or 3 diameters above the plane\cite{ignacio1994}. Although this reasoning is based on trajectories obtained with a small number of particles deposited on the surface, it seems reasonable that it also holds in more complex geometries. 
The advantage of this approximate procedure is that it saves a significant amount of computer time with respect to the initial cylindrical restriction. We have checked the errors induced by the use of the varying {\it interaction cylinder} by covering a surface both using a constant-radius and a varying-radius interaction cylinder, as shown in figure 3. At an intermediate coverage, as shown in figure 3 a, practically all the particles end at the same positions on the plane using both methods, whereas at higher coverages a small fraction of the particles are placed at different positions, as seen in figure 3 b. This is due to the fact that if an incoming sphere arrives close to an ensemble of preadsorbed spheres, a small initial deviation may lead to a completely different final position. Then, due to the infinite memory of the adsorption process, all the particles arriving afterwards will be sensitive to this difference, leading eventually to a different configuration on the surface. Although {\em a priori}, from the point of view of average quantities, it is not clear whether this change may lead or not to significantly different statistical properties of the surface, comparison of the data obtained by this procedure with the experimental ones shows {\it a posteriori} that this approximation works pretty well, since no significant differences are observed. We have performed numerical simulations of the adsorption process on a rectangular surface, of sides 23.34 and 27.34, up to a coverage of $\theta=0.5$. The size of the system has been chosen such that the ratio "size of the system/size of the particles" is the same in the experiments and the simulations (see also next section). We have focused on the study of the radial distribution function, $g(r)$, and of the available surface function, $\Phi (\theta)$, since both represent key quantities for the statistical properties of the adsorbed layer. 
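For reference, the coverage used in the simulations translates into a particle count as follows. A minimal sketch assuming unit-diameter discs on the rectangular substrate of sides 23.34 and 27.34; the text quotes 407 particles at $\theta = 0.5$, so the rounding convention may differ slightly from the one assumed here.

```python
import math

def n_particles(theta, width=23.34, height=27.34):
    """Number of unit-diameter discs giving coverage theta on the
    rectangular substrate (sides in diameters):
    theta = N * (pi / 4) / (width * height)."""
    return round(theta * width * height / (math.pi / 4.0))
```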
We have stopped our simulations at a coverage of 0.5, which corresponds to 407 particles deposited on the surface. At this coverage, one can already have an idea of the behavior of the system near jamming, without having to reach it. Indeed, the study of the system over the last 10\% before jamming constitutes by far the computationally most expensive part. We have covered 200 surfaces, and from them we have constructed the radial distribution function. Simulations of the adsorption process according to BM rules have also been carried out following Ref.~\cite{thompson1992}, under the same conditions as those exposed in the previous paragraph, in order to compare both models. The model described in this section, which takes HI into account, will be referred to as the BHM algorithm hereafter. \section{Results and discussion} \label{results} All the simulation results were compared with experimental data. The latter correspond to the irreversible deposition of melamine particles of diameter 4.5 $\mu$m and density 1.5 g/cm$^3$ on a silica surface. The preparation of the particles, their characterization and the experimental procedure were reported extensively in reference \cite{wojtaszczyk1993} and will only be briefly mentioned here. The experimental system consists of a cell, a flow system that allows the injection of the solution into the cell, an inverted microscope (Axiovert 10, Zeiss, Germany), a CCD video camera (type 4710 CCIR monochrome from Cohu, San Diego, CA) and a computer image analysis system (Visilog, Noesis, France). The experiments were performed in a plane-parallel cell whose top and bottom parts are constituted by two silica microscope slides. Once the particle solution was introduced into the cell, the cell was fixed in its horizontal position to allow the particles to deposit on the lower microscope slide. After the deposition of all the particles present in the cell, a large number of pictures of different regions of the bottom surface were taken.
Typically, for coverages of the order of 10-20\%, 300 pictures were needed to obtain satisfactory results from the subsequent image analysis. The area covered by each picture equals $s = 105 \times 123\,\mu$m$^2$. Only the particles that touch the surface were taken into account; the other particles, which can form a second layer, were discarded by the image analysis. The geodesic center of all the particles was then determined for each picture. From this set of data, the different statistical properties of the surfaces, such as the radial distribution function $g(r)$ and the reduced variance of the density fluctuations, could be determined. The details of the experimental method are given in reference \cite{wojtaszczyk1993}. It must be pointed out that great care had to be taken in determining $g(r)$ because of the discrete nature of the experimentally determined particle positions, due to the finite size of the pixel elements in the camera. The reduced radius $R^*$ of the particles was equal to 3.4. Let us recall that $R^*$ is defined by $R^* = a (4\pi \Delta \rho g/(3 k T))^{1/4}$, where $a$ corresponds to the radius of the colloidal particles, $\Delta\rho$ to the density difference between the particle material and the solvent, and $kT$ to the thermal energy. In the absence of HI, this is the only parameter relevant for the description of the deposition process\cite{senger1992}. It has been shown in reference \cite{wojtaszczyk1993} that this system behaves to first approximation in a ballistic way, even if systematic deviations from the model are observed, e.g. in $g(r)$. However, the experimental data have never been compared to simulations of deposition processes in which the HI are taken into account. This constitutes the main objective of the present work. In order to assess the characteristic features due to the HI, we have also compared our results with simulations performed in the framework of the pure BM.
From the previous studies it follows that the main quantities describing the structure of the assembly of deposited particles are the radial distribution function $g(r)$ (section \ref{gr}), the available surface function $\Phi(\theta)$ (section \ref{phitheta1}) and the reduced variance of the density fluctuations of the number of adsorbed particles $\sigma^2/<n>$. The latter two quantities contain the same information at low coverage, which is quantified by the third virial coefficient $B_3$ in the expansion of $\Phi$\cite{schaaf1995} and to which special attention will be paid in section \ref{phitheta2}. \subsection{Radial distribution function} \label{gr} The radial distribution function $g(r)$ characterizes the correlation between the adsorbed particles, and it can be determined from the positions of the different particles on the surfaces. However, in order to quantitatively compare the $g(r)$ obtained from the simulations with the experimental ones, it is necessary to treat the simulation data in exactly the same way as the experimental ones. In particular, we have to apply the pixelization procedure (discretization of the positions of the particles) to the coordinates of the particles deposited on a surface by means of the BM or the BHM algorithms. Taking into account that the diameter of the melamine particles used in the experiments is 4.5$\mu$m, the dimensions of the experimental pixel ({\em i.e.} 0.48$\mu$m $\times$ 0.41$\mu$m) have been scaled to 0.48 / 4.5 and 0.41 / 4.5 along the x- and y-axis, respectively. After this rescaling, the unit of distance is the diameter of the particles. The coordinates of the positions of the centers of the particles are then converted into integer numbers of pixels.
The center-to-center distances are calculated using these integer coordinates, and the histogram of the relative distribution of particles is evaluated with a width resolution not larger than the smallest side of a pixel ({\em i.e.} 0.41 / 4.5), as was done with the experimental data\cite{wojtaszczyk1993}. This is repeated over the 300 to 500 surfaces available (as indicated in the figure captions). The radial distribution function is finally deduced from this cumulated distance frequency histogram. The results are shown on figure 4 (a-f). For a coverage $\theta$ of about 0.15, only the first peak can be clearly identified (fig. 4 a,b). Compared to the BM model, the introduction of HI in the description of the deposition process results mainly {\it (i)} in the lowering of this contact peak, which is then in better agreement with the experimental data, and {\it (ii)} in the broadening of the peak after its maximum, due to the effective repulsion induced by HI between the adsorbing and adsorbed spheres. At intermediate coverages, $\theta\approx$ 0.35 (fig. 4 c,d), the difference in the height of the peak is less marked; however, the agreement between the simulated data and their experimental counterpart is excellent: they almost coincide in the broadened region, and the agreement becomes even better around $r /(2 a) = 1.5$. The same observation can be made in the case of high coverages, $\theta\approx$ 0.50 (fig. 4 e,f). However, for this latter coverage, the simulated data are quite similar whether or not the HI are taken into account. This is due to the fact that in this regime the arrival of particles at the surface is almost entirely controlled by geometrical restrictions, since the available area is then only formed by small targets.
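The pixelization and distance-histogram procedure described above can be sketched as follows. The pixel sizes and the bin-width rule follow the text; everything else (function names, rounding to the nearest pixel, the absence of the final $g(r)$ normalization) is an assumption of this sketch.

```python
import math
from collections import Counter

PX, PY = 0.48 / 4.5, 0.41 / 4.5   # pixel sides, in particle diameters

def pixelize(coords):
    """Discretize particle centres onto the experimental pixel grid."""
    return [(round(x / PX), round(y / PY)) for x, y in coords]

def distance_histogram(coords, bin_width=PY):
    """Cumulate centre-to-centre distances of the pixelized positions with
    a resolution not larger than the smallest pixel side; g(r) follows
    from this histogram after normalization (omitted here)."""
    pix = pixelize(coords)
    hist = Counter()
    for i in range(len(pix)):
        for j in range(i + 1, len(pix)):
            r = math.hypot((pix[i][0] - pix[j][0]) * PX,
                           (pix[i][1] - pix[j][1]) * PY)
            hist[int(r / bin_width)] += 1
    return hist
```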
Therefore, the fraction of rolling particles, which determines the height of the peak, is not so sensitive to the HI repulsion, because the arriving particle is then forced to enter one of these target areas, and the repulsive friction forces are largely balanced owing to the number of particles surrounding each hole. As a general conclusion, HI only introduce slight changes in the structure of $g(r)$. They induce, in particular, an effective repulsion between the particles. The most important change is observed in the first peak because, at low coverage, it is related to the rolling mechanism over one particle, which is indeed very sensitive to HI\cite{ignacio1994}. \subsection{Available surface function} \label{phitheta} \subsubsection{Analysis over the entire coverage range} \label{phitheta1} In order to further analyze to what extent the introduction of the hydrodynamic forces modifies the structure of the particle configurations on the adsorbing surface, we present in this section a comparison of the available surface functions $\Phi(\theta)$ corresponding to the BM and the BHM, denoted by $\Phi^{BM}(\theta)$ and $\Phi^{BHM}(\theta)$, respectively. For a given coverage $\theta$, this quantity is equal to the probability that a deposition trial will be successful. For ballistic deposition, it is theoretically known that $\Phi^{BM}(\theta)$ behaves as $1-B_3^{BM}\theta^3+O(\theta^4)$. $B_3$ was first estimated at 9.61205 by Thompson and Glandt\cite{thompson1992}, and later corrected to 9.94978 by Choi {\em et al.}\cite{choi1993}. It is worth noting that, in fact, $1 - B_3^{BM} \theta^3$ is an acceptable approximation only over a narrow coverage range (up to 10 or 15\%). The absence of the first- and second-order terms from the series expansion of $\Phi^{BM}(\theta)$ is due to the fact that at least three adsorbed particles are required for an incoming particle to be rejected.
Indeed, a new spherical particle can always roll over one or two fixed spheres and eventually reach the surface. This is also true when HI are involved in the deposition process. Hence, the expansion of $\Phi$ as a power series of $\theta$ corresponding to the BHM model must be of the form $\Phi^{BHM}(\theta)=1 - B_3^{BHM} \theta^3+O(\theta^4)$, with $B_3^{BHM}$ {\em a priori} not equal to $B_3^{BM}$. The values of $\Phi(\theta)$ derived from the simulations are shown on figure 5a, where we compare the data obtained in the framework of the BM (400 surfaces) and of the BHM (500 surfaces up to $\theta$ = 0.3, and 300 surfaces from $\theta$ = 0.3 up to $\theta$ = 0.5). Without any additional computation it is obvious that both data sets are not identical, even though the sample sizes lead to a non-negligible noise level. If we fit a polynomial of the fifth degree (without the $\theta$- and $\theta^2$-terms) to the simulation data over the whole range (coverage from 0 to 0.5), we obtain $B_3^{BM}\approx 9.427$ and $B_3^{BHM}\approx 4.695$. Even though the value of $B_3^{BM}$ is not exactly identical to its theoretical prediction (9.94978), it clearly appears that the third-order coefficient is strongly influenced by HI ($B_3^{BM}/B_3^{BHM}\approx 2$). This reduction indicates that the deposition probability falls off less rapidly when HI are introduced in the model. These values must however be taken with great care due to the poor statistics and to the possible mutual influence of the fitting parameters. Moreover, it must be realized that using the fitted values for the available surface function leads to $\Phi^{BM}(\theta = 0.1)$ = 0.9906 and $\Phi^{BHM}(\theta = 0.1) = 0.9953$, which are almost equal. This confirms the observation following from the comparison of the $g(r)$ (see preceding section) that HI do not deeply alter global averaged quantities of the deposition process under the present conditions.
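A fifth-degree fit without the $\theta$ and $\theta^2$ terms is an ordinary linear least-squares problem. A sketch with assumed names, tested here on synthetic data rather than the simulation output:

```python
import numpy as np

def fit_phi(theta, phi, max_power=5):
    """Fit Phi(theta) = 1 + sum_{k=3}^{max_power} c_k theta^k by least
    squares; the estimate of B_3 is then -c_3. The theta and theta^2
    columns are omitted because the expansion of Phi starts at third
    order."""
    powers = range(3, max_power + 1)
    design = np.column_stack([theta ** k for k in powers])
    coeffs, *_ = np.linalg.lstsq(design, phi - 1.0, rcond=None)
    return dict(zip(powers, coeffs))
```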
Nonetheless, an additional, more precise simulation has been performed, in which 5000 surfaces were covered up to a coverage of 0.1 (fig. 5b). In this regime $\Phi^{BHM}$ (as $\Phi^{BM}$) should be accurately expressed by $1-B_3 \theta^3$. The value of $B_3^{BHM}$ deduced from these data is 4.849, which is not far from the former estimate derived from the full data set shown in fig. 5a. Figure 6 makes clear that the difference between $\Phi^{BHM}$ and $\Phi^{BM}$ is almost everywhere positive. This systematic character of the sign of $\Phi^{BHM}-\Phi^{BM}$ strengthens the conclusion that HI play a significant, though weak, role during the deposition process investigated here. In order to confirm that the $B_3$ corresponding to deposition with HI is significantly smaller than its ballistic counterpart, we have developed a new simulation method to determine this coefficient, as explained in the next subsection. \subsubsection{Analysis at low coverage} \label{phitheta2} In the BM, at least three particles are required to form a trap for a depositing particle. The term $B_3^{BM}\theta^3$ appearing in the series expansion of $\Phi^{BM}(\theta)$ precisely reflects the rejection efficiency of three-particle configurations leading to the rejection of a new incoming one. As already discussed by Thompson and Glandt\cite{thompson1992} for the BM, an isolated triangle formed by the centers of three adsorbed spheres is a trap if and only if (i) it has no side longer than twice the particle diameter, (ii) all its angles are acute, and (iii) the radius of its circumcircle is not larger than one sphere diameter. For an adsorbing square surface of area $s$, the probability for an incoming particle to be rejected is given by the ratio of the area of the triangle to $s$. When the triangle is not a trap, its exclusion area is evidently equal to zero.
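The three-part trap criterion translates directly into code. A sketch for unit-diameter spheres, with an assumed function name:

```python
import math

def is_trap(p1, p2, p3):
    """A triangle of adsorbed sphere centres is a trap iff (i) no side is
    longer than 2 diameters, (ii) all angles are acute, and (iii) the
    circumradius does not exceed 1 diameter (unit-diameter spheres)."""
    a, b, c = math.dist(p2, p3), math.dist(p1, p3), math.dist(p1, p2)
    if max(a, b, c) > 2.0:
        return False
    s2 = sorted([a * a, b * b, c * c])
    if s2[2] >= s2[0] + s2[1]:          # right or obtuse angle: no trap
        return False
    # circumradius R = abc / (4 * area), with the area from Heron's formula
    prod = (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)
    area = 0.25 * math.sqrt(max(prod, 0.0))
    return area > 0.0 and a * b * c / (4.0 * area) <= 1.0
```

For instance, an equilateral triangle of touching spheres (side 1) is a trap, while an obtuse triangle or a large equilateral one (side 1.9, circumradius above 1) is not.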
In the BM it is easy to build up a large number of representative traps formed by 3 deposited particles and to evaluate their average exclusion area $<A_{ex}>$. Consider now a large adsorbing surface of area $S$ covered with $N$ particles and virtually subdivided into a large number $\nu$ of sub-surfaces of area $s$ ($\nu = S / s$). The probability $p_1$ that a given sub-system effectively contains three particles is given to a good approximation by \begin{equation} p_1=\left(\begin{array}{c}N\\3\end{array}\right) \left(\frac{1}{\nu}\right)^3 \left(1-\frac{1}{\nu}\right)^{N-3} \label{p11} \end{equation} \noindent In this formula we assume that at low coverage the sub-systems have no mutual influence, which is indeed correct. The probability $p$ for an incoming particle over the surface $S$ to be trapped is then given by the product of the probability $p_3$ that it deposits in a sub-system containing at least 3 particles and the conditional probability $q$ that a particle arriving over a sub-system known to contain at least three adsorbed particles is trapped by them. To lowest order in the coverage, $p_3$ is equal to $p_1$ and the conditional probability $q$ is then given by $<A_{ex}>/s$. Thus one gets: \begin{equation} p\approx\left(\begin{array}{c}N\\3\end{array}\right) \left(\frac{1}{\nu}\right)^3 \left(1-\frac{1}{\nu}\right)^{N-3} \frac{<A_{ex}>}{s} \label{p2} \end{equation} The probability $p$ can be identified with $B_3^{BM}\theta^3$ (where $\theta =\pi N/(4 S)$, provided that the diameter of the particles has been taken as unity) in the density expansion of $\Phi$. In the limit $N\rightarrow\infty$, $\nu\rightarrow \infty$, with $N/\nu\rightarrow 0$, it follows that: \begin{equation} B_3=\frac{<A_{ex}>\, s^2}{3!\left(\frac{\pi}{4}\right)^3} \label{b3} \end{equation} A simulation consisting in the deposition of $10^8$ independent sets of three particles on square surfaces of side length equal to 5, 6, \ldots, 40 (in units of the diameter), hence of area $s$ ranging from 25 up to 1600, leads to $<A_{ex}>\, s^2=28.89\pm 0.16$. Inserting this value into eq.(\ref{b3}), one finds $B_3^{BM}=9.939\pm 0.055$, in good agreement with the theoretical value 9.94978 given by Choi {\em et al.}\cite{choi1993}. Hence, the method provides a convenient means for estimating the first non-vanishing term of the series expansion of the available surface function $\Phi$. It can also be applied to the deposition process which takes HI into account. In this case, a triangle cannot be a trap if it does not constitute a trap for the BM. However, even though a triangle may act as a trap in the presence of HI, its rejection efficiency is no longer proportional to its geometrical area, but can be significantly smaller. It can never be larger because, as already pointed out, HI introduce an effective repulsion between the adhering particle and the preadsorbed ones. Therefore, each side of the {\it ballistic} trapping triangle becomes a concave curved line due to the repulsive hydrodynamic effect of the particle located at the opposite vertex. Also as a result of this effective repulsion, the number of traps formed in the presence of HI should be smaller than in a pure ballistic ``experiment'' at the same coverage. The net result of these various effects can only be evaluated by simulation. We have therefore developed a special algorithm aimed at the construction of triangles in the presence of HI. A large number of sets of three particles were deposited on surfaces of area $s$ in the presence of HI.
In order to evaluate $<A_{ex}>$, for each set of three particles the exclusion area $A_{ex}$ should be determined by depositing a fourth particle on the surface $s$ a large number $N_p$ of times. $A_{ex}/s$ is then given by the ratio of the number of deposition trials in which this fourth particle is rejected to the total number of trials $N_p$. $<A_{ex}>$ is then simply the average of these exclusion areas $A_{ex}$ over the large number of independent sets of three initial particles. It can be noticed that many of these sets lead to an exclusion area which is zero. However, this procedure to determine $<A_{ex}>$ is very time consuming from a simulation point of view. We have thus, in a first step, approximated the exclusion area of a triangle by its geometrical area. We have generated $10^5$ triangles taking HI into account, counting the fraction of the generated triangles that constitute a trap according to the BM. This obviously leads to an upper limit for $B_3^{BHM}$, estimated to be approximately 7.7. This rough approximation shows that in the presence of HI, $B_3^{BHM}$ is at least 22\% lower than in the ballistic case. This arises from the fact that, on average, the trapping triangles generated by the BHM are larger than the ones obtained in the BM, due to the effective repulsion of HI. In order to get a more precise estimate of $B_3^{BHM}$, we should also take into account the decrease of the exclusion area for a given trap. However, the general simulation scheme introduced in the previous paragraph is too time consuming, as already mentioned. We have therefore looked for a simpler procedure by studying first the relative frequencies of the different trapping triangles. We have characterized a triangle by its largest angle and its area, and studied the histogram of trapping triangles. In fig.
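The rejected-fraction estimator just described can be phrased generically. The callable standing in for the full BM or BHM trajectory computation of the fourth particle is a hypothetical placeholder; only the counting logic is shown.

```python
def exclusion_area(trial_rejected, s, n_p=100000):
    """A_ex / s is estimated as the fraction of the n_p deposition trials
    of a fourth particle that end up rejected by the fixed triple; the
    callable `trial_rejected` abstracts the (BM or BHM) trajectory
    computation, which is assumed to be supplied elsewhere."""
    rejected = sum(1 for _ in range(n_p) if trial_rejected())
    return s * rejected / n_p
```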
7(a-b), we have plotted the histograms of relative frequencies for the BM and the BHM, respectively, constructed using the same procedure which has allowed us to give an upper limit for $B_3$. In both cases, one can see that the equilateral touching triangles are the most probable objects, because the second particle has rolled over the first one, and the third rolls over the two preadsorbed ones. In addition, there exists a relatively high curve representing those triangles in which each of the two latter adhering spheres has rolled over only one preadsorbed particle. One can easily verify that this curve behaves as $\sin (\alpha)/2$, $\alpha$ being the largest angle of the triangle, since this is the area of such triangles. Besides these singular contributions, there exists a plateau, which corresponds to those triangles in which only one adhering sphere has rolled over a preadsorbed particle. Finally, the remaining traps formed without rolling give a negligible contribution to the histogram. Although this general description applies to both models, in the BHM the rolling mechanism is not as effective, due to the repulsion induced by HI. Nonetheless, we have assumed that by studying only the most probable triangles, we can still improve our first estimate of $B_3$. To this end, we have performed numerical simulations in which we prescribe a rectangle whose area is twice that of a given trapping triangle, and we calculate its exclusion area by the general and rigorous method depicted in the previous paragraph. We fix the triangle to have two sides of length 1 and the largest angle to be larger than $60^{o}$, and for each given triangle, we analyze the deposition of $10^5$ particles starting within the prescribed rectangle, counting the fraction of such particles which are able to reach the surface. As shown in table I, the trapping area remains almost constant with the angle.
If one compares the results of the BHM to those of the BM, one can clearly see that for these small triangles the exclusion area is reduced to almost half its BM counterpart due to HI repulsions. If we take into account that the mean geometrical area of the triangles is larger in the presence of HI, which reduces the fraction of traps, and that in addition the exclusion area is also reduced by the effective repulsion, we get as a better estimate $B_3^{BHM}\approx 7.7\times 0.526=4.05$, where 0.526 is the ratio of the mean rejection fraction in the BHM (0.2634) to the mean rejection fraction obtained in the BM (0.5005), according to table I. We can then set bounds on this coefficient, since it should obey $4.146\pm0.003<B_3^{BHM}<7.7\pm0.003$. It is worth pointing out that the lower bound is close to the one obtained by fitting $\Phi$ with a power series, as seen in subsection \ref{phitheta1}, where we obtained a value of 4.7. Our new estimate has to be a lower estimate, since we have disregarded the influence of the larger triangles, which also have a larger exclusion area. Nonetheless, the lower estimate is quite close to the value obtained by the fitting procedure, indicating that, indeed, $B_3^{BM}/B_3^{BHM}\approx 2$. This result shows that HI strongly influence the local structure of the deposits, modifying the triplet distribution. We have finally looked at the fraction of incoming particles ending inside an equilateral triangle as a function of its side. Again, for a given equilateral triangle, we deposit $10^4$ particles and calculate the fraction which ends within the triangle. As seen in fig. 8, even for triangles of side length 7 diameters, the fraction is smaller than the value 0.5 predicted by the BM. This again shows the long-range character of HI and their tendency to form looser aggregates on the substrate with respect to the BM predictions. It would be interesting to determine the value of $B_3^{BHM}$ more precisely, but this requires long computer times.
Its study, as well as the evolution of $B_3$ with $R^*$, is currently under way using the general and rigorous method presented here, and will be the purpose of a future article. \begin{table} \begin{center} \begin{tabular}{ c c c }\hline angle ({\it in degrees}) & BM & BHM \\ \hline \hline 60.0 & 0.4987 & 0.2587 \\ 67.5 & 0.4995 & 0.2581 \\ 75.0 & 0.5077 & 0.2694 \\ 82.5 & 0.5003 & 0.2632 \\ 89.9 & 0.4961 & 0.2676 \\ \hline \label{tabb1} \end{tabular} \caption{Fraction of incoming particles which are trapped, for both the BM and the BHM, when deposited over an area which is twice the area of the corresponding trapping triangle. The largest angle characterizing the trapping triangle is given in the first column; the two shortest sides are always of one particle diameter.} \end{center} \end{table} \section{Conclusions} This article presents a first study in which experimental results concerning the deposition of large particles on a solid surface under the influence of gravity have been compared to both the ballistic deposition model and a simulation model which takes {\bf hydrodynamic interactions} (HI) into account. We have performed numerical simulations of the deposition of spherical particles on a surface, incorporating the effect of HI in the case when Brownian motion can be neglected, thus generalizing to a two-dimensional surface the previous simulation data for deposition on a linear substrate\cite{ignacio1994}. The major effect of HI is the induction of an effective repulsion between the adsorbing sphere and the preadsorbed particles. We have then been able to compare the simulation data with the pair distribution function obtained experimentally for the deposition of melamine particles. We have shown that over the full range of surface coverages, the BHM leads to a better agreement with the experimental results than the BM does.
In particular, the height of the first peak of $g(r)$ is improved, especially at low coverages, where the fraction of rolling particles plays an important role. The broadening of the curve after the first peak, which the BM always underestimates, is also in better agreement. These features appear as a result of the effective repulsion induced by HI, although they lead to a quantitatively small change in the curve (except in the height of its first peak), because HI do not alter the qualitative features of the adsorption process. As a matter of fact, HI become more significant the closer one looks at the structure of the adsorbed layer. In this sense, we have also studied the third virial coefficient $B_3$ of the available surface function $\Phi$. We have shown that this coefficient drops to half its BM value. This comes from the fact that, due to the effective repulsion, at a given coverage the fraction of triangles that form a trap is smaller in the BHM, and the exclusion area of a trapping triangle is also smaller than the geometrical area of the triangle. However, the decrease of $B_3$ does not produce a significant change in $\Phi$. As a general remark, we can conclude that HI have only a small effect on global averaged quantities. In this respect, the BM constitutes a good approximation, and this study thus validates the BM, which has already been widely studied from a theoretical point of view. On the other hand, for a fine analysis of the local structure, one has to take HI into account. It is not at all obvious, however, whether such conclusions remain valid when the diffusion of the particles in the bulk plays some role, {\em i.e.}, for values of $R^*$ of the order of or smaller than 3. In this case HI can again play a major role, and this should be investigated in the near future.
\section*{Notation} \noindent BHM : Sequential adsorption model with hydrodynamic interactions at large gravity and BM-like rules\\ \noindent BM : Ballistic model \\ \noindent HI : Hydrodynamic interactions\\ \noindent RSA : Random sequential adsorption model\\ \section*{Acknowledgements} The authors wish to thank D. Bedeaux, G. Koper and E. Mann for fruitful discussions. This work has been supported by the Commission of the European Communities under grant SCI$^{*}$-CT91-0696 and by CICYT (Spain), grant PB92-0895, as well as by an INSERM-CSIC project in the framework of a France-Spain cooperation.
\section{Introduction} In the Standard Model~(SM), the most general weak basis~(WB) transformation~\cite{b2}, which leaves the physical content invariant and keeps the up- and down-quark mass matrices $M_{u}$ and $M_{d}$ Hermitian~\footnote{The quark mass matrices are Hermitian due to the polar decomposition theorem, where the unitary component can be absorbed in the right-handed quark fields.}, is \begin{equation} \label{1.1a} \begin{split} M_u&\longrightarrow M_u^\prime=U^\dag M_u U,\\ M_d&\longrightarrow M_d^\prime=U^\dag M_d U, \end{split} \end{equation} where $U$ is an arbitrary unitary matrix. We say that the two sets of quark mass matrices $M_{u,d}$ and $M_{u,d}^\prime$ are equivalent to each other, which implies that the number of equivalent mass matrices is infinite. Hence, we are able to explicitly construct texture zeros in the quark mass matrices through WB transformations. If such texture zeros exist, a WB transformation is able to find them: as was shown in my paper~\cite{b1}, the WB transformation is exhaustive~(complete), finding all possibilities, including possible four- and five-zero textures. Through WB transformations, Branco et al.~\cite{b2} show that it is always possible to find, at most, three zeros in the quark mass matrices with no physical meaning. However, this does not restrict the number of zeros that can be found by applying the WB transformation to the mass matrices; rather, the model must be put into a physical context. Therefore, we have found additional zeros~{(four- and even five-zero textures~\cite{b1})} by using the recent quark mass and mixing data. These additional zeros now have physical meaning because they were obtained from specific experimental data.
\section{Up-Quark Mass Matrix in Diagonal Form} One point of discussion is that, to facilitate the analysis, the initial quark mass matrices used by me are as follows~\cite{b1} \begin{equation} \label{1.1} \begin{split} M_u&=D_u=\begin{pmatrix} \lambda_{1u}&0&0\\ 0&\lambda_{2u}&0\\ 0&0&\lambda_{3u} \end{pmatrix},\\ M_d&=VD_dV^\dag, \end{split} \end{equation} where the up and down diagonal matrices $D_u$ and $D_d$ contain the respective quark mass eigenvalues, and $V$ is the usual quark Cabibbo-Kobayashi-Maskawa~(CKM) mixing matrix. The authors' comments claim that we do not start with the most general mass matrices, but this is not true. The starting matrices~\eqref{1.1}, used in papers like~\cite{b2,b3}, are as general as any other ones. The reason is that, starting from arbitrary Hermitian matrices $M_u$ and $M_d$, we can use their respective diagonalizing matrices $U_u$ and $U_d$ and perform a WB transformation~\eqref{1.1a}, taking in this case the unitary matrix $U=U_u$. We have \begin{equation*} \begin{split} M_u\longrightarrow M'_u&=U^\dag_u\, M_u\, U_u=D_u, \\ M_d=U_d\,D_d \,U_d^\dag\longrightarrow M'_d&=U^\dag_u\,(U_d \,D_d \,U_d^\dag)\,U_u, \\ &=(U^\dag_u\,U_d)\,D_d\,(U_d^\dag \,U_u), \\ &=V \,D_d \,V^\dag , \end{split} \end{equation*} where the CKM mixing matrix $V=U^\dag_u\, U_d$ was considered. Additionally, note that the three texture zeros with no physical meaning mentioned above also appear in~\eqref{1.1}. The crux of the comments, however, is addressed below. \section{Phases and the CKM Mixing Matrix} Let us resolve the problem for a particular case: consider the numerical quark mass matrices given in Eq.~(4.22) of my paper~\cite{b1}, which were also considered by the authors' comments in row (a) of Table 1. Apparently, the corresponding CKM matrix obtained is not compatible with the recent quark mixing data.
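The argument above is easy to verify numerically. A sketch with random Hermitian matrices standing in for the quark mass matrices; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_hermitian(n=3):
    """Random Hermitian matrix standing in for a quark mass matrix."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2.0

Mu, Md = rand_hermitian(), rand_hermitian()
wu, Uu = np.linalg.eigh(Mu)    # Mu = Uu diag(wu) Uu†
wd, Ud = np.linalg.eigh(Md)

# WB transformation with U = Uu: the up sector becomes diagonal and the
# down sector becomes V Dd V† with V = Uu† Ud, as in the text
Mu_p = Uu.conj().T @ Mu @ Uu
Md_p = Uu.conj().T @ Md @ Uu
V = Uu.conj().T @ Ud
```

This confirms that the basis of eq.~\eqref{1.1} loses no generality: any Hermitian pair can be brought to it by one WB transformation.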
The numerical quark mass matrices under discussion are~(in MeV units) \begin{widetext} {\small \begin{align*} M_u&=\begin{pmatrix} 0 & 0 & -{92.3618}+{157.694} i \\ 0 & {5748.17} & {28555.1}+{5911.83} i \\ -{92.3618}-{157.694} i & {28555.1}-{5911.83} i & 166988 \end{pmatrix}, \quad M_d&=\begin{pmatrix} 0 & {13.9899} & 0 \\ {13.9899} & 0 & {424.808} \\ 0 & {424.808} &{2796.9} \end{pmatrix}, \end{align*}} where their diagonalizing matrices are, respectively,\footnote{These diagonalizing matrices were obtained by using Mathematica; other software (e.g., Maxima, Octave) yields different matrices, but the difference lies only in the phases.} {\footnotesize \begin{equation} \label{2.1} U_u=\begin{pmatrix} 0.998779\times e^{2.10064 i} & 0.0493829\times e^{2.10064 i} & 0.00104594\times e^{2.10064 i} \\ 0.0484608\times e^{0.20415i} & 0.983788\times e^{-2.93744i} & 0.172662\times e^{0.204148 i} \\ -0.00955555\times e^{0i} & 0.172401\times e^{0i} & 0.984981\times e^{0i} \end{pmatrix}, \quad U_d=\begin{pmatrix} {0.978718} & -{0.205210} & {0.000718698} \\ {0.202880} & {0.968118} & {0.146926} \\ -{0.0308464} & -{0.143653} & {0.989147} \end{pmatrix}, \end{equation}} from which the authors of the comment obtained the quark mixing matrix of their Eq.~(7).
Nevertheless, generalizing the result in~\eqref{2.1}, unphysical phases~($x$, $y$, $z$, $v$ and $w$) can be added to the matrices, yielding the following diagonalizing matrices \begin{subequations} \label{2.2} {\small \begin{align} U'_u=&\begin{pmatrix} 0.998779\times e^{2.10064 i}\times e^{x i} & 0.0493829\times e^{2.10064 i}\times e^{y i} & 0.00104594\times e^{2.10064 i} \times e^{z i} \\ 0.0484608\times e^{0.20415i}\times e^{x i} & 0.983788\times e^{-2.93744i}\times e^{y i} & 0.172662\times e^{0.204148 i} \times e^{z i} \\ -0.00955555\times e^{0i}\times e^{x i} & 0.172401\times e^{0i}\times e^{y i} & 0.984981\times e^{0i} \times e^{z i} \end{pmatrix}, \\ U'_d=&\begin{pmatrix} {0.978718}\times e^{iv} & -{0.205210}\times e^{iw} & {0.000718698} \\ {0.202880}\times e^{iv} & {0.968118}\times e^{iw} & {0.146926} \\ -{0.0308464}\times e^{iv} & -{0.143653}\times e^{iw} & {0.989147} \end{pmatrix}, \end{align}} \end{subequations} and there is no way to distinguish which of them are the ``true'' diagonalizing matrices. Furthermore, if one chooses the values $x=-1.30524$, $y=0.790611$, $z=-0.00515513$, $v=0.785572$ and $w=-2.14216$, one obtains {\small \begin{align*} U'_u=&\begin{pmatrix} 0.998779\times e^{0.795395 i} & 0.0493829\times e^{2.89125 i} & 0.00104594\times e^{2.09548 i} \\ 0.0484608\times e^{-1.10109 i} &0.983788\times e^{-2.14683 i} & 0.172662\times e^{0.198993 i} \\ 0.00955555\times e^{1.83635 i} & 0.172401\times e^{0.790611 i} & 0.984981\times e^{-0.00515513 i} \end{pmatrix}, \\ U'_d=&\begin{pmatrix} {0.978718}\times e^{0.785572 i} & -{0.205210}\times e^{-2.14216 i} & {0.000718698} \\ {0.202880}\times e^{0.785572 i} & {0.968118}\times e^{-2.14216 i} & {0.146926} \\ -{0.0308464}\times e^{0.785572 i} & -{0.143653}\times e^{-2.14216 i} & {0.989147} \end{pmatrix}, \end{align*}} two diagonalizing matrices that are equally valid.
As a result, a quark CKM mixing matrix compatible with the recent mixing data~\cite{b6} is derived, {\small \begin{equation} \label{3.3} V_{ckm}=U_u^{\prime\dag}\cdot U'_d= \begin{pmatrix} 0.974276& 0.225334 &0.00124462 - 0.0032841 i \\ -0.225194-0.000106564 i& 0.973443-0.0000294788 i& 0.0411845 \\ 0.00806881 - 0.00319789 i&-0.0404056 - 0.000739786 i& 0.999145 \end{pmatrix}. \end{equation}} \end{widetext} As can be observed, the phases included in~\eqref{2.2} do not conflict with the aim of reducing the number of free parameters. These phases come out naturally from the diagonalization process, and it is impossible to avoid them: when one fixes specific diagonalizing matrices, one is choosing specific phases. These phases are just different ways of presenting the CKM matrix, as stated in my paper~\cite{b1} above Eq.~(3.29). Moreover, these phases have an interpretation, whose nature is clarified in the next section. \section{Non-Physical Phases} The additional phases introduced in the matrices~\eqref{2.2} leave the physical content invariant, including the Jarlskog invariant quantity. This can be seen by writing the matrices~\eqref{2.2} as the products \[ U'_u= U_u\,f_1\quad\textrm{and}\quad U'_d= U_d\,f_2, \] where $f_1=\textrm{diag}(e^{xi},e^{yi},e^{zi})$ and $f_2=\textrm{diag}(e^{vi},e^{wi},1)$ are diagonal matrices, and $U_u$ and $U_d$ are given in~\eqref{2.1}. The CKM mixing matrix obtained in~\eqref{3.3} then becomes \[ V_{ckm}=U_u^{\prime\dag}\,U'_d=(U_u\,f_1)^\dag\,(U_d\,f_2)=f_1^\dag\,(U_u^\dag\,U_d)\,f_2, \] therefore \begin{equation} \label{2.3} U_u^\dag\,U_d= f_1\,V_{ckm}\,f_2^\dag, \end{equation} which implies the following two results: \begin{enumerate} \item First, the mixing matrix obtained from Eq.~\eqref{2.1}, i.e.
$U_u^\dag\,U_d$, apparently does not fit the standard form of the CKM mixing matrix; but it is well known that the five phases present in $f_1$ and $f_2$ can be rotated away~\cite{b4}, so that both expressions for the CKM matrix in~\eqref{2.3} are equivalent. Therefore the Jarlskog invariant, as well as any other physical quantity, is not affected by adding phases as in~\eqref{2.2}, and as a result the phases $x$, $y$, $z$, $v$ and $w$ have no physical meaning. \item Second, if there are still doubts: $f_1$ and $f_2$ in~\eqref{2.3} add phases to the matrix elements of $V_{ckm}$, with each phase of $f_2$ placed in the same column and each phase of $f_1$ in the same row. Therefore, the Jarlskog invariant $J=\textrm{Im}(V_{us}V_{ub}^*V_{cs}^*V_{cb})$ is not affected by these additional phases, because they cancel out in the same rows $(V_{us}, V_{ub}^*)$ and $(V_{cs}^*, V_{cb})$, and in the same columns $(V_{us}$, $V_{cs}^*)$ and $(V_{ub}^*, V_{cb})$. \end{enumerate} Finally, the Jarlskog invariant quantity is \[ J=\textrm{Im}(V_{us}V_{ub}^*V_{cs}^*V_{cb})=2.96695\times10^{-5}, \] clearly inside the range given by PDG 2012~\cite{b6}, i.e., $(2.80-3.16)\times10^{-5}$. The same holds for the quark masses~(in MeV): $m_d=2.90, m_s=66, m_b=2860$, $m_u=1.75, m_c=638, m_t=172100$~\cite{b6}. We can consider other phase-invariant quantities, such as the inner angles of the CKM unitarity triangle: $\beta=\arg(-\mbox{\footnotesize ${V_{cd}V_{cb}^*}/{V_{td}V_{tb}^*})$}=21.6^\circ$, $\alpha=\arg(-\mbox{\footnotesize ${V_{td}V_{tb}^*}/{V_{ud}V_{ub}^*})$}=89.1^\circ$, and $\gamma=\arg(-\mbox{\footnotesize ${V_{ud}V_{ub}^*}/{V_{cd}V_{cb}^*})=69.2^\circ$}$. However, in the framework of the SM, the usual formula for the kaon CP-violating parameter $\epsilon_k$ is valid only in the basis where $V_{ud}V_{us}^*$ is real{~\cite{b6,b5}}; for that reason the phases given in~\eqref{2.2} must be considered in order to transform the CKM mixing matrix into its standard convention.
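To make the rephasing argument above concrete, the following pure-Python check (illustrative only; the numerical entries are transcribed from $V_{ckm}$ in~\eqref{3.3}, and the tolerance on $J$ reflects the printed precision of those entries) computes the Jarlskog invariant and verifies that arbitrary row and column phases, of the kind introduced by $f_1$ and $f_2$ in~\eqref{2.3}, leave it unchanged:

```python
import cmath

# Entries of V_ckm transcribed from Eq. (3.3); rows (u, c, t), columns (d, s, b).
V = [[0.974276, 0.225334, 0.00124462 - 0.0032841j],
     [-0.225194 - 0.000106564j, 0.973443 - 0.0000294788j, 0.0411845],
     [0.00806881 - 0.00319789j, -0.0404056 - 0.000739786j, 0.999145]]

def jarlskog(V):
    # J = Im(V_us V_ub^* V_cs^* V_cb)
    return (V[0][1] * V[0][2].conjugate() * V[1][1].conjugate() * V[1][2]).imag

J = jarlskog(V)
assert abs(J - 2.96695e-5) < 5e-8   # the value quoted in the text

# Rephase: row phases (as in f_1) and column phases (as in f_2), chosen arbitrarily.
row = [0.3, -1.1, 2.2]
col = [0.9, -0.4, 1.7]
W = [[cmath.exp(1j * row[i]) * V[i][j] * cmath.exp(-1j * col[j])
      for j in range(3)] for i in range(3)]

# The phases cancel within the quartet: J is a rephasing invariant.
assert abs(jarlskog(W) - J) < 1e-12
```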
\section{Conclusions} To begin with, the WB transformation is complete, so, starting from specific quark mass matrices, we can find all possible quark mass matrices representing the model. It is important to mention that, in the SM, it is always possible to find a maximum of three non-physical vanishing elements in the quark mass matrices by performing a WB transformation; the values of the physical quantities do not matter in this process. However, if we want to find additional zeros, it is necessary to take physical considerations into account. Another important result, emphasized by other authors, is that the quark mass matrix structure given in~\eqref{1.1}, called the {\it u-diagonal representation} in my paper~\cite{b1}, is as general as any other one. These matrices are deduced from a WB transformation and have the advantage of making the quark masses and the CKM matrix elements directly available. The phases included in the diagonalizing matrices~\eqref{2.2} are precisely the five phases that can be rotated away through a phase redefinition of the left-handed up- and down-quark fields, as was shown in~\eqref{2.3}; as a consequence, these phases have no physical meaning. Furthermore, these phases do not affect the invariance of the Jarlskog quantity or of any other physical observable. Regarding the kaon CP-violating parameter $\epsilon_k$, it must be calculated in a basis where $V_{ud}V_{us}^*$ is real; for that reason, phases must be included in the diagonalizing matrices in order to fulfil this requirement. Definitely, the introduction of additional phases does not go against the basic spirit of texture-specific mass matrices, which is to control the number of free parameters: these phases have no physical meaning, but their inclusion is necessary to adjust the resulting CKM matrix to its standard convention. \section*{Acknowledgments} This work was partially supported by the Department of Physics at the Universidad de Nari\~no, approval Agreement Number 009.
\section{Introduction} The discrete version of the potential modif\/ied KdV equation that we want to investigate in this paper is the nonlinear partial dif\/ference equation \begin{gather}\label{d-p-mKdV-1} Q(v,v_1,v_2,v_{12};a_1,a_2)\equiv a_1(vv_2-v_{1}v_{12})=a_2(vv_1-v_2v_{12}). \end{gather} The notation we adopt here and in what follows is, with forward shift operators $T_{n_1}$, $T_{n_2}$: \begin{alignat*}{3} & v:=v(n_1, n_2),\qquad & &v_1:=T_{n_1}(v)=v(n_1+1, n_2),& \\ & v_2:=T_{n_2}(v)=v(n_1, n_2+1),\qquad & &v_{12}:=T_{n_1}T_{n_2}(v)=v(n_1+1, n_2+1),& \end{alignat*} and $a_1$, $a_2$ denote lattice parameters associated with the directions $n_1$, $n_2$ respectively. Equation~\eqref{d-p-mKdV-1} was derived in \cite{Nijhoff-2009} from the Cauchy matrix approach, and was originally found in~\cite{Nijhoff-1983, Nijhoff-1984} through the direct linearization approach. Up to a gauge transformation $v\rightarrow i^{n_1+n_2}v$ and a change of the lattice parameters to their reciprocals, equation~\eqref{d-p-mKdV-1} is equivalent to the equation H3$_{\delta=0}$ in the Adler--Bobenko--Suris (ABS) classif\/ication~\cite{ABS-2002}, \begin{gather}\label{H3} H3_\delta\equiv a_1(vv_1+v_{2}v_{12})-a_2(vv_2+v_1v_{12})=\delta\big(a_2^2-a_1^2\big). \end{gather} There are several papers dedicated to closed-form $N$-soliton solutions of the `ABS list' \cite{Bulter-2012, Nalini-2010, Hietarinta-2009, Nijhoff-2009}.
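The gauge equivalence with H3$_{\delta=0}$ just stated can be checked numerically on a single lattice quadrilateral. The following pure-Python sketch is illustrative only: the reciprocal-parameter mapping $a_i\rightarrow 1/a_i$ and the overall proportionality constant in the final identity are worked out here for the check, not quoted from the references.

```python
import random

random.seed(1)
a1, a2 = 0.7, 1.9

def Q(v, v1, v2, v12):
    # d-p-mKdV expression: a1 (v v2 - v1 v12) - a2 (v v1 - v2 v12)
    return a1 * (v * v2 - v1 * v12) - a2 * (v * v1 - v2 * v12)

def H3(w, w1, w2, w12, b1, b2):
    # H3 with delta = 0: b1 (w w1 + w2 w12) - b2 (w w2 + w1 w12)
    return b1 * (w * w1 + w2 * w12) - b2 * (w * w2 + w1 * w12)

for s in range(4):                       # parity of n1 + n2 at the base site
    v, v1, v2, v12 = (random.uniform(0.5, 2.0) for _ in range(4))
    # gauge v -> i^(n1+n2) v: the four corners pick up i^s, i^(s+1), i^(s+1), i^(s+2)
    w, w1, w2, w12 = 1j**s * v, 1j**(s + 1) * v1, 1j**(s + 1) * v2, 1j**(s + 2) * v12
    lhs = H3(w, w1, w2, w12, 1 / a1, 1 / a2)
    # identity: a1 a2 H3(gauged; 1/a1, 1/a2) = -i^(2s+1) Q(v, v1, v2, v12),
    # so one expression vanishes exactly when the other does
    assert abs(a1 * a2 * lhs + 1j**(2 * s + 1) * Q(v, v1, v2, v12)) < 1e-12
```

Since the identity holds for arbitrary field values, the two equations have identical solution sets after the gauge and parameter change.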
In~\cite{Nijhoff-2009}, based on a Cauchy matrix structure, the closed-form $N$-soliton solution of equation~\eqref{d-p-mKdV-1} was derived; in~\cite{Hietarinta-2009}, following Hirota's method, the authors derived bilinear dif\/ference equations for equation~\eqref{H3} and its $N$-soliton solutions in terms of Casoratian determinants; and in~\cite{Bulter-2012}, using the discrete inverse scattering transform, the authors pointed out that the soliton solutions of equation~\eqref{H3} derived from the Cauchy matrix approach are exactly the solutions obtained from ref\/lectionless potentials. The Hirota--Miwa equation \cite{Hirota-1981, Miwa-1982} is the three-dimensional discrete integrable system \begin{gather}\label{H-M-1} (a_1-a_2)\tau_{12}\tau_3+(a_2-a_3)\tau_{23}\tau_1+(a_3-a_1)\tau_{13}\tau_2=0, \end{gather} where the lattice parameters $a_k$, $k=1, 2, 3$, are constants, and for $\tau=\tau(n_1,n_2,n_3)$ each subscript $i$ denotes a forward shift in the corresponding discrete variable $n_i$. It was discovered by Hirota~\cite{Hirota-1981} as a fully discrete analogue of the two-dimensional Toda equation, and later Miwa~\cite{Miwa-1982} showed that it is intimately related to the KP (Kadomtsev--Petviashvili) hierarchy. In~\cite{Hirota-1998}, Hirota gives a discretization of the potential modif\/ied KdV equation, which can be transformed into the form~\eqref{d-p-mKdV-1}, and shows that it is a 4-reduction of the Hirota--Miwa equation (which Hirota named the discrete analogue of a generalized Toda equation). In this paper, we discuss in detail the Darboux and binary Darboux transformations and how these may be used to obtain exact solutions of the discrete potential modif\/ied Korteweg--de~Vries (d-p-mKdV) equation \eqref{d-p-mKdV-1}. In contrast to the approaches presented in \cite{BS-2008,Nijhoff-2009,Nijhoff-1983}, we obtain \eqref{d-p-mKdV-1} and its Lax pairs by reducing the Hirota--Miwa equation~\eqref{H-M-1} and its Lax pairs.
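As a quick sanity check (a pure-Python sketch with arbitrarily chosen parameter values), one can verify on the lattice that a discrete-exponential one-soliton ansatz $\tau=1+\alpha\prod_{i=1}^{3}\big((a_i+p)/(a_i+p')\big)^{n_i}$, of the type later produced by the Darboux transformations, satisfies the Hirota--Miwa equation identically:

```python
from itertools import product

a = (1.0, 2.0, 3.0)              # lattice parameters a_1, a_2, a_3
p, pp, alpha = 0.5, -0.2, 0.8    # soliton parameters (arbitrary choices)
r = tuple((ai + p) / (ai + pp) for ai in a)

def tau(n1, n2, n3):
    # one-soliton tau function: 1 + alpha * prod_i ((a_i+p)/(a_i+p'))^{n_i}
    return 1.0 + alpha * r[0]**n1 * r[1]**n2 * r[2]**n3

def hirota_miwa(n1, n2, n3):
    # (a1-a2) tau_12 tau_3 + (a2-a3) tau_23 tau_1 + (a3-a1) tau_13 tau_2
    return ((a[0] - a[1]) * tau(n1 + 1, n2 + 1, n3) * tau(n1, n2, n3 + 1)
          + (a[1] - a[2]) * tau(n1, n2 + 1, n3 + 1) * tau(n1 + 1, n2, n3)
          + (a[2] - a[0]) * tau(n1 + 1, n2, n3 + 1) * tau(n1, n2 + 1, n3))

# the residual vanishes at every sampled lattice site
for n in product(range(-2, 3), repeat=3):
    assert abs(hirota_miwa(*n)) < 1e-9
```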
In fact, the 2-periodic reduction method studied here has already been investigated in \cite{ALN-2012}, where the authors present a multidimensionally consistent hierarchy of discrete systems whose f\/irst member is equation \eqref{d-p-mKdV-1}. Later, this was ref\/ined and extended to the non-commutative case in~\cite{D-2013}. In \cite{ALN-2012, BS-2002, BS-2008, D-2013, FN-JPA-2001}, integrability is understood in the sense of the multidimensional consistency property, which gives a Lax pair directly. Here, through a~2-periodic reduction of the linear systems of the Hirota--Miwa equation~\eqref{H-M-1}, we obtain the Lax pairs of equation~\eqref{d-p-mKdV-1}, which allows the application of the classical Darboux transforma\-tions~\mbox{\cite{Matveev-1979, Matveev-1991}}. However, up to gauge transformations, these Lax pairs coincide with the ones given by the multidimensional consistency property~\cite{BS-2008}. This paper is part of a programme which will explore the equations in the ABS list, their Lax pairs and Darboux transformations as reductions of the Hirota--Miwa equation. The outline of this paper is as follows. In Section~\ref{Hirota--Miwa}, we recall important results on Darboux transformations and binary Darboux transformations of the Hirota--Miwa equation. In particular, in a departure from the results in \cite{Nimmo-1997, Nimmo-Chaos-2000, Shi-2014}, we write the linear system of the Hirota--Miwa equation in a dif\/ferent form, via the gauge transformation $\phi\rightarrow \prod\limits_{i=1}^{3}a_i^{-n_i}\phi$, which is suitable for making the reduction. In Section~\ref{sec-3}, we show how the d-p-mKdV equation and its Lax pairs in matrix form arise from the Hirota--Miwa equation by a $2$-periodic reduction. Then its Darboux transformations and binary Darboux transformations are derived and it is shown how these may be used to construct exact solutions.
\section{Hirota--Miwa equation}\label{Hirota--Miwa} The Hirota--Miwa equation \eqref{H-M-1} arises as the compatibility conditions of the linear system \begin{gather}\label{H-M-LP-1} \phi_i-\phi_j=(a_i-a_j)u^{ij}\phi, \qquad 1\leq i<j\leq3, \end{gather} where for $\phi=\phi(n_1,n_2,n_3)$ each subscript $i$ denotes a forward shift in the corresponding discrete variable~$n_i$, for example, $\phi_{1} = T_{_{n_1}} (\phi) = \phi(n_1+1,n_2,n_3)$. This linear system~\eqref{H-M-LP-1} is compatible if and only if \begin{subequations}\label{dKP-u-form} \begin{gather} (a_1-a_2)u^{12}+(a_2-a_3)u^{23}+(a_3-a_1)u^{13}= 0, \label{dKP-u-form-a}\\ \big(u^{ij}\big)_{_{k}} u^{ik}=\big(u^{ik}\big)_{_{j}} u^{ij}. \label{dKP-u-form-b} \end{gather} \end{subequations} Note that when one uses the formula $u^{ij}=\tau_{ij}\tau/\tau_i\tau_j$, \eqref{dKP-u-form-a} gives~\eqref{H-M-1} and~\eqref{dKP-u-form-b} is satisf\/ied identically. A second way is to suppose $u^{ij}=(v_j-v_i+(a_i-a_j))/(a_i-a_j)$. This ansatz solves~\eqref{dKP-u-form-a} exactly and~\eqref{dKP-u-form-b} becomes the discrete potential KP (d-p-KP) equation~\cite{Nijhoff-1984-dpkp}. In this paper, in particular we deal with the Hirota--Miwa equation~\eqref{H-M-1} together with its linear system in the form~\eqref{H-M-LP-1}. Using the reversal-invariance property of the Hirota--Miwa equation, i.e., it is invariant with respect to the reversal of all lattice directions $n_i\rightarrow -n_i$, we have the linear system in formal adjoint form~\cite{Nimmo-1997} \begin{gather}\label{H-M-LP-2} \psi_{\b i}-\psi_{\b j}=(a_i-a_j)\frac{\tau_{\b i\b j}\tau}{\tau_{\b i}\tau_{\b j}}\psi,\qquad 1\leq i<j\leq3. \end{gather} The subscript $\b i$ denotes a backward shift with respect to $n_i$, for example, $\psi_{\b 1} = T_{_{n_1}}^{-1}(\psi) = \psi(n_1-1,n_2,n_3)$. \subsection{Darboux and binary Darboux transformations} The basic Darboux transformation for the Hirota--Miwa equation is stated in the following proposition. 
\begin{Proposition}\label{prop1-HM} Let $\theta$ be a non-zero solution of the linear system \eqref{H-M-LP-1} for some~$\tau$. Then the transformation \begin{gather*} \mathrm{DT}^{\theta}\colon \ \phi\rightarrow\frac{C _{_{[i]}} (\theta, \phi)}{\theta}, \qquad \tau\rightarrow\theta\tau, \end{gather*} leaves \eqref{H-M-LP-1} invariant, where $C _{_{[i]}} (\theta, \phi)=\theta\phi_i-\theta_i\phi$, $i=1,2,3$, the subscript~$[i]$ designating that the forward shifts in the determinant $C _{_{[i]}} (\theta, \phi)$ are taken with respect to the variable~$n_i$. \end{Proposition} Next we write down the closed-form expression for the result of~$N$ applications of the above Darboux transformation, which gives solutions in Casoratian determinant form. To do this we need to def\/ine the Casoratian of $N$ solutions. Let $\bm\theta=({\theta^{1}}(n_1,n_2,n_3), {\theta^{2}}(n_1,n_2,n_3), \dots$, ${\theta^{N}}(n_1,n_2, n_3))^T$ be an $N$-vector solution of~\eqref{H-M-LP-1}. The Casoratian determinant (with forward shifts) can be written as \begin{gather*} C\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big) = \big|\bm\theta,T_{_{n_i}}(\bm\theta), T_{_{n_i}}^2(\bm\theta)\dots, T_{_{n_i}}^{N-1}(\bm\theta)\big|, \qquad 1\leq i \leq 3, \end{gather*} which may also be unambiguously def\/ined in the following notation as \begin{gather*} C_{_{[i]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big) = \big|\bm\theta(0),\bm\theta(1), \bm\theta(2), \dots, \bm\theta(N-1)\big|, \qquad 1\leq i \leq 3, \end{gather*} where $\bm\theta(k)$ denotes the $N$-vector $\big({\theta^{1}}(n_1,n_2,n_3), {\theta^{2}}(n_1,n_2,n_3), \dots, {\theta^{N}}(n_1,n_2,n_3)\big)^T$ subject to the $k$-fold shift $T_{_{n_i}}^{k}$ in $n_i$, which gives $n_i \rightarrow n_i+k$, $0\leq k \leq N-1$, and $i=1, 2$ or $3$, the same value being taken for~$i$ in each column of the determinant. Then we have the following.
\begin{Proposition}\label{prop1N-HM} Let $\theta^{1},\theta^{2}, \dots, \theta^{N}$ be non-zero, independent solutions of the linear system~\eqref{H-M-LP-1} for some~$\tau$, such that $C_{_{[i]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)\neq0$. Then the $N$-fold Darboux transformation \begin{gather*} \phi\rightarrow\frac{C_{_{[i]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N},\phi\big)}{C_{_{[i]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)}, \qquad \tau\rightarrow C_{_{[i]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)\tau, \end{gather*} leaves \eqref{H-M-LP-1} invariant. \end{Proposition} Now we can apply the ref\/lections $n_i \rightarrow -n_i$, $i=1, 2, 3$, to the above results to deduce the adjoint Darboux transformation for the second linear system~\eqref{H-M-LP-2}. \begin{Proposition}\label{prop2-HM} Let $\rho$ be a non-zero solution of the linear system \eqref{H-M-LP-2} for some $\tau$. Then the transformation \begin{gather*} \mathrm{DT}^{\rho}\colon \ \psi\rightarrow\frac{C _{_{[\b i]}} (\rho, \psi)}{\rho}, \qquad \tau\rightarrow\rho\tau, \end{gather*} leaves \eqref{H-M-LP-2} invariant, where $C _{_{[\b i]}} (\rho, \psi)=\rho_{_{\b i}}\psi-\rho\psi_{_{\b i}}$, $i=1,2,3$, the subscript $[\b i]$ designating that the backward shifts in the determinant $C _{_{[\b i]}} (\rho, \psi)$ are taken with respect to the variable~$n_i$. \end{Proposition} The $N$-fold adjoint Darboux transformation is expressed in terms of the Casoratian \begin{gather*} C_{_{[\b i]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big) = |\bm\rho(0),\bm\rho(-1), \bm\rho(-2), \dots, \bm\rho(-N+1)|, \qquad 1\leq i \leq 3, \end{gather*} where $\bm \rho=\big(\rho^1, \rho^2, \dots, \rho^N\big)^T$ and $\bm\rho(-k)=T_{_{n_i}}^{-k}(\bm \rho)=\bm\rho|_{_{n_i\rightarrow n_i-k}}$, $0\leq k\leq N-1$, the same $i=1, 2$ or~$3$ being taken in all columns.
\begin{Proposition}\label{prop2N-HM} Let ${\rho^{1}}, {\rho^{2}}, \dots, {\rho^{N}}$ be $N$ non-zero, independent solutions of the linear system~\eqref{H-M-LP-2} for some $\tau$, such that $C_{_{[\b i]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)\neq 0$. Then the $N$-fold adjoint Darboux transformation \begin{gather*} \psi\rightarrow\frac{C_{_{[\b i]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N},\psi\big)}{C_{_{[\b i]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)},\qquad \tau\rightarrow C_{_{[\b i]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)\tau, \end{gather*} leaves \eqref{H-M-LP-2} invariant. \end{Proposition} To construct a binary Darboux transformation, we introduce the potential $\omega=\omega(\phi, \psi)$, def\/ined by the relations \begin{gather} \Delta_i\omega(\phi, \psi) =\phi \psi_i, \qquad i=1, 2, 3, \label{omega-1} \end{gather} where $\Delta_i=T_{n_i}-1$ is the forward-dif\/ference operator in the discrete variable~$n_i$. If~$\phi$ and~$\psi$ satisfy the linear systems~\eqref{H-M-LP-1} and~\eqref{H-M-LP-2} for some~$\tau$, respectively, then the relations \eqref{omega-1} are compatible in the sense that $\Delta_{i}(\phi \psi_j)=\Delta_{j}(\phi \psi_i)$ for $i<j$. Hence the potential $\omega$ is well-def\/ined. The following proposition gives the binary Darboux transformation of the Hirota--Miwa equation~\eqref{H-M-1}.
\begin{Proposition}\label{prop12-HM} For some $\tau$, let $\theta$ and $\phi$ be two non-zero solutions of the linear system~\eqref{H-M-LP-1}, and let $\rho$ and $\psi$ be two non-zero solutions of the linear system \eqref{H-M-LP-2}; then \begin{alignat*}{4} & \mathrm{BDT^{\theta, \rho}}\colon \quad && \phi\rightarrow\phi-\theta \omega(\theta, \rho)^{-1}\omega(\phi,\rho), \qquad && \tau\rightarrow\omega(\theta, \rho)\tau,& \\ & \mathrm{aBDT^{\theta, \rho}}\colon \quad && \psi\rightarrow\psi-\rho \omega(\theta, \rho)^{-1}\omega(\theta,\psi), \qquad && \tau\rightarrow\omega(\theta, \rho)\tau, & \end{alignat*} leave \eqref{H-M-LP-1} and \eqref{H-M-LP-2} respectively invariant. \end{Proposition} The $N$-fold iteration of these binary Darboux transformations is given below. \begin{Proposition}\label{prop12N-HM} Let $\bm\theta=\big(\theta^1,\dots,\theta^N\big)^T$ and $\bm\rho=\big(\rho^1,\dots,\rho^N\big)^T$ satisfy the linear systems~\eqref{H-M-LP-1} and~\eqref{H-M-LP-2} for some~$\tau$ respectively. Then \begin{alignat*}{3} & \phi\rightarrow\begin{vmatrix} \omega\big(\bm\theta, \bm \rho^T\big) & \bm\theta\\ \omega\big(\phi, \bm \rho^T\big) & \phi \\ \end{vmatrix} \big|\omega\big(\bm\theta, \bm \rho^T\big) \big|^{-1}, \qquad && \tau\rightarrow\big|\omega\big(\bm\theta, \bm \rho^T\big)\big|\tau,& \\ & \psi\rightarrow \begin{vmatrix} \omega\big(\bm\theta^T, \bm \rho\big) &\bm \rho \\ \omega\big( \bm\theta^T,\psi\big) & \psi \\ \end{vmatrix} \big|\omega\big(\bm\theta^T, \bm \rho\big) \big|^{-1},\qquad&& \tau\rightarrow\big|\omega\big(\bm\theta^T, \bm \rho\big)\big|\tau,& \end{alignat*} leave \eqref{H-M-LP-1} and \eqref{H-M-LP-2} respectively invariant.
Here $\omega\big(\bm\theta, \bm \rho^T\big)=\big(\omega\big(\theta^{(i)}, \rho^{(j)}\big)\big)_{i,j=1,\dots,N}$ and $\omega\big(\bm\theta^T, \bm\rho\big)=\omega\big(\bm\theta, \bm \rho^T\big)^T$ are $N\times N$ matrices, while $\omega\big(\phi, \bm \rho^T\big)= \big(\omega\big(\phi,\rho^{(j)}\big)\big)_{j=1,\dots,N}$ and $\omega\big(\bm \theta^T,\psi\big)=\big(\omega\big(\theta^{(i)},\psi\big)\big)_{i=1,\dots,N}$ are $N$-component row vectors. \end{Proposition} The proofs of the above propositions are straightforward computations, so we omit the details; the reader is also referred to the papers \cite{Nimmo-1997, Nimmo-Chaos-2000}. \subsection{Explicit solutions obtained by Darboux transformations} Here we present explicit examples of the classes of solutions that may be obtained by means of the Darboux transformations derived above. We choose the seed solution of the Hirota--Miwa equation \eqref{H-M-1} as $\tau=\tau_0=1$. With this choice, the f\/irst linear system \eqref{H-M-LP-1} reads \begin{gather*} \phi_i-\phi_j=(a_i-a_j)\phi, \qquad 1\leq i<j\leq3, \end{gather*} and the basic eigenfunctions, depending on a single parameter $p$, are found to be \begin{gather}\label{seed-linear-solu-1} \phi(n_1, n_2, n_3; p)=\prod_{i=1}^{3}(a_i+p)^{n_i}. \end{gather} In a similar way the basic eigenfunctions of the adjoint linear system~\eqref{H-M-LP-2}, depending on a single parameter $q$, are \begin{gather}\label{seed-linear-solu-2} \psi(n_1, n_2, n_3; q)=\prod_{i=1}^{3}(a_i+q)^{-n_i}. \end{gather} For these eigenfunctions we may integrate \eqref{omega-1} and obtain the potential \begin{gather}\label{seed-linear-solu-omeg} \omega(\phi, \psi)=\frac{1}{p-q}\prod_{i=1}^{3}\left(\frac{a_i+p}{a_i+q}\right)^{n_i}+c.
\end{gather} Given the above expressions, it is straightforward to write down the following explicit solutions of the Hirota--Miwa equation~\eqref{H-M-1}: \begin{gather}\label{n-solu-1} \tau(n_1, n_2, n_3)=C_{{_{[i]}}}\big(\theta^1, \theta^2, \dots, \theta^N\big)\tau_0, \end{gather} where $\theta^k=\alpha_k\theta(n_1, n_2, n_3; p_k)+\theta(n_1, n_2, n_3; p^\prime_k)$, in which $\theta(n_1,n_2,n_3; p_k)$ and $\theta(n_1,n_2,n_3; p_k^{\prime})$ are eigenfunctions of the form~\eqref{seed-linear-solu-1}, and $p_k$, $p^\prime_k$ (with $p_k\neq p^\prime_k$) and $\alpha_k$ are arbitrary constants; \begin{gather*} \tau(n_1, n_2, n_3)=C_{{_{[\b i]}}}\big(\rho^1, \rho^2, \dots, \rho^N\big)\tau_0, \end{gather*} where $\rho^k=\beta_k\rho(n_1, n_2, n_3; q_k)+\rho(n_1, n_2, n_3; q^\prime_k)$, in which $\rho(n_1,n_2,n_3; q_k)$ and $\rho(n_1,n_2,n_3; q_k^{\prime})$ are eigenfunctions of the form~\eqref{seed-linear-solu-2}, and $q_k$, $q^\prime_k$ (with $q_k\neq q^\prime_k$) and $\beta_k$ are arbitrary constants; and \begin{gather*} \tau(n_1, n_2, n_3)=\det (\omega_{k,l})\tau_0 , \qquad k, l= 1, 2, \dots, N, \end{gather*} where $\omega_{k,l}$ is given by~\eqref{seed-linear-solu-omeg} with $p=p_k$, $q=q_l$ and $c=c_{kl}$. \section{Discrete potential modif\/ied KdV equation}\label{sec-3} \subsection[From Hirota--Miwa equation to d-p-mKdV equation: 2-periodic reductions]{From Hirota--Miwa equation to d-p-mKdV equation:\\ 2-periodic reductions} Here, we explain how to obtain the d-p-mKdV equation \eqref{d-p-mKdV-1} from the Hirota--Miwa equa\-tion~\eqref{H-M-1} through a 2-periodic reduction technique. For the Hirota--Miwa equation \eqref{H-M-1}, from \eqref{n-solu-1}, it is easy to obtain its one-soliton solution in discrete exponential form \begin{gather}\label{tau-solution} \tau(n_1,n_2,n_3)=1+\alpha\prod_{i=1}^{3}\left(\frac{a_i+p}{a_i+p^\prime}\right)^{n_i}.
\end{gather} Introduce $f(n_1, n_2, n_3)$ and $\bar f(n_1, n_2, n_3)$ and impose a 2-periodic property on the $\tau$ func\-tion~\eqref{tau-solution} as follows: \begin{gather}\label{f-condition} \tau=f=T_{n_3}^2(f),\qquad \b f=T_{n_3}(f). \end{gather} Note here that the reduction condition \eqref{f-condition} gives $a_3=0$, $p^\prime=-p$, and \begin{gather}\label{bf-condition} \b f=T_{n_3}^2(\b f),\quad f=T_{n_3}(\b f). \end{gather} Moreover, \eqref{f-condition} and~\eqref{bf-condition} indicate the symmetry between~$f$ and~$\b f$ with respect to~$n_3$. By applying the reduction condition \eqref{f-condition} to the Hirota--Miwa equation~\eqref{H-M-1}, together with the parameter reduction $a_3=0$, we get \begin{subequations}\label{H-M-2} \begin{gather} (a_1-a_2)f_{12}\b f =a_1f_{2}\b f_1-a_2f_1\b f_{2},\label{Bilinear-f-g-1}\\ (a_1-a_2)\b f_{12}f =a_1\b f_{2}f_1-a_2\b f_1f_{2}.\label{Bilinear-f-g-2} \end{gather} \end{subequations} There are two ways to obtain equation \eqref{Bilinear-f-g-2}: one is to apply the symmetry between~$f$ and~$\b f$ to equation~\eqref{Bilinear-f-g-1}; the other is to apply the shift operator~$T_{n_3}$ to the Hirota--Miwa equation~\eqref{H-M-1} and use the reduction condition~\eqref{bf-condition}. Def\/ine two functions (potentials) \begin{gather}\label{potentials} v(n_1,n_2,n_3)=\frac{\b f}{f},\qquad u(n_1,n_2,n_3)=\frac{f_{12}f}{f_1f_{2}}.
\end{gather} By substituting \eqref{potentials} into \eqref{H-M-2}, we get \begin{subequations}\label{d-Miura} \begin{gather} (a_1-a_2)vu =a_1v_1-a_2v_{2},\label{d-Miura-1}\\ (a_1-a_2)v_{12}u =a_1v_{2}-a_2v_1.\label{d-Miura-2} \end{gather} \end{subequations} Eliminating $u$ in \eqref{d-Miura} gives \begin{gather}\label{d-p-mKdV} Q(v,v_1,v_2,v_{12};a_1,a_2)\equiv v_{12}(a_1 v_{1}-a_2v_2)-v(a_1v_2-a_2v_1)=0, \end{gather} which is the d-p-mKdV equation \eqref{d-p-mKdV-1} and is exactly the same as the one f\/irst given by Nijhof\/f, cf.~\cite{Nijhoff-2009}, through the Cauchy matrix approach. Moreover, the relation \eqref{d-Miura} serves as the discrete Miura transformation between the d-KdV equation \begin{gather*} \frac{1}{u_1}-\frac{1}{u_2}=\frac{a_1-a_2}{a_1+a_2}\left (u_{12}-u\right), \end{gather*} in the potential $u$ (or, more specif\/ically, say~$u_{\b 2}$~\cite{Shi-2014}) and the d-p-mKdV equation~\eqref{d-p-mKdV} in the potential~$v$. Another interesting result is that, with the periodic property of $f$ and $\b f$, we have the following formulae for the potentials~$u$ and~$v$: \begin{gather*} T_{n_3}(u)=uv_{12}v v_1^{-1} v_2^{-1},\qquad T_{n_3}^2(u)=u, \qquad T_{n_3}(v)=v^{-1}, \qquad T_{n_3}^2(v)=v. \end{gather*} So the potentials $u$ and $v$ also satisfy the 2-periodic property in the virtual variable~$n_3$. We observe that if $v$ is a solution of the d-p-mKdV equation then, as in the continuous case, $-v$ is a solution; but in the discrete case, $v^{-1}$ is yet another solution.
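Both symmetries follow from elementary algebraic identities, $Q(-v,-v_1,-v_2,-v_{12})=Q(v,v_1,v_2,v_{12})$ and $Q(v^{-1},v_1^{-1},v_2^{-1},v_{12}^{-1})=-Q(v,v_1,v_2,v_{12})/(v v_1 v_2 v_{12})$, so $-v$ and $v^{-1}$ solve the equation whenever $v$ does. The following pure-Python sketch verifies these identities numerically and also checks that the explicit one-soliton solution constructed in the next paragraph satisfies $Q=0$ (the parameter values are arbitrary choices made here for illustration):

```python
import random
from itertools import product

random.seed(7)
a1, a2 = 1.0, 2.0

def Q(v, v1, v2, v12):
    # Q(v, v1, v2, v12; a1, a2) = v12 (a1 v1 - a2 v2) - v (a1 v2 - a2 v1)
    return v12 * (a1 * v1 - a2 * v2) - v * (a1 * v2 - a2 * v1)

# (i) algebraic symmetries on arbitrary field values
for _ in range(100):
    v, v1, v2, v12 = (random.uniform(0.5, 2.0) for _ in range(4))
    q = Q(v, v1, v2, v12)
    assert abs(Q(-v, -v1, -v2, -v12) - q) < 1e-12                      # negation
    assert abs(Q(1/v, 1/v1, 1/v2, 1/v12) + q / (v * v1 * v2 * v12)) < 1e-12  # inversion

# (ii) the one-soliton solution v = bar-f / f (anticipating the formulae below)
p, alpha, n3 = 0.4, 0.7, 0
r1, r2 = (a1 + p) / (a1 - p), (a2 + p) / (a2 - p)

def v_soliton(n1, n2):
    g = alpha * (-1)**n3 * r1**n1 * r2**n2
    return (1 - g) / (1 + g)

for n1, n2 in product(range(-3, 4), repeat=2):
    assert abs(Q(v_soliton(n1, n2), v_soliton(n1 + 1, n2),
                 v_soliton(n1, n2 + 1), v_soliton(n1 + 1, n2 + 1))) < 1e-10
```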
Under the reduction condition \eqref{f-condition}, from the $\tau$ function~\eqref{tau-solution} we easily get the exact solution of~\eqref{H-M-2} \begin{gather*} f(n_1,n_2,n_3)=1+\alpha(-1)^{n_3}\prod_{i=1}^{2}\left(\frac{a_i+p}{a_i-p}\right)^{n_i},\\ \b f(n_1,n_2,n_3)=1-\alpha(-1)^{n_3}\prod_{i=1}^{2}\left(\frac{a_i+p}{a_i-p}\right)^{n_i}, \end{gather*} which directly gives the one-soliton solution of the d-p-mKdV equation \begin{gather*} v(n_1,n_2)=\frac{\b f}{f}=\frac{1-\alpha(-1)^{n_3} \prod\limits_{i=1}^{2}\left(\frac{a_i+p}{a_i-p}\right)^{n_i}}{1+\alpha(-1)^{n_3} \prod\limits_{i=1}^{2}\left(\frac{a_i+p}{a_i-p}\right)^{n_i}}. \end{gather*} Note here that in equation \eqref{d-p-mKdV} there is no shift in the discrete variable~$n_3$, so treating $n_3$ as a virtual variable for the d-p-mKdV equation is permissible. Next, we show how the matrix-form linear system of the d-p-mKdV equation~\eqref{d-p-mKdV} arises from the linear system of the Hirota--Miwa equation \eqref{H-M-LP-1} through the $2$-periodic reduction technique. Introduce eigenfunctions $\phi(n_1,n_2,n_3)$ and $\b \phi(n_1, n_2, n_3)$ and impose a 2-periodic condition on the eigenfunction~$\phi(n_1,n_2,n_3)$ in the linear system~\eqref{H-M-LP-1} as follows: \begin{gather}\label{phi-condition} \phi=\lambda^{-2}T_{n_3}^2(\phi),\qquad \b\phi=\lambda^{-1} T_{n_3}(\phi), \end{gather} where the parameter $\lambda$ serves as the spectral parameter. From \eqref{phi-condition}, we have \begin{gather}\label{bphi-condition} \b\phi=\lambda^{-2}T_{n_3}^2(\b\phi),\qquad \phi=\lambda^{-1} T_{n_3}(\b\phi). \end{gather} So \eqref{phi-condition} and \eqref{bphi-condition} express the symmetry between~$\phi$ and~$\b\phi$ with respect to~$n_3$.
By applying the reduction conditions \eqref{f-condition} and \eqref{phi-condition}, together with $a_3=0$, to the linear system~\eqref{H-M-LP-1}, we get \begin{subequations}\label{H-M-LP2} \begin{gather} \phi_{1}-\phi_{2} =(a_1-a_2)\frac{f_{12}f}{f_{1}f_{2}}\phi,\label{H-M-LP2-1}\\ \phi_{2}-\lambda\b\phi =a_2\frac{\b f_{2}f}{f_{2}\b f}\phi, \label{H-M-LP2-2}\\ \lambda\b\phi-\phi_{1} =-a_1\frac{\b f_1 f}{f_1 \b f}\phi.\label{H-M-LP2-3} \end{gather} \end{subequations} Then, by using the symmetry \eqref{bf-condition} between~$f$ and~$\b f$ and the symmetry \eqref{bphi-condition} between $\phi$ and~$\b\phi$, we get \begin{subequations}\label{H-M-LP3} \begin{gather} \b\phi_{1}-\b\phi_{2}=(a_1-a_2)\frac{\b f_{12}\b f}{\b f_{1}\b f_{2}}\b\phi,\label{H-M-LP3-1}\\ \b\phi_{2}-\lambda\phi=a_2\frac{f_{2}\b f}{\b f_{2}f}\b\phi, \label{H-M-LP3-2}\\ \lambda\phi-\b\phi_{1}=-a_1\frac{f_1 \b f}{\b f_1f}\b\phi.\label{H-M-LP3-3} \end{gather} \end{subequations} Substituting \eqref{potentials} into \eqref{H-M-LP2} and \eqref{H-M-LP3} gives \begin{subequations}\label{H-M-LP4} \begin{gather} \phi_{1}-\phi_{2} =(a_1-a_2)u\phi,\label{H-M-LP4-1}\\ \phi_{2}-\lambda\b\phi =a_2v_2v^{-1}\phi, \label{H-M-LP4-2}\\ \lambda\b\phi-\phi_{1} =-a_1v_1 v^{-1}\phi,\label{H-M-LP4-3} \end{gather} \end{subequations} and \begin{subequations}\label{H-M-LP5} \begin{gather} \b\phi_{1}-\b\phi_{2} =(a_1-a_2)uv_{12}v v_{1}^{-1}v_2^{-1}\b\phi,\label{H-M-LP5-1}\\ \b\phi_{2}-\lambda\phi =a_2v_2^{-1}v\b\phi, \label{H-M-LP5-2}\\ \lambda\phi-\b\phi_{1} =-a_1v_1^{-1}v\b\phi.\label{H-M-LP5-3} \end{gather} \end{subequations} Through the discrete Miura transformation \eqref{d-Miura}, the equations \eqref{H-M-LP4-1} and \eqref{H-M-LP5-1} can be derived from \eqref{H-M-LP4-2} and \eqref{H-M-LP4-3}, and from \eqref{H-M-LP5-2} and \eqref{H-M-LP5-3}, respectively.
Def\/ining a vector eigenfunction $\bm\Phi=(\phi,\b\phi)^T$, which satisf\/ies the condition \eqref{phi-condition}, we can write \eqref{H-M-LP4-3} and \eqref{H-M-LP5-3}, \eqref{H-M-LP4-2} and \eqref{H-M-LP5-2}, respectively, in matrix form as below \begin{subequations}\label{matrix-LP} \begin{gather} \bm\Phi_1=\bm L\bm\Phi,\label{matrix-LP-1}\\ \bm\Phi_2=\bm M\bm\Phi, \label{matrix-LP-2} \end{gather} \end{subequations} where \begin{gather*} \bm L=\left( \begin{matrix} a_1v_1v^{-1} & \lambda \\ \lambda & a_1v_1^{-1}v \end{matrix} \right) ,\qquad \bm M=\left( \begin{matrix} a_2v_2v^{-1} & \lambda \\ \lambda & a_2v_2^{-1}v \end{matrix} \right). \end{gather*} One then f\/inds that \begin{gather*} 0=\bm\Phi_{12}-\bm\Phi_{21}=(\bm L_2\bm M-\bm M_1\bm L)\bm\Phi=\lambda Q(v,v_{1},v_{2},v_{{12}};a_1,a_2) \left( \begin{matrix} 0 & -v_1^{-1}v_2^{-1} \\ v^{-1} v_{12}^{-1} & 0 \end{matrix} \right)\bm\Phi. \end{gather*} So the compatibility condition of the above linear system \eqref{matrix-LP} in the eigenfunction~$\bm\Phi$ is that~$v$ obeys the d-p-mKdV equation~\eqref{d-p-mKdV}. \subsection{Darboux and binary Darboux transformations}\label{sec-3-1} In this section, we will see that, through the reduction conditions~\eqref{f-condition} and~\eqref{phi-condition}, it is easy to investigate the Darboux and binary Darboux transformations of the d-p-mKdV equation. Let $v$ be a solution of the d-p-mKdV equation \eqref{d-p-mKdV} and $\bm\Phi=(\phi, \b\phi)^T$ be a vector solution of its Lax pair~\eqref{matrix-LP}. The fundamental Darboux transformation of the d-p-mKdV equation is given as below.
\begin{Proposition}\label{prop1-dpmKdV} Suppose $(\theta,\b\theta)^T$, which holds the $2$-periodic property $\theta = \mu^{ -2}T_{_{n_3}}^2 (\theta)$, $\b\theta = \mu^{ -1}T_{_{n_3}} (\theta)$, is a vector solution of the linear system~\eqref{matrix-LP} by taking $\lambda=\mu$ for some $v$, then \begin{gather}\label{dpmKdV-DT-1-1} \mathrm {DT}^{\theta,\b\theta}\colon \ \phi\rightarrow\frac{C_{_{[3]}}(\theta, \phi)} {\theta},\qquad \b\phi\rightarrow\frac{C_{_{[3]}}(\b\theta, \b\phi)} {\b\theta},\qquad v\rightarrow \frac{T_{_{n_3}}(\theta)}{\theta}v=\mu^2\frac{\b\theta}{T_{_{n_3}}(\b\theta)}v \end{gather} leaves \eqref{matrix-LP} invariant. Moreover, \begin{gather*} \frac{C_{_{[3]}}(\b\theta, \b\phi)} {\b\theta}=\lambda^{-1}T_{_{n_3}}\left(\frac{C_{_{[3]}}(\theta, \phi)} {\theta}\right),\qquad \frac{C_{_{[3]}}(\theta, \phi)} {\theta}=\lambda^{-1}T_{_{n_3}}\left(\frac{C_{_{[3]}}(\b\theta, \b\phi)} {\b\theta}\right). \end{gather*} \end{Proposition} We remark that one may also write the gauge transformation of $\bm\Phi=(\phi, \b\phi)^T$ in~\eqref{dpmKdV-DT-1-1} in matrix form as follows \begin{gather*} \mathrm {DT}^{\theta,\b\theta}\colon \ \bm\Phi\rightarrow\left(\begin{matrix}-\mu\theta^{-1}\b\theta & \lambda \\ \lambda& -\mu\theta{\b\theta}^{-1}\end{matrix}\right)\bm\Phi. \end{gather*} However, for later convenience in constructing the binary Darboux transformation, we write it here in the scalar form shown in~\eqref{dpmKdV-DT-1-1}. Next we write down the closed form expression for the result of $N$ applications of the above Darboux transformation, which gives solutions in Casoratian determinant form.
\begin{Proposition}\label{prop1N-dpmKdV} Let $\big(\theta^{1}, \b\theta^{1}\big)^T , \big(\theta^{2},\b\theta^{2}\big)^T , \dots, \big(\theta^{N}, \b\theta^{N}\big)^T$, satisfying $\theta^{k} = \lambda_k^{-2}T_{_{n_3}}^2 (\theta^{k})$, $\b\theta^{k} = \lambda_k^{-1}T_{_{n_3}} (\theta^{k})$, be~$N$ non-zero independent vector solutions of the linear system~\eqref{matrix-LP} by taking $\lambda=\lambda_k$, $k=1, 2, \dots, N$, for some~$v$, such that $C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)\neq0$. Then \begin{subequations}\label{N-DT1} \begin{gather} \phi\rightarrow\t\phi=\frac{C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N},\phi\big)}{C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)}, \qquad \b\phi\rightarrow\t{\b\phi}=\frac{C_{_{[3]}}\big(\b\theta^{1},\b\theta^{2}, \dots, \b\theta^{N},\b\phi\big)}{C_{_{[3]}}\big(\b\theta^{1},\b\theta^{2}, \dots, \b\theta^{N}\big)},\\ v\rightarrow \t v=\frac{T_{_{n_3}} \big(C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)\big)}{C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)}v =\prod_{i=1}^{N}\lambda_i^2\frac{C_{_{[3]}}\big(\b\theta^{1},\b\theta^{2}, \dots, \b\theta^{N}\big)}{T_{_{n_3}} \big(C_{_{[3]}}\big(\b\theta^{1},\b\theta^{2}, \dots, \b\theta^{N}\big)\big)}v, \end{gather} \end{subequations} leaves \eqref{matrix-LP} invariant. Moreover, $\t{\b\phi}=\lambda^{-1}T_{_{n_3}} (\t\phi)$, $\t{\phi}=\lambda^{-1}T_{_{n_3}} \big(\t{\b\phi}\big)$. \end{Proposition} The d-p-mKdV equation \eqref{d-p-mKdV} is invariant with respect to the reversal of all lattice directions $n_i\rightarrow -n_i$, $i=1,2$.
But its linear system~\eqref{matrix-LP} does not have such invariance, and so the ref\/lections $n_i\rightarrow -n_i$, $i=1,2$, acting on~\eqref{matrix-LP} give a second linear system for the vector eigenfunction $\bm\Psi=(\psi,\b \psi)^T$, which also satisf\/ies the $2$-periodic reduction condition \begin{gather}\label{psi-condition} \psi=\lambda^{-2}T_{_{n_3}}^{-2}(\psi), \qquad \b\psi=\lambda^{-1}T_{_{n_3}}^{-1}(\psi), \end{gather} as follows \begin{subequations}\label{matrix-LP-2+} \begin{gather} \bm\Psi_{\b1}=\bm U\bm\Psi,\label{matrix-LP-2-1}\\ \bm\Psi_{\b2}=\bm V\bm\Psi, \label{matrix-LP-2-2} \end{gather} \end{subequations} where \begin{gather*} \bm U=\left( \begin{matrix} a_1v_{\b1}v^{-1} & \lambda \\ \lambda & a_1v_{\b 1}^{-1}v \end{matrix} \right) ,\qquad \bm V=\left( \begin{matrix} a_2v_{\b2}v^{-1} & \lambda \\ \lambda & a_2v_{\b2}^{-1}v \end{matrix} \right). \end{gather*} One then f\/inds that \begin{gather*} 0=\bm\Psi_{\b1\b2}-\bm\Psi_{\b2\b1}=(\bm U_{\b2}\bm V-\bm V_{\b1}\bm U)\bm\Psi=\lambda Q\big(v,v_{\b1},v_{\b2},v_{\b{12}};a_1,a_2\big) \left( \begin{matrix} 0 & -v_{\b1}^{-1}v_{\b2}^{-1} \\ v^{-1} v_{\b1\b2}^{-1} & 0 \end{matrix} \right)\bm\Psi. \end{gather*} Now we apply the ref\/lections $n_i \rightarrow -n_i$, $i=1, 2$, in order to deduce the Darboux transformation for the second linear system as below. \begin{Proposition}\label{prop2-dpmKdV} Suppose $(\rho,\b\rho)^T$, which holds the $2$-periodic property $\rho=\mu^{-2}T_{_{n_3}}^{-2}(\rho)$, $\b\rho=\mu^{-1}T_{_{n_3}}^{-1}(\rho)$, is a vector solution of the linear system~\eqref{matrix-LP-2+} by taking $\lambda=\mu$ for some $v$, then \begin{gather*} \mathrm {DT}^{\rho, \b\rho}\colon \ \psi\rightarrow\frac{C_{_{[\b 3]}}(\rho, \psi)} {\rho},\qquad \b\psi\rightarrow\frac{C_{_{[\b 3]}}(\b\rho, \b\psi)} {\b\rho} ,\qquad v\rightarrow \frac{T_{_{n_3}}^{-1}(\rho)}{\rho}v=\mu^2\frac{\b\rho}{T_{_{n_3}}^{-1}(\b\rho)}v \end{gather*} leaves \eqref{matrix-LP-2+} invariant.
Moreover, \begin{gather*} \frac{C_{_{[\b 3]}}(\b\rho, \b\psi)} {\b\rho}=\lambda^{-1}T_{_{n_3}}^{-1} \left(\frac{C_{_{[\b 3]}}(\rho, \psi)} {\rho}\right), \qquad \frac{C_{_{[\b 3]}}(\rho, \psi)} {\rho}=\lambda^{-1}T_{_{n_3}}^{-1} \left(\frac{C_{_{[\b 3]}}(\b\rho, \b\psi)} {\b\rho}\right). \end{gather*} \end{Proposition} Next we write down the closed form expression for the result of $N$ applications of the above Darboux transformation, which gives solutions in Casoratian determinant form. \begin{Proposition}\label{prop2N-dpmKdV} Let $\big(\rho^{1}, \b\rho^{1}\big)^T, \big(\rho^{2},\b\rho^{2}\big)^T, \dots, \big(\rho^{N}, \b\rho^{N}\big)^T$, satisfying $\rho^{k} = \lambda_k^{-2}T_{_{n_3}}^{-2}(\rho^{k})$, $\b\rho^{k} = \lambda_k^{-1}T_{_{n_3}}^{-1} (\rho^{k})$, be $N$ non-zero independent vector solutions of the linear system~\eqref{matrix-LP-2+} by taking $\lambda=\lambda_k$, $k=1, 2, \dots, N$, for some $v$, such that $C_{_{[\b 3]}}(\rho^{1},\rho^{2}, \dots, \rho^{N})\neq0$. Then \begin{gather*} \psi\rightarrow\t\psi=\frac{C_{_{[\b 3]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N},\psi\big)}{C_{_{[\b 3]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)}, \qquad \b\psi\rightarrow\t{\b\psi}=\frac{C_{_{[\b 3]}}\big(\b\rho^{1},\b\rho^{2}, \dots, \b\rho^{N},\b\psi\big)}{C_{_{[\b 3]}}\big(\b\rho^{1},\b\rho^{2}, \dots, \b\rho^{N}\big)},\\ v\rightarrow \t v=\frac{T_{_{n_3}}^{-1} \big(C_{_{[\b 3]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)\big)}{C_{_{[\b 3]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)}v =\prod_{i=1}^{N}\lambda_i^2\frac{C_{_{[\b 3]}}\big(\b\rho^{1},\b\rho^{2}, \dots, \b\rho^{N}\big)}{T_{_{n_3}}^{-1} \big(C_{_{[\b 3]}}\big(\b\rho^{1}, \b\rho^{2}, \dots, \b\rho^{N}\big)\big)}v, \end{gather*} leaves \eqref{matrix-LP-2+} invariant. Moreover, $\t{\b\psi}=\lambda^{-1}T_{_{n_3}}^{-1} (\t\psi )$, $\t{\psi}=\lambda^{-1}T_{_{n_3}}^{-1} \big(\t{\b\psi}\big)$.
\end{Proposition} To construct a binary Darboux transformation, we introduce the potentials $\omega=\omega(\phi, \psi)$ and $\b\omega=\omega(\b\phi, \b\psi)$, def\/ined by the relations \begin{subequations}\label{dpmKdV-omega} \begin{gather} \Delta_3(\omega(\phi, \psi)) =\phi T_{_{n_3}} (\psi), \label{d-p-mKdV-omega-1}\\ \Delta_3(\omega(\b\phi, \b\psi)) =\b\phi T_{_{n_3}} (\b\psi). \label{d-p-mKdV-omega-2} \end{gather} \end{subequations} Here $(\phi, \b\phi)$ and $(\psi,\b\psi)$ satisfy the linear systems~\eqref{matrix-LP} and~\eqref{matrix-LP-2+} for some~$v$, respectively. Then, together with the reductions~\eqref{phi-condition} and~\eqref{psi-condition}, we have the reduction conditions for $(\omega,\b\omega)^T$ as follows \begin{gather*} T_{_{n_3}}(\omega(\phi, \psi))=\omega(\b\phi, \b\psi),\qquad T_{_{n_3}}^2(\omega(\phi, \psi))=\omega(\phi, \psi),\\ T_{_{n_3}}(\omega(\b\phi, \b\psi))=\omega(\phi, \psi),\qquad T_{_{n_3}}^2(\omega(\b\phi,\b\psi))=\omega(\b\phi, \b\psi). \end{gather*} In particular, \begin{gather*} T_{_{n_3}}(\omega(\phi, \rho))=\lambda\mu^{-1}\omega(\b\phi, \b\rho), \qquad T_{_{n_3}}^2(\omega(\phi, \rho))=\lambda^2\mu^{-2}\omega(\phi, \rho), \\ T_{_{n_3}}(\omega(\b\phi, \b\rho))=\lambda\mu^{-1}\omega(\phi, \rho), \qquad T_{_{n_3}}^2(\omega(\b\phi, \b\rho))=\lambda^2\mu^{-2}\omega(\b\phi, \b\rho); \\ T_{_{n_3}}(\omega(\theta, \psi))=\lambda^{-1}\mu~\omega(\b\theta, \b\psi),\qquad T_{_{n_3}}^2(\omega(\theta, \psi))=\lambda^{-2}\mu^2\omega(\theta, \psi),\\ T_{_{n_3}}(\omega(\b\theta, \b\psi))=\lambda^{-1}\mu~\omega(\theta, \psi),\qquad T_{_{n_3}}^2(\omega(\b\theta, \b\psi))=\lambda^{-2}\mu^2\omega(\b\theta, \b\psi); \\ T_{_{n_3}}(\omega(\theta, \rho))=\omega(\b\theta, \b\rho),\qquad T_{_{n_3}}^2(\omega(\theta, \rho))=\omega(\theta, \rho),\\ T_{_{n_3}}(\omega(\b\theta, \b\rho))=\omega(\theta, \rho),\qquad T_{_{n_3}}^2(\omega(\b\theta,\b\rho))=\omega(\b\theta, \b\rho); \end{gather*} and \begin{gather*} T_{_{n_3}}(\omega(\theta^k,
\rho^l))=\left(\frac{\lambda_k}{\lambda_l}\right)\omega(\b\theta^k, \b\rho^l),\qquad T_{_{n_3}}^2(\omega(\theta^k, \rho^l))=\left(\frac{\lambda_k}{\lambda_l}\right)^2\omega(\theta^k, \rho^l),\\ T_{_{n_3}}(\omega(\b\theta^k, \b\rho^l))=\left(\frac{\lambda_k}{\lambda_l}\right)\omega(\theta^k, \rho^l),\qquad T_{_{n_3}}^2(\omega(\b\theta^k, \b\rho^l))=\left(\frac{\lambda_k}{\lambda_l}\right)^2\omega(\b\theta^k, \b\rho^l). \end{gather*} The following proposition gives the binary Darboux transformation of the d-p-mKdV equation. \begin{Proposition}\label{binary-dpmKdV} For some $v$, let $(\theta,\b\theta)^T$ and $(\phi,\b\phi)^T$ be two non-zero vector solutions of the linear system \eqref{matrix-LP}, respectively corresponding to spectral parameters~$\mu$ and~$\lambda$; $(\rho,\b\rho)^T$ and $(\psi, \b\psi)^T$ be two non-zero vector solutions of the linear system~\eqref{matrix-LP-2+}, respectively corresponding to spectral parameters~$\mu$ and~$\lambda$, then \begin{alignat}{3} & \mathrm{BDT}\colon \quad && \phi \rightarrow \phi-\theta \omega(\theta, \rho)^{-1}\omega(\phi,\rho),~\b\phi \rightarrow \b\phi-\b\theta \omega(\b\theta, \b\rho)^{-1}\omega(\b\phi,\b\rho),& \nonumber\\ &&& v \rightarrow \frac{T_{_{n_3}} (\omega(\theta, \rho))}{\omega(\theta, \rho)}v = \frac{\omega(\b\theta, \b\rho)}{T_{_{n_3}} (\omega(\b\theta, \b\rho))}v,& \label{dpmKdV-bDT-1}\\ & \mathrm{aBDT}\colon \quad && \psi \rightarrow \psi-\rho \omega(\theta, \rho)^{-1}\omega(\theta,\psi),~\b\psi \rightarrow \b\psi-\b\rho\omega(\b\theta, \b\rho)^{-1}\omega(\b\theta,\b\psi),&\nonumber\\ &&& v \rightarrow \frac{T_{_{n_3}} (\omega(\theta, \rho))}{\omega(\theta, \rho)}v = \frac{\omega(\b\theta, \b\rho)}{T_{_{n_3}} (\omega(\b\theta, \b\rho))}v, & \label{dpmKdV-bDT-2} \end{alignat} leave \eqref{matrix-LP} and \eqref{matrix-LP-2+} respectively invariant.
Moreover, \begin{gather*} \b\phi-\b\theta \omega(\b\theta, \b\rho)^{-1}\omega(\b\phi,\b\rho)=\lambda^{-1}T_{_{n_3}}\big(\phi- \theta \omega(\theta, \rho)^{-1}\omega(\phi,\rho)\big),\\ \phi-\theta \omega(\theta, \rho)^{-1} \omega(\phi, \rho) = \lambda^{-1}T_{_{n_3}} \big(\b\phi - \b\theta \omega(\b\theta, \b\rho)^{-1} \omega(\b\phi,\b\rho)\big),\\ \b\psi - \b\rho \omega(\b\theta, \b\rho)^{-1}\omega(\b\theta,\b\psi) = \lambda^{-1}T_{_{n_3}}^{-1} \big(\psi - \rho \omega(\theta, \rho)^{-1}\omega(\theta,\psi)\big),\\ \psi - \rho \omega(\theta, \rho)^{-1}\omega(\theta,\psi) = \lambda^{-1}T_{_{n_3}}^{-1} \big(\b\psi - \b\rho \omega(\b\theta, \b\rho)^{-1}\omega(\b\theta,\b\psi)\big). \end{gather*} \end{Proposition} The $N$-fold iteration of these binary Darboux transformations is given below. \begin{Proposition}\label{N-binary-dpmKdV} Let $\big(\theta^{1} , \b\theta^{1}\big)^T , \big(\theta^{2} ,\b\theta^{2}\big)^T \!, \dots, \big(\theta^{N} , \b\theta^{N}\big)^T $ and $\big(\rho^{1} , \b\rho^{1}\big)^T , \big(\rho^{2} ,\b\rho^{2}\big)^T \!, \dots, \big(\rho^{N} , \b\rho^{N}\big)^T $ be $N$ independent vector solutions of the linear systems~\eqref{matrix-LP} and~\eqref{matrix-LP-2+}, respectively, for some~$v$, obtained by taking $\lambda=\lambda_k$, $k=1, 2, \dots, N$, and holding $\theta^{k} = \lambda_k^{-2}T_{_{n_3}}^2(\theta^{k})$, $\b\theta^{k} = \lambda_k^{-1}T_{_{n_3}}(\theta^{k})$, and $\rho^{k} = \lambda_k^{-2}T_{_{n_3}}^{-2}(\rho^{k})$, $\b\rho^{k} = \lambda_k^{-1}T_{_{n_3}}^{-1}(\rho^{k})$.
Then \begin{gather*} \phi \rightarrow \h\phi = \frac{ \begin{vmatrix} \omega\big(\bm\theta, \bm \rho^T \big) & \bm\theta\\ \omega\big(\phi, \bm \rho^T\big) & \phi \end{vmatrix}} {\big|\omega\big(\bm\theta, \bm \rho^T \big) \big|} , \qquad \b\phi \rightarrow \h{\b\phi} = \frac{ \begin{vmatrix} \omega\big(\bm{\b\theta}, \bm{\b \rho}^T \big) & \bm{\b\theta}\\ \omega\big(\b\phi, \bm {\b\rho}^T \big) & \b\phi \end{vmatrix}} {\big|\omega\big(\bm{\b\theta} , \bm {\b\rho^T}\big)\big |}, \\ v \rightarrow \h v = \frac{\big|T_{_{n_3}} \big(\omega(\bm\theta, \bm \rho^T )\big) \big|}{\big|\omega\big(\bm\theta, \bm \rho^T \big) \big|}v = \frac{\big|\omega\big(\bm{\b\theta} , \bm {\b\rho}^T \big) \big|}{\big|T_{_{n_3}} \big(\omega\big(\bm{\b\theta} , \bm {\b\rho}^T \big)\big) \big|}v, \end{gather*} and \begin{gather*} \psi \rightarrow \h\psi = \frac{ \begin{vmatrix} \omega\big(\bm\theta^T, \bm \rho\big) & \bm\rho\\ \omega\big(\bm\theta^T, \psi\big) & \psi \end{vmatrix}} {\big|\omega\big(\bm\theta^T , \bm \rho\big) \big|}, \qquad \b\psi \rightarrow \h{\b\psi} = \frac{ \begin{vmatrix} \omega\big(\bm{\b\theta}^T, \bm{\b \rho}\big) & \bm{\b\rho}\\ \omega\big(\bm {\b\theta}^T,\b\psi\big) & \b\psi \end{vmatrix}} {\big|\omega\big(\bm{\b\theta}^T , \bm {\b\rho}\big) \big|}, \\ v \rightarrow \h v = \frac{\big|T_{_{n_3}} \big(\omega\big(\bm\theta^T , \bm \rho\big)\big) \big|}{\big|\omega\big(\bm\theta^T , \bm \rho\big)\big|}v = \frac{\big|\omega\big(\bm{\b\theta}^T , \bm {\b\rho} \big) \big|}{\big|T_{_{n_3}} \big(\omega\big(\bm{\b\theta}^T , \bm {\b\rho} \big)\big) \big|}v, \end{gather*} leave \eqref{matrix-LP} and \eqref{matrix-LP-2+} respectively invariant, where $\bm\theta=\big(\theta^1,\dots,\theta^N\big)^T$ and $\bm\rho=\big(\rho^1,\dots,\rho^N\big)^T$. Moreover, $\h{\b\phi}=\lambda^{-1}T_{_{n_3}}(\h\phi)$, $\h{\phi}=\lambda^{-1}T_{_{n_3}}(\h{\b\phi})$, $\h{\b\psi}=\lambda^{-1}T_{_{n_3}}^{-1}(\h\psi)$, $\h{\psi}=\lambda^{-1}T_{_{n_3}}^{-1}(\h{\b\psi})$.
\end{Proposition} \subsection{Explicit solutions obtained by Darboux transformations} Here we present explicit examples of the classes of solutions that may be obtained by means of the Darboux transformations derived above. We choose the seed solution of the d-p-mKdV equation~\eqref{d-p-mKdV} as $v=v_0=1$. With this choice, the f\/irst linear system \eqref{matrix-LP} reads \begin{gather*} \phi_1=a_1\phi+\lambda\b\phi,\qquad \b\phi_1=a_1\b\phi+\lambda\phi,\\ \phi_2=a_2\phi+\lambda\b\phi,\qquad \b\phi_2=a_2\b\phi+\lambda\phi, \end{gather*} and the eigenfunctions are found to be \begin{subequations}\label{dpmKdV-seed-linear-solu-1} \begin{gather} \phi(n_1, n_2, n_3; \lambda)=\lambda^{n_3}\prod_{i=1}^{2}(a_i+\lambda)^{n_i}+(-\lambda)^{n_3}\prod_{i=1}^{2}(a_i-\lambda)^{n_i},\\ \b\phi(n_1, n_2, n_3; \lambda)=\lambda^{n_3}\prod_{i=1}^{2}(a_i+\lambda)^{n_i}-(-\lambda)^{n_3}\prod_{i=1}^{2}(a_i-\lambda)^{n_i}, \end{gather} \end{subequations} which hold $\phi=\lambda^{-2}T_{_{n_3}}^{2}(\phi)$, $\b\phi=\lambda^{-1}T_{_{n_3}}(\phi)$. In a similar way the eigenfunctions of the second linear system \eqref{matrix-LP-2+} are \begin{subequations}\label{dpmKdV-seed-linear-solu-2} \begin{gather} \psi(n_1, n_2, n_3; \lambda)=\lambda^{-n_3}\prod_{i=1}^{2}(a_i+\lambda)^{-n_i}+(-\lambda)^{-n_3}\prod_{i=1}^{2}(a_i-\lambda)^{-n_i},\\ \b\psi(n_1, n_2, n_3; \lambda)=\lambda^{-n_3}\prod_{i=1}^{2}(a_i+\lambda)^{-n_i}-(-\lambda)^{-n_3}\prod_{i=1}^{2}(a_i-\lambda)^{-n_i}, \end{gather} \end{subequations} which hold $\psi=\lambda^{-2}T_{_{n_3}}^{-2}(\psi)$, $\b\psi=\lambda^{-1}T_{_{n_3}}^{-1}(\psi)$.
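These eigenfunctions can be verified directly: writing $P=\lambda^{n_3}\prod_i(a_i+\lambda)^{n_i}$ and $M=(-\lambda)^{n_3}\prod_i(a_i-\lambda)^{n_i}$, a shift in $n_i$ multiplies $P$ by $a_i+\lambda$ and $M$ by $a_i-\lambda$, while a shift in $n_3$ multiplies them by $\lambda$ and $-\lambda$, respectively. The sympy sketch below confirms the linear system with seed $v_0=1$ together with the $2$-periodic reduction:

```python
import sympy as sp

lam, a1, a2 = sp.symbols('lambda a1 a2')
# P and M stand for the two plane-wave factors; their shifts act
# multiplicatively, as described in the text.
P, M = sp.symbols('P M')

phi, phib = P + M, P - M
phi1, phib1 = (a1 + lam) * P + (a1 - lam) * M, (a1 + lam) * P - (a1 - lam) * M
phi2, phib2 = (a2 + lam) * P + (a2 - lam) * M, (a2 + lam) * P - (a2 - lam) * M

# the first linear system with seed v0 = 1
assert sp.expand(phi1 - (a1 * phi + lam * phib)) == 0
assert sp.expand(phib1 - (a1 * phib + lam * phi)) == 0
assert sp.expand(phi2 - (a2 * phi + lam * phib)) == 0
assert sp.expand(phib2 - (a2 * phib + lam * phi)) == 0

# 2-periodic reduction: T_{n3}(phi) = lam*phib and T_{n3}^2(phi) = lam^2*phi
T_phi = lam * P - lam * M
T2_phi = lam**2 * P + lam**2 * M
assert sp.expand(T_phi - lam * phib) == 0
assert sp.expand(T2_phi - lam**2 * phi) == 0
```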
For these eigenfunctions \eqref{dpmKdV-seed-linear-solu-1} and \eqref{dpmKdV-seed-linear-solu-2} above we may integrate \eqref{dpmKdV-omega} and obtain the potentials \begin{subequations}\label{dpmKdV-seed-linear-solu-omeg} \begin{gather} \omega(\phi, \psi)=\frac{1}{2\lambda}\left[(-1)^{n_3}\prod_{i=1}^{2}\left(\frac{a_i+\lambda}{a_i-\lambda}\right)^{n_i} -(-1)^{-n_3}\prod_{i=1}^{2}\left(\frac{a_i+\lambda}{a_i-\lambda}\right)^{-n_i}\right],\\ \omega(\b\phi, \b\psi)=\frac{1}{2\lambda}\left[(-1)^{-n_3}\prod_{i=1}^{2}\left(\frac{a_i+\lambda}{a_i-\lambda}\right)^{-n_i} -(-1)^{n_3}\prod_{i=1}^{2}\left(\frac{a_i+\lambda}{a_i-\lambda}\right)^{n_i}\right], \end{gather} \end{subequations} which hold $\omega(\phi, \psi)=T_{_{n_3}}^2(\omega(\phi, \psi))$, $\omega(\b\phi, \b\psi)=T_{_{n_3}}(\omega(\phi, \psi))$. Moreover, for $\lambda=\lambda_k$, $v=v_0=1$, the f\/irst linear system \eqref{matrix-LP} has eigenfunctions \begin{subequations}\label{dpmKdV-seed-linear-solu-theta} \begin{gather} \theta^k(n_1, n_2, n_3; \lambda_k)=\lambda_k^{n_3}\prod_{i=1}^{2}(a_i+\lambda_k)^{n_i}+(-\lambda_k)^{n_3}\prod_{i=1}^{2}(a_i-\lambda_k)^{n_i},\\ \b\theta^k(n_1, n_2, n_3; \lambda_k)=\lambda_k^{n_3}\prod_{i=1}^{2}(a_i+\lambda_k)^{n_i}-(-\lambda_k)^{n_3}\prod_{i=1}^{2}(a_i-\lambda_k)^{n_i}, \end{gather} \end{subequations} which hold $\theta^k=\lambda_k^{-2}T_{_{n_3}}^2(\theta^k)$, $\b\theta^k=\lambda_k^{-1}T_{_{n_3}}(\theta^k)$.
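The potentials \eqref{dpmKdV-seed-linear-solu-omeg} can be checked against the defining relation \eqref{d-p-mKdV-omega-1}, assuming that $\Delta_3$ denotes the forward difference $T_{_{n_3}}-1$. In the sympy sketch below, $A$ and $B$ stand for $\prod_i(a_i+\lambda)^{n_i}$ and $\prod_i(a_i-\lambda)^{n_i}$, $t=\lambda^{n_3}$, and both parities of $n_3$ are tested:

```python
import sympy as sp

lam, t, A, B = sp.symbols('lambda t A B')
# A = prod_i (a_i+lam)^{n_i}, B = prod_i (a_i-lam)^{n_i}, t = lam^{n3};
# the sign s = (-1)^{n3} is checked for both parities of n3, and Delta_3
# is taken to be the forward difference T_{n3} - 1 (assumption).
residues = []
for s in (1, -1):
    phi = t * A + s * t * B                            # eigenfunction phi
    T_psi = A**-1 / (lam * t) - s * B**-1 / (lam * t)  # psi shifted once in n3
    omega = s / (2 * lam) * (A / B - B / A)            # omega(phi, psi)
    T_omega = -s / (2 * lam) * (A / B - B / A)         # its n3-shift
    residues.append(sp.simplify(T_omega - omega - phi * T_psi))

assert residues == [0, 0]
```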
Similarly, for $\lambda=\lambda_l$, $v=v_0=1$, the second linear system~\eqref{matrix-LP-2+} has eigenfunctions \begin{subequations}\label{dpmKdV-seed-linear-solu-rho} \begin{gather} \rho^l(n_1, n_2, n_3; \lambda_l)=\lambda_l^{-n_3}\prod_{i=1}^{2}(a_i+\lambda_l)^{-n_i}+(-\lambda_l)^{-n_3}\prod_{i=1}^{2}(a_i-\lambda_l)^{-n_i},\\ \b\rho^l(n_1, n_2, n_3; \lambda_l)=\lambda_l^{-n_3}\prod_{i=1}^{2}(a_i+\lambda_l)^{-n_i}-(-\lambda_l)^{-n_3}\prod_{i=1}^{2}(a_i-\lambda_l)^{-n_i}, \end{gather} \end{subequations} which hold $\rho^l=\lambda_l^{-2}T_{_{n_3}}^{-2}(\rho^l)$, $\b\rho^l=\lambda_l^{-1}T_{_{n_3}}^{-1}(\rho^l)$. For these eigenfunctions \eqref{dpmKdV-seed-linear-solu-theta} and~\eqref{dpmKdV-seed-linear-solu-rho} above we may integrate~\eqref{dpmKdV-omega} and obtain the potential, for $\lambda_k\neq\lambda_l$, \begin{subequations}\label{dpmKdV-seed-linear-solu-omeg-theta-rho}\allowdisplaybreaks \begin{gather} \omega\big(\theta^k, \rho^l\big) = \lambda_l^{ -1} \left(\frac{\lambda_k}{\lambda_l}\right)^{n_3} \Bigg[ \left(\frac{\lambda_k}{\lambda_l} - 1\right)^{ -1} \prod_{i=1}^{2} \left(\frac{a_i+\lambda_k}{a_i+\lambda_l}\right)^{n_i}\nonumber\\ \hphantom{\omega\big(\theta^k, \rho^l\big) =}{} - \left(\frac{\lambda_k}{\lambda_l} + 1\right)^{ -1} (-1)^{n_3} \prod_{i=1}^{2}\left(\frac{a_i-\lambda_k}{a_i+\lambda_l}\right)^{n_i} +\left(\frac{\lambda_k}{\lambda_l} + 1\right)^{ -1} (-1)^{n_3} \prod_{i=1}^{2}\left(\frac{a_i+\lambda_k}{a_i-\lambda_l}\right)^{n_i}\nonumber\\ \hphantom{\omega\big(\theta^k, \rho^l\big) =}{} - \left(\frac{\lambda_k}{\lambda_l} - 1\right)^{ -1} \prod_{i=1}^{2} \left(\frac{a_i-\lambda_k}{a_i-\lambda_l}\right)^{n_i}\Bigg], \\ \omega\big(\b\theta^k, \b\rho^l\big) = \lambda_l^{ -1} \left(\frac{\lambda_k}{\lambda_l}\right)^{n_3} \Bigg[ \left(\frac{\lambda_k}{\lambda_l} - 1\right)^{ -1} \prod_{i=1}^{2} \left(\frac{a_i+\lambda_k}{a_i+\lambda_l}\right)^{n_i}\nonumber\\ \hphantom{\omega\big(\b\theta^k, \b\rho^l\big) =}{} + \left(\frac{\lambda_k}{\lambda_l} + 
1\right)^{ -1} (-1)^{n_3} \prod_{i=1}^{2}\left(\frac{a_i-\lambda_k}{a_i+\lambda_l}\right)^{n_i} -\left(\frac{\lambda_k}{\lambda_l} + 1\right)^{ -1} (-1)^{n_3} \prod_{i=1}^{2}\left(\frac{a_i+\lambda_k}{a_i-\lambda_l}\right)^{n_i} \nonumber\\ \hphantom{\omega\big(\b\theta^k, \b\rho^l\big) =}{} - \left(\frac{\lambda_k}{\lambda_l} - 1\right)^{ -1} \prod_{i=1}^{2} \left(\frac{a_i-\lambda_k}{a_i-\lambda_l}\right)^{n_i} \Bigg], \end{gather} \end{subequations} which hold \begin{gather*} T_{_{n_3}}\big(\omega\big(\theta^k, \rho^l\big)\big)=\left(\frac{\lambda_k}{\lambda_l}\right)\omega\big(\b\theta^k, \b\rho^l\big), \qquad T_{_{n_3}}^2\big(\omega\big(\theta^k, \rho^l\big)\big)=\left(\frac{\lambda_k}{\lambda_l}\right)^2\omega\big(\theta^k, \rho^l\big). \end{gather*} For $\lambda_k=\lambda_l$, the corresponding potentials are given by~\eqref{dpmKdV-seed-linear-solu-omeg} with $\lambda=\lambda_k=\lambda_l$. Given the above expressions, it is straightforward to write down the following explicit solutions for the d-p-mKdV equation \eqref{d-p-mKdV} \begin{gather*} v(n_1, n_2)=\frac{T_{_{n_3}} \big(C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)\big)}{C_{_{[3]}}\big(\theta^{1},\theta^{2}, \dots, \theta^{N}\big)}v_0, \end{gather*} where $\theta^k=\theta^k(n_1, n_2, n_3;\lambda_k)$ is given by \eqref{dpmKdV-seed-linear-solu-theta} and $\lambda_k$ are arbitrary constants; \begin{gather*} v(n_1, n_2)=\frac{T_{_{n_3}}^{-1} \big(C_{_{[\b 3]}}\big(\rho^{1},\rho^{2}, \dots, \rho^{N}\big)\big)}{C_{_{[\b 3]}}\big(\rho^{1}, \rho^{2}, \dots, \rho^{N}\big)}v_0, \end{gather*} where $\rho^k=\rho^k(n_1, n_2, n_3;\lambda_k)$ is given by \eqref{dpmKdV-seed-linear-solu-rho} and $\lambda_k$ are arbitrary constants; \begin{gather*} v(n_1, n_2, n_3)=\frac{T_{_{n_3}} \left(\det (\omega_{k,l})\right)}{\det (\omega_{k,l})}v_0,\qquad k, l= 1, 2, \dots, N, \end{gather*} where $\omega_{k,l}$ is given by \eqref{dpmKdV-seed-linear-solu-omeg-theta-rho} with $\omega_{k,l}=\omega(\theta^k, \rho^l)$.
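As a consistency check of the simplest case, for $N=1$ the solution above reduces to $v=T_{_{n_3}}(\theta^1)/\theta^1\,v_0$, which can be confirmed symbolically to satisfy the quadrilateral equation. In the sympy sketch below, the explicit form of $Q$ is again an assumption, read off from the compatibility condition of the Lax matrices:

```python
import sympy as sp

a1, a2, lam1, w = sp.symbols('a1 a2 lambda1 w')
# w = ((-lam1)^{n3} prod_i (a_i-lam1)^{n_i}) / (lam1^{n3} prod_i (a_i+lam1)^{n_i});
# a unit shift in n_i multiplies w by r_i = (a_i-lam1)/(a_i+lam1).
r1 = (a1 - lam1) / (a1 + lam1)
r2 = (a2 - lam1) / (a2 + lam1)

def v_of(x):
    # v = T_{n3}(theta^1)/theta^1 * v0 with seed v0 = 1
    return lam1 * (1 - x) / (1 + x)

v, v1, v2, v12 = v_of(w), v_of(r1 * w), v_of(r2 * w), v_of(r1 * r2 * w)

# assumed quadrilateral form of the d-p-mKdV equation
Q = a1 * (v * v2 - v1 * v12) + a2 * (v2 * v12 - v * v1)
assert sp.simplify(Q) == 0
```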
\section{Conclusions} In this paper, we present two main results. First, we show how the d-p-mKdV equation and its Lax pairs in matrix form arise from the Hirota--Miwa equation by $2$-periodic reduction. Second, we derive Darboux transformations and binary Darboux transformations for the d-p-mKdV equation and show how they may be used to construct exact solutions. We have revisited the Darboux and binary Darboux transformations of the Hirota--Miwa equation but, in a departure from the results in \cite{Nimmo-1997, Nimmo-Chaos-2000, Shi-2014}, by the gauge transformation \mbox{$\phi\rightarrow \prod\limits_{i=1}^{3}a_i^{-n_i}\phi$}, we write the linear system of the Hirota--Miwa equation in a way which is suitable for obtaining the Lax pair of the d-p-mKdV equation naturally by a $2$-periodic reduction. Up to gauge transformations, these Lax pairs, which allow the application of the classical Darboux transformations, coincide with the ones given by the multidimensional consistency property~\cite{BS-2008}. Hieta\-rin\-ta and Zhang~\cite{Hietarinta-2009} derived the $N$-soliton solutions to the d-p-mKdV equation using Hirota's direct method, and the authors mention that the bilinear equations they obtain are similar to the Hirota--Miwa equation~\eqref{H-M-1}. The results in this paper, in which similar results are obtained by reduction of the Hirota--Miwa equation, give an explanation of these observations in~\cite{Hietarinta-2009}.
\section{Conclusions} In this paper, we apply a distributed Kiefer-Wolfowitz algorithm to a wireless network optimization problem. We address the distributed tuning of contention windows in WiFi networks and show that nodes can learn the proper contention window values yielding a proportional-fair resource allocation even without explicit communication. Specifically, they can estimate the global utility function by overhearing each other's transmissions and use it to solve the log-convex optimization problem collaboratively. We evaluated the algorithm in multiple scenarios and with various levels of coordination between nodes (i.e., time and action synchronization). We conclude that, based on the KW algorithm, one can design simple yet powerful learning-based collaboration schemes which do not require any communication or coordination among agents. However, the parameters of such an algorithm have to be carefully configured to make the convergence speed practical. The automatic tuning of the parameters is left for further study. \section{Distributed Contention Window Learning} Based on a distributed and asynchronous Kiefer-Wolfowitz algorithm, we propose a simple collaborative learning scheme for WiFi nodes, which allows them to learn the optimal contention window values without coordination and information exchange. \subsection{Practical Issues} There are several practical issues that have to be considered. First, the utility function evaluation is not immediate, as an agent cannot measure the instantaneous throughput. Instead, it has to count the amount of successfully transmitted data by overhearing frames and compute the mean throughput of neighboring nodes over the measurement time slot $\tau$. As agents follow this procedure simultaneously with random phase offsets, the utility that each agent observes is an average of multiple function evaluations during $\tau$ that correspond to the changing $CW$ values of the agents.
Second, in contrast to the formal proof where the function is globally defined, the contention window takes values in a discrete set between $CW_{min}$ and $CW_{max}$. Finally, to make the algorithm responsive to changes (e.g., used data rates), we use constant exploration and learning parameters and remove the termination condition. Therefore, the algorithm cannot converge to the optimum value but only to its neighborhood. \subsection{Proposed Approach} We apply a modified version of the DA-KW algorithm to learn the IEEE 802.11's $CW$ cooperatively. More specifically, as illustrated in Algorithm~\ref{da-spsa-c}, we introduce modifications to address the discussed practical issues. From the point of view of a single agent, our proposed technique works as follows. During initialization, a WiFi node selects a contention window value within the range of $\left [ CW_{min}, CW_{max} \right ]$. The integer $CW$ value is converted to the log-transformed variable $y$ using the function $L$. Specifically, $L$ computes the channel access probability as $\lambda = \frac{2}{CW + 1}$, and then the transformed variable as $y=\log (\frac{\lambda}{1-\lambda})$. By $L^{-1}$, we designate the operation inverse to $L$. At a random time point $t$, node $i$ perturbs its log-transformed variable by a fixed exploration parameter $\delta_k$, i.e., it replaces $y_k$ by $y_k + \varepsilon_k \delta_k$ and converts that value back to the discrete $CW_t$ value. Next, for the duration of a single measurement slot $\tau$ (e.g., 100\,ms), it transmits all its frames applying the $CW_t$ value to the back-off procedure. Simultaneously, the node observes the environment formed by all WiFi nodes. That is, by overhearing frames, it counts the amount of data transmitted by each neighbor. At the end of the measurement slot, it computes the value of the network utility function, i.e., the sum of the logarithms of the observed throughputs of the other nodes and of its own (known) throughput.
Then at $t+\tau$, the node again perturbs its log-transformed variable, i.e., it replaces $y_k$ by $y_k - \varepsilon_k \delta_k$, and repeats the measurement procedure. Finally, at $t+2\tau$, it combines both measurements to compute the gradient estimate and updates its log-transformed variable $y_{k+1}$ accordingly. The value is projected to the decision set defined as $\mathcal{K}_{\alpha} = [A+\alpha,B-\alpha]$, where $\alpha \leq \frac{B-A}{2}$. The projection of $x$ to the nonempty interval $[a, b]$ is defined as $\Pi_{[a,b]}(x) = \max\{\min\{b, x\} , a\}$. Note that the gradient descent is performed in a continuous domain using the log-transformed variable $y$, which is then converted and discretized into an integer $CW$ value. \begin{algorithm}[t!] \SetAlgoLined {\bf Input:} $L$ converts $CW$ to log-transformed value $y$\\ {\bf Input:} Constant parameters $\delta_k = \delta, \eta_k = \eta$ \\ {\bf Input:} Function sample interval $\tau$ \\ {\bf Input:} Random phase offset $p_i \in \left [ 0, \tau \right ]$ \\ {\bf Initialization:} Choose $y_0 \in \left [L(CW_{max}), L(CW_{min}) \right ]$ \\ \For{$k=0,1,\dots,$}{ Draw $\varepsilon_k$ uniformly from $\left\{-1,1 \right \}$ \\ Let $t= k\times 2\tau + p_i$ \\ At $t$ set $CW_t = \left \lceil L^{-1}(y_k + \varepsilon_k \delta_k) \right \rceil$ \\ Observe transmissions of neighbors for $\tau$ and compute $g^+_k = \sum_{i=1}^{N} \log \tilde{S_i}$. \\ At $t + \tau$ set $CW_{t+\tau} = \left \lceil L^{-1}(y_k - \varepsilon_k \delta_k) \right \rceil$ \\ Observe transmissions of neighbors for $\tau$ and compute $g^-_k = \sum_{i=1}^{N} \log \tilde{S_i}$.
\\ Compute gradient estimate $\tilde{g_k} = \frac{g^+_k - g^-_k}{2\varepsilon_k \delta_k}$ \\ Update $y_{k+1} = \Pi_{k} \left ( y_k - \eta_k \tilde{g_k} \right )$, where $\Pi_k$ is the projection operator onto $\mathcal{K}_{\delta_k}$} \caption{Proposed Algorithm (executed by each agent $i$)} \label{da-spsa-c} \end{algorithm} \subsection{Impact of Diverse Coordination Levels} To evaluate the impact of action synchronization among the distributed nodes, we target the distributed scenario with a multi-step approach that gradually removes the coordination between nodes: \noindent \textbf{Coordinated Learning: } In the case of full coordination, the measurement slots of agents are synchronized, i.e., $p_i = 0$ for $i \in \{1,..,N\}$. Specifically, agents perform the gradient estimation procedure and update the $CW$ at the same time. Therefore, the utility function is evaluated with constant variables in each time slot. \smallskip \noindent \textbf{Slotted Learning:} In the second step, we remove the stage coordination between agents, i.e., the time is still slotted, and agents enter the gradient estimation procedure at the slot boundaries. However, they might be at a different stage of the procedure as $p_i \in \{0, \tau \}$ for $i \in \{1,..,N\}$. As a result, they perform a $CW$ update at two different time points. The environment state does not change during a single measurement period. However, it changes in each slot. Hence, agents perform the first measurement (i.e., $g^-_k $) under a different environment state than the second one (i.e., $g^+_k $). \smallskip \noindent \textbf{Uncoordinated Learning:} In the general case, the actions of distributed agents are not synchronized, and at any point in time $t$ each agent is at a different stage of the algorithm. Note that from each agent's perspective, the environment state could change $N-1$ times within a single measurement period. 
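For concreteness, the per-agent procedure of Algorithm~\ref{da-spsa-c} can be sketched in a few lines of Python. This is a minimal illustration, not our evaluation code: the utility measurement is stubbed out as a callback, the helper names are ours, and the parameter values are placeholders.

```python
import math
import random

CW_MIN, CW_MAX = 15, 1023  # CW range used in WiFi

def to_y(cw):
    """L: map an integer CW to the log-transformed variable y."""
    lam = 2.0 / (cw + 1)              # channel access probability
    return math.log(lam / (1.0 - lam))

def to_cw(y):
    """L^{-1}: map y back to an integer CW, clipped to the allowed range."""
    lam = 1.0 / (1.0 + math.exp(-y))  # inverse of the logit transform
    return min(max(math.ceil(2.0 / lam - 1), CW_MIN), CW_MAX)

def kw_step(y, measure_utility, delta=0.1, eta=0.01):
    """One Kiefer-Wolfowitz iteration: two perturbed utility measurements
    followed by an update on the estimated gradient, as in Algorithm 1."""
    eps = random.choice((-1, 1))
    g_plus = measure_utility(to_cw(y + eps * delta))   # first slot, CW_t
    g_minus = measure_utility(to_cw(y - eps * delta))  # second slot, CW_{t+tau}
    grad = (g_plus - g_minus) / (2 * eps * delta)
    # projection onto K_delta = [A + delta, B - delta]; since L is decreasing,
    # A = L(CW_MAX) and B = L(CW_MIN)
    a, b = to_y(CW_MAX), to_y(CW_MIN)
    return min(max(y - eta * grad, a + delta), b - delta)
```

Here `measure_utility(cw)` stands in for one measurement slot: transmitting with the given $CW$ for $\tau$ while summing the log-throughputs overheard from the neighbors.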
\section{Performance Evaluation}\label{sec:eval} We evaluate the distributed contention window tuning algorithm by means of simulations using the ns-3 network simulator~\cite{ns3} in conjunction with ns3-gym~\cite{ns3gym}. Specifically, we use the model for IEEE 802.11n in infrastructure-based mode and create multiple overlapping WiFi networks. Note that nodes belonging to separate networks cannot communicate. Unless stated otherwise, we create a fully-connected topology (i.e., a single collision domain), where nodes are uniformly spread in an area of 10-by-10\,m; hence, every transmitter can sense ongoing transmissions. In order to change network contention conditions, we vary the number of transmitting nodes. Moreover, we change the data rates and frame sizes to influence the solution of the proportional-fairness problem. To turn off the BEB procedure and enable a simple uniform back-off, we assign the same value of $CW$ to $CW_{min}$ and $CW_{max}$. We bound the $CW$ value to the range used in WiFi, i.e., $CW \in [15,1023]$; hence, the log-transformed variable operates in $y \in [-6.24, -1.95]$. We show the convergence of the distributed algorithm using the evolution of the contention window, air-time share, and throughput. We compare the performance of our technique against the original IEEE 802.11 BEB technique. \subsection{Selection of Measurement Slot Duration}\label{sec:tau} First, we evaluate the impact of the measurement slot duration on the quality of the network utility estimates and the algorithm's convergence. \begin{figure}[hb!]
\begin{minipage}[b]{1.0\linewidth} \includegraphics[width=\linewidth]{figures/tau_1} \end{minipage}\hfill % \begin{minipage}[b]{1.0\linewidth} \includegraphics[width=\linewidth]{figures/tau_2} \end{minipage}\hfill \vspace{-5pt} \caption{Convergence of individual $CW$ under various measurement slot durations $\tau \in \{25, 50, 100, 200\}$\,ms.} \label{fig:tau} \end{figure} To this end, we consider a scenario with five transmitting stations and homogeneous traffic, i.e., each station is backlogged with 1000\,B UDP packets and transmits to its own AP with a data rate equal to 26\,Mbps (MCS 3). Fig.~\ref{fig:tau} shows the evolution of the contention window value (we plot $c = \log_2(CW)$) for each of the transmitting nodes under four slot durations, $\tau \in \{25, 50, 100, 200\}$\,ms. We observe higher variability for smaller values of $\tau$, which is expected due to the random nature of frame transmissions. Specifically, with smaller values of $\tau$, the nodes cannot collect enough statistics to accurately estimate the network utility. These effects are alleviated by increasing $\tau$, so that the throughput estimates become more accurate, but at the cost of a longer convergence time because updates are less frequent. For further evaluation, we select $\tau = 200$\,ms and leave its optimization as future work. \begin{figure*}[ht!] \centering \includegraphics[width=1.0\linewidth]{figures/fc_2_comparison} \vspace{-20pt} \caption{Convergence of the proposed algorithm with two transmitting nodes.} \label{fig:fc_2_comparison} \end{figure*} \begin{figure*}[ht!] \centering \includegraphics[width=1.0\linewidth]{figures/fc_10_comparison} \vspace{-20pt} \caption{Convergence of the proposed algorithm with ten transmitting nodes.} \label{fig:fc_10_comparison} \vspace{-12pt} \end{figure*} \subsection{Homogeneous Traffic} Here, we examine the sensitivity of the proposed distributed algorithm to the type of coordination and the learning parameters.
Moreover, we vary the number of active nodes with $N \in \{2,10\}$. We use the same homogeneous traffic parameters as in Section~\ref{sec:tau}. Note that the exploration parameter $\delta$ and the learning rate $\eta$ were selected to provide good performance. Fig.~\ref{fig:fc_2_comparison} and Fig.~\ref{fig:fc_10_comparison} show a representative evolution of the individual contention windows, the total network throughput, and the air-time allocation in the scenarios with two and ten nodes, respectively. In all compared configurations, the nodes start from the same initial $CW$ values: in the case of two nodes, one starts with $CW = 1023$ and the other with $CW = 15$, whereas in the case of ten nodes, each node starts with a $CW$ value equal to $2^c$, where $c \in \left [4, 10\right]$. First, we observe that the distributed nodes converge to similar $CW$ values, as expected given the homogeneous traffic, and the algorithm converges to an equal air-time allocation and the optimal total network throughput. Note that in the case of ten active nodes, the total throughput is around 20\% higher than that achieved by WiFi. The algorithm converges faster in the case of ten nodes, as the utility function becomes steeper with an increased number of nodes~\cite{Golshan2020bco}. The variability of $CW$ is higher for ten nodes, as the node estimates become noisier. Nevertheless, the network operates with approximately optimal utility. Moreover, in the case of two nodes, we observe an interesting cooperative behavior where the initially more aggressive node slows down to free more air-time for its peer; then, both nodes increase their aggressiveness to maximize the network utility. Our results also show that the distributed algorithm behaves similarly with and without learning coordination, i.e., there is no significant advantage to coordination.
Finally, the three right-most columns in Fig.~\ref{fig:fc_2_comparison} and Fig.~\ref{fig:fc_10_comparison} show the convergence behavior with different learning rates $\eta \in \{0.05, 0.1, 0.3\}$. Increasing the value of $\eta$ allows the algorithm to take bigger steps and to converge faster in the case of two nodes. However, when $N=10$, the higher learning rate brings more fluctuations due to the increased noise in the utility estimates. We omit the evaluation of varying exploration parameters due to space limits. \subsection{Heterogeneous Traffic} Here, we evaluate the performance of the algorithm under heterogeneous traffic, where the optimal $CW$ values are not identical for all transmitting nodes. We consider two scenarios, where three nodes use diverse transmission parameters: \textit{i)} the same data rate (MCS3, 26\,Mbps), but diverse packet sizes $D_i \in \{250, 500, 1000\}$\,B; \textit{ii)} the same packet size (1500\,B), but diverse data rates $M_i \in \{6.5, 26, 65\}$\,Mbps. In Fig.~\ref{fig:fc_hetero_mcs_comparison} and Fig.~\ref{fig:fc_hetero_pkt_comparison}, we compare the performance of our distributed approach with the standard WiFi operation. Specifically, we are interested in the air-time allocation, the individual throughputs, and the packet rates. Due to the symmetric contention process, WiFi assures an equal number of transmission opportunities to all nodes, i.e., frame-fairness. Our results show the \textit{performance anomaly} problem in WiFi, i.e., despite using higher data rates, the performance of the faster nodes is capped at that of the slowest station. In contrast, the proposed algorithm allows nodes to successfully and quickly converge to an equal air-time allocation (i.e., the proportional-fair allocation) in both scenarios. \begin{figure}[ht!]
\centering \includegraphics[width=1.0\linewidth]{figures/fc_hetero_mcs_comparison} \vspace{-23pt} \caption{Air-time allocation, individual throughput, and packet rate in the case of heterogeneous traffic, i.e., all nodes use the same packet size of 1500\,B, but different data rates $M_i \in \{6.5, 26, 65\}$\,Mbps.} \label{fig:fc_hetero_mcs_comparison} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=1.0\linewidth]{figures/fc_hetero_pkt_comparison} \vspace{-20pt} \caption{Air-time allocation, individual throughput, and packet rate in the case of heterogeneous traffic, i.e., all nodes use the same data rate of 26\,Mbps (MCS3), but different packet sizes $D_i \in \{250, 500, 1000\}$\,B.} \label{fig:fc_hetero_pkt_comparison} \vspace{-10pt} \end{figure} \subsection{Dynamic Scenario} Using a setup similar to that in the previous sections, we evaluate the adaptability of the proposed algorithm to network dynamics. Specifically, we consider a dynamic scenario with three transmitters that change data rates (e.g., due to a change in the wireless propagation conditions). Fig.~\ref{fig:dynamic_scenario} shows the individual air-time allocation and throughput. The traffic is saturated, and the nodes use a packet size of 1000\,B. The nodes start with the same data rate of 13\,Mbps (MCS2); then, at time $t=20$\,s, the nodes change their data rates, i.e., Node-1 switches to 65\,Mbps (MCS7) and Node-2 switches to 6.5\,Mbps (MCS0). At $t=60$\,s, the nodes change data rates again, i.e., Node-1 switches to 6.5\,Mbps (MCS0) and Node-2 switches to 65\,Mbps (MCS7). Node-3 never changes its data rate. We observe in Fig.~\ref{fig:dynamic_scenario} that the algorithm can adapt to the changes in data rates. The convergence takes around 10\,s after a change occurs. \begin{figure}[ht!] \centering \includegraphics[width=1.0\linewidth]{figures/dynamic_scenario} \vspace{-12pt} \caption{Air-time allocation and individual throughput in the dynamic scenario. Nodes start with the same data rate.
At $t=20$\,s and $t=60$\,s, nodes 1 and 2 change their data rates.} \label{fig:dynamic_scenario} \vspace{-12pt} \end{figure} \subsection{Unsaturated Traffic} Here, we evaluate the behavior of the proposed algorithm under unsaturated traffic conditions. Specifically, we simulate two scenarios with three transmitting nodes, in which: \textit{i)} the total offered load does not saturate the wireless channel, i.e., $r_i \in \{200, 400, 600\}$\,pkts/s; \textit{ii)} the total offered load saturates the wireless channel, as one node operates with saturated traffic, i.e., $r_i \in \{200, 400, 2000\}$\,pkts/s. The nodes use homogeneous transmission parameters, namely a data rate of 26\,Mbps (MCS3) and a packet size of 1000\,B. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figures/nonsat_but1} \vspace{-20pt} \caption{Individual contention window and packet rate in the unsaturated scenario (left) with an offered load of $r_i \in \{200, 400, 600\}$\,pkts/s and with one node saturated (right), i.e., $r_i \in \{200, 400, 2000\}$\,pkts/s.} \label{fig:nonsat} \vspace{-10pt} \end{figure} Fig.~\ref{fig:nonsat} shows the evolution of the individual contention windows and packet rates in both scenarios. We observe that, in the case of unsaturated traffic, the nodes increase their aggressiveness until the offered load is satisfied. Afterward, they operate with a stable $CW$, i.e., the perturbation of the $CW$ does not change the value of the network utility, and hence the estimated gradient equals zero. In the second scenario, the node with a high traffic load increases its aggressiveness until it saturates the wireless channel, but without negatively affecting the slower nodes, i.e., they also adapt their $CW$ properly to get enough transmission opportunities and satisfy their own traffic. \subsection{Flow-In-the-Middle Topology} Finally, we evaluate the behavior of the proposed scheme in a flow-in-the-middle (FIM) topology -- Fig.~\ref{fig:fitm_topology}.
The FIM topology is a simple multi-collision-domain scenario, where nodes have asymmetric contention information. Specifically, the central transmitter can carrier-sense the transmissions of both its neighbors, while the edge transmitters cannot carrier-sense each other. Therefore, the middle transmitter defers its transmissions whenever at least one of its neighbors transmits a frame, while concurrent transmissions of the edge nodes can occur. Note that the transmissions of the edge nodes may interleave, leaving no silent periods for the middle node. As a result, the throughput of the middle node is lowered due to the lack of transmission opportunities. In the worst case, the middle node suffers from complete starvation~\cite{aryafar2013csma}. \begin{figure}[ht!] \centering \includegraphics[width=0.4\linewidth]{figures/fitm_topology} \caption{Flow-In-the-Middle (FIM) topology. In the network graph, vertices, dotted lines, and arrows represent nodes, connectivity, and flows, respectively.} \label{fig:fitm_topology} \vspace{-12pt} \end{figure} We simulate a scenario where all three transmitters use the same data rate of 26\,Mbps (MCS3) and a packet size of 1500\,B. The traffic is saturated. In the FIM topology, only the middle transmitter can estimate the global utility function, while the edge nodes can estimate the utility function only in their own collision domains. We consider two cases of $CW$ learning, namely uncoordinated learning, where nodes locally estimate the utility function, and coordinated learning, where nodes perform the gradient estimation procedure synchronously and the global utility function computed by the middle node is communicated to the edge nodes. Fig.~\ref{fig:fitm_comparison} shows the evolution of the individual $CW$ and the air-time allocation, while Fig.~\ref{fig:fitm_comparison_air_time} shows the air-time allocation averaged over the simulation duration (i.e., 100\,s).
Our results confirm the starvation of the middle node in the case of the standard WiFi BEB operation. Specifically, the middle node gets only 9\% of the channel air-time, while the edge nodes get around 70\%. We also show the optimal air-time allocation found with extensive simulations. The proposed algorithm improves the fairness among nodes, i.e., the middle node gets assigned more air-time while the edge nodes get proportionally less. When using the global utility, the nodes can find the $CW$ values leading to the optimal solution. However, with local utilities, the middle node becomes too aggressive. Moreover, as the goals of the three nodes are not consistent, the distributed algorithm cannot converge. Instead, the algorithm oscillates between two solutions, i.e., the middle node tries to achieve proportional fairness for three nodes, while the edge nodes optimize their operation for two nodes. \begin{figure}[ht!] \centering \includegraphics[width=\linewidth]{figures/fitm_comparison3} \vspace{-20pt} \caption{The individual contention window and air-time allocation in the FIM topology for uncoordinated learning with locally estimated utilities (left) and coordinated learning with the global utility (right).} \label{fig:fitm_comparison} \vspace{-10pt} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=\linewidth]{figures/fitm_comparison4} \vspace{-20pt} \caption{The averaged air-time allocation attained by 802.11 WiFi and by the proposed learning-based approach in the FIM topology. The optimal air-time allocation is presented for comparison.} \label{fig:fitm_comparison_air_time} \vspace{-10pt} \end{figure} \section{Introduction} The widely deployed WiFi technology uses the Distributed Coordination Function (DCF), a simple decentralized channel access mechanism based on the CSMA/CA protocol, to share the limited wireless channel capacity. With DCF, the nodes adjust their channel access probability using a back-off mechanism.
Specifically, they maintain a dynamically changing contention window (CW) and select the time lag between two consecutive transmission attempts as a random number within the range of the CW. By design, DCF ensures equal long-term channel access probabilities to all involved stations, thus guaranteeing frame-level fairness. However, frame-fairness leads to the so-called \textit{performance anomaly} problem when stations use different transmission rates. Stations employ various modulation and coding schemes (MCSs) to preserve transmission robustness in response to the quality of the radio link they experience, which in turn depends on their communication distance, mobility, and other factors. The lower MCSs offer increased robustness against errors at the cost of a drop in data rate. Consequently, the WiFi stations that use lower data rates occupy the channel for longer periods of air-time that could otherwise be used more efficiently by the faster clients. Precisely, under frame-level fairness, a station with poor transmission conditions captures an extensive air-time share, reduces the air-time available to other stations, and hence \textit{slows down} all stations. This pathological behavior was first identified in 802.11b networks~\cite{1208921}. Then, Patras \textit{et al.}~\cite{patras2016proportional} demonstrated that the effect is dramatically exacerbated in today's high-throughput networks (i.e., 802.11n/ac), where the data rates among stations may vary by orders of magnitude, e.g., the throughput of a station using a data rate of 780\,Mbps becomes similar to that of a co-existing station with a data rate of 6\,Mbps. The proportional-fair allocation, introduced by Kelly \mbox{\textit{et al.}}~\cite{kelly1998proportional}, was shown to address the performance anomaly problem appropriately~\cite{checco2011fairness}.
By definition, it maximizes the network utility (defined as the sum of the logarithms of the individual throughputs) subject to the constraints that the communication conditions impose on the individual stations. In~\cite{5370273}, the proportional-fair allocation in WiFi networks was formulated as a convex optimization problem that is solved by selecting the optimal contention window values for the stations. The existing approaches to solving this problem are deployed in a central node (e.g., an AP or a controller node) and require knowledge of the average frame duration or throughput of each transmitting station. However, such centralized operation cannot be assumed in the case of overlapping but separately managed WiFi networks. Furthermore, the standard does not envision the possibility of communication between nodes belonging to different networks. Consequently, in this paper, we propose a distributed learning-based approach where WiFi nodes independently tune their own contention windows to achieve proportional fairness. Our algorithm is based on a stochastic convex optimization framework. Specifically, it builds on our previous work~\cite{walrand2020distributed}, which proves the convergence of the Kiefer-Wolfowitz algorithm~\cite{kiefer1952stochastic} in a distributed and asynchronous setting. These properties are highly beneficial for coexisting WiFi nodes, as they allow learning the channel access parameters without any coordination or explicit information exchange. Instead, the nodes use overheard frames to compute the network utility and independently follow a gradient descent method to optimize the overall network performance. In general, we explore the usability of the distributed and asynchronous Kiefer-Wolfowitz (DA-KW) algorithm in a practical use-case of wireless optimization.
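As a small illustration of the overhearing step, the utility a node computes during one measurement slot reduces to a sum of log-throughputs over its own and overheard traffic. The sketch below is ours, not from the paper's implementation, and assumes a node can attribute the overheard payload bits to their senders; names such as `bits_per_node` are illustrative.

```python
import math

def network_utility(bits_per_node, slot_duration):
    """Proportional-fairness utility estimated from overheard traffic.

    bits_per_node : dict mapping each sender (own and overheard) to the
                    number of payload bits observed in one measurement slot
    slot_duration : measurement slot length tau in seconds
    """
    utility = 0.0
    for sender, bits in bits_per_node.items():
        throughput = bits / slot_duration   # bits per second for this sender
        utility += math.log(throughput)     # sum of log-throughputs
    return utility
```

A node that hears every transmission in its collision domain can thus evaluate the shared objective locally, without any message exchange.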
Specifically, the contributions of this work are as follows: \begin{itemize} \item We propose a simple approach for distributed contention window tuning that achieves proportional fairness in coexisting WiFi scenarios. To this end, we apply the DA-KW algorithm to the WiFi domain while introducing slight modifications to address practical issues. \item Using simulations, we evaluate the proposed approach in terms of convergence speed and achieved performance in multiple scenarios. Moreover, we compare its performance with the standard WiFi DCF. \item We investigate the impact of the level of coordination (i.e., synchronized execution). It appears that there is no significant gain from coordination in the case of a single collision domain. However, coordination and information exchange allow achieving the optimal channel allocation in the case of overlapping collision domains. \end{itemize} \section{Distributed Stochastic Optimization Primer} Here, we briefly introduce stochastic convex optimization techniques in centralized (i.e., single-agent) and distributed (i.e., multi-agent) settings. \subsection{Stochastic Convex Optimization}\label{sco} Stochastic convex optimization deals with minimizing the expected value of a function $F(\mathbf{x}, \xi)$ that is convex in $\mathbf{x} \in \mathbb{R}^d$, where $\xi$ is a random vector: \begin{equation} \mbox{Find} \: \mathbf{x}^* = \underset{\mathbf{x} \in \mathcal{K}}{\mbox{argmin}} \: f(\mathbf{x}) := E(F(\mathbf{x},\xi)). \end{equation} The setup is that one has access to sample values of $F(\mathbf{x}, \xi)$. If one could measure the gradient $\nabla f(\mathbf{x})$ of the function, one could use a gradient descent algorithm where the parameters at the $t$-th iteration, $\mathbf{x}_t$, are updated according to $\mathbf{x}_{t+1} = \mathbf{x}_{t} - \eta_t \nabla f(\mathbf{x}_t)$, where $\eta_t$ is the step size at iteration $t$.
However, in practical scenarios there is no access to the actual gradient $\nabla f(\mathbf{x})$. Accordingly, stochastic gradient descent algorithms rely on constructing noisy gradient estimates $\tilde{g_t}$, which are then used to adjust the parameters according to $\mathbf{x}_{t+1} = \mathbf{x}_{t} - \eta_t \tilde{g_t}$. The Kiefer-Wolfowitz (KW) algorithm~\cite{kiefer1952stochastic} is a gradient estimation method that combines two function evaluations with perturbed values of its variable to compute the estimate. The simultaneous perturbation stochastic approximation (SPSA) algorithm~\cite{spall1992spsa} is an extension of the KW algorithm towards multivariate problems. In SPSA, the partial derivatives with respect to the different variables are estimated by simultaneously perturbing each variable by an independent and zero-mean amount, instead of perturbing the variables one at a time. The optimization procedure can be performed in a centralized or distributed setting. In the former case, a single agent knows and controls all the variables in the vector $\mathbf{x}$ and has the exclusive right to query the function (we refer to it as the \textit{environment}), while in the latter case, those assumptions do not hold. \subsection{Distributed Convex Optimization}\label{dco} In the distributed setting, a set of $N$ distributed agents tries to optimize a global objective function. The critical challenge is that the agents make individual decisions simultaneously, and the value of the function depends on all agents' actions. Moreover, we assume that the agents are not synchronized (i.e., they query the function and update their variables asynchronously at random points in time) and cannot communicate. Specifically, each agent adjusts its own variable $x_i$ without knowing the values of the other variables, i.e., $x_{-i}$. Fig.~\ref{da_spsa} shows the interaction between the agents and the environment. \begin{figure}[ht!]
\centering \vspace{-5pt} \includegraphics[width=0.75\linewidth]{figures/dcbo_scheme_2} \vspace{-12pt} \caption{The interaction between distributed agents and the environment -- agents asynchronously submit their individual variables to the environment and get noisy observations of its value.} \label{da_spsa} \vspace{-5pt} \end{figure} \begin{algorithm}[b] \SetAlgoLined {\bf Input:} Non-increasing sequences $\pa{\delta_k}, \pa{\eta_k}$ \\ {\bf Input:} Function sample interval $\tau \geq 1$ \\ {\bf Input:} Phase offset $p_i \in \{1, \ldots, \tau - 1 \}$ \\ {\bf Initialization:} Choose arbitrary $y_0 \in \mathbb{R}$ \\ \For{$k=0,1,\dots,$}{ Draw $\varepsilon_k$ uniformly from $\left\{-1,1 \right \}$ \\ Let $t= k\times 2\tau + p_i$ \\ At $t$ set $x_t = y_k + \varepsilon_k \delta_k$ \\ Observe $g^+_k = f(\mathbf{x}_t)$ \\ At $t + \tau$ set $x_{t+\tau} = y_k - \varepsilon_k \delta_k$ \\ Observe $g^-_k = f(\mathbf{x}_{t+\tau})$ \\ Compute gradient estimate $\tilde{g_k} = \frac{g^+_k - g^-_k}{2\varepsilon_k \delta_k}$ \\ Update $y_{k+1} = y_k - \eta_k \tilde{g_k}$ \\ \textbf{if} $\norm{x_t - x^{*}} < \varepsilon^{*}$ \textbf{break} } \caption{DA-KW (executed by each agent $i$)} \label{da-dkw} \end{algorithm} Following a naive application of the KW algorithm, each agent estimates the gradient by perturbing its own variable by a zero-mean change and querying the global function every time interval $\tau$. Then, it uses those measurements to determine the gradient and updates its variable in proportion to the computed estimate. Note that the agents perform such experiments without any coordination. Thus, when an agent attempts to get a second evaluation of the function, the function may have already changed due to another agent's query. Intuitively, each agent gets gradient estimates corrupted by the actions of the other agents.
However, we have recently proved that the KW algorithm can converge to the optimal solution in a distributed and asynchronous (DA) setting if the size of the perturbation is bounded~\cite{walrand2020distributed}. The algorithm executed by each agent is presented as Algorithm~\ref{da-dkw}. \section{Relevant WiFi Background} In this section, we briefly describe the CSMA operation, present its analytical models, and summarize throughput optimization in WiFi networks. \subsection{WiFi Random Back-off Operation} WiFi nodes use the DCF mechanism to access the channel. The DCF is based on the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) method and employs binary exponential back-off (BEB) to control the contention window. Specifically, in DCF, before a frame transmission, a WiFi station has to perform a random back-off procedure. To this end, it initializes its back-off counter with a random number drawn uniformly from $\{0, \ldots, CW-1\}$, where $CW$ is the contention window. Then, the station observes the wireless channel and decrements the counter whenever the channel is sensed idle during a DCF inter-frame space (DIFS), and freezes the back-off counter otherwise. Finally, when the back-off counter reaches zero, the station transmits a frame. If the transmission is successful (as indicated by the reception of an acknowledgment), the station sets $CW$ to the minimal value, i.e., $CW_{min}$, for the next transmission. Otherwise, it doubles the previous contention window and performs a frame retransmission. The $CW$ is increased until it reaches the maximal value defined by $CW_{max}$. \subsection{Analytical Model of Contention-based Medium Access} Here, we briefly describe the analytical model derived in~\cite{patras2016proportional}, which allows computing the total throughput achieved by WiFi nodes. We assume that all nodes are in a single collision domain (i.e., each node overhears the transmissions of all other nodes).
For the sake of clarity of presentation, in this section we consider the case where all stations are saturated (i.e., they always have packets to transmit); in Section VI, we also evaluate our algorithm in scenarios with non-saturated traffic. Note that the model allows for arbitrary packet sizes and an arbitrary selection of MCS. Let us consider a set of $N$ wireless stations, where each active station $i$ accesses the channel with slot transmission probability $\lambda_i$. The relation between the channel access probability and a constant contention window is $CW_i = \frac{2-\lambda _i}{\lambda_i}$~\cite{bianchi2000performance}. The transmission failure probability experienced by station $i$ equals $p_{f,i} = 1 - (1 - p_{n,i})(1 - p_i)$, where $p_{n,i}$ is the probability that the transmission fails due to channel errors (e.g., noise or interference), while $p_{i} = 1 - \prod_{j=1,j \neq i}^{N}(1-\lambda_j)$ denotes the collision probability experienced by a packet transmitted by this station. Then, the throughput of station $i$ equals $S_i$: \begin{equation} \label{eg:si} S_i=\frac{p_{s,i}D_i}{P_e T_e + P_s T_s + P_u T_u} \end{equation} \noindent where $p_{s,i} = \lambda_i (1-p_{f,i})$ is the probability of a successful transmission performed by station $i$, while $D_i$ denotes its frame payload size in bits. $P_e = \prod_{i=1}^{N}(1-\lambda_i)$ is the probability that the channel is idle during a slot of duration $T_e$ (e.g., $9\,\mu s$ in 802.11n). $P_s = \sum_{i=1}^{N}p_{s,i}$ and $P_u = 1 - P_e - P_s$ are the probabilities of successful and unsuccessful transmissions with the expected durations $T_s = \sum_{i=1}^{N} \frac{p_{s,i}}{P_s} T_{s,i}$ and $T_u = \sum_{i=1}^{N} \frac{p_{u,i}}{P_u} T_{u,i}$, respectively.
Here, $T_{s,i}$ and $T_{u,i}$ are the durations of a successful and a failed transmission of each station, which depend on the fixed preamble duration, the variable duration of the header, the size of the payload transmitted at the PHY rate $C_{i}$, and whether an acknowledgment is sent (success) or not (failure). Finally, $p_{u,i} = \lambda_i p_{n,i} \prod_{j=1,j \neq i}^{N} \left ( 1 - \lambda_j \right ) + \lambda_i \left ( 1 - \prod_{j=1}^{i-1} ( 1 - \lambda_j ) \right ) \prod_{j=i+1}^{N} ( 1 - \lambda_j )$ is the probability of an unsuccessful transmission (either due to collision or channel errors) in which station $i$ has the highest index among the transmitting stations (when labeling stations according to their transmission durations). Note that this labeling is needed because the duration of a collision is dominated by the longest frame involved in that collision, and collisions should only be counted once. Using the transformed variable $y_i=\frac{\lambda_i}{1-\lambda_i}$, the expression for a station's throughput (\ref{eg:si}) can be rewritten as: \begin{equation} \label{eg:si2} S_i=(1-p_{n,i})\frac{y_i}{Y} D_i \end{equation} \noindent where $Y = T_e + \sum_{i=1}^{N}\left ( y_i T_{s,i} \prod_{k=1}^{i-1} \left ( 1 + y_k \right )\right )$. We refer to~\cite{patras2016proportional} for the details of the model and the transition from identity (\ref{eg:si}) to (\ref{eg:si2}). \subsection{Proportional-fair Allocation} Following \cite{checco2011fairness}, we formulate the proportional-fair allocation problem as a convex optimization problem. The global utility function is defined as the sum of the logarithms of the individual throughputs, i.e., $U = \sum_{i=1}^{N} \tilde{S_i}$, where $\tilde{S_i} = \log(S_i)$.
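The transformed throughput expression above lends itself to a direct transcription. The following sketch is our own helper, assuming the stations are already labeled by transmission duration and using $T_e = 9\,\mu s$; it evaluates $S_i$ for a given vector $y$:

```python
def throughputs(y, T_s, D, p_n, T_e=9e-6):
    """Per-station throughput S_i = (1 - p_{n,i}) * (y_i / Y) * D_i,
    with Y = T_e + sum_i y_i * T_{s,i} * prod_{k<i} (1 + y_k).

    y   : transformed attempt probabilities y_i = lambda_i / (1 - lambda_i)
    T_s : successful-transmission durations T_{s,i} (seconds)
    D   : payload sizes D_i (bits)
    p_n : channel-error probabilities p_{n,i}
    """
    Y = T_e
    prod = 1.0
    for y_i, Ts_i in zip(y, T_s):
        Y += y_i * Ts_i * prod   # accumulate y_i * T_{s,i} * prod_{k<i}(1 + y_k)
        prod *= 1.0 + y_i
    return [(1.0 - pn_i) * (y_i / Y) * D_i
            for y_i, D_i, pn_i in zip(y, D, p_n)]
```

For equal $y_i$, $T_{s,i}$, and $D_i$ over error-free channels, all stations obtain the same throughput, consistent with the symmetry of the model.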
The utility maximization problem is as follows: \begin{align*} \begin{split} & \textup{maximize} \: \: \sum_{i=1}^{N} \tilde{S_i} \\ & \textup{s.t.} \: \: \tilde{S_i} \leq \log \left ( z_i \frac{y_i}{Y} D_i \right ), \: \: i = 1,2,...,N \\ & \textup{and } 0 \leq y_i, \: \: i = 1,2,...,N \: \: (0 \leq \lambda_i \leq 1) \end{split} \end{align*} \noindent where $z_i = 1-p_{n,i}$. The constraints ensure that the optimal solution is feasible, i.e., it lies within the log-transformed rate region $\tilde{R}$. The rate region $R$ is the set of achievable throughput vectors $S(\mathbf{\lambda}) = \left [ S_1, S_2,...,S_N \right ]$ as the vector $\lambda$ of attempt probabilities ranges over the domain $[0, 1]^N$. The set $R$ is known to be non-convex in 802.11 networks, but the log-transformed rate region $\tilde{R}$ is strictly convex~\cite{5370273}. Moreover, as strong duality and the KKT (Karush-Kuhn-Tucker) conditions are satisfied, a global and unique solution exists. \section{Related Work} \label{chapter:related_work} Proportional fairness in WiFi networks has been extensively studied, e.g.,~\cite{6195728, 6598666, 5463215}. Checco \textit{et al.}~\cite{checco2011fairness} provided an analysis of proportional fairness in WiFi networks. The authors proved that a unique proportional-fair rate allocation exists and that it assigns equal total (i.e., spent on both colliding and successful transmissions) air-time to the nodes. Patras \textit{et al.}~\cite{patras2016proportional} extended this analysis to multi-rate networks and confirmed that under the proportional-fair solution all stations get an equal share of the air-time, which is inversely proportional to the number of active stations. The authors formulated the network utility maximization as a convex optimization problem and provided a closed-form solution that can be computed explicitly.
To this end, a WiFi access point (AP) estimates the average frame transmission duration for each station and passes it as input to an optimization tool. The computed $CW$ values are distributed to the nodes in a beacon frame. The optimization is executed periodically (i.e., every beacon interval) to react to changes in the network (e.g., in the traffic or the wireless conditions). An alternative approach was proposed by Famitafreshi \mbox{\textit{et al.}}~\cite{Golshan2020bco}. The authors use a stochastic gradient descent (SGD) algorithm, which can iteratively learn the optimal contention window solely by monitoring the network's throughput. Specifically, the learning agent resides in a WiFi AP, where it measures the uplink throughput of each connected station and sends the $CW$ value updates to all the stations in a beacon frame. The algorithm combines two utility values measured under different $CW$ values to compute the gradient and updates the $CW$ following the SGD algorithm. Both proposed algorithms use global knowledge and centralized operation (i.e., they are deployed in the AP). However, in typical scenarios, multiple networks under separate management domains are co-located and have to coexist, and there is no central entity with the full knowledge needed to perform the optimization or learning.
\section{Introduction} The quantization of electromagnetic field theory is a textbook task\ct{wheeler}. However, quantization in the canonical formulation faces two conflicting requirements: 1) the theory is gauge invariant; 2) there is a lower bound to the free field energy. An obvious consequence of this conflict is that we always introduce ghost states in canonical quantization, such as the gauge-dependent temporal and longitudinal photons. As is known, one way to exclude the ghost states is to fix the gauge, for example, the Coulomb gauge. However, under gauge fixing, the theory loses manifest gauge independence. Furthermore, in the temporal gauge, the gauge is not completely fixed and ghost states are still needed in canonical quantization. Advanced studies show that there is possibly another way to exclude ghost states, the functional approach\ct{9710.3958}. For references we refer to \cite{wheeler,ibb1,ibb2,9710.3958,9306161,ldq,lee} etc. Refs. \cite{9710.3958,9306161,ldq} are on fermions, while Refs. \cite{wheeler,ibb1,ibb2} are on the ground state of gauge fields. However, in the functional approach, as in canonical quantization, the gauge independence of physical states is not very obvious. Lee argues that all states should be gauge independent to be consistent with a peculiar phenomenon in QCD, color confinement\ct{lee}. But this phenomenon does not occur in QED. Here we propose a new approach which ensures that all physical states in QED are gauge independent. This approach takes advantage of the fact that QED possesses an expansion symmetry in gauge space, which is a generalization of the local gauge transformation. Since this symmetry does not hold for general wave functionals, it is natural to require that the energy of a physical wave functional be invariant under such a transformation. This requirement leads to the gauge independence of wave functionals in QED.
To avoid the divergences and ambiguities of the continuum theory attributed to infinite ultraviolet and infrared cutoffs, we discretize position space by dividing a box of size $L^3$ into $N(\rightarrow \infty)$ grids with spacing $\Delta x=\Delta y=\Delta z={L\over N^{1/3}}$, which provides finite ultraviolet and infrared cutoffs. For instance, if we set $L\rightarrow \infty$, we obtain divergent results in Eqs. (\ref{de1}), etc. Furthermore, if space-time is indeed discrete and/or QED is invalid beyond some energy scale, the discreteness will have physical meaning. In section 2 we list the results of the quantization of free QED. In section 3 we show that, under a reasonable assumption, all physical states are gauge independent. Section 4 studies the state functionals, including the vacuum, in detail. Section 5 is a brief discussion. \section{The Quantization of free QED} This section briefly presents the main results of the quantization of free QED. For simplicity, we set $A_0\equiv 0$. The commutation relations of the gauge fields $A_i({\bf x})$ and the adjoint fields $\Pi_i({\bf x})$ read \begin{equation} [A_i({\bf x}),\Pi_j({\bf x}^\prime)]={\mathrm{i}\over \tau} \delta_{ij}\delta_{{\bf x},{\bf x}^\prime}, \end{equation} where $\tau=\Delta x^3$. In other words, $\Pi_i({\bf x})=-\mathrm{i}\frac{\partial}{\tau\partial A_i({\bf x})}$. Meanwhile, suppose the Fourier transformations of the gauge fields and their conjugate fields are defined as \begin{equation} A_i({\bf p})=\sum \tau A_i({\bf x})e^{-i{\bf p}\cdot {\bf x}}, \,\Pi_i({\bf p})=\sum \tau \Pi_i({\bf x})e^{i{\bf p}\cdot{\bf x}} \end{equation} respectively; then \begin{equation} [A_i({\bf p}),\Pi_j({\bf p}^\prime)] = [A_i^*({\bf p}),\Pi_j^*({\bf p}^\prime)]= \mathrm{i}L^3\delta_{ij}\delta_{{\bf p}{\bf p}^\prime}. 
\label{commu}\end{equation} We also introduce the magnetic fields $B_i({\bf x})=\epsilon_{ijk} \partial_jA_k({\bf x})$, or, $B_i({\bf p})=\mathrm{i}\epsilon_{ijk}\hat{p_j}A_k({\bf p})$, where $B_i({\bf p})=\sum \tau B_i({\bf x})e^{-i{\bf p}\cdot {\bf x}}$ and $\hat{p_j}\equiv {1\over \Delta x}\sin p_j\Delta x$ (hereinafter we drop the hat symbol where no confusion arises). Thus, for instance, with the notation ${ \Delta{\bf p}^3\over (2\pi)^3}=1/L^3$, \begin{equation} \frac{\partial}{\tau \partial A_i({\bf x})}=\mathrm{i}\epsilon_{ijk}L^3\sum_{\bf p} { \Delta{\bf p}^3\over (2\pi)^3} e^{-i{\bf p}\cdot{\bf x}} p_j{\partial \over \partial B_k({\bf p})}. \end{equation} Since $B_i({\bf x})$ and $A_i({\bf x})$ are both real, state functionals are invariant under the transformation $A_i({\bf p})\rightarrow A_i^*(-{\bf p})$ or $B_i({\bf p})\rightarrow B_i^*(-{\bf p})$. The Hamiltonian reads\ct{lee} \begin{eqnarray} H&=&{1\over 2}\sum_x \tau[\Pi_i\Pi^*_i+B_iB^*_i] \label{hh1} \\ &=&{1\over 2}\sum_x \tau[-{\partial^2\over \Delta x^6 \partial A_i\partial A_i^*}+B_iB_i^*]. \label{hh2} \end{eqnarray} Or, in Fourier space, \end{multicols} \leftsep \begin{eqnarray} H&=&{1\over 2}\sum{ \Delta{\bf p}^3\over (2\pi)^3}\{-L^6{\partial^2 \over \partial A_i({\bf p})\partial A_i^*({\bf p})} +p^2A_i({\bf p})A_i^*({\bf p})- p_iA_i({\bf p})p_jA_j^*({\bf p})\} \nonumber \\ &=&{1\over 2}\sum{ \Delta{\bf p}^3\over (2\pi)^3}\{-L^6{p^2\partial^2\over \partial B_k({\bf p})\partial B_k^*({\bf p})} +L^6{p_j p_k\partial^2\over \partial B_k({\bf p})\partial B_j^*({\bf p})} +B_i({\bf p})B_i^*({\bf p})\}. \label{eqhm}\end{eqnarray} \rightsep \begin{multicols}{2} \section{Gauge independence of state functionals} We now examine the properties of state functionals under gauge transformations. 
The Hamiltonian in equation (\ref{eqhm}) can be divided as $H=\sum\limits_{\bf p} H_{\bf p}$, where \begin{eqnarray} H_{\bf p} &=&{1\over 2L^3}\{-L^6{\partial^2 \over \partial A_i({\bf p})\partial A_i^*({\bf p})} +p^2A_i({\bf p})A_i^*({\bf p})- \nonumber \\ && p_iA_i({\bf p})p_jA_j^*({\bf p})\}. \end{eqnarray} Therefore, the equation $H\Theta=E\Theta$ possesses separable solutions $\Theta=\prod\limits_{\bf p} \Theta_{\bf p}[{\bf A}({\bf p})]$, where the $\Theta_{\bf p}$'s satisfy \begin{eqnarray} \label{eqhn} \{-L^6{\partial^2 \over \partial A_i({\bf p})\partial A_i^*({\bf p})} +p^2A_i({\bf p})A_i^*({\bf p})- \nonumber \\ p_iA_i({\bf p})p_jA_j^*({\bf p})\}\Theta_{\bf p}=2E_{\bf p} L^3\Theta_{\bf p}, \end{eqnarray} with the total energy $E=\sum\limits_{\bf p}{ \Delta{\bf p}^3\over (2\pi)^3} E_{\bf p} L^3=\sum E_{\bf p}$. For a definite ${\bf p}$, the theory is rotation invariant provided $p\ll \Delta x^{-1}$. One can, therefore, rotate the vector ${\bf p}$ into ${\bf p}_0=(0,0,p)$. For such ${\bf p}_0$ we get \end{multicols} \leftsep \begin{eqnarray} &&\{-L^6\sum\limits_i{\partial^2 \over \partial A_i({\bf p}_0)\partial A_i^*({\bf p}_0)} +p^2A_1({\bf p}_0)A_1^*({\bf p}_0)+p^2A_2({\bf p}_0)A_2^*({\bf p}_0)\}\Theta_{{\bf p}_0}=2E_{{\bf p}_0} L^3\Theta_{{\bf p}_0}. \label{eq9}\end{eqnarray} Eq. (\ref{eq9}) also possesses a separable solution $\Theta_{{\bf p}_0}=X[A_1({\bf p}_0),A_1(-{\bf p}_0)]Y[A_2({\bf p}_0),A_2(-{\bf p}_0)]Z[A_3({\bf p}_0),A_3(-{\bf p}_0)]$, with $X,\,Y,\,Z$ satisfying \begin{equation} \left\{\begin{array}{c} \{-L^6{\partial^2 \over \partial A_1({\bf p}_0)\partial A_1^*({\bf p}_0)}+p^2A_1({\bf p}_0)A_1^*({\bf p}_0)\}X=2E^XL^3X, \\ \{-L^6{\partial^2 \over \partial A_2({\bf p}_0)\partial A_2^*({\bf p}_0)}+p^2A_2({\bf p}_0)A_2^*({\bf p}_0)\}Y=2E^YL^3Y , \\ -L^6{\partial^2 \over \partial A_3({\bf p}_0)\partial A_3^*({\bf p}_0)}Z=2E^ZL^3Z , \\ \end{array} \right.\label{solt}\end{equation} where $E_{{\bf p}_0}=E^X+E^Y+E^Z$. 
\rightsep \begin{multicols}{2} Now we have divided $\Theta_{{\bf p}_0}$ into two parts. One of them, $X$ and $Y$, is perpendicular to the gauge transformation, and the other, $Z$, is parallel to it. The perpendicular part resembles a harmonic oscillator, while the parallel part resembles a free particle. For a physical state, $X(Y,Z)$ tends to zero when $|A_i|\rightarrow \infty(i=1,2,3)$. For $X$ and $Y$, in analogy with the oscillator, there is no problem. But for $Z$, there is no solution satisfying this condition. Up to a constant, the general solution can be written as $Z=\exp\{a A_3-2E^ZL^{-3}a^{-1}A_3^*\}$, where, for simplicity, $A_3$ and $A_3^*$ stand for $A_3({\bf p}_0)$ and $A_3^*({\bf p}_0)$ respectively. This solution diverges as $|A_3|\rightarrow \infty$ unless $aa^*= 2E^ZL^{-3}$. States with $E^Z<0$ can also be ruled out by the divergence of the functional at $|A_3|\rightarrow \infty$. Meanwhile, the choice $|a|= \sqrt{2E^ZL^{-3}}$ ($E^Z\geq 0$) gives a finite but non-vanishing $Z$ when $|A_3|\rightarrow\infty$. Since every eigenfunctional, including that with $E^Z=0$, has this problem, we take a modified constraint on $Z$: $Z$ is finite when $|A_3|\rightarrow \infty$. Thus we obtain $Z=e^{a A_3-a^*A_3^*}$ with $|a|= \sqrt{2E^ZL^{-3}}$. Here $A_3$, or ${\bf p\cdot A}$, is completely free; correspondingly, $\Pi_3$, or ${\bf p\cdot\Pi}$, is completely determined, which can also be seen from the conservation of ${\bf p\cdot\Pi}$, $\,[{\bf p\cdot\Pi},H]\equiv 0$. This is a special case of the Heisenberg uncertainty principle. It is easy to see from Eq. (\ref{hh1}) that the system has the symmetry $A_i\rightarrow A_i-p_i f, \Pi_i\rightarrow \Pi_i$. The local gauge symmetry corresponds to a translation in gauge space, since $f$ is an arbitrary scalar function of ${\bf p}$. 
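The parallel-mode solution discussed above admits a quick numerical cross-check: treating $A_3$ and $A_3^*$ as Wirtinger variables (an assumption of this sketch, consistent with the mode pair $A_3(-{\bf p}_0)=A_3^*({\bf p}_0)$), one verifies that $Z=e^{aA_3-a^*A_3^*}$ obeys $-L^6\,\partial^2 Z/\partial A_3\partial A_3^* = L^6|a|^2Z$, i.e. $|a|^2=2E^ZL^{-3}$:

```python
import cmath

# Check that Z = exp(a A3 - a* A3*) satisfies
#   -L^6 d^2 Z / dA3 dA3* = L^6 |a|^2 Z,  hence |a|^2 = 2 E^Z / L^3.
# Wirtinger derivatives for A = x + i y:
#   d/dA  = (d/dx - i d/dy)/2,   d/dA* = (d/dx + i d/dy)/2.
a = 0.7 + 0.3j  # arbitrary illustration value
Z = lambda A: cmath.exp(a * A - a.conjugate() * A.conjugate())

def d_dA(f, A, h=1e-4):      # Wirtinger d/dA by central differences
    return (f(A + h) - f(A - h) - 1j * (f(A + 1j * h) - f(A - 1j * h))) / (4 * h)

def d_dAbar(f, A, h=1e-4):   # Wirtinger d/dA* by central differences
    return (f(A + h) - f(A - h) + 1j * (f(A + 1j * h) - f(A - 1j * h))) / (4 * h)

A = 0.2 - 0.5j               # arbitrary test point
second = d_dAbar(lambda B: d_dA(Z, B), A)
# second = -|a|^2 Z(A), so multiplying by -L^6 gives L^6 |a|^2 Z = 2 E^Z L^3 Z
assert abs(second - (-(abs(a) ** 2) * Z(A))) < 1e-5
```

Note that the exponent $aA_3-a^*A_3^*$ is purely imaginary for $A_3^*=\overline{A_3}$, so $|Z|=1$: finite but non-vanishing at infinity, exactly as argued in the text.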
However, $f$ can also be a scalar function with respect to the gauge fields, for instance, $f=p_iA_i\epsilon$, where $\epsilon$ is independent of $A_i$.\footnote{ This corresponds to a transformation $A_i({\bf x})\rightarrow A_i({\bf x})+\sum \Delta y^3 \frac{\partial A_j({\bf y})}{\partial y_j} \frac{\partial h({\bf x-y})}{\partial x_i}$. The transformation is not local in position space, but in Fourier space, it is. } It is easy to check that under this transformation $\vv{\Pi}$ and $\vv{B}$ remain unchanged. Therefore, besides local gauge symmetry, QED also possesses an expansion symmetry in gauge space. However, unlike the local gauge symmetry, the expansion symmetry is broken in the representation of Eq. (\ref{hh2}). To see this, we perform a transformation in gauge space, ${\bf A}\rightarrow {\bf A}+{\bf p}h$, where we choose the scalar function $h=\epsilon{\bf p\cdot A}/p^2$. At ${\bf p}={\bf p}_0$ we get \begin{equation}\left\{ \begin{array}{c} A_3\rightarrow A_3+\epsilon A_3, \\ A_3^*\rightarrow A_3^*+\epsilon^* A_3^*, \\ A_1(A_2,A_1^*,A_2^*)({\bf p})\rightarrow A_1(A_2,A_1^*,A_2^*)({\bf p}). \\ \end{array} \right. \end{equation} Then in equation (\ref{solt}) $Z\rightarrow Z^\prime=e^{a(1+\epsilon)A_3-a^*(1+\epsilon^*)A_3^*}$. The new functional has a changed energy, $E^{\prime Z}=|1+\epsilon|^2 E^Z$; that is, the state energy is gauge dependent as long as $E^Z\neq 0$. This can be seen in another way. As we know, $Z$ is a functional with a (complex) period, which is proportional to $(E^Z)^{-1/2}$ up to a phase factor. Since the above transformation changes the period of the functional, it can also change $E^Z$. We are faced with a puzzle: on one hand, $E^Z$ is a conserved quantity, while on the other hand, it can be changed by an unphysical expansion in gauge space. 
To treat this puzzle, Ref. \cite{wbg} imposes a gauge fixing, such as $A_3=0$, with correspondingly no $\Pi_3$, since it is held that neither $A_3$ nor $\Pi_3$ has physical meaning; in other words, they are both redundant variables in the case ${\bf p}={\bf p}_0$. This treatment yields gauge-dependent functionals, and one should also modify the commutation relation (\ref{commu}). Here we treat the puzzle in another way. We do not fix the gauge and therefore do not change the commutation relation. Instead, we impose a natural constraint on all physical states: the energy of a physical state does not change under gauge translations and gauge expansions, since these transformations are both unphysical. This requirement leads to $E^Z=0$ and therefore $Z\equiv 1$. Therefore, although there is no phenomenon similar to color confinement, all the states should be gauge invariant in QED. For general ${\bf p}$, the statement can be written as \begin{equation} p_i\Pi_i\Theta=p_i\Pi^*_i\Theta=0. \label{gginv}\end{equation} The puzzle known as color confinement in QCD has been treated in many studies, most of which are based on confining forces. For instance, in Ref. \cite{inj} the author introduces a non-local Coulomb interaction between color charges. Here we present a somewhat different viewpoint. In the non-Abelian case, especially in $SU(3)$ theory or QCD, we face a very tangled situation. In QED, the interactions in the Hamiltonian are local in Fourier space (up to a $\pm {\bf p}$). However, they are nonlocal in non-Abelian theory, because cubic and quartic interactions occur in QCD. An infinitesimal local gauge transformation (in position space) connects different momenta and color directions (a finite local gauge transformation even connects states with different numbers of gluons). Consider a gluon with a single momentum (up to a $\pm {\bf p}$) and/or a single color direction. 
Suppose it is an eigenstate of the QCD Hamiltonian; it can be written as $B_i^a({\bf p})\Theta_0$ (or $A_i^a({\bf p})\Theta_0$), where the superscript and subscript are the color index and the direction index respectively. The gluon will be connected with other gluons of different momenta and/or directions, for instance, $B^b_i({\bf p}^\prime)\Theta_0$ (generally, $|{\bf p}^\prime| \neq|{\bf p}|$), by a local gauge transformation, for the QCD vacuum $\Theta_0$ is gauge invariant (this is a significant difference between Abelian and non-Abelian theory). Therefore, since $B_i^a({\bf p})\Theta_0$ and $B^b_i({\bf p}^\prime)\Theta_0$ are connected by a local gauge transformation, which does not change the state energy, they have the same energy. This is impossible unless a single gluon with definite momentum is infinitely heavy. In other words, a single gluon is an eigenstate of the Hamiltonian if and only if it is infinitely heavy. Unlike Ref. \cite{wbg}, our treatment keeps the commutation relations in equation (\ref{commu}) unchanged and does not introduce a gauge condition. By a constraint on physical states, we find that, owing to the gauge expansion symmetry, not only the vacuum but all the physical states are gauge independent. \section{Solution to the state functional} In this section we present the solution to the general wave functional. First we review the functional of the vacuum, the eigenfunctional with the lowest energy. One might at first prefer to write the vacuum state as a functional of $B_i$. But such a treatment meets a singularity. To see this, we write the vacuum state from equation (\ref{eqhn}) as \begin{equation} \Theta_0= \exp\{-\sum{ \Delta{\bf p}^3\over (2\pi)^3} B_i^*({\bf p})D^0_{ik}({\bf p})B_k({\bf p})\}, \end{equation} due to translation invariance. Introducing a positive matrix $S^0({\bf p})=D^0({\bf p})+D^{0T}(-{\bf p})$ we have \begin{equation} 1/p^2=S^0(1-\bar{P}/p^2)D^0, \end{equation} where $(\bar{P})_{mn}=p_mp_n$. 
There is no solution to this equation, for the determinant of the l.h.s. equals $(1/p^2)^3$ while that of the r.h.s. vanishes, unless the determinant of the matrix $S^0$ is infinite. To see this more clearly, we write $S^0={1\over p}(1-\bar{P}/p^2)^{-1/2}$ naively. Supposing ${\bf p}=(0,0,p_3)$, or $1-\bar{P}/p^2=diag(1,1,0)$, we then obtain a singular $S^0_{33}$. This reflects the obvious fact that there are no longitudinal magnetic fields in free QED. Therefore, a more convenient proposal is to write the vacuum state as a functional of $A_i$, \begin{equation} \label{vf} \Theta_0= \exp\{-\sum\limits_{\bf p} { \Delta{\bf p}^3\over (2\pi)^3} A_i^*({\bf p})D_{ik}({\bf p})A_k({\bf p})\}. \end{equation} Repeating the deduction, we obtain \begin{equation} D={p\over 2}(1-\bar{P}/p^2). \end{equation} It is easy to check that $\Theta_0[{\bf A}({\bf p})]=\Theta_0[{\bf A}({\bf p})+{\bf p} h]$, where $h$ is an arbitrary scalar function. As expected, $\Theta_0$ is gauge independent. Inserting $B_i({\bf p})=\mathrm{i}\epsilon_{ijk}p_jA_k({\bf p})$ into Eq. (\ref{vf}), we can write the vacuum functional as \begin{equation} \Theta_0=\exp\{-\sum{ \Delta{\bf p}^3\over (2\pi)^3} {1\over 2p}B_i^*({\bf p})B_i({\bf p})\}, \label{18}\end{equation} with the constraint $p_iB_i=p_iB^*_i=0$. This result agrees with Refs. \cite{ibb1,ibb2}, except for the necessary constraint. Since the canonical fields are ${\bf A}({\bf x})$, we prefer (\ref{vf}) to (\ref{18}) as our final result. For the density of the ground state energy, we have \begin{equation} \mathcal{E}_0=E_0/L^3={1\over 2}\sum{ \Delta{\bf p}^3\over (2\pi)^3} 2D_{ii}({\bf p})={d\over 2}\sum{ \Delta{\bf p}^3\over (2\pi)^3} p, \end{equation} where $d=3-1=2$ is just the number of degrees of freedom. Thus, due to gauge invariance, the number of degrees of freedom is not three but two for each \({\bf p}\). 
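The matrix algebra behind these statements can be spot-checked numerically. The sketch below verifies, for a random momentum, that $D={p\over 2}(1-\bar P/p^2)$ satisfies the consistency condition $4D^2=p^2\mathbb{1}-\bar P$ of the Gaussian ansatz (each $\pm{\bf p}$ mode pair appears twice in the exponent, which accounts for the factor of 4), annihilates the longitudinal direction, and has trace giving $d=2$ degrees of freedom:

```python
import numpy as np

# Finite-dimensional check at one momentum p of the vacuum matrix
# D = (p/2)(1 - Pbar/p^2) quoted above.
rng = np.random.default_rng(1)
p = rng.normal(size=3)                     # arbitrary test momentum
p2 = p @ p
Pbar = np.outer(p, p)                      # (Pbar)_mn = p_m p_n
D = 0.5 * np.sqrt(p2) * (np.eye(3) - Pbar / p2)

# Consistency condition of the Gaussian vacuum ansatz:
assert np.allclose(4 * D @ D, p2 * np.eye(3) - Pbar)
# Gauge invariance: D annihilates the longitudinal vector p.
assert np.allclose(D @ p, 0)
# Transverse projector has trace 3 - 1 = 2, the number of degrees of freedom.
assert np.isclose(np.trace(np.eye(3) - Pbar / p2), 2.0)
```

The vanishing longitudinal eigenvalue is exactly the singularity that obstructs writing the vacuum as a Gaussian in $B_i$: the would-be matrix $S^0$ has to invert a projector with a zero mode.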
In discrete form $\mathcal{E}_0$ is \begin{equation} \int_{-\pi\over\Delta x}^{\pi\over \Delta x} { \Delta{\bf p}^3\over (2\pi)^3}\sqrt{\sin^2 p_x\Delta x+\sin^2 p_y\Delta x+\sin^2 p_z\Delta x}\simeq{1.19\over \tau}. \end{equation} The ultraviolet cutoff in Fourier space is just the inverse grid size $\Delta x^{-1}$. In the continuum, the {\it l.h.s.} of the above equation would be $ \int_{-\pi\over\Delta x}^{\pi\over \Delta x} { \Delta{\bf p}^3\over (2\pi)^3}\, p\simeq {3.02\over\tau}$, about 2.5 times larger than the discrete one. Thus the zero-point energy in the discrete form is noticeably lower than in the continuum. The most probable measured values of the canonical fields and their conjugate fields, the electric fields, vanish at each ${\bf p}$. However, other measured values are still possible. This is the uncertainty of quantum mechanics. In fact, any definite configuration, such as ${\bf A}_i({\bf x})\equiv 0$, is never an eigenstate of the Hamiltonian, owing to the uncertainty. Thus, if we put an electric dipole in a box, its motion will be changed by the nonzero electric field originating from the uncertainty. This effect is suppressed by the volume of the box. The uncertainty also leads to condensates of the gauge fields. Without loss of generality we set ${\bf p}_0=(0,0,p_3)$. We now have $<A_3^*({\bf p}_0)A_3({\bf p}_0)>=\infty$, for the vacuum is gauge independent. But the condensates $<A_1^*({\bf p}_0)A_1({\bf p}_0)>=<A_2^*({\bf p}_0)A_2({\bf p}_0)>$ are finite: \begin{eqnarray} &&<A_1^*({\bf p}_0)A_1({\bf p}_0)>= \frac{\int [d{\bf A}({\bf p})]A_1^*({\bf p}_0)A_1({\bf p}_0) \Theta_0^2}{\int [d{\bf A}({\bf p})] \Theta_0^2} \nonumber \\ &&= \frac{\int dA_1({\bf p}_0)A_1^*({\bf p}_0)A_1({\bf p}_0) \exp\{-{2p_0\over L^3}A_1^*({\bf p}_0)A_1({\bf p}_0)\}}{\int dA_1({\bf p}_0) \exp\{-{2p_0\over L^3}A_1^*({\bf p}_0)A_1({\bf p}_0)\}} \nonumber \\ &&=\frac{L^3}{2p_0},\label{de1} \end{eqnarray} where $p_0=|p_3|$. 
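The lattice zero-point constant $\simeq 1.19$ and the transverse condensate $\langle A_1^*A_1\rangle = L^3/2p_0$ can both be reproduced by a short Monte Carlo estimate (a cross-check sketch; the sample sizes and test momentum are arbitrary choices):

```python
import numpy as np

# Lattice zero-point energy density (Delta x = 1, tau = 1):
#   E0/L^3 = ∫_{-π}^{π} d^3p/(2π)^3 sqrt(sin^2 p_x + sin^2 p_y + sin^2 p_z).
# The measure d^3p/(2π)^3 over the Brillouin zone turns the integral into
# a mean over the cube [-π, π]^3, estimated here by Monte Carlo sampling.
rng = np.random.default_rng(0)
p = rng.uniform(-np.pi, np.pi, size=(1_000_000, 3))
discrete = np.sqrt((np.sin(p) ** 2).sum(axis=1)).mean()   # text quotes ~1.19
continuum = np.linalg.norm(p, axis=1).mean()              # same cutoff, integrand |p|
assert 1.18 < discrete < 1.21
assert 2.4 < continuum / discrete < 2.7   # continuum value is a few times larger

# Transverse condensate: with weight exp{-(2 p0 / L^3) |A1|^2}, each real
# component of A1 is Gaussian with variance L^3/(4 p0), so <|A1|^2> = L^3/(2 p0).
L3, p0 = 1.0, 0.8
x = rng.normal(scale=np.sqrt(L3 / (4 * p0)), size=(1_000_000, 2))
condensate = (x ** 2).sum(axis=1).mean()
assert abs(condensate - L3 / (2 * p0)) < 0.01
```

The condensate check is just the complex Gaussian second moment computed in Eq. (\ref{de1}), written as two independent real components.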
One can furthermore obtain the gauge independent condensates, \begin{eqnarray} &&<B_i^*({\bf p}_0)B_i({\bf p}_0)>= \nonumber \\&& p^2_0<A_1({\bf p}_0)^*A_1({\bf p}_0)+A_2^*({\bf p}_0)A_2({\bf p}_0)>=p_0L^3, \nonumber \\ && <\Pi_i^*({\bf p}_0)\Pi_i({\bf p}_0)>=\nonumber \\ &&-L^6<{\partial^2\over \partial A_i^*({\bf p}_0)\partial A_i({\bf p}_0)}>=p_0L^3.\label{cde}\end{eqnarray} The gauge invariance implies $\Pi_3({\bf p}_0)=0$ in the vacuum, with a completely free $A_3({\bf p}_0)$. Notice here that not only the gauge field but also the gauge potential, which is gauge dependent, has gauge-independent expectation values. One can generalize Eq. (\ref{cde}) to general ${\bf p}$ in a straightforward way, which leads to the expected result, $E_0={1\over 2}\sum{ \Delta{\bf p}^3\over (2\pi)^3}\{<B_i^*({\bf p})B_i({\bf p})>+<\Pi_i^*({\bf p})\Pi_i({\bf p})>\}$. It is also interesting to study correlators of the gauge fields at different positions. The results are \begin{eqnarray} <B_i^*({\bf x})B_i(0)>&=&{1\over L^3}\sum { \Delta{\bf p}^3\over (2\pi)^3} <B_i({\bf p})B_i^*({\bf p})>e^{i{\bf p}\cdot {\bf x}} \nonumber \\ &=&\sum { \Delta{\bf p}^3\over (2\pi)^3} p\, e^{i{\bf p}\cdot{\bf x}}, \nonumber \\ <\Pi_i^*({\bf x})\Pi_i(0)>&=&\sum { \Delta{\bf p}^3\over (2\pi)^3} p\, e^{i{\bf p}\cdot{\bf x}}.\end{eqnarray} This study shows that the vacuum possesses many properties similar to those of the ground state of the harmonic oscillator, but there are also some different properties, due to the gauge invariance of the QED vacuum. Because of the quantum effect, or uncertainty, the vacuum has a complex structure, for instance, the nonvanishing condensates and correlators. The following is a brief study of the general solutions. We emphasize again that all the states must be gauge invariant. We now solve Eq. (\ref{eqhn}). 
Setting $A_i^\perp\equiv A_j({\bf p})(\delta_{ij}-p_ip_j/p^2_0)$, $\Theta_{{\bf p}_0}\equiv\Theta_{{\bf p}_0}[A_i^\perp,\,A_i^{\perp *}]=f\Theta_{0{\bf p}_0}$ and $\Theta_{0{\bf p}_0}=\exp\{-{p_0\over L^3}A_i^\perp A_i^{\perp *}\}$ with the function $f$ to be determined, we have \begin{eqnarray} \label{eqhn1} E_{{\bf p}_0} f&=&-L^3(\delta_{ij}-p_ip_j/p^2_0) {\partial^2 f\over \partial A_i^{\perp *}\partial A_j^\perp} +\nonumber \\ && p_0({\partial f\over \partial A_i^\perp}A_i^{\perp}+{\partial f\over \partial A_i^{\perp *}}A_i^{\perp *}) , \end{eqnarray} where we have ignored the ground state energy. One can use equation (\ref{eqhn1}) to study photon states. For instance, up to a constant, we obtain a quantum state \begin{equation} \Theta^k=(c_kA_k^\perp+c_k^*A_k^{\perp *})\Theta_{0},\end{equation} where $c_k=e^{i\theta_k}$ and $\Theta_0=\prod\limits_{\hbox{pairs of ${\bf p}$}}\Theta_{0{\bf p}}$ is the vacuum. Notice here $p_k\Theta^k\equiv 0$. It is not difficult to verify that the $\Theta^k$'s are two eigenstates with linear polarization perpendicular to the momentum \({\bf p}_0\). One can use superposition to construct photon states with other directions of linear polarization, or with circular polarization. Furthermore, the study of single-photon states can also be generalized to other states, for instance, multi-photon states. \section{Discussions} The gauge independence of QED has been studied through a new approach, which is related to a generalization of the local gauge symmetry, the expansion symmetry in gauge space. The study shows clearly that all the physical states are gauge independent. Our study also reveals why there are just two degrees of freedom in the gauge field and why the introduction of ghost states is not needed. 
Furthermore, we show that not only all physical states but also the expectation values of operators, even of gauge-dependent ones such as the gauge potential $A_i$, are gauge invariant. All the states should be gauge invariant both in QED and in QCD. However, there is a crucial difference between QED and QCD: the gauge particle, the photon, exists as a free particle in QED. We hope our approach will be helpful in understanding color confinement in QCD.
\section{Introduction} Variational ideas play an important role in various areas of mathematical General Relativity ---e.g. in the ADM formalism \cite{ArnDesMis62}, in the analysis of Penrose-like inequalities \cite{Mar09} or in the analysis of area-angular momentum inequalities \cite{Dai12} to mention some. Similarly, spinorial methods constitute a powerful tool for the analysis and manipulation of the Einstein field equations and their solutions ---most notably the proof of the positivity of the mass by Witten \cite{Wit81} and the analysis of linearised gravity, see e.g. \cite{PenRin84}. To the best of our knowledge, all available treatments of calculus of variations and linearisations in spinorial settings make use of computations in terms of components with respect to a dyad. It is therefore of interest to have a setup for performing a dyad-independent calculus of variations and computation of linearisations with spinors. The purpose of the present article is to develop such a setup. We expect this formalism to be of great value in both the analysis of the notion of non-Kerrness introduced in \cite{BaeVal10a,BaeVal10b} and positivity of the mass in \cite{BaeVal11a}, as well as in a covariant analysis of linearised gravity. The transformation properties of tensors and spinors pose some conceptual subtleties which have to be taken into account when computing variations of the basic tensorial and spinorial structures. It is possible to have variations of these structures which are \emph{pure gauge}. This difficulty is usually dealt with by a careful fixing of the gauge in some geometrically convenient manner. One thus performs the calculus of variations in a specific gauge and has to be careful in distinguishing between properties which are specific to the particular gauge and those which are generic. This situation becomes even more complicated as, in principle, both the tensorial and spinorial structures are allowed to vary simultaneously. 
\usemedskip In this article it is shown that it is possible to define a \emph{modified variation operator} which absorbs gauge terms in the variation of spinorial fields and thus allows one to perform \emph{covariant variations}. The idea behind this modified variation operator is similar to that behind the derivative operators in the GHP formalism, which absorb terms associated with the freedom in an NP tetrad ---see \cite{GerHelPen73}. As a result of our analysis we are able to obtain expressions involving abstract tensors and spinors ---thus, they are valid in any system of coordinates, and therefore invariant under diffeomorphisms which are constant with respect to variations. However, linearisations of diffeomorphisms do affect our variational quantities. This is discussed in Section~\ref{sec:diffeomorphisms}, where we also find that the diffeomorphism freedom can be controlled by a gauge source function. \usemedskip Finally, we point out that although our primary concern in this article is the construction of a formalism for the calculus of variations of expressions involving spinors in a 4-dimensional Lorentzian manifold, the methods can be adapted to a space-spinor formalism on 3-dimensional Riemannian manifolds. This is briefly discussed in Section~\ref{sec:spacespinors}. \usemedskip The calculations in this article have been carried out in the Mathematica based symbolic differential geometry suite \emph{xAct} \cite{xAct}, in particular \emph{SymManipulator} \cite{Bae11a} developed by TB. \subsection*{Notation and conventions} All throughout, we use abstract index notation to denote tensors and spinors. In particular, the indices $a, b, c,\ldots$ and $i, j, k, \ldots$ are abstract spacetime and spatial tensor indices respectively, while $A, B, C,\ldots$ denote abstract spinorial indices. The boldface indices $\bfa, \bfb, \bfc,\ldots$ and $\bfA, \bfB, \bfC,\ldots$ will be used as tensor frame indices and spinor frame indices, respectively. 
We follow the tensorial and spinorial conventions of Penrose \& Rindler \cite{PenRin84}. \usemedskip Our signature convention for 4-dimensional Lorentzian metrics is $(+,-,-,-)$, and 3-dimen\-sional Riemannian metrics have signature $(-,-,-)$. \usemedskip The standard positions for the basic variations are $\delta g_{ab}$, $\delta \sigma_{a}{}^{AA'}$, $\delta \sigma_{k}{}^{AB}$, $\delta \omega^{\bf a}{}_{b}$ , $\delta \epsilon^{\bf A}{}_{B}$, $\delta \epsilon_{AB}$, $\delta\gamma_a{}^b{}_c$. If any other index positions appear, this means that the indices are moved up or down with $g_{ab}$ or $\epsilon_{AB}$ after the variation. The definitions of the above objects will be given in the main text. \section{Basic setup} In this section we discuss our basic geometric setup, which will be used in Section~\ref{Section:SLCalculus} to perform calculus of variations. \subsection{Families of metrics} In what follows, let $(\mathcal{M},\mathring{g}_{ab})$ denote a 4-dimensional Lorentzian manifold (\emph{spacetime}). The metric $\mathring{g}_{ab}$ will be known as the \emph{background metric}. In what follows, in addition to $\mathring{g}_{ab}$, we consider \emph{arbitrary} families of Lorentzian metrics $\{g_{ab}[\lambda]\}$ over $\mathcal{M}$ with $\lambda\in\mathbb{R}$ a parameter such that $g_{ab}[0]=\mathring{g}_{ab}$. Intuitively, a particular choice of family of metrics can be thought of as a curve in the moduli space of Lorentzian metrics over $\mathcal{M}$. The fact that we allow for arbitrary families of metrics enables us to probe all possible directions of this space in a neighbourhood of $\mathring{g}_{ab}$ and thus, we can compute \emph{Fr\'echet} derivatives of functionals depending on the metric ---see Section~\ref{Section:CVBasics}. In order to make possible the discussion of spinors, it will be assumed that the spacetimes $(\mathcal{M},g_{ab}[\lambda])$ for fixed $\lambda$ are orientable and time orientable and admit a spinorial structure. 
\medskip \noindent \textbf{Notational warning.} In what follows, for the ease of the presentation, we often suppress the dependence on $\lambda$ from the various objects. Thus, unless otherwise stated, all objects not tagged with a \emph{ring} $(\mathring{\phantom{X}})$ are assumed to depend on a parameter $\lambda$. \subsection{Frames} In what follows, we assume that associated to each family of metrics $\{ g_{ab}\}$ one has a family $\{ e_{\bf a }{}^a\}$ of $g_{ab}$-orthonormal frames. Let $\{ \omega^{\bf a}{}_a \}$ denote the family of associated cobases so that for fixed $\lambda$ one has $e_{\bf a}{}^a \omega^{\bf b}{}_a =\delta_{\bf a}{}^{\bf b}$. Following the conventions of the previous section, we write $\mathring{e}{}_{\bf a}{}^a\equiv e_{\bf a}{}^a[0]$ and $\mathring{\omega}{}^{\bf a}{}_a \equiv \omega^{\bf a}{}_a[0]$. By assumption, one has that \begin{equation} g_{ab} e_\bfa{}^a e_\bfb{}^b =\eta_{\bfa\bfb}, \qquad g_{ab} = \eta_{\bfa\bfb} \omega^\bfa{}_a \omega^\bfb{}_b. \label{OrthonormalityMetricFrame} \end{equation} where, as usual, $\eta_{\bfa\bfb} =\mbox{diag}(1,-1,-1,-1)$. \begin{remark} Observe that in view of the relations \eqref{OrthonormalityMetricFrame} any family of frames and coframes $\{e'_\bfa{}^a\}$ and $\{ \omega'^{\bfa}{}_a \}$ related to $\{e_\bfa{}^a\}$ and $\{ \omega^{\bfa}{}_a \}$ through a family of Lorentz transformations $\{\Lambda^\bfa{}_\bfb\}$ give rise to the same family of metrics $\{ g_{ab} \}$ ---see Appendix \ref{Section:LorentzTransformations}. \end{remark} \subsection{Spinors} By assumption, the spacetimes $(\mathcal{M},g_{ab})$ are endowed with a spinorial structure. Accordingly, we consider families of antisymmetric spinors $\{ \epsilon_{AB}\}$ such that for fixed $\lambda$ the spinor $\epsilon_{AB}$ gives rise to the spinor structure of $(\mathcal{M},g_{ab})$. Moreover, we set $\mathring{\epsilon}_{AB} \equiv \epsilon_{AB}[0]$. 
\usemedskip Associated to the family $\{ \epsilon_{AB}\}$ one considers a family $\{ \epsilon_{\bf A}{}^A \}$ of normalised spin dyads ---that is, one has that \begin{equation} \epsilon_{AB} \epsilon_{\bf A}{}^A \epsilon_{\bf B}{}^B =\epsilon_{\bf AB}, \qquad \epsilon_{\bf AB} \equiv \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right). \label{Definition:EpsilonSpinor} \end{equation} Let $\{ \epsilon^{\bfA}{}_A \}$ denote the family of dual \emph{covariant bases} for which the relation $\epsilon^{AB} \epsilon^{\bfA}{}_A \epsilon^{\bf B}{}_B=\epsilon^{\bf AB}$ with $(\epsilon^{\bf AB}) \equiv -(\epsilon_{\bf AB})^{-1}$ holds. It follows that one has \[ \delta_A{}^B= \epsilon_A{}^{\bf A} \epsilon_{\bf A}{}^B, \qquad \epsilon_{AB} = \epsilon_{\bf AB} \epsilon_A{}^{\bf A} \epsilon_B{}^{\bf B}, \qquad \epsilon^{AB} = \epsilon^{\bf AB}\epsilon_{\bf A}{}^A \epsilon_{\bf B}{}^B. \] \begin{remark} As in the case of tensor frames, any family of dyads $\{\epsilon'_{\bf A}{}^A \}$ related to $\{ \epsilon_{\bf A}{}^A \}$ through a family of Lorentz transformations $\{\Lambda^\bfA{}_\bfB\}$ gives rise to the same spinorial structures associated to the family of antisymmetric spinors $\{ \epsilon_{AB} \}$ ---see Appendix \ref{Section:LorentzTransformations}. \end{remark} \subsection{Infeld-van der Waerden and soldering forms} The well-known correspondence between tensors and spinors is realised by the \emph{Infeld-van der Waerden symbols} $\sigma_{\bf a}{}^{\bf AA'}$ and $\sigma^{\bf a}{}_{\bf AA'}$. 
Given an arbitrary $ v^a \in T \mathcal{M}$ and $\beta_a \in T^* \mathcal{M}$ one has that \[ v^{\bf a} \mapsto v^{\bf AA'} = v^{\bf a} \sigma_{\bf a}{}^{\bf AA'}, \qquad \beta_{\bf a} \mapsto \beta_{\bf AA'} =\beta_{\bf a}\sigma^{\bf a}{}_{\bf AA'} \] where for \emph{fixed} $\lambda$ \[ v^{\bf a} \equiv v^a \omega^{\bf a}{}_a, \qquad \beta_{\bf a} \equiv \beta_a e_{\bf a}{}^a, \] denote the components of $v^a$ and $\beta_a$ with respect to the orthonormal basis $e_\bfa{}^a[\lambda]$ of $(\mathcal{M},g_{ab}[\lambda])$. In more explicit terms, the correspondence can be written as \[ (v^{\bf 0}, v^{\bf 1}, v^{\bf 2}, v^{\bf 3}) \mapsto \frac{1}{\sqrt{2}} \left( \begin{array}{cc} v^{\bf 0} + v^{\bf 3} & v^{\bf 1} + \mbox{i} v^{\bf 2} \\ v^{\bf 1} - \mbox{i} v^{\bf 2} & v^{\bf 0} - v^{\bf 3} \end{array} \right), \; (\beta_{\bf 0}, \beta_{\bf 1}, \beta_{\bf 2}, \beta_{\bf 3}) \mapsto \frac{1}{\sqrt{2}} \left( \begin{array}{cc} \beta_{\bf 0} + \beta_{\bf 3} & \beta_{\bf 1} - \mbox{i}\beta_{\bf 2} \\ \beta_{\bf 1} + \mbox{i}\beta_{\bf 2} & \beta_{\bf 0} - \beta_{\bf 3} \end{array} \right). \] From the \emph{Infeld-van der Waerden symbols} we define the \emph{soldering form} $\sigma_a{}^{AA'}$ and the dual of the soldering form $\sigma^a{}_{AA'}$ by \begin{subequations} \begin{align} \sigma_a{}^{AA'} \equiv \epsilon_{\bf A}{}^A \bar{\epsilon}_{\bf A'}{}^{A'} \omega^{\bf a}{}_a\sigma_{\bf a}{}^{\bf AA'},\label{Definition:SolderingForm}\\ \sigma^a{}_{AA'} \equiv \epsilon^{\bf A}{}_A \bar{\epsilon}^{\bf A'}{}_{A'} e_{\bf a}{}^a\sigma^{\bf a}{}_{\bf AA'} .\label{Definition:DualSolderingForm} \end{align} \end{subequations} By direct calculation, we can then verify the relations \begin{subequations} \begin{align} g_{ab} ={}&\epsilon_{AB} \bar\epsilon_{A'B'} \sigma_a{}^{AA'} \sigma_b{}^{BB'},\label{eq:gepsilonrel}\\ \delta_{a}{}^{b} ={}& \sigma_a{}^{BB'} \sigma^b{}_{BB'}. 
\label{eq:dualsolderingform} \end{align} \end{subequations} It is important to note that $\sigma_a{}^{AA'}$ and $\sigma^a{}_{AA'}$ are tensor frame and spin dyad dependent, while the relations \eqref{eq:gepsilonrel} and \eqref{eq:dualsolderingform} are universal. Following our approach, in the sequel we consider families $\{\sigma_a{}^{AA'}\}$ and $\{ \sigma^a{}_{AA'} \}$ of soldering forms such that $\mathring{\sigma}_a{}^{AA'}\equiv \sigma_a{}^{AA'}[0]$ and $\mathring{\sigma}^a{}_{AA'}\equiv \sigma^a{}_{AA'}[0]$ are the soldering forms associated to $(\mathring{\omega}^{\bf b}{}_{a},\mathring{\epsilon}^{\bf B}{}_{A})$. \begin{remark} In this article we adopt the point of view that the metric structure provided by $g_{ab}$ and the spinorial structure given by $\epsilon_{AB}$ are independent from each other. After a choice of frame and spinor basis these structures are linked to each other ---in an, admittedly, arbitrary manner--- through the relations in \eqref{Definition:SolderingForm} and \eqref{eq:gepsilonrel}. \end{remark} \section{Calculus of variations} \label{Section:SLCalculus} \subsection{Basic formalism} \label{Section:CVBasics} The main objective of our calculus of variations is to describe how real valued functionals depend on their arguments ---in particular, in the case the arguments are covariant spinors. To motivate our analysis, we first consider a real valued functional $\mathcal{F}[\omega^{\bf a}{}_a, \xi^{\bf a}]$, where $\xi^{a}$ is a vector field and $\xi^{\bf a}=\omega^{\bf a}{}_a\xi^a$. Given a \emph{particular} family of fields $\{\omega^{\bf a}{}_a[\lambda],\, \xi^{\bf a}[\lambda] \}$ depending on a parameter $\lambda$, we define the variations $\{ \delta \omega^{\bf a}{}_a,\, \delta \xi^{\bf a} \}$ through the expressions \[ \delta \omega^{\bf a}{}_a \equiv \frac{\mbox{d}\omega^{\bf a}{}_a}{\mbox{d}\lambda} \bigg|_{\lambda=0}, \qquad \delta \xi^{\bf a} \equiv \frac{\mbox{d}\xi^{\bf a}}{\mbox{d}\lambda} \bigg|_{\lambda=0}. 
\] In terms of the above fields over $\mathcal{M}$ we define \emph{the G\^ateaux derivative of $\mathcal{F}[\omega^{\bf a}{}_a, \xi^{\bf a}]$ at $\{\mathring{\omega}^{\bf a}{}_a,\, \mathring{\xi}^{\bf a} \}$ in the direction of the family} $\{\omega^{\bf a}{}_a[\lambda],\, \xi^{\bf a}[\lambda] \}$ as \begin{align*} \delta_{\{\omega^{\bf a}{}_a,\, \xi^{\bf a} \}} \mathcal{F}[\mathring\omega^{\bf a}{}_a, \mathring\xi^{\bf a}] \equiv{}& \frac{\mbox{d}}{\mbox{d}\lambda} \mathcal{F}[\omega^{\bf a}{}_a[\lambda], \xi^{\bf a}[\lambda]] \bigg|_{\lambda=0}\\ ={}& \frac{\mbox{d}}{\mbox{d}\lambda} \mathcal{F}[\mathring{\omega}^{\bf a}{}_a + \lambda \delta \omega^{\bf a}{}_a, \mathring{\xi}^{\bf a} + \lambda \delta \xi^{\bf a}] \bigg|_{\lambda=0}. \end{align*} Now, if $\delta_{\{\omega^{\bf a}{}_a,\, \xi^{\bf a} \}} \mathcal{F}$ exists for \emph{any} choice of family $\{\omega^{\bf a}{}_a,\, \xi^{\bf a} \}$ one then says that $\mathcal{F}[\omega^{\bf a}{}_a, \xi^{\bf a}]$ is \emph{Fr\'echet differentiable} at $\{\mathring{\omega}^{\bf a}{}_a,\, \mathring{\xi}^{\bf a} \}$. If this is the case, there exists a functional $\delta \mathcal{F}$, the \emph{Fr\'echet derivative}, from which $\delta_{\{\omega^{\bf a}{}_a,\, \xi^{\bf a} \}} \mathcal{F}$ can be computed once a particular choice of the family of variations $\{\delta \omega^{\bf a}{}_a,\, \delta \xi^{\bf a} \}$ is made. For more details concerning the notions of G\^ateaux and Fr\'echet derivative and their relation see \cite{Tro83}. The functional $\mathcal{F}[\omega^{\bf a}{}_a, \xi^{\bf a}]$ considered in the previous paragraph depends on the coframe and on the components of a tensor field with respect to this basis.
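By way of illustration, consider the functional given pointwise by
\[
\mathcal{F}[\omega^{\bf a}{}_a, \xi^{\bf a}] \equiv \eta_{\bf ab} \xi^{\bf a} \xi^{\bf b}.
\]
A direct computation gives
\[
\delta_{\{\omega^{\bf a}{}_a,\, \xi^{\bf a} \}} \mathcal{F}[\mathring{\omega}^{\bf a}{}_a, \mathring{\xi}^{\bf a}] = \frac{\mbox{d}}{\mbox{d}\lambda} \big( \eta_{\bf ab} \xi^{\bf a}[\lambda] \xi^{\bf b}[\lambda] \big) \bigg|_{\lambda=0} = 2 \eta_{\bf ab} \mathring{\xi}^{\bf a} \delta \xi^{\bf b},
\]
which is linear in the variation $\delta \xi^{\bf a}$ and independent of the particular family used to generate it ---this simple functional is thus Fr\'echet differentiable with $\delta \mathcal{F} = 2 \eta_{\bf ab} \mathring{\xi}^{\bf a} \delta \xi^{\bf b}$.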
As the particular choice of frame involves the specification of a gauge, instead of regarding the functional $\delta \mathcal{F}$ as depending on the fields $[\mathring{\omega}^{\bf a}{}_a,\mathring{\xi}^{\bf a},\delta\omega^{\bf a}{}_a, \delta\xi^{\bf a}]$ it will be convenient to regard it as depending on $[\mathring{g}_{ab}, \mathring{\xi}^a, \delta g_{ab},T_{ab},\delta\xi^{a}]$, where the field $T_{ab}$ describes the frame gauge choice and \[ \delta g_{ab} \equiv \frac{\mbox{d}g_{ab}}{\mbox{d}\lambda} \bigg|_{\lambda=0}, \] where $\{ g_{ab} \}$ is a family of metrics over $\mathcal{M}$ such that for fixed $\lambda$ the coframe $\omega^{\bf a}{}_a$ is $g_{ab}$-orthonormal. \usemedskip Next, we consider real valued functionals depending on spinors. For concreteness consider a functional of the form $\mathcal{F}[g_{ab}, \epsilon^{\bf A}{}_A, \kappa_{\bf A}]$. The G\^ateaux and Fr\'echet derivatives of this functional are defined in the natural way by considering arbitrary families of fields $\{g_{ab}, \, \epsilon^{\bf A}{}_A, \,\kappa_{\bf A}\}$ depending on a parameter $\lambda$. The variations implied by this family of fields are then defined by \[ \delta g_{ab} \equiv \frac{\mbox{d}g_{ab}}{\mbox{d}\lambda} \bigg|_{\lambda=0}, \qquad \delta \epsilon^{\bf A}{}_A \equiv \frac{\mbox{d}\epsilon^{\bf A}{}_A}{\mbox{d}\lambda} \bigg|_{\lambda=0}, \qquad \delta \kappa_{\bf A} \equiv \frac{\mbox{d}\kappa_{\bf A}}{\mbox{d}\lambda} \bigg|_{\lambda=0}.
\] In analogy to the example considered in the previous paragraphs, it will be convenient to regard the Fr\'echet derivative $\delta \mathcal{F}$, which in principle depends on $[\mathring{g}_{ab}, \mathring{\epsilon}^{\bf A}{}_A, \mathring{\kappa}_{\bf A}, \delta g_{ab}, \delta\epsilon^{\bf A}{}_A, \delta \kappa_{\bf A}]$, as a functional of the arguments $[\mathring{g}_{ab}, \mathring{\epsilon}_{AB}, \mathring{\kappa}_A, \delta g_{ab}, \delta \kappa_A, T_{ab}, S_{AB}]$ where the field $S_{AB}$ describes the dyad gauge choice. In this way one obtains a formalism that separates the tensor frame and spin dyad gauge in the Fr\'echet derivatives. The main observation in the sequel is that it is possible to obtain a \emph{modified variation operator} $\vartheta$ which absorbs the frame and dyad gauge terms so that the Fr\'echet derivative depends on the parameters $[\mathring{g}_{ab}, \mathring{\kappa}_A, \delta g_{ab}, \vartheta \kappa_A]$. \medskip \noindent \textbf{Notational warning.} In what follows, for ease of presentation, we mostly suppress the ring $\mathring{\phantom{x}}$ from the background quantities appearing in expressions involving variations. If an expression does not involve variations then it holds for both the background quantities and any other one in the family. \subsection{Basic formulae for frames} \label{Subsection:BasicVariationFrames} Consider first the expression for the metric $g_{ab}$ in terms of the coframe $\{ \omega^{\bf a}{}_a \}$ ---namely \[ g_{ab} = \eta_{\bf a b} \omega^{\bf a}{}_a \omega^{\bf b}{}_b. \] Applying the variational operator $\delta$ to the above expression, using the Leibniz rule and the fact that $\eta_{\bf a b}$ are constants, yields \begin{equation} \delta g_{ab} = \eta_{\bf ab} \delta \omega^{\bf a}{}_a \omega^{\bf b}{}_b + \eta_{\bf ab}\omega^{\bf a}{}_a \delta \omega^{\bf b}{}_b.
\label{VariationMetric} \end{equation} In certain computations it is useful to be able to express $\delta \omega^{\bf a}{}_a$ in terms of $\delta g_{ab}$. In order to do this, it is noticed that from \eqref{VariationMetric} it follows that \[ \delta g_{ab} = 2 \eta_{\bf ab} \omega^{\bf a}{}_a \delta \omega^{\bf b}{}_b - 2 T_{ab}, \] where \[ T_{ab} \equiv \eta_ {{\bf c} {\bf d}}\omega^{\bf d}{}_{[a} \delta \omega^{{\bf c}}{}_{b]}. \] It then follows that \begin{equation} \delta (\omega^{{\bf a}}{}_{a}) = \tfrac{1}{2} e_{{\bf b}}{}^{b} \eta^{{\bf b} {\bf a}} \delta g_{ab} - e_{{\bf b}}{}^{b} \eta^{{\bf b} {\bf a}} T_{ab}. \label{VariationCoframeInTermsVariationMetric} \end{equation} A formula for the variation of the inverse metric can be computed by taking variations of the defining relation $\delta_a{}^b = g_{ac} g^{cb}$. One finds that \[ \delta(g^{dc}) ={}- g^{ad} g^{bc} \delta g_{ab}. \] A formula for the variation of the frame vectors $\{ e_{\bf a}{}^a\}$ in terms of the variation of $\delta \omega^{{\bf c}}{}_{b}$ is obtained by computing the variation of the expression $\delta_{\bf a}{}^{\bf b} = e_{\bf a}{}^a \omega^{\bf b}{}_a$. One finds that \[ \delta(e_{{\bf a}}{}^{d}) ={}- e_{{\bf a}}{}^{b} e_{{\bf c}}{}^{d} \delta \omega^{{\bf c}}{}_{b}. \] \usemedskip The previous expressions can be used to compute a formula for the variation of a covector $\xi_a$. Writing $\xi_a = \xi_{\bf a} \omega^{\bf a}{}_a$, one obtains that \begin{align*} \delta\xi_{a}={}&\omega^{\bf b}{}_{a} \delta(\xi_{{\bf b}}) + \tfrac{1}{2} e_{{\bf c}}{}^{b} \eta^{{\bf c} {\bf d}} \xi_{{\bf d}}\delta g_{ab} - e_{{\bf c}}{}^{b} \eta^{{\bf c} {\bf d}} \xi_{{\bf d}}T_{ab}. \end{align*} \begin{remark} An interpretation of the tensor $T_{ab}$ appearing in equation \eqref{VariationCoframeInTermsVariationMetric} can be obtained by considering a situation where $\delta g_{ab}=0$. 
In that case equation \eqref{VariationCoframeInTermsVariationMetric} reduces to \[ \delta \omega^{{\bf a}}{}_{a} = - e_{{\bf b}}{}^{b} \eta^{{\bf b} {\bf a}} T_{ab}. \] Writing $T_{ab} = T_{\bf ab} \omega^{\bf a}{}_a \omega^{\bf b}{}_b$ where $T_{\bf ab}$ denote the components of $T_{ab}$ with respect to the coframe $\{ \omega^{\bf a}{}_a \}$ one has that \begin{equation*} \delta \omega^{\bf a}{}_a = - e_{\bf b}{}^b \eta^{\bf ba} T_{\bf cd} \omega^{\bf c}{}_a \omega^{\bf d}{}_b = T^{\bf a}{}_{\bf c} \omega^{\bf c}{}_a, \end{equation*} where $T^{\bf a}{}_{\bf c} \equiv - \eta^{\bf da} T_{\bf cd}$. Comparing with the discussion in Appendix \ref{Section:LorentzTransformations} one sees that $T_{ab}$ encodes a rotation of the basis. With this observation, in what follows we interpret the second term in equation \eqref{VariationCoframeInTermsVariationMetric} as a gauge term. \end{remark} \subsection{Basic formulae for spinors} \label{Subsection:BasicVariationSpinors} The analysis in the previous section admits a straightforward spinorial analogue. Given a covariant spinorial dyad $\{ \epsilon^{\bf A}{}_A\}$ one can write \[ \epsilon_{AB} = \epsilon_{\bf AB} \epsilon^{\bf A}{}_A \epsilon^{\bf B}{}_B. \] Thus, one has that \begin{align*} \delta \epsilon_{AB} ={}& \epsilon_{\bf AB} \epsilon^{\bf B}{}_B\delta \epsilon^{\bf A}{}_A + \epsilon_{\bf AB} \epsilon^{\bf A}{}_A \delta\epsilon^{\bf B}{}_B \nonumber\\ ={}& 2\epsilon_{\bf AB} \epsilon^{\bf B}{}_B \delta \epsilon^{\bf A}{}_A - 2 S_{AB}, \end{align*} where \[ S_{AB} \equiv \epsilon_{\bf AB} \epsilon^{\bf B}{}_{(B} \delta \epsilon^{\bf A}{}_{A)}. \] The variation of the contravariant antisymmetric spinor $\epsilon^{AB}$ can be computed from the above formulae by first computing the variation of $\epsilon_{AB} \epsilon^{BC} = - \delta_{A}{}^{C}$ and then multiplying with $\epsilon^{AD}$. We obtain that \[ \delta(\epsilon^{DC}) ={} - \epsilon^{AD} \epsilon^{BC}\delta \epsilon_{AB}.
\] As $\delta\epsilon_{AB}$ is antisymmetric, we can fully express it in terms of its trace as $\delta\epsilon_{AB}=-\tfrac{1}{2}\epsilon_{AB}\delta\epsilon^C{}_{C}$. Now, if one wants to compute $\delta \epsilon^{\bf A}{}_A$ in terms of $\delta \epsilon_{AB} $ one has that \begin{equation} \delta \epsilon^{{\bf A}}{}_{A} = \tfrac{1}{2} \epsilon^{{\bf A} {\bf B}} \epsilon_{{\bf B}}{}^{B} \delta \epsilon_{AB} + \epsilon^{{\bf A} {\bf B}} \epsilon_{{\bf B}}{}^{B} S_{AB}. \label{VariationSpinBasisInTermsVariationEpsilon} \end{equation} If we compute the variation of $\delta_{{\bf A}}{}^{{\bf C}} = \epsilon^{\bf C}{}_{B} \epsilon_{{\bf A}}{}^{B}$ and multiply with $\epsilon_{{\bf C}}{}^{D}$ we get \[ \delta (\epsilon_{{\bf A}}{}^{A}) = - \epsilon_{{\bf A}}{}^{B} \epsilon_{{\bf C}}{}^{A} \delta \epsilon^{{\bf C}}{}_{B}. \] \usemedskip Now consider a covariant spinor $\phi_A$ and expand it with respect to the spinor dyad $\{ \epsilon^{\bf A}{}_A\}$ as \[ \phi_A = \phi_{\bf A} \epsilon^{\bf A}{}_A. \] A calculation using equation \eqref{VariationSpinBasisInTermsVariationEpsilon} yields the expression \begin{eqnarray*} && \delta \phi_A = \delta \phi_{\bf A} \epsilon^{\bf A}{}_A + \phi_{\bf A} \delta \epsilon^{\bf A}{}_A \\ && \phantom{\delta \phi_A} = \delta \phi_{\bf A} \epsilon^{\bf A}{}_A + \tfrac{1}{2}\phi_{\bf A} \epsilon^{{\bf A} {\bf P} } \epsilon_{\bf P} {}^B \delta \epsilon_{AB} + \phi_{\bf A} \epsilon^{{\bf A}{\bf P} } \epsilon_{\bf P}{}^B S_{AB}. \end{eqnarray*} Using the identity $\epsilon^{{\bf A}{\bf C} }\phi_{\bf A} \epsilon_{\bf C}{}^B = \epsilon^{CB}\phi_C$ the variation $\delta \phi_A$ can be reexpressed as \[ \delta \phi_A = (\delta \phi_{\bf A}) \epsilon^{\bf A}{}_A + \tfrac{1}{4}(\delta \epsilon^Q{}_Q) \phi_A - S_A{}^B \phi_B. \] \begin{remark} As in the case of equation \eqref{VariationCoframeInTermsVariationMetric} and the tensor $T_{ab}$, the spinor $S_{AB}$ admits the interpretation of a rotation.
Indeed, considering a situation where $\delta \epsilon_{AB}=0$, writing $S_{AB} = \epsilon^{\bf A}{}_A \epsilon^{\bf B}{}_B S_{\bf AB}$ one finds that \begin{eqnarray*} && \delta \epsilon^{\bf A}{}_A = \epsilon^{\bf AB} \epsilon_{\bf B}{}^B S_{AB}\\ && \phantom{\delta \epsilon^{\bf A}{}_A} = \epsilon^{\bf AB} \epsilon_{\bf B}{}^B \epsilon^{\bf P}{}_A \epsilon^{\bf Q}{}_B S_{\bf PQ}\\ && \phantom{\delta \epsilon^{\bf A}{}_A} = S^{\bf A}{}_{\bf B} \epsilon^{\bf B}{}_A. \end{eqnarray*} Comparing with Appendix~\ref{Section:LorentzTransformations}, we find that $S_{AB}$ encodes a rotation of the spin dyad. \end{remark} \subsection{Variation of the soldering form} In the remainder of this article we will consider a more general setting in which both the metric $g_{ab}$ and the antisymmetric spinor $\epsilon_{AB}$ can be varied simultaneously. To analyse the relation between the variations of these two structures it is convenient to consider the soldering form $\sigma{}_a{}^{AA'}$. To compute the variation of the soldering form, one starts by computing the variation of the relation \eqref{Definition:SolderingForm}. As we are treating the Infeld-van der Waerden symbols as constants, their variation vanishes --- that is, although both the metric and spinor structure may vary, the formal relation between tetrads and spin dyads will be preserved. A direct combination of the methods of Sections \ref{Subsection:BasicVariationFrames} and \ref{Subsection:BasicVariationSpinors} on formula \eqref{Definition:SolderingForm} leads, after a computation, to the expression \begin{align} \delta \sigma_a{}^{AA'}={}& \tfrac{1}{2} \delta \epsilon^{A}{}_{B} \sigma_a{}^{BA'}+\tfrac{1}{2} \delta\bar \epsilon^{A'}{}_{B'} \sigma_a{}^{AB'} + \tfrac{1}{2} g^{bc}\delta g_{ab} \sigma_c{}^{AA'} \nonumber\\ &- \bar{S}^{A'}{}_{B'} \sigma_a{}^{AB'} - S^{A}{}_{B} \sigma_a{}^{BA'} - T_a{}^b \sigma_b{}^{AA'}.
\label{deltasigmaeq} \end{align} The terms in the second line of the previous expression are identified as \emph{gauge terms}. Observe that in this case one has two types of gauge terms: one arising from the variation of the tensor frame and one coming from the variation of the spin frame. If we compute the variation of equation \eqref{eq:dualsolderingform} and multiply with $\sigma^a{}_{AA'}$ we get \[ \delta(\sigma^b{}_{AA'}) = - \delta(\sigma_a{}^{BB'}) \sigma^a{}_{AA'} \sigma^b{}_{BB'}. \] Multiplying equation \eqref{deltasigmaeq} with $g^{ac}\sigma_c{}^{BB'}$ and splitting into irreducible parts, we get the relations \begin{align*} \delta \sigma^a{}_{(A}{}^{(A'}\sigma_{|a|B)}{}^{B')} ={}&\tfrac{1}{2} \delta g_{(AB)}{}^{(A'B')},\\ \delta \sigma^{a(A|B'|}\sigma_a{}^{B)}{}_{B'}={}&T^{AB} - 2 S^{AB},\\ \delta \sigma^{aB(A'}\sigma_{|a|B}{}^{B')}={}&\bar{T}^{A'B'} - 2 \bar{S}^{A'B'},\\ \delta \sigma^{aBB'} \sigma_{aBB'}={}&\tfrac{1}{2} \delta g^{B}{}_{B}{}^{B'}{}_{B'} + \delta \epsilon^{B}{}_{B} + \delta \bar\epsilon^{B'}{}_{B'}, \end{align*} where we have defined \begin{align*} T_{AB}\equiv{}& T_{ab} \sigma^a{}_{A}{}^{A'} \sigma^b{}_{BA'},& \delta g_{ABA'B'}\equiv{}& \delta g_{ab} \sigma^a{}_{AA'} \sigma^b{}_{BB'}. \end{align*} \subsection{General variations of spinors} The formulae for the variations of the soldering form and its dual can now be used to compute the variation of arbitrary spinors under variations of the metric and spinor structures. To this end, consider spinors $\zeta^{AA'}$ and $\xi_{AA'}$. 
Making use of the Leibniz rule one obtains the expressions \begin{subequations} \begin{align} \sigma_{a}{}^{AA'}\delta\zeta^a={}&\delta(\zeta^{AA'}) - \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \zeta^{AA'} - \tfrac{1}{4} \delta \bar\epsilon^{B'}{}_{B'} \zeta^{AA'} - \tfrac{1}{2} \delta g^{A}{}_{B}{}^{A'}{}_{B'} \zeta^{BB'} \nonumber\\ &- \tfrac{1}{2} \bar{T}^{A'}{}_{B'} \zeta^{AB'} + \bar{S}^{A'}{}_{B'} \zeta^{AB'}- \tfrac{1}{2} T^{A}{}_{B} \zeta^{BA'} + S^{A}{}_{B} \zeta^{BA'} , \label{GeneralVariation1}\\ \sigma^{a}{}_{AA'}\delta\xi_a ={}&\delta(\xi_{AA'}) + \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \xi_{AA'} + \tfrac{1}{4} \delta \bar\epsilon^{B'}{}_{B'} \xi_{AA'}+ \tfrac{1}{2} \delta g_{A}{}^{B}{}_{A'}{}^{B'} \xi_{BB'}\nonumber\\ &+ \tfrac{1}{2} \bar{T}_{A'}{}^{B'} \xi_{AB'} - \bar{S}_{A'}{}^{B'} \xi_{AB'}+ \tfrac{1}{2} T_{A}{}^{B} \xi_{BA'} - S_{A}{}^{B} \xi_{BA'}, \label{GeneralVariation2} \end{align} \end{subequations} where $\zeta^a\equiv\sigma^{a}{}_{BB'}\zeta^{BB'}$ and $\xi_a\equiv\sigma_{a}{}^{BB'}\xi_{BB'}$. We observe that both expressions contain a combination of \emph{gauge terms} involving the spinors $T_{AB}$ and $S_{AB}$. \usemedskip In view of the discussion in the previous paragraph we introduce a general \emph{modified variation operator}.
\begin{definition}\label{def:modvar1} The \emph{modified variation operator} $\vartheta$ is defined for valence 1 spinors by \begin{align*} \vartheta\phi_{A}\equiv{}&\delta\phi_{A} + \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \phi_{A} + \tfrac{1}{2} T_{A}{}^{B} \phi_{B} - S_{A}{}^{B} \phi_{B},\\ \vartheta\phi^{A}\equiv{}&\delta\phi^{A} - \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \phi^{A} - \tfrac{1}{2} T^{A}{}_{B} \phi^{B} + S^{A}{}_{B} \phi^{B},\\ \vartheta\bar{\phi}_{A'}\equiv{}&\delta\bar{\phi}_{A'} + \tfrac{1}{4} \delta \bar\epsilon^{B'}{}_{B'} \bar{\phi}_{A'} + \tfrac{1}{2} \bar{T}_{A'}{}^{B'} \bar{\phi}_{B'} - \bar{S}_{A'}{}^{B'} \bar{\phi}_{B'},\\ \vartheta\bar{\phi}^{A'}\equiv{}&\delta\bar{\phi}^{A'} - \tfrac{1}{4} \delta \bar\epsilon^{B'}{}_{B'} \bar{\phi}^{A'} - \tfrac{1}{2} \bar{T}^{A'}{}_{B'} \bar{\phi}^{B'} + \bar{S}^{A'}{}_{B'} \bar{\phi}^{B'}, \end{align*} and extended to spinors of arbitrary valence by the Leibniz rule. \end{definition} In particular, using the above definitions in expressions \eqref{GeneralVariation1}-\eqref{GeneralVariation2} one finds that \begin{align*} &\sigma_{a}{}^{AA'}\delta\zeta^{a} ={}\vartheta\zeta^{AA'} - \tfrac{1}{2} \delta g^{A}{}_{B}{}^{A'}{}_{B'} \zeta^{BB'},\\ &\sigma^{a}{}_{AA'}\delta\xi_{a} ={}\vartheta\xi_{AA'} + \tfrac{1}{2} \delta g_{A}{}^{B}{}_{A'}{}^{B'} \xi_{BB'}, \end{align*} showing that $\vartheta\zeta^{AA'}$ and $\vartheta\xi_{AA'}$ are frame gauge independent. Moreover, a further calculation shows that \[ \vartheta \epsilon_{AB}=0 \] so that the process of raising and lowering spinor indices commutes with the modified variation operator $\vartheta$. \begin{remark} Expanding $\phi_{A}$ in terms of the spin dyad in the $\delta\phi_{A}$ term of Definition~\ref{def:modvar1} gives \begin{align} \vartheta(\phi_{A})={}& \epsilon^{\bf B}{}_{A} \delta(\phi_{{\bf B}}) + \tfrac{1}{2} T_{A}{}^{B} \phi_{B}. \end{align} Observe that the $S_{AB}$ and $\delta\epsilon_{AB}$ terms cancel out.
\end{remark} \section{Variations and the covariant derivative} \label{Section:CovDevs} The purpose of this section is to analyse the relation between the variation operators $\delta$ and $\vartheta$ and the Levi-Civita connection $\nabla_a$ of the metric $g_{ab}$. \subsection{Basic tensorial relations} Our analysis of the variations of expressions involving covariant derivatives is based on the following basic assumption: \begin{assumption} For any scalar field $f$ over $\mathcal{M}$ one has that \begin{equation} \nabla_a\delta f=\delta(\nabla_a f) \label{CommutationNabladelta} \end{equation} \end{assumption} In what follows, define the \emph{frame dependent tensor} \[ \gamma_a{}^b{}_c \equiv{} - e_{{\bf c}}{}^b \nabla_{a}\omega^{\bf c}{}_{c}. \] The tensor $\gamma_a{}^b{}_c$ can be regarded as a convenient way of grouping the connection coefficients $\gamma_\bfa{}^\bfb{}_\bfc$ of the connection $\nabla_a$ with respect to the frame $\{ e_\bfa{}^a\}$. A calculation shows, indeed, that \[ \gamma_a{}^b{}_c = \gamma_\bfa{}^\bfb{}_\bfc \,\omega^\bfa{}_a \omega^\bfc{}_c e_\bfb{}^b. \] We can express all covariant derivatives of the cobasis and the basis in terms of $\gamma_a{}^b{}_c$ via \[ \nabla_{a}\omega^{{\bf f}}{}_{c}={}- \omega^{\bf f}{}_{b} \gamma_{a}{}^{b}{}_{c},\qquad \nabla_{d}e_{{\bf f}}{}^{b}={}e_{{\bf f}}{}^{a} \gamma_{d}{}^{b}{}_{a}. \] Differentiating the orthonormality condition $\eta^{{\bf a} {\bf b}} = \omega^{\bf a}{}_{c} \omega^{\bf b}{}_{d} g^{cd}$ and multiplying with $e_{{\bf a}}{}^{h} e_{{\bf b}}{}^{l}$ we get the relation \begin{equation} \gamma_{f}{}^{(a}{}_{c}g^{b)c}=0\label{eq:metriccompgamma} \end{equation} encoding the metric compatibility of $\nabla_a$.
The variation of this gives \begin{align} \delta \gamma_{f}{}^{(ab)}={}&\gamma_{f}{}^{(a|c|}\delta g^{b)}{}_{c}.\label{eq:symdeltagamma} \end{align} Now, for any covector $\xi_a$, its covariant derivative can be expanded in terms of the frame as \begin{align*} \nabla_{a}\xi_{b}={}&- \omega^{\bf c}{}_{d} \gamma_{a}{}^{d}{}_{b} \xi_{{\bf c}} + \omega^{{\bf c}}{}_{b} \nabla_{a}\xi_{{\bf c}}. \end{align*} Computing the variation of this last expression, and using the relations above, gives after some straightforward calculations \begin{align} \delta(\nabla_{a}\xi_{b})={}&- \delta \gamma_{a}{}^{c}{}_{b} \xi_{c} + T_{c}{}^{d} \gamma_{abd} \xi^{c} - T_{b}{}^{d} \gamma_{acd} \xi^{c} + \tfrac{1}{2} \gamma_{ac}{}^{d} \delta g_{bd} \xi^{c} + \tfrac{1}{2} \gamma_{ab}{}^{d} \delta g_{cd} \xi^{c} + \xi^{c} \nabla_{a}T_{bc}\nonumber\\ & - \tfrac{1}{2} \xi^{c} \nabla_{a}\delta g_{bc} + \nabla_{a}\delta \xi_{b}.\label{eq:deltanablaxi1} \end{align} In the previous calculation Assumption \eqref{CommutationNabladelta} has been used. If we use relation \eqref{eq:deltanablaxi1} with $\xi_a=\nabla_a f$, antisymmetrize over $a$ and $b$, and assume that the connection is torsion free, we get \begin{align*} 0={}& (T_{c}{}^{d} \gamma_{[ab]d} + \tfrac{1}{2} \delta g_{c}{}^{d} \gamma_{[ab]d} - \delta \gamma_{[a|c|b]} + \nabla_{[a}T_{b]c} - \tfrac{1}{2} \nabla_{[a}\delta g_{b]c} + T_{[a}{}^{d}\gamma_{b]cd} + \tfrac{1}{2} \gamma_{[a|c|}{}^{d}\delta g_{b]d})\nabla^{c}f. 
\end{align*} Hence, the torsion-free condition is encoded by \begin{align} \delta \gamma_{[a|c|b]}={}&T_{c}{}^{d} \gamma_{[ab]d} + \tfrac{1}{2} \delta g_{c}{}^{d} \gamma_{[ab]d} + \nabla_{[a}T_{b]c} - \tfrac{1}{2} \nabla_{[a}\delta g_{b]c} + T_{[a}{}^{d}\gamma_{b]cd} + \tfrac{1}{2} \gamma_{[a|c|}{}^{d}\delta g_{b]d}.\label{eq:torsioncond} \end{align} Now, using the identity \begin{align*} \delta \gamma_{abc}={}&\delta \gamma_{[a|b|c]} - \delta \gamma_{[a|c|b]} + \delta \gamma_{[b|a|c]} + \delta \gamma_{a(bc)} - \delta \gamma_{b(ac)} + \delta \gamma_{c(ab)}, \end{align*} we can use equations \eqref{eq:symdeltagamma} and \eqref{eq:torsioncond} to compute \begin{align} \delta \gamma_{abc}={}&- T_{c}{}^{d} \gamma_{abd} + T_{b}{}^{d} \gamma_{acd} + \tfrac{1}{2} \gamma_{ac}{}^{d} \delta g_{bd} + \tfrac{1}{2} \gamma_{ab}{}^{d} \delta g_{cd} - \nabla_{a}T_{bc} - \tfrac{1}{2} \nabla_{b}\delta g_{ac} + \tfrac{1}{2} \nabla_{c}\delta g_{ab}. \end{align} It follows that equation \eqref{eq:deltanablaxi1} can be simplified to \begin{equation} \delta(\nabla_{a}\xi_{b})={}\nabla_{a}(\delta \xi_{b}) - \tfrac{1}{2} g^{cd} (\nabla_{a}\delta g_{bc} + \nabla_{b}\delta g_{ac} - \nabla_{c}\delta g_{ab}) \xi_{d} .\label{eq:varcovd1} \end{equation} It is important to observe that this formula is a tensorial expression. Hence, it allows one to define a \emph{transition tensor} \begin{equation} Q_b{}^a{}_c\equiv \tfrac{1}{2} g^{ad} (\nabla_{b}\delta g_{dc} + \nabla_{c}\delta g_{bd} - \nabla_{d}\delta g_{bc}) \label{Definition:TransitionTensor} \end{equation} relating the connections $\nabla_a$ and $\delta \nabla_a$. This is not surprising, as it is well known that the space of covariant derivatives on a manifold is an affine space.
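\begin{remark} The transition tensor \eqref{Definition:TransitionTensor} coincides with the familiar expression for the variation of the Christoffel symbols. In local coordinates one has \[ \delta \Gamma^a{}_{bc} = \tfrac{1}{2} g^{ad} \big( \nabla_b \delta g_{dc} + \nabla_c \delta g_{bd} - \nabla_d \delta g_{bc} \big) = Q_b{}^a{}_c, \] consistent with the fact that the difference of two Levi-Civita connections is a tensor field symmetric in its lower indices. \end{remark}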
Making use of the definition of $Q_b{}^a{}_{c}$, equation \eqref{eq:varcovd1} takes the suggestive form \begin{equation} \delta(\nabla_{a}\xi_{b})={}\nabla_{a}(\delta \xi_{b}) - Q_b{}^d{}_a \xi_d.\label{eq:deltanablaxi} \end{equation} Furthermore, making use of the Leibniz rule one finds that for an arbitrary vector $v^a$ one has \[ \delta(\nabla_a v^b) = \nabla_a (\delta v^b) + Q_c{}^b{}_a v^c. \] The extension to higher valence tensors follows in a similar manner. \subsection{Spinorial expressions} In order to discuss the variations of the spinor covariant derivative $\nabla_{AA'}$ associated to the Levi-Civita connection $\nabla_a$ it is convenient to define a spinorial analogue of the tensor $\gamma_a{}^b{}_c$ ---namely \[ \gamma{}_{a}{}^{B}{}_{C} \equiv{} - \epsilon_{{\bfC}}{}^{B} \nabla_{a}\epsilon^{\bfC}{}_{C}. \] The hybrid $\gamma{}_{a}{}^{B}{}_{C}$ is related to $\gamma_{a}{}^{BB'}{}_{CC'}\equiv\gamma_a{}^b{}_c\sigma_b{}^{BB'}\sigma^c{}_{CC'}$ through the decomposition \[ \gamma_{a}{}^{BB'}{}_{CC'} = \gamma_{a}{}^B{}_C \delta_{C'}{}^{B'} + \bar{\gamma}_{a}{}^{B'}{}_{C'} \delta_C{}^B. \] It follows then that \begin{equation} \gamma_{a}{}^{B}{}_{C} = \tfrac{1}{2} \gamma_a{}^{c}{}_{b}\sigma^{b}{}_{CB'} \sigma_{c}{}^{BB'}. \label{GammaTensorToGammaSpinor} \end{equation} From this last expression it can then be verified that \[ \gamma_{aBC}= \gamma_{aCB}. \] \usemedskip The variational derivative of $\gamma_{a}{}^{B}{}_{C}$ can be computed using equation \eqref{GammaTensorToGammaSpinor}. One finds that \begin{align} \delta(\gamma_a{}^{B}{}_{C})={}&\tfrac{1}{4} Q_{acd} \sigma^{dBB'} \sigma^{c}{}_{CB'} - \tfrac{1}{4} Q_{adc} \sigma^{dBB'} \sigma^{c}{}_{CB'} - \tfrac{1}{2} \gamma_{acd} S_{CD} \sigma^{cBB'} \sigma^{dD}{}_{B'}\nonumber\\ &- \tfrac{1}{2} \gamma_{acd} S^{B}{}_{D} \sigma^{c}{}_{C}{}^{B'} \sigma^{dD}{}_{B'} - \tfrac{1}{2} \nabla_{a}T^{B}{}_{C}.
\label{VariationGammaSpinor} \end{align} In this last expression observe, in particular, the appearance of the gauge spinors $S_{AB}$ and $T_{AB}$. In turn, equation \eqref{VariationGammaSpinor} can be used to compute the variation of the covariant derivative of an arbitrary spinor $\kappa_A$. Expanding $\kappa_A$ in terms of the spin dyad and differentiating we get \[ \nabla_{a}\kappa_{B} = \epsilon^{\bf C}{}_{B} \nabla_{a}\kappa_{{\bf C}}- \gamma_a{}^{C}{}_{B} \kappa_{C} . \] It follows that the variation of this last expression is given by \begin{align*} \delta(\nabla_{a}\kappa_{A})={}& \nabla_{a}\delta \kappa_{A} - \tfrac{1}{2} \kappa^{B} \nabla_{a}T_{AB} + \kappa^{B} \nabla_{a}S_{AB} + \tfrac{1}{4} \kappa_{A} \nabla_{a}\delta \epsilon^{B}{}_{B} \nonumber\\ &+\tfrac{1}{4} Q_{abc} \kappa^{B} \sigma^{b}{}_{A}{}^{A'} \sigma^{c}{}_{BA'} - \tfrac{1}{4} Q_{acb} \kappa^{B} \sigma^{b}{}_{A}{}^{A'} \sigma^{c}{}_{BA'}\nonumber\\ ={}& \nabla_{a}\vartheta \kappa_{A} - \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \nabla_{a}\kappa_{A} + \tfrac{1}{2} T_{AB} \nabla_{a}\kappa^{B} - S_{AB} \nabla_{a}\kappa^{B}\nonumber\\ &+\tfrac{1}{4} Q_{abc} \kappa^{B} \sigma^{b}{}_{A}{}^{A'} \sigma^{c}{}_{BA'} - \tfrac{1}{4} Q_{acb} \kappa^{B} \sigma^{b}{}_{A}{}^{A'} \sigma^{c}{}_{BA'}. \end{align*} \usemedskip In order to write the spinorial derivative $\nabla_{AA'} \kappa_B$ (rather than $\nabla_a \kappa_B$) it is convenient to define the spinor \begin{equation}\label{Definition:QopSpinor} \Qop_{AA'BC}\equiv - \tfrac{1}{2} \sigma^{a}{}_{AA'} \sigma^{b}{}_{B}{}^{B'} \sigma^{c}{}_{CB'} Q_{[bc]a}. 
\end{equation} \begin{theorem} The variation of the covariant derivative of a spinor is given by \begin{subequations} \begin{align}\label{eq:ThmVarDerSpin} \vartheta(\nabla_{AA'}\kappa_{B})={}& \nabla_{AA'}\vartheta \kappa_{B} +\Qop_{AA'BC} \kappa^{C} - \tfrac{1}{2} \delta g_{ACA'B'} \nabla^{CB'}\kappa_{B},\\ \vartheta(\nabla_{AA'}\bar{\kappa}_{B'})={}& \nabla_{AA'}\vartheta\bar{\kappa}_{B'} +\bar\Qop_{A'AB'C'} \bar{\kappa}^{C'} - \tfrac{1}{2} \delta g_{ABA'C'} \nabla^{BC'}\bar{\kappa}_{B'}.\label{eq:ThmVarDerSpinDg} \end{align} \end{subequations} \end{theorem} \begin{proof} Using the expressions in the previous paragraphs one has that \begin{align*} \delta(\nabla_{AA'}\kappa_{B})={}& \nabla_{AA'}\vartheta \kappa_{B} +\Qop_{AA'BC} \kappa^{C} - \tfrac{1}{4} \delta \epsilon^{C}{}_{C} \nabla_{AA'}\kappa_{B} + \tfrac{1}{2} T_{BC} \nabla_{AA'}\kappa^{C} - S_{BC} \nabla_{AA'}\kappa^{C}\nonumber\\ & - \delta \sigma_a{}^{CB'} \sigma^a{}_{AA'} \nabla_{CB'}\kappa_{B}\\ ={}& \nabla_{AA'}\vartheta \kappa_{B} +\Qop_{AA'BC} \kappa^{C} - \tfrac{1}{2} \delta \epsilon^{C}{}_{C} \nabla_{AA'}\kappa_{B} - \tfrac{1}{4} \delta \bar\epsilon^{B'}{}_{B'} \nabla_{AA'}\kappa_{B} + \tfrac{1}{2} T_{BC} \nabla_{AA'}\kappa^{C}\nonumber\\ & - S_{BC} \nabla_{AA'}\kappa^{C} + \tfrac{1}{2} \bar{T}_{A'B'} \nabla_{A}{}^{B'}\kappa_{B} - \bar{S}_{A'B'} \nabla_{A}{}^{B'}\kappa_{B} + \tfrac{1}{2} T_{AC} \nabla^{C}{}_{A'}\kappa_{B}\nonumber\\ & - S_{AC} \nabla^{C}{}_{A'}\kappa_{B} - \tfrac{1}{2} \delta g_{ACA'B'} \nabla^{CB'}\kappa_{B}. \end{align*} Expressing the above formula in terms of the modified variation $\vartheta$, we get \eqref{eq:ThmVarDerSpin}. The equation \eqref{eq:ThmVarDerSpinDg} is given by complex conjugation. 
\end{proof} \subsubsection{Decomposition of $\Qop_{AA'BC}$} Starting from the definition in equation \eqref{Definition:QopSpinor}, a calculation yields \begin{align*} \Qop_{AA'BC}={}&- \tfrac{1}{4} \sigma^{a}{}_{AA'} \sigma^{b}{}_{B}{}^{B'} \sigma^{c}{}_{CB'} \nabla_{b}\delta g_{ac} + \tfrac{1}{4} \sigma^{a}{}_{AA'} \sigma^{b}{}_{B}{}^{B'} \sigma^{c}{}_{CB'} \nabla_{c}\delta g_{ab}\\ ={}&- \tfrac{1}{2} \nabla_{(B}{}^{B'}\delta g_{C)AB'A'}. \end{align*} The above expression can be conveniently decomposed into irreducible terms. To this end, one defines \[ G\equiv \delta g^{C}{}_{C}{}^{C'}{}_{C'},\qquad G_{ABA'B'} \equiv {}\delta g_{(AB)(A'B')}. \] If we also decompose $\Qop_{AA'BC}$ into irreducible parts, we get \begin{align}\label{Decomposition:SpinorQop} \Qop_{AA'BC}={}&- \tfrac{1}{2} \nabla_{(A}{}^{B'}G_{BC)A'B'} + \tfrac{1}{8} \epsilon_{A(B}\nabla_{C)A'}G - \tfrac{1}{6} \epsilon_{A(B}\nabla^{DB'}G_{C)DA'B'}. \end{align} For future use we notice the following relations which follow from the decomposition into irreducible components of equation \eqref{Decomposition:SpinorQop} and the reality of $\delta g_{ABA'B'}$: \begin{align*} \Qop^{B}{}_{A'AB}={}&- \tfrac{3}{16} \nabla_{AA'}G + \tfrac{1}{4} \nabla_{BB'}G_{A}{}^{B}{}_{A'}{}^{B'},\\ \bar\Qop^{B'}{}_{AA'B'}={}&\Qop^{B}{}_{A'AB},\\ \nabla_{BB'}G_{CDA'}{}^{B'}={}&2 \Qop_{BA'CD} - 4 \Qop^{A}{}_{A'(C|A|}\epsilon_{D)B} - \tfrac{1}{2} \epsilon_{(C|B|}\nabla_{D)A'}G,\\ \nabla_{BA'}G_{A}{}^{B}{}_{B'C'}={}&2 \bar\Qop_{A'AB'C'} - 4 \Qop^{B}{}_{(B'|AB|}\bar\epsilon_{C')A'} - \tfrac{1}{2} \bar\epsilon_{(B'|A'}\nabla_{A|C')}G. \end{align*} We also define the field \begin{align} F^{AA'}\equiv{}&\nabla_{BB'}\delta g^{ABA'B'} - \tfrac{1}{2} \nabla^{AA'}\delta g^{B}{}_{B}{}^{B'}{}_{B'}\label{eq:gaugesourcedef}\\ ={}&\nabla_{BB'}G^{ABA'B'} - \tfrac{1}{4} \nabla^{AA'}G.\nonumber \end{align} In the next section we will see that this can be interpreted as a gauge source function for the linearised diffeomorphisms.
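\begin{remark} Translating \eqref{eq:gaugesourcedef} into tensorial language, one has \[ F^a = \nabla_b \delta g^{ab} - \tfrac{1}{2} \nabla^a \delta g^b{}_b, \] which is the combination whose vanishing characterises the linearised harmonic (de Donder) gauge. \end{remark}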
\subsection{Diffeomorphism dependence}\label{sec:diffeomorphisms} We will now briefly consider the dependence on diffeomorphisms. Let $\phi_\lambda$ be a one-parameter group of diffeomorphisms generated by a vector field $\xi^a$ and such that $g_{ab}[\lambda]=\phi^*_{-\lambda}\mathring g_{ab}$. The metrics in this family have the same geometric content and one readily finds that \begin{align} \delta g_{ab}={}&\mathcal{L}_\xi g_{ab} =2 \nabla_{(a}\xi_{b)}.\label{eq:puregauge} \end{align} Moreover, a further computation yields \begin{align*} \Qop_{AA'BC}={}&- \tfrac{1}{2} \nabla_{(C}{}^{B'}\nabla_{B)B'}\xi_{AA'} - \tfrac{1}{2} \nabla_{(C}{}^{B'}\nabla_{|AA'|}\xi_{B)B'},\\ F^{AA'}={}&\nabla_{BB'}\nabla^{BB'}\xi^{AA'} -6 \Lambda \xi^{AA'} + 2 \Phi^{A}{}_{B}{}^{A'}{}_{B'} \xi^{BB'}. \end{align*} Given a general family of metrics $g_{ab}[\lambda]$, we can compute the field $F^{AA'}$ associated to the family. Given any $\widetilde F^{AA'}$, we can then solve the wave equation \begin{align*} \widetilde F^{AA'} - F^{AA'}={}&-6 \Lambda \xi^{AA'} + 2 \Phi^{A}{}_{B}{}^{A'}{}_{B'} \xi^{BB'} + \nabla_{BB'}\nabla^{BB'}\xi^{AA'}. \end{align*} The solution $\xi^a=\sigma^a_{AA'}\xi^{AA'}$ to this equation will then give a one-parameter group of diffeomorphisms $\phi_\lambda$, such that $\widetilde{g}_{ab}[\lambda]=\phi^*_{-\lambda}g_{ab}[\lambda]$ has the same geometric content, but with corresponding $\widetilde F^{AA'}$. With this observation, we can interpret \eqref{eq:gaugesourcedef} as a gauge source function for the linearised diffeomorphisms. \section{Variation of curvature} \label{Section:Curvature} The purpose of this section is to compute the variation of the various spinorial components of the curvature tensor. As it will be seen below, the starting point of this computation is the commutator of covariant derivatives.
\usemedskip We start by computing the variation of \[ \square_{(AB} \kappa_{C)} =\nabla_{(A}{}^{A'}\nabla_{B|A'|}\kappa_{C)}=-\Psi_{ABCD} \kappa^{D} \] for an arbitrary spinor $\kappa_A$. A direct calculation using the Leibnitz rule for the modified variation $\vartheta$ gives \begin{align*} \Psi_{ABCD} \vartheta \kappa^{D} + \vartheta(\Psi_{ABCD}) \kappa^{D}={}&- \nabla_{(A}{}^{A'}\nabla_{B|A'|}\vartheta \kappa_{C)} + \tfrac{1}{4} G \nabla_{(A}{}^{A'}\nabla_{B|A'|}\kappa_{C)}\nonumber\\ & - \tfrac{1}{2} G_{(A}{}^{DA'B'}\nabla_{B|A'}\nabla_{DB'|}\kappa_{C)} + \tfrac{1}{2} G_{(A}{}^{DA'B'}\nabla_{|DA'|}\nabla_{B|B'|}\kappa_{C)}\nonumber\\ & + \Qop_{(A}{}^{A'}{}_{B}{}^{D}\nabla_{|DA'|}\kappa_{C)} + \bar\Qop^{A'}{}_{(A|A'|}{}^{B'}\nabla_{B|B'|}\kappa_{C)}\nonumber\\ & - \kappa^{D}\nabla_{(A}{}^{A'}\Qop_{B|A'|C)D} + \tfrac{1}{8} \nabla_{(A}{}^{A'}G\nabla_{B|A'|}\kappa_{C)}\nonumber\\ & + \tfrac{1}{2} \nabla_{(A}{}^{A'}G_{B}{}^{D}{}_{|A'}{}^{B'}\nabla_{DB'|}\kappa_{C)}\\ ={}&\Psi_{ABCD} \vartheta \kappa^{D} - \tfrac{1}{4} G \Psi_{ABCD} \kappa^{D} - \kappa^{D} \nabla_{(A}{}^{A'}\Qop_{B|A'|C)D}\nonumber\\ & + \tfrac{1}{2} \kappa^{D} G_{(AB}{}^{A'B'}\Phi_{C)DA'B'}. \end{align*} The above expression holds for all $\kappa^A$, and therefore we can conclude that \[ \vartheta(\Psi_{ABCD})=- \tfrac{1}{4} G \Psi_{ABCD} - \nabla_{(A}{}^{A'}\Qop_{B|A'|C)D} + \tfrac{1}{2} G_{(AB}{}^{A'B'}\Phi_{C)DA'B'}. \] The symmetry of $\Psi_{ABCD}$ can be used to simplify this last expression ---the trace of the right hand side can be shown to vanish due to the commutators.
\usemedskip If we compute the variation of \[ \Phi_{BAA'B'} \kappa^{A}=- \nabla^{A}{}_{(A'}\nabla_{|A|B')}\kappa_{B} \] we get \begin{align} \Phi_{BAA'B'} \vartheta \kappa^{A} + \vartheta(\Phi_{BAA'B'}) \kappa^{A}={}&- \nabla^{A}{}_{(A'}\nabla_{|A|B')}\vartheta \kappa_{B} + \tfrac{1}{4} G \nabla^{A}{}_{(A'}\nabla_{|A|B')}\kappa_{B}\nonumber\\ & - \tfrac{1}{2} G^{AC}{}_{(A'}{}^{C'}\nabla_{|A|B')}\nabla_{CC'}\kappa_{B} + \tfrac{1}{2} G^{AC}{}_{(A'}{}^{C'}\nabla_{|AC'}\nabla_{C|B')}\kappa_{B}\nonumber\\ & + \Qop^{A}{}_{(A'|A}{}^{C}\nabla_{C|B')}\kappa_{B} + \bar\Qop_{(A'}{}^{A}{}_{B')}{}^{C'}\nabla_{AC'}\kappa_{B}\nonumber\\ & - \kappa^{A}\nabla^{C}{}_{(A'}\Qop_{|C|B')BA} + \tfrac{1}{8} \nabla^{A}{}_{(A'}G\nabla_{|A|B')}\kappa_{B}\nonumber\\ & + \tfrac{1}{2} \nabla^{A}{}_{(A'}G_{|A|}{}^{C}{}_{B')}{}^{C'}\nabla_{CC'}\kappa_{B} \nonumber\\ ={}&\Phi_{BAA'B'} \vartheta \kappa^{A} + G_{BAA'B'} \Lambda \kappa^{A} - \tfrac{1}{4} G \Phi_{BAA'B'} \kappa^{A}\nonumber\\ & + \tfrac{1}{2} G^{CD}{}_{A'B'} \Psi_{BACD} \kappa^{A} - \kappa^{A} \nabla^{C}{}_{(A'}\Qop_{|C|B')BA}. \nonumber \end{align} The last relation holds for all $\kappa^A$, and therefore we can obtain an expression for $\vartheta \Phi_{ABA'B'}$. \usemedskip Now, using the definition of $\Qop_{ABCA'}$, commuting derivatives and exploiting the irreducible decomposition of the various fields involved one gets \begin{align} \nabla_{AA'}\Qop^{CA'}{}_{BC}={}&- \tfrac{1}{2} G_{B}{}^{CA'B'} \Phi_{ACA'B'} - \tfrac{1}{2} G_{A}{}^{CA'B'} \Phi_{BCA'B'} \nonumber \\ & + \nabla_{CA'}\Qop_{A}{}^{A'}{}_{B}{}^{C} + \epsilon_{AB} \nabla^{CA'}\Qop^{D}{}_{A'CD}. 
\label{eq:nablaQopEq2} \end{align} If we compute the variation of \[ \Lambda \kappa_{A}=\tfrac{1}{3} \nabla_{(A}{}^{A'}\nabla_{B)A'}\kappa^{B} \] we get, after a lengthy computation, that \begin{subequations} \begin{align} \Lambda \vartheta \kappa_{A} + \vartheta(\Lambda) \kappa_{A}={}&\tfrac{1}{6} \kappa^{B} \nabla_{AA'}\Qop^{CA'}{}_{BC} - \tfrac{1}{6} \nabla_{AA'}\nabla_{B}{}^{A'}\vartheta \kappa^{B} + \tfrac{1}{24} G \nabla_{AA'}\nabla_{B}{}^{A'}\kappa^{B}\nonumber\\ & + \tfrac{1}{6} \bar\Qop^{B'}{}_{BA'B'} \nabla_{A}{}^{A'}\kappa^{B} - \tfrac{1}{12} G_{BCA'B'} \nabla_{A}{}^{B'}\nabla^{CA'}\kappa^{B} - \tfrac{1}{48} \nabla_{A}{}^{A'}G \nabla_{BA'}\kappa^{B}\nonumber\\ & - \tfrac{1}{6} \nabla_{BA'}\nabla_{A}{}^{A'}\vartheta \kappa^{B} + \tfrac{1}{24} G \nabla_{BA'}\nabla_{A}{}^{A'}\kappa^{B} + \tfrac{1}{6} \bar\Qop^{B'}{}_{AA'B'} \nabla_{B}{}^{A'}\kappa^{B}\nonumber\\ & - \tfrac{1}{12} G_{ACA'B'} \nabla_{B}{}^{B'}\nabla^{CA'}\kappa^{B} + \tfrac{1}{48} \nabla_{AA'}\kappa_{B} \nabla^{BA'}G - \tfrac{1}{6} \kappa^{B} \nabla_{CA'}\Qop_{A}{}^{A'}{}_{B}{}^{C}\nonumber\\ & - \tfrac{1}{6} \Qop_{AA'BC} \nabla^{CA'}\kappa^{B} - \tfrac{1}{6} \Qop_{BA'AC} \nabla^{CA'}\kappa^{B} + \tfrac{1}{12} \nabla_{AB'}G_{BCA'}{}^{B'} \nabla^{CA'}\kappa^{B}\nonumber\\ & + \tfrac{1}{12} \nabla_{BB'}G_{ACA'}{}^{B'} \nabla^{CA'}\kappa^{B} + \tfrac{1}{12} G_{BCA'B'} \nabla^{CB'}\nabla_{A}{}^{A'}\kappa^{B}\nonumber\\ & + \tfrac{1}{12} G_{ACA'B'} \nabla^{CB'}\nabla_{B}{}^{A'}\kappa^{B}\nonumber\\ ={}&\Lambda \vartheta \kappa_{A} - \tfrac{1}{4} G \Lambda \kappa_{A} + \tfrac{1}{6} G_{A}{}^{CA'B'} \Phi_{BCA'B'} \kappa^{B} + \tfrac{1}{6} \kappa^{B} \nabla_{AA'}\Qop^{CA'}{}_{BC}\nonumber\\ & - \tfrac{1}{6} \kappa^{B} \nabla_{CA'}\Qop_{A}{}^{A'}{}_{B}{}^{C}\nonumber \\ ={}&\Lambda \vartheta \kappa_{A} - \tfrac{1}{4} G \Lambda \kappa_{A} + \tfrac{1}{12} G^{BCA'B'} \Phi_{BCA'B'} \kappa_{A} - \tfrac{1}{6} \kappa_{A} \nabla_{CA'}\Qop^{BA'}{}_{B}{}^{C}.\nonumber \end{align} \end{subequations} In the last 
equality we have used the relation \eqref{eq:nablaQopEq2} and the irreducible decomposition of $ G_{A}{}^{CA'B'} \Phi_{BCA'B'}$. From here we can deduce an expression for $\vartheta\Lambda$. \usemedskip We summarise the discussion of this section in the following: \begin{theorem} The modified variation of the curvature spinors is given by \begin{align*} \vartheta\Psi_{ABCD}={}&- \tfrac{1}{4} G \Psi_{ABCD} - \nabla_{(A}{}^{A'}\Qop_{B|A'|CD)} + \tfrac{1}{2} G_{(AB}{}^{A'B'}\Phi_{CD)A'B'},\\ \vartheta\Phi_{ABA'B'}={}&G_{ABA'B'} \Lambda - \tfrac{1}{4} G \Phi_{ABA'B'} + \tfrac{1}{2} G^{CD}{}_{A'B'} \Psi_{ABCD} - \nabla^{C}{}_{(A'}\Qop_{|C|B')AB},\\ \vartheta\Lambda={}&- \tfrac{1}{4} G \Lambda + \tfrac{1}{12} G^{BCA'B'} \Phi_{BCA'B'} - \tfrac{1}{6} \nabla_{CA'}\Qop^{BA'}{}_{B}{}^{C}. \end{align*} \end{theorem} \begin{remark} For a pure gauge transformation \eqref{eq:puregauge}, we get after a lengthy but straightforward calculation using commutators, that \begin{align*} \vartheta(\Lambda)={}&(\mathcal{L}_{\xi}\Lambda),\\ \vartheta(\Phi_{AB}{}^{A'B'})={}&(\mathcal{L}_{\xi}\Phi)_{AB}{}^{A'B'} - \Phi^{C}{}_{(A}{}^{C'(A'}\nabla_{B)}{}^{B')}\xi_{CC'} - \Phi^{C}{}_{(A}{}^{C'(A'}\nabla_{|CC'|}\xi_{B)}{}^{B')},\\ \vartheta(\Psi_{ABCD})={}&(\mathcal{L}_{\xi}\Psi)_{ABCD} - \Psi_{ABCD} \nabla^{FA'}\xi_{FA'}, \end{align*} where \footnote{The primed indices are moved up after the Lie derivative is taken to allow the symmetrizations to be written nicely.} \begin{align*} (\mathcal{L}_{\xi}\Phi)_{AB}{}^{A'B'}\equiv{}&\xi^{CC'} \nabla_{CC'}\Phi_{AB}{}^{A'B'} + 2 \Phi^{C}{}_{(A}{}^{C'(A'}\nabla_{B)}{}^{B')}\xi_{CC'},\\ (\mathcal{L}_{\xi}\Psi)_{ABCD}\equiv{}&\xi^{FA'} \nabla_{FA'}\Psi_{ABCD} + 2 \Psi_{(ABC}{}^{F}\nabla_{D)}{}^{A'}\xi_{FA'}. \end{align*} In this last calculation we have used the Bianchi identity in the form \begin{align*} \nabla^{D}{}_{A'}\Psi_{ABCD}={}&\nabla_{(A}{}^{B'}\Phi_{B)CA'B'} + \epsilon_{C(A}\nabla_{B)A'}\Lambda. 
\end{align*} \end{remark} \section{Variations of space-spinor expressions}\label{sec:spacespinors} The analysis of Sections \ref{Section:SLCalculus}, \ref{Section:CovDevs} and \ref{Section:Curvature} can be adapted to consider variations of spinorial fields in a space-spinor formalism. This formalism can be used to analyse variational problems in 3-dimensional Riemannian manifolds. \subsection{Basic formalism} In what follows, let $(\mathcal{S},h_{ij})$ denote a 3-dimensional Riemannian manifold with negative-definite metric. On $(\mathcal{S},h_{ij})$ we assume the existence of a spinor structure with an antisymmetric spinor $\epsilon_{AB}$. In addition, we assume that the spinor structure is endowed with an Hermitian product. It follows from this assumption that there exists an Hermitian spinor $\varpi_{AA'}$ such that, given two spinors $\xi_A$ and $\eta_B$, the Hermitian inner product can be expressed as \[ \xi_A \hat{\eta}^A \equiv \varpi_{AA'} \bar{\eta}^{A'} \xi^A. \] The spinor $\hat{\eta}^A$ defined by the above relation is called the Hermitian conjugate of $\eta^A$. Let $e_{\bf k}{}^l$, $\omega^{\bf k}{}_l$ denote, respectively, an orthonormal frame and coframe of $(\mathcal{S},h_{ij})$ and let $\epsilon^{\bf A}{}_B$ denote a normalised spin dyad such that the components of $\epsilon_{AB}$ and $\varpi_{AA'}$ are given, respectively, by \[ \epsilon_{\bf AB} = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right),\qquad \varpi_{\bf AA'} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right). \] The transformations of the spin dyad respecting the above expressions are given by $SU(2,\mathbb{C})$ matrices $O_{\bf A}{}^{\bf B}$. The correspondence between spatial tensors and spinors is realised by the \emph{spatial Infeld-van der Waerden symbols} $\sigma_{\bf k}{}^{\bf AB}$ and $\sigma^{\bf k}{}_{\bf AB}$.
Given an arbitrary $ v^k \in T \mathcal{S}$ and $\beta_k \in T^* \mathcal{S}$ one has that \[ v^{\bf k} \mapsto v^{\bf AB} = v^{\bf k} \sigma_{\bf k}{}^{\bf AB}, \qquad \beta_{\bf k} \mapsto \beta_{\bf AB} =\beta_{\bf k}\sigma^{\bf k}{}_{\bf AB}, \] where \[ v^{\bf k} \equiv v^k \omega^{\bf k}{}_k, \qquad \beta_{\bf k} \equiv \beta_k e_{\bf k}{}^k. \] In more explicit terms, the correspondence is \[ (v^{\bf 1}, v^{\bf 2}, v^{\bf 3}) \mapsto \frac{1}{\sqrt{2}} \left(\begin{array}{cc} - v^{\bf 1} - \mbox{i} v^{\bf 2} & v^{\bf 3}\\ v^{\bf 3} & v^{\bf 1} - \mbox{i} v^{\bf 2} \end{array}\right) , \qquad (\beta_{\bf 1}, \beta_{\bf 2}, \beta_{\bf 3})\mapsto \frac{1}{\sqrt{2}} \left(\begin{array}{cc} - \beta_{\bf 1} + \mbox{i} \beta_{\bf 2} & \beta_{\bf 3}\\ \beta_{\bf 3} & \beta_{\bf 1} + \mbox{i} \beta_{\bf 2} \end{array}\right). \] From these, we define the spatial soldering form to be \begin{subequations} \begin{align} \sigma_{k}{}^{AB}\equiv{}&\omega^{\bf l}{}_{k} \epsilon_{{\bf C}}{}^{A} \epsilon_{{\bf D}}{}^{B} \sigma_{{\bf l}}{}^{{\bf C} {\bf D}},\label{eq:SSsigmadef}\\ \sigma^{k}{}_{AB}\equiv{}&\epsilon^{\bf C}{}_{A} \epsilon^{\bf D}{}_{B} e_{{\bf l}}{}^{k} \sigma^{{\bf l}}{}_{{\bf C} {\bf D}}. \end{align} \end{subequations} As we allow the spinor and tensor frames to be independent, the soldering form will therefore be frame dependent. However, we will always have the universal relations \begin{subequations} \begin{align} \sigma_{k}{}^{CD} \sigma^{l}{}_{CD}={}&\delta_{k}{}^{l},\label{eq:SSsigmatodelta}\\ h_{kl}={}&\sigma_{k}{}^{AB} \sigma_{l}{}^{CD} \epsilon_{CA} \epsilon_{DB}. \end{align} \end{subequations} The Hermitian conjugate of \[ \phi_{A}= \phi_{\bf 0}\epsilon^{\bf 0}{}_{A} + \phi_{\bf 1}\epsilon^{\bf 1}{}_{A} \] is given by \[ \hat{\phi}_{A}= - \bar{\phi}_{\bf 1'}\epsilon^{\bf 0}{}_{A} + \bar{\phi}_{\bf 0'}\epsilon^{\bf 1}{}_{A}. \] It clearly follows that \[ \hat{\hat{\phi}}_{A}=-\phi_{A}. 
\] The Hermitian conjugation can be extended to higher valence space spinors by requiring that the conjugate of a product equals the product of conjugates. We also get \[ \hat{\hat{\mu}}_{A_1\cdots A_k} = (-1)^k \mu_{A_1\cdots A_k}. \] Furthermore, it is important to note that \begin{equation} \hat{\epsilon}_{AB}=\epsilon_{AB},\qquad \hat{\sigma}^{a}{}_{AB}=- \sigma^{a}{}_{AB}.\label{eq:epsilonsigmaHermitian} \end{equation} \subsection{Basic variational formulae} As in the case of standard spacetime spinors, we can compute the variations of the frames and the inverse metrics from the relations \begin{align*} \delta(e_{{\bf i}}{}^{l}) ={}& - e_{{\bf i}}{}^{j} e_{{\bf k}}{}^{l} \delta \omega^{{\bf k}}{}_{j},\\ \delta(\epsilon_{{\bf A}}{}^{D}) ={}& - \epsilon_{{\bf A}}{}^{B} \epsilon_{{\bf C}}{}^{D} \delta \epsilon^{{\bf C}}{}_{B},\\ \delta(h^{kl}) ={}& - (\delta h_{ij}) h^{ik} h^{jl},\\ \delta(\epsilon^{CD}) ={}& - \delta \epsilon_{AB} \epsilon^{AC} \epsilon^{BD}. \end{align*} Likewise, from the relation \eqref{eq:SSsigmatodelta} we get \[ \delta(\sigma^{l}{}_{AB}) = - \sigma^{k}{}_{AB} \sigma^{l}{}_{CD} \delta \sigma_{k}{}^{CD}. \] We can also split the variation of the coframes in terms of the variations of the metric and spin metric, and gauge pieces \begin{align*} \delta \omega^{{\bf m}}{}_{a} ={}& - e_{{\bf h}}{}^{b} h^{{\bf h} {\bf m}}T_{ab} + \tfrac{1}{2} e_{{\bf h}}{}^{b} h^{{\bf h} {\bf m}} \delta h_{ab},\\ \delta \epsilon^{{\bf P}}{}_{A} ={}& - \epsilon_{{\bf H}}{}^{B} \epsilon^{{\bf H} {\bf P}}S_{AB} - \tfrac{1}{2} \epsilon_{{\bf H}}{}^{B} \epsilon^{{\bf H} {\bf P}} \delta \epsilon_{AB}, \end{align*} where the tensor and spinor frame gauge fields are \begin{align*} T_{ab} \equiv{}& h_{{\bf c} {\bf d}}\omega^{\bf d}{}_{[a} \delta \omega^{{\bf c}}{}_{b]},& S_{AB} \equiv{}& \epsilon^{\bf D}{}_{(A} \delta \epsilon^{{\bf C}}{}_{B)} \epsilon_{{\bf C} {\bf D}}.
\end{align*} A calculation following the same principles as for the spacetime version, starting from the relation \eqref{eq:SSsigmadef}, gives the variation of the spatial soldering form: \begin{align*} \delta \sigma_{k}{}^{AB}={}&- T_{k}{}^{l} \sigma_{l}{}^{AB} + \tfrac{1}{2} \sigma^{lAB} \delta h_{kl} - 2 \sigma_{k}{}^{(A|C|}S^{B)}{}_{C} + \sigma_{k}{}^{(A|C|}\delta \epsilon^{B)}{}_{C}. \end{align*} The irreducible parts are given by \begin{align*} \sigma^{k(CD}\delta \sigma_{k}{}^{AB)}={}&\tfrac{1}{2} \delta h^{(ABCD)},\\ \sigma^{k(C}{}_{B}\delta \sigma_{k}{}^{A)B}={}&T^{AC} - 2 S^{AC},\\ \sigma^{k}{}_{CD} \delta \sigma_{k}{}^{CD}={}&\tfrac{1}{2} \delta h^{CD}{}_{CD} + \tfrac{3}{2} \delta \epsilon^{C}{}_{C}, \end{align*} where \begin{align*} T_{AB}\equiv{}&T_{kl} \sigma^{k}{}_{A}{}^{C} \sigma^{l}{}_{BC},\\ \delta h_{ABCD}\equiv{}&\sigma^{k}{}_{AB} \sigma^{l}{}_{CD} \delta h_{kl}. \end{align*} We can now use this to see how the variation of vectors and covectors in space-spinor and tensor form differ: \begin{align*} \sigma_{k}{}^{AB} \delta\zeta^k={}&\delta(\zeta^{AB}) - \tfrac{1}{2} \delta \epsilon^{C}{}_{C} \zeta^{AB} - \tfrac{1}{2} \delta h^{AB}{}_{CD} \zeta^{CD} + T^{(A|C|}\zeta^{B)}{}_{C} - 2 S^{(A|C|}\zeta^{B)}{}_{C},\\ \sigma^{k}{}_{AB} \delta\xi_k={}&\delta(\xi_{AB}) + \tfrac{1}{2} \delta \epsilon^{C}{}_{C} \xi_{AB} + \tfrac{1}{2} \delta h_{ABCD} \xi^{CD} + T_{(A}{}^{C}\xi_{B)C} - 2 S_{(A}{}^{C}\xi_{B)C}, \end{align*} where $\zeta^k=\sigma^{k}{}_{CD}\zeta^{CD}$ and $\xi_k=\sigma_{k}{}^{CD}\xi_{CD}$. This leads us to define a modified variation that cancels the gauge terms and the variation of the spin metric.
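As a consistency check, the explicit components above can be verified numerically. The following sketch (our own illustration, not part of the formalism) encodes the dyad components of $\sigma_k{}^{AB}$, $\sigma^k{}_{AB}$ and $\epsilon_{AB}$, together with the component form of Hermitian conjugation, $\hat\phi_{\bf 0}=-\bar\phi_{\bf 1'}$, $\hat\phi_{\bf 1}=\bar\phi_{\bf 0'}$:

```python
import numpy as np

# Dyad components read off from the explicit vector/covector correspondences
# (normalised spin dyad, negative-definite spatial metric).
sig_up = np.array([[[-1, 0], [0, 1]],
                   [[-1j, 0], [0, -1j]],
                   [[0, 1], [1, 0]]]) / np.sqrt(2)   # sigma_k^{AB}
sig_dn = np.array([[[-1, 0], [0, 1]],
                   [[1j, 0], [0, 1j]],
                   [[0, 1], [1, 0]]]) / np.sqrt(2)   # sigma^k_{AB}
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])            # epsilon_{AB}

# Universal relations: sigma_k^{CD} sigma^l_{CD} = delta_k^l,
# and h_{kl} = sigma_k^{AB} sigma_l^{CD} eps_{CA} eps_{DB} = -delta_{kl}.
assert np.allclose(np.einsum('kcd,lcd->kl', sig_up, sig_dn), np.eye(3))
h = np.einsum('kab,lcd,ca,db->kl', sig_up, sig_up, eps, eps)
assert np.allclose(h, -np.eye(3))

# Hermitian conjugation in components: index 0 -> -conj(index 1),
# index 1 -> +conj(index 0); extended to valence 2 by the product rule.
S = np.array([-1.0, 1.0])
hat1 = lambda p: np.array([S[a] * np.conj(p[1 - a]) for a in range(2)])
hat2 = lambda m: np.array([[S[a] * S[b] * np.conj(m[1 - a, 1 - b])
                            for b in range(2)] for a in range(2)])

phi = np.array([1.0 + 2.0j, -0.5 + 0.3j])
assert np.allclose(hat1(hat1(phi)), -phi)    # hat(hat(phi)) = -phi
assert np.allclose(hat2(eps), eps)           # hat(eps) = eps
for s in sig_dn:
    assert np.allclose(hat2(s), -s)          # hat(sigma) = -sigma
```

All assertions pass, confirming the component form of \eqref{eq:SSsigmatodelta} and \eqref{eq:epsilonsigmaHermitian}.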
\begin{definition} For valence 1 space spinors we define the modified variation operator $\vartheta$ via \begin{align*} \vartheta(\phi_{A})\equiv{}&\delta(\phi_{A}) + \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \phi_{A} + \tfrac{1}{2} T_{A}{}^{B} \phi_{B} - S_{A}{}^{B} \phi_{B},\\ \vartheta(\phi^{A})\equiv{}&\delta(\phi^{A}) - \tfrac{1}{4} \delta \epsilon^{B}{}_{B} \phi^{A} - \tfrac{1}{2} T^{A}{}_{B} \phi^{B} + S^{A}{}_{B} \phi^{B}. \end{align*} These relations extend to higher valence spinors via the Leibnitz rule. \end{definition} In the same way as for the spacetime variations, we get a relation between $\vartheta$ and spin frame component variation: \begin{align} \vartheta \phi_{A}={}&\epsilon^{\bf B}{}_{A} \delta(\phi_{{\bf B}}) + \tfrac{1}{2} T_{A}{}^{B} \phi_{B}.\label{eq:SSvarthetatocomps} \end{align} The reality of $T_{ab}$ and \eqref{eq:epsilonsigmaHermitian} gives \[ \widehat{T}_{AB}= T_{AB}. \] Expanding the frame index in equation \eqref{eq:SSvarthetatocomps} and taking Hermitian conjugate yields \begin{align*} \widehat{\vartheta \phi}_{A}={}&\epsilon^{\bf 1}{}_{A} \delta(\bar{\phi}_{{\bf 0'}}) - \epsilon^{\bf 0}{}_{A} \delta(\bar{\phi}_{{\bf 1'}}) + \tfrac{1}{2} T_{A}{}^{B} \hat{\phi}_{B}\nonumber\\ ={}&\epsilon^{\bf 0}{}_{A} \delta(\hat{\phi}_{{\bf 0}}) + \epsilon^{\bf 1}{}_{A} \delta(\hat{\phi}_{{\bf 1}}) + \tfrac{1}{2} T_{A}{}^{B} \hat{\phi}_{B}\nonumber\\ ={}&\vartheta(\hat\phi)_{A}. \end{align*} Hence, the operation of Hermitian conjugation and the modified variation $\vartheta$ commute. \subsection{Variations of the spatial connection} Let $\mathcal{R}_{ABCD}$ denote the space spinor version of the trace free Ricci tensor, and let $\mathcal{R}$ be the Ricci scalar. Define \begin{align*} H^{ABCD}\equiv{}&\delta h^{(ABCD)},\\ H\equiv{}&\delta h_{AB}{}^{AB},\\ \Qop_{ABCD}\equiv {}&- \tfrac{1}{2} D_{(C}{}^{F}\delta h_{D)FAB},\\ F^{AB}\equiv{}&- \tfrac{1}{2} D^{AB}\delta h^{CD}{}_{CD} + D_{CD}\delta h^{ABCD}. 
\end{align*} Similarly to the case of spacetime spinors, we can compute the variation of a covariant derivative. \begin{theorem} The variation of a covariant space-spinor derivative is given by \begin{align*} \vartheta(D_{AB}\kappa_{C})={}& D_{AB}\vartheta \kappa_{C} +\Qop_{ABCD} \kappa^{D} - \tfrac{1}{2} \delta h_{ABDF} D^{DF}\kappa_{C}. \end{align*} \end{theorem} We also get \begin{align*} \Qop_{A}{}^{C}{}_{BC}={}&- \tfrac{1}{6} D_{AB}H + \tfrac{1}{4} D_{CD}H_{AB}{}^{CD},\\ F_{AB}={}&- \tfrac{1}{6} D_{AB}H + D_{CD}H_{AB}{}^{CD},\\ D_{DF}H_{ABC}{}^{F}={}&2 \Qop_{(ABC)D} + 2 \epsilon_{D(A}\Qop_{B}{}^{F}{}_{C)F} + \tfrac{1}{2} \epsilon_{D(A}D_{BC)}H. \end{align*} \subsection{Diffeomorphism dependence} To analyse the dependence of the formalism on diffeomorphisms, we proceed in the same way as in Section~\ref{sec:diffeomorphisms}. Accordingly, let $\phi_\lambda$ be a one parameter group of diffeomorphisms generated by a vector field $\xi^a$. Now, let $h_{ab}[\lambda]=\phi^*_{-\lambda}\mathring h_{ab}$. All members of the family $h_{ab}[\lambda]$ will have the same geometric content and we get \begin{align} \delta h_{ab}={}&\mathcal{L}_\xi h_{ab} =2 D_{(a}\xi_{b)}.\label{eq:spatialpuregauge} \end{align} Moreover, one has that \begin{align*} \Qop_{ABCD}={}&- \tfrac{1}{2} D_{(C}{}^{F}D_{D)F}\xi_{AB} - \tfrac{1}{2} D_{(C}{}^{F}D_{|AB|}\xi_{D)F},\\ F^{AB}={}&D_{CD}D^{CD}\xi^{AB} - \tfrac{1}{3} \mathcal{R} \xi^{AB} - \mathcal{R}^{AB}{}_{CD} \xi^{CD}. \end{align*} Again, we see that $F^{AB}$ can be interpreted as a gauge source function for the linearised diffeomorphisms, but this time one needs to solve an elliptic equation instead of a wave equation to obtain $\xi^{AB}$ from $F^{AB}$. 
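To make the contrast concrete, the sketch below solves a one-dimensional model of such an elliptic problem, $\xi''(x)=F(x)$ with Dirichlet conditions, by finite differences; the source $F$ and the grid are hypothetical, chosen only to illustrate the gauge-fixing step, and the code is not the spinorial equation itself:

```python
import numpy as np

# 1D toy of the elliptic gauge-fixing step: solve xi'' = F on [0,1]
# with xi(0) = xi(1) = 0 using a second-order finite-difference stencil.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
F = np.sin(np.pi * x)                      # assumed (illustrative) source

# Tridiagonal Laplacian on the interior points.
A = np.zeros((n - 2, n - 2))
np.fill_diagonal(A, -2.0)
np.fill_diagonal(A[1:], 1.0)
np.fill_diagonal(A[:, 1:], 1.0)

xi = np.zeros(n)
xi[1:-1] = np.linalg.solve(A / h**2, F[1:-1])

# Exact solution of xi'' = sin(pi x) with these boundary conditions:
# xi = -sin(pi x)/pi^2.
assert np.max(np.abs(xi + np.sin(np.pi * x) / np.pi**2)) < 1e-3
```

The point is only that, unlike the spacetime case, the gauge vector is obtained from a boundary-value problem rather than an initial-value one.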
\subsection{Variations of the spatial curvature} By computing the variation of the commutator relations \begin{subequations} \begin{align} \mathcal{R} \kappa_{A}={}&8 D_{(A}{}^{C}D_{B)C}\kappa^{B},\label{eq:SScommutatorR0}\\ \mathcal{R}_{ABCD} \kappa^{D}={}&2 D_{(A}{}^{D}D_{B|D|}\kappa_{C)},\label{eq:SScommutatorR4} \end{align} \end{subequations} we get, after calculations similar to those carried out in the spacetime case, the variation of the curvature. \begin{theorem} The variation of the spatial curvature spinors is given by \begin{align*} \vartheta(\mathcal{R})={}&- \tfrac{1}{3} H \mathcal{R} - H^{BCDF} \mathcal{R}_{BCDF} - 4 D_{CD}\Qop^{BC}{}_{B}{}^{D},\\ \vartheta(\mathcal{R}_{ABCD})={}&- \tfrac{1}{12} H_{ABCD} \mathcal{R} - \tfrac{1}{3} H \mathcal{R}_{ABCD} + 2 D_{(A}{}^{F}\Qop_{B|F|CD)} + \tfrac{1}{2} H_{(AB}{}^{FH}\mathcal{R}_{CD)FH}. \end{align*} \end{theorem} \begin{proof} Computing the variation of relation \eqref{eq:SScommutatorR0} gives \begin{align*} \mathcal{R} \vartheta \kappa_{A} + \vartheta(\mathcal{R}) \kappa_{A}={}&-4 D_{AC}D_{B}{}^{C}\vartheta \kappa^{B} + \tfrac{4}{3} H D_{AC}D_{B}{}^{C}\kappa^{B} + 4 \Qop_{B}{}^{D}{}_{CD} D_{A}{}^{C}\kappa^{B} + 4 \kappa^{B} D_{AD}\Qop^{CD}{}_{BC}\nonumber\\ & - 2 H_{BCDF} D_{A}{}^{F}D^{CD}\kappa^{B} - \tfrac{2}{3} D_{A}{}^{B}H D_{BC}\kappa^{C} - 4 D_{BC}D_{A}{}^{C}\vartheta \kappa^{B}\nonumber\\ & + \tfrac{4}{3} H D_{BC}D_{A}{}^{C}\kappa^{B} + 4 \Qop_{A}{}^{D}{}_{CD} D_{B}{}^{C}\kappa^{B} - 2 H_{ACDF} D_{B}{}^{F}D^{CD}\kappa^{B}\nonumber\\ & + \tfrac{2}{3} D_{AC}\kappa_{B} D^{BC}H - 4 \kappa^{B} D_{CD}\Qop_{A}{}^{C}{}_{B}{}^{D} - 4 \Qop_{ACBD} D^{CD}\kappa^{B} - 4 \Qop_{BCAD} D^{CD}\kappa^{B}\nonumber\\ & + 2 D_{AF}H_{BCD}{}^{F} D^{CD}\kappa^{B} + 2 D_{BF}H_{ACD}{}^{F} D^{CD}\kappa^{B} + 2 H_{BCDF} D^{DF}D_{A}{}^{C}\kappa^{B}\nonumber\\ & + 2 H_{ACDF} D^{DF}D_{B}{}^{C}\kappa^{B}\\ ={}&\mathcal{R} \vartheta \kappa_{A} - \tfrac{1}{3} H \mathcal{R} \kappa_{A} - 2 H_{A}{}^{CDF} \mathcal{R}_{BCDF} \kappa^{B} + 
4 \kappa^{B} D_{AD}\Qop^{CD}{}_{BC}\nonumber\\ & - 4 \kappa^{B} D_{CD}\Qop_{A}{}^{C}{}_{B}{}^{D}\\ ={}&\mathcal{R} \vartheta \kappa_{A} - \tfrac{1}{3} H \mathcal{R} \kappa_{A} - H^{BCDF} \mathcal{R}_{BCDF} \kappa_{A} - 4 \kappa_{A} D_{CD}\Qop^{BC}{}_{B}{}^{D}. \end{align*} Computing the variation of relation \eqref{eq:SScommutatorR4} gives \begin{align*} \mathcal{R}_{ABCD} \vartheta \kappa^{D} + \vartheta(\mathcal{R}_{ABCD}) \kappa^{D}={}&2 D_{(A}{}^{D}D_{B|D|}\vartheta \kappa_{C)} - \tfrac{2}{3} H D_{(A}{}^{D}D_{B|D|}\kappa_{C)}\nonumber\\ & + H_{(A}{}^{DFH}D_{B|D}D_{FH|}\kappa_{C)} - H_{(A}{}^{DFH}D_{|DF|}D_{B|H|}\kappa_{C)}\nonumber\\ & - 2 \Qop_{(A}{}^{D}{}_{B}{}^{F}D_{|DF|}\kappa_{C)} - 2 \Qop_{(A}{}^{D}{}_{|D|}{}^{F}D_{B|F|}\kappa_{C)}\nonumber\\ & + 2 \kappa^{D}D_{(A}{}^{F}\Qop_{B|F|C)D} - \tfrac{1}{3} D_{(A}{}^{D}HD_{B|D|}\kappa_{C)}\nonumber\\ & - D_{(A}{}^{D}H_{B|D}{}^{FH}D_{FH|}\kappa_{C)}\\ ={}&\mathcal{R}_{ABCD} \vartheta \kappa^{D} - \tfrac{1}{12} H_{ABCD} \mathcal{R} \kappa^{D} - \tfrac{1}{3} H \mathcal{R}_{ABCD} \kappa^{D}\nonumber\\ & + 2 \kappa^{D} D_{(A}{}^{F}\Qop_{B|F|C)D} + \tfrac{1}{2} \kappa^{D} H_{(AB}{}^{FH}\mathcal{R}_{C)DFH}. \end{align*} \end{proof} \begin{remark} For a pure gauge transformation \eqref{eq:spatialpuregauge}, we get \begin{align*} \vartheta(\mathcal{R})={}&\mathcal{L}_{\xi}\mathcal{R},\\ \vartheta(\mathcal{R}_{ABCD})={}&\mathcal{L}_{\xi}\mathcal{R}_{ABCD} - \mathcal{R}_{(AB}{}^{FH}D_{CD)}\xi_{FH} - \mathcal{R}_{(AB}{}^{FH}D_{|FH|}\xi_{CD)}, \end{align*} where \begin{align*} \mathcal{L}_{\xi}\mathcal{R}_{ABCD}={}&\xi^{FH} D_{FH}\mathcal{R}_{ABCD} + 2 \mathcal{R}_{(AB}{}^{FH}D_{CD)}\xi_{FH}. \end{align*} \end{remark} \section*{Acknowledgements} We thank L. B. Szabados for helpful conversations at the beginning of this project. We also thank L. Andersson and S. Aksteiner for discussions regarding gauge dependence. TB was supported by the Engineering and Physical Sciences Research Council [grant number EP/J011142/1].
\section{Introduction} In the current framework of the $\Lambda$CDM cosmological model, the possibility that dark matter (DM) annihilation effects may impact stellar evolution has recently received renewed attention. After the original works of Bouquet, Dearborn, Freese, Gould, Griest, Krauss, Olive, Press, Raffelt, Renzini, Salati, Silk, Spergel, Srednicki and Wilczek in the `80s and early `90s, several authors have recently re--examined the effects that DM, if made of weakly interacting massive particles (WIMPs), would have on compact objects \citep{Moskalenko:2007ak, Bertone:2007ae} and on the zero-age main sequence of low--mass stars \citep{Fairbairn:2007bn}. This exciting activity has been motivated by a twofold aim: any peculiar and distinguishable feature of WIMP annihilation on observable stellar quantities is extremely precious in the ``quest'' for dark matter evidence; on the other hand, all possible effects impacting the life of celestial objects must be taken into account by astrophysicists in the current precision era. In particular, the first stellar episode at high redshift occurs under very different conditions from those in the present universe. The higher concentration of dark matter, the short Hubble time which prevents DM self--annihilation from severely affecting the central density, and the characteristic formation of a single PopIII star in the center of the halo are the most favorable conditions for DM annihilation effects to be very efficient in the first stars. In their pioneering work, \citet{Spolyar:2008qv} first showed that the DM density in primordial halos at high redshift may become so high that the energy released from DM annihilation at the center may halt the gravitational collapse of the baryonic cloud, and called such a DM powered object a {\it dark} star. \citet[hereafter I08]{Iocco:2008xb} and \citet{Freese:2008ur} also noticed that WIMP capture is most efficient in Population III stars.
More recently, \citet{freese08a, freese08b} and \citet{Iocco:2008rb} further investigated the role of annihilation of adiabatically contracted DM in the formation of the first stars. \citet{Iocco:2008rb} have also followed the evolution from the pre-main sequence phase to helium exhaustion in the presence of WIMP capture and annihilation (hereafter, DM burning), showing that this can severely delay the evolution of pre--MS objects in the early universe, as well as extend their MS lifetimes. All of these studies motivate us to explore possible consequences of DM burning for the final fate of the first stars and their feedback effects on the evolution of the early universe, even if the role of DM annihilation in the formation of the first stars still remains subject to many uncertainties \citep[see][for a recent review]{FS3proc}. In this Letter, we address the issue by discussing the evolution of the first stars of $20\le M/\mathrm{M_\odot} \le 300$ up to the carbon burning stage (Sect.~\ref{sect:results}). We also investigate the interplay of rotation with DM burning in the evolution of the first stars of $100~\mathrm{M_\odot}$, given the particular importance of rotation for the evolution and deaths of metal poor massive stars \citep[e.g,][]{Meynet08, Yoon08}. Implications of our results for the history of reionization in the early universe are briefly discussed (Sect.~\ref{sect:discussion}). \section{Physical Assumptions and Results}\label{sect:results} We have implemented the DM capture and annihilation process in a hydrodynamic stellar evolution code, following \citet{gould87}. The DM capture rate $C_*$ is calculated using Gould's equations as reported in Eqs. (1,2) of I08. Throughout this Letter we assume the DM--baryon scattering cross section $\sigma_0$ is $5\times10^{-39}~\mathrm{cm^2}$ for the spin-dependent scattering, to which only hydrogen is sensitive, and $10^{-43}~\mathrm{cm^2}$ for the spin-independent one. 
These correspond to the current upper limits from WIMP direct detection searches~\citep{Desai:2004pq,Angle:2008we}. We adopt the same values for other parameters as those used in I08 to calculate $C_*$, except for the ambient DM density (see below). The WIMPs captured by a star eventually reach thermal equilibrium with the gas, in a configuration dictated by the gravitational potential: the resulting DM density can be written as $n_\chi(r) = n_\chi^\mathrm{c} \exp(-r^2/r_\chi^2)$, where $r_\chi = ( 3kT_\mathrm{c}/2\pi G \rho_\mathrm{c}m_\mathrm{\chi})^{1/2}$~\citep{Griest87}. Here $T_{c}$ and $\rho_{c}$ are the temperature and density at the stellar center, and $G$ and $k$ are the gravitational and Boltzmann constants, respectively. The energy generation rate due to DM annihilation is given by \begin{equation} \epsilon_\chi (r) = \frac{2}{3} <\sigma v>n_\chi^2(r) m_\chi ~~[\mathrm{erg~cm^{-3}~s^{-1}}]~~, \end{equation} where $m_{\chi}$ is the mass of the DM particle; we assume $m_\chi=100$~GeV, which is often taken as a fiducial value in astrophysical DM searches. We use $<\sigma v> = 3 \times 10^{-26}~\mathrm{cm^3~s^{-1}}$, the value best fitting the relic DM abundance~\citep[see, e.g., ][for a recent review]{Bertone05}. The factor $2/3$ accounts for the part of the annihilation energy that is carried away by neutrinos.
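With the Griest--Seckel thermal radius, $r_\chi=(3kT_\mathrm{c}/2\pi G\rho_\mathrm{c}m_\chi)^{1/2}$, the central DM burning region can be evaluated for fiducial conditions; the values of $T_\mathrm{c}$, $\rho_\mathrm{c}$ and $n_\chi^\mathrm{c}$ below are assumed for illustration only and are not taken from the model sequences:

```python
import numpy as np

# Order-of-magnitude evaluation of the thermal WIMP radius r_chi and of the
# central annihilation rate, in CGS units. T_c, rho_c and n_c are assumed
# fiducial core values for a massive main-sequence star.
G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
k_B = 1.381e-16         # Boltzmann constant [erg K^-1]
GeV_g = 1.783e-24       # 1 GeV/c^2 in grams
GeV_erg = 1.602e-3      # 1 GeV in erg
m_chi = 100 * GeV_g     # WIMP mass [g]
sigma_v = 3.0e-26       # <sigma v> [cm^3 s^-1]
T_c, rho_c = 1.0e8, 10.0   # assumed central T [K] and density [g cm^-3]
n_c = 1.0e14               # assumed central WIMP number density [cm^-3]

r_chi = np.sqrt(3 * k_B * T_c / (2 * np.pi * G * rho_c * m_chi))
eps_c = (2.0 / 3.0) * sigma_v * n_c**2 * 100 * GeV_erg  # [erg cm^-3 s^-1]

# r_chi comes out of order 10^10 cm, tiny compared with the stellar radius,
# so the DM luminosity is generated in a small central region.
assert 1e9 < r_chi < 1e11
```

The strong central concentration ($r_\chi \ll R$) is what makes DM burning act like an extra point-like energy source at the stellar center.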
We consider the time dependent evolution of the total number of WIMPs ($N_\mathrm{tot}$) in the star, and the number of thermally relaxed WIMPs in the core ($N_\mathrm{th} := \int n(r) dV$) by the following equations in order to normalize $n(r)$: \begin{eqnarray} \frac{dN_\mathrm{tot}}{dt} & = & C_* - \int n(r)^2 <\sigma v> dV, ~\mathrm{and} \\ \frac{dN_\mathrm{th}}{dt} & = & \Gamma_\mathrm{th} - \int n(r)^2 <\sigma v> dV~, \end{eqnarray} where $\Gamma_\mathrm{th}$ is the thermalization rate that can be approximated by \begin{equation} \Gamma_\mathrm{th} = \frac{N_\mathrm{tot} - N_\mathrm{th}}{\tau_\mathrm{th}},~~~\tau_\mathrm{th} = \frac{4\pi}{3\sqrt{2G}}\frac{m_\chi}{\sigma_0}\frac{R^{7/2}}{M^{3/2}}~. \end{equation} See \citet{Iocco:2008rb} for detailed discussion on the thermalization time scale $\tau_\mathrm{th}$. Our models show that equilibrium between $C_*$ and $\Gamma_\mathrm{th}$ is well maintained up to the core helium burning phase (see Fig.~\ref{fig2} below). We consider several different values for the ambient DM energy density $\rho_\chi$ ranging from 0 to $2\times10^{12}~\mathrm{GeV~cm^{-3}}$. For the initial composition of the first stars, we assume the mass fractions of $^1\mathrm{H}$, $^4\mathrm{He}$ and $^3\mathrm{He}$ to be 0.76, 0.23999, and 0.00001, respectively. The mass loss rate from metal free stars is assumed to be zero if the Eddington factor $\Gamma_\mathrm{E}$ is smaller than 0.84 and $10^{-14}~\mathrm{M_\odot~yr^{-1}}$ otherwise, following~\citet{Krticka06}. If a star becomes a helium star by rotationally induced mixing (see below), we assume that the mass loss rate from such a metal free Wolf-Rayet (WR) star is the same as that from a corresponding WR star at $Z=10^{-6}$, as implied by \citet{Vink05}. The code also implements the transport of chemical species and angular momentum due to rotationally induced hydrodynamic instabilities and the Spruit-Tayler dynamo. 
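The equilibration behaviour implied by Eqs.~(2) and (3) can be sketched with a dimensionless toy integration; all coefficients below are illustrative, and the annihilation integral is modelled as $A N_\mathrm{th}^2$, as appropriate when the profile shape $n(r)$ is held fixed:

```python
# Toy (dimensionless) integration of Eqs. (2)-(3): constant capture rate C,
# thermalization time tau, and annihilation rate A*N_th**2. The numbers are
# illustrative, chosen only to show the approach to capture/annihilation
# equilibrium, not the stellar-model values.
C, tau, A = 1.0, 0.1, 1.0
N_tot = N_th = 0.0
dt, steps = 1.0e-3, 20000          # integrate to t = 20, past equilibration
for _ in range(steps):
    ann = A * N_th**2
    N_tot += dt * (C - ann)
    N_th += dt * ((N_tot - N_th) / tau - ann)

# Equilibrium: annihilation balances capture, so N_th -> sqrt(C/A).
assert abs(A * N_th**2 - C) < 1e-2
```

In this regime the annihilation luminosity is set directly by the capture rate, which is why the models track $C_*$ closely until thermalization breaks down.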
Other details on the stellar evolution code are described in \cite{Yoon06}. \begin{figure}[t] \epsscale{1.0} \plotone{f1a.eps} \plotone{f1b.eps} \caption{\emph{Upper panel:} HR diagram of the non-rotating first star models on the ZAMS of different masses for different adopted values of $\sigma_0^{SD} \rho_\chi$ as indicated by the labels. Here $\sigma_0^{SD}$ is the spin-dependent WIMP scattering cross section, and $\rho_\chi$ is the ambient WIMP density. Stars in the grey shaded region would be powered only by DM burning, without nuclear reactions (i.e., $T_\mathrm{c} < 10^7~\mathrm{K}$). We use $\sigma_0^\mathrm{SD} = 5\times10^{-39}~\mathrm{cm^2}$. \emph{Lower panel:} Lifetimes of the non-rotating first stars as a function of the initial mass for different values of $\sigma_0^\mathrm{SD} \rho_\chi$ given in units of $10^{-26}~\mathrm{GeV cm^{-1}}$. These lifetimes are obtained from stellar models up to carbon burning for $\rho_\chi \le 4 \times 10^{10}~\dmrho$ (i.e., $\sigma_0^\mathrm{SD} \rho_\chi \le 0.02 \times 10^{-26}~\mathrm{GeV cm^{-1}}$), while only approximate estimates are given for $\rho_\chi \ge 10^{11} \dmrho$ (i.e., $\sigma_0^\mathrm{SD} \rho_\chi \ge 0.05\times10^{-26}~\mathrm{GeV cm^{-1}}$).}\label{fig1} \end{figure} Fig.~\ref{fig1} shows the HR diagram of the constructed non--rotating first star models on the zero age main sequence (ZAMS), for different values of $\rho_\chi$. Note that the luminosity for a given mass does not significantly change with varying $\rho_\chi$. This reflects the well--known mass--luminosity relation, which is largely independent of the particular mode of energy generation~\citep{Kippenhahn90}.
The stellar structure is adjusted such that the ratio of the nuclear to the DM luminosity decreases with increasing $\rho_\chi$, while the total luminosity remains nearly constant, leading to prolonged lifetimes of the stars\footnote{For a given $\rho_\chi$, our models give longer lifetimes than those of \citet{Iocco:2008rb}. This is because these authors used an approximation for the DM capture rate $C_*$, while we integrate Eq.~(2) of I08 over the entire stellar structure.} (Fig.~\ref{fig1}). If $\rho_\chi$ is above a critical value ($\rho_\mathrm{\chi, crit} \approx 2\times10^{11}~\dmrho$, see Fig.~\ref{fig1}) at a given stellar mass, the central density and temperature decrease to such an extent that the stars would live forever on the ZAMS, having no nuclear reactions. This is in qualitative agreement with \citet{Iocco:2008rb}, who find that pre--MS evolution is ``frozen'' before reaching the ZAMS at high enough $\rho_\chi$; and with \citet{Fairbairn:2007bn}, who find (for M$\leq$4$M_\odot$) that stars move rightward of the ZAMS locus on the HR diagram, and eventually join the Hayashi track, if ``fed'' with increasing DM annihilation luminosities. \begin{figure}[t] \epsscale{1.00} \plotone{f2a.eps} \plotone{f2b.eps} \caption{ \emph{Upper Panel:} Evolution of the surface luminosity (solid line), neutrino luminosity (three-dotted-dashed line), nuclear luminosities due to hydrogen burning (dot-dashed line), helium burning (dotted line), carbon burning (long dashed line) and DM burning (short dashed line) in the non-rotating $100~\mathrm{M_\odot}$ model sequence. The contribution of DM annihilation to the neutrino luminosity, which corresponds to $1/3L_\mathrm{DM}$, is not included here. The rapid increase in $L_\mathrm{H}$ and $L_\mathrm{He}$ after helium exhaustion is due to the hydrogen and helium shell burning.
\emph{Lower Panel:} Evolution of the WIMP capture rate ($C_*$, solid line), the thermalization rate ($\Gamma_\mathrm{th}$, dashed line) and the number of thermalized WIMPs ($N_\mathrm{th}$, dotted line) in the corresponding model. See Eqs.~(2), (3) and (4). }\label{fig2} \end{figure} No meaningful change in the stellar structure with different values of $\rho_\chi$ ($< \rho_\mathrm{\chi,crit}$) is observed in the non-rotating models. Since DM burning only occurs within a very small radius $r_\chi~(\ll R_\mathrm{core})$, stars with different $\rho_\chi$ at a given mass have similar amounts of energy flux from the core and produce helium cores of similar size (e.g. $\sim 40~\mathrm{M_\odot}$ from $100~\mathrm{M_\odot}$ stars). The luminosity resulting from DM burning gradually increases early on the main sequence as the star expands, but continuously decreases in later stages since the significant reduction of the number of hydrogen atoms lowers the DM capture rate. The rapid increase of the stellar radius after helium exhaustion makes the thermalization time very long, leading to a reduction of the number of thermalized WIMPs (see Fig.~\ref{fig2}). The DM luminosity accordingly decreases further, from $7 \times 10^5~\mathrm{L_\odot}$ to about $10^5~\mathrm{L_\odot}$ during the carbon burning phase, in the given example with $100~\mathrm{M_\odot}$ (Fig.~\ref{fig2}). Carbon burning and particularly neutrino cooling ($L_\mathrm{\nu} > 10^{10}~\mathrm{L_\odot}$) dominate the evolution at this stage, as shown in Fig.~\ref{fig2}. As the evolution of the star beyond carbon exhaustion should also be governed by neutrino cooling and other nuclear reactions such as oxygen burning, the effect of DM burning on the pre-supernova structure must be minor. As the situation remains similar in the other models with $20 \le M/\mathrm{M_\odot} \le 300$, we conclude that DM burning may not change the final fate of the non-rotating first stars.
\begin{figure}[t] \epsscale{1.00} \plotone{f3a.eps} \plotone{f3b.eps} \caption{ Evolution of the internal structure of rotating $100~\mathrm{M_\odot}$ star models without DM burning (upper panel) and with DM burning (lower panel, $\rho_\chi = 4\times10^{10}~\mathrm{GeV cm^{-3}}$). Convective layers are hatched, and semi--convective layers are marked by red dots. The color shading indicates nuclear energy generation rates. The adopted initial rotational velocity at the equatorial surface is $132~\mathrm{km~s^{-1}}$, which corresponds to 10\% of the Keplerian value. }\label{fig3} \end{figure} It is noteworthy, however, that rotation can dramatically change the evolution with DM burning. Fluids in rotating massive stars are subject to various rotationally induced hydrodynamic instabilities, such as Eddington--Sweet circulations, which can cause mixing of chemical species across the boundary between the hydrogen burning core and the radiative envelope. Such mixing is usually suppressed by the strong buoyancy potential due to the chemical stratification between the hydrogen burning core and the envelope. However, if chemical mixing occurs faster than the build-up of chemical gradients by nuclear burning (i.e., $\tau_\mathrm{mix}/\tau_\mathrm{nuc} < 1$), then (quasi-) chemical homogeneity can be maintained on the main sequence (so-called chemically homogeneous evolution)~\citep{Maeder87}. The condition $\tau_\mathrm{mix}/\tau_\mathrm{nuc} < 1$ can be met either by reducing $\tau_\mathrm{mix}$ or by increasing $\tau_\mathrm{nuc}$. Rapid rotation tends to do the former, and we find that DM burning tends to do the latter. This is because the mixing time scale due to Eddington--Sweet circulations remains almost unchanged with increasing $\rho_\chi$ (for $\rho_\chi < \rho_\mathrm{\chi, crit}$), while the nuclear burning time scale increases due to DM burning.
This explains the remarkable impact of DM burning on the evolution of our rotating models, as shown in Fig.~\ref{fig3}. As shown in the upper panel, rotation in this model ($v_\mathrm{init}/v_\mathrm{Kepler} = 0.1$) is not rapid enough to cause strong mixing by itself, without DM burning. With $\rho_\chi = 4 \times 10^{10}~\mathrm{GeV~cm^{-3}}$ (see lower panel), however, the $100~\mathrm{M_\odot}$ star lives about 10 times longer than in the corresponding non--DM burning case, and the star undergoes quasi-chemically homogeneous evolution even with slow initial rotation because of the effect of DM burning on the nuclear burning time scale \citep[cf.][]{Yoon06}. The star is thus gradually transformed into a massive helium star by the end of the main sequence, as almost all of the hydrogen in the star is fused into helium due to mixing. \section{Discussion}\label{sect:discussion} Our results indicate that DM burning should not significantly alter our view on the final fate of the non--rotating first stars: pair--instability supernovae for $140 \la M/\mathrm{M_\odot} \la 260$, and core--collapse events for other masses \citep{Heger02}, although their lifetimes may be significantly prolonged. However, the impact of DM burning appears more important for rotating stars. Quasi-chemically homogeneous evolution can be realized rather easily even with moderate rotation velocities for $10^{10} \la \rho_\chi~[\dmrho] \la 10^{11}$. Such evolution can lead to the production of massive helium stars that emit large amounts of helium ionizing photons, as well as to abundant production of primary nitrogen, as discussed in \citet{Yoon08}. Note also that the quasi-chemically homogeneous evolution scenario (CHES) is one of the favored scenarios for the production of long GRBs from metal poor stars~\citep{Yoon05, Woosley06}.
Our result therefore indicates that DM burning might promote the production of long gamma--ray bursts from the first stars of $12 \la M/\mathrm{M_\odot} \la 60$ via the CHES channel \citep[see][]{Yoon06}. DM burning in the first stars must have consequences for the history of reionization in the early universe. Table~1 lists the numbers of hydrogen and helium ionizing photons emitted from the $100~\mathrm{M_\odot}$ models. If $\rho_\chi \la 2\times10^{11}~\dmrho$, the total number of ionizing photons increases proportionally to the DM density, as a direct consequence of the life--prolonging effect of DM burning. For a given $\rho_\chi$, rotation does not significantly alter the hydrogen ionizing photon count or the lifetime. However, for models that follow the CHES path, the helium ionizing photon count is increased by more than a factor of 2 compared to the non--rotating case. If $\rho_\chi$ is very large, on the other hand, the surface temperature of the star drops significantly enough (Fig.~1) that the total number of ionizing photons is reduced, even with the much longer lifetimes due to DM burning. For example, with $\rho_{\chi}= 2\times10^{12}~\dmrho$, it would take $\sim 10$~Gyr to emit as many hydrogen ionizing photons as a non--DM burning counterpart. Therefore, if most of the first stars had been born with such high $\rho_\chi$, their contribution to reionization would have been dramatically reduced. The effect of the temperature drop on the number of helium ionizing photons is even more prominent: with $\rho_\chi = 2\times10^{12}~\dmrho$, it decreases by more than 19 orders of magnitude compared to the other cases with lower $\rho_\chi$. A future study of the history of helium ionization at high redshift might therefore be a strong probe of DM burning in the first stars. In this study we assume that the background DM halo density stays constant throughout the stellar evolution.
This assumption may be valid if the stellar lifetimes are shorter than about 100 Myr -- which is the expected merger timescale of DM halos at $z \sim 20$~\citep[e.g.,][]{Lacey93} -- as in our model sequences with $\rho_\chi \la 4\times10^{10}~\dmrho$. The evolution of DM burners in halos with higher $\rho_\chi$, however, should be critically determined by the change of DM halo environments. If $\rho_\chi$ is sufficiently reduced due to merger events and/or to displacement of the star from the densest region of the DM halo, the DM burners will become ``normal'' stars, dominated by nuclear burning. The star may then die quickly, as we expect for normal stars, or become a ``born--again'' DM burner if $\rho_\chi$ increases again in later stages for some reason. The detailed history of the feedback from the first stars (e.g. metal enrichment and reionization) on the evolution of the early universe may depend on the nature and evolution of the DM halos in which the first stars are formed. This issue should be addressed in future work.
\begin{table} \begin{center} \caption{Number of hydrogen and helium ionizing photons from $100~\mathrm{M_\odot}$ star models}\label{tab1} \begin{tabular}{crccc} \tableline \tableline $v_\mathrm{rot}/v_\mathrm{K}$ & $\rho_\chi~[\dmrho]$ & $N_\mathrm{H}$ & $N_\mathrm{He}$ & Duration \\ \tableline 0.0 & 0.00 & $1.2 \times 10^{64}$ & $2.2 \times 10^{62}$ & 3.2 Myr \\ 0.0 & $2\times10^{10}$ & $2.1 \times 10^{64}$ & $3.4 \times 10^{62}$ & 5.5 Myr \\ 0.0 & $4\times10^{10}$ & $8.5 \times 10^{64}$ & $1.5 \times 10^{63}$ & 22.3 Myr \\ 0.0 & $10^{11}$ & $2.9 \times 10^{65}$ & $6.3 \times 10^{62}$ & 100.0 Myr\tablenotemark{*} \\ 0.0 & $2\times10^{11}$ & $2.0 \times 10^{65}$ & $1.7 \times 10^{61}$ & 100.0 Myr\tablenotemark{*} \\ 0.0 & $2\times10^{12}$ & $1.3 \times 10^{62}$ & $9.4 \times 10^{43}$ & 100.0 Myr\tablenotemark{*} \\ \tableline 0.1 & 0.00 & $1.5 \times 10^{64}$ & $2.5 \times 10^{62}$ & 3.4 Myr \\ 0.1 & $2\times10^{10}$ & $2.7 \times 10^{64}$ & $5.2 \times 10^{62}$ & 6.0 Myr \\ 0.1 & $4\times10^{10}$ & $8.7 \times 10^{64}$ & $3.8 \times 10^{63}$ & 19.6 Myr \\ \tableline \end{tabular} \vspace{-0.5mm} \tablenotetext{*}{The numbers are calculated only for the first 100 Myr.} \end{center} \end{table} \acknowledgements As a note added in proof, we acknowledge that \citet{Taoso08} independently report similar results about the effect of DM burning on the MS lifetime and some stellar properties, using a different numerical code. S.~A. appreciates helpful discussions with M. Alvarez and M. Busha. S.~A. is supported by a Department of Energy Contract to SLAC DE--AC3--76SF00515. F.~I. is grateful to P. Scott and M. Taoso for helpful discussions. F.~I. is supported by MIUR through grant PRIN--2006. S.C.~Y. is grateful to C. Church for help with the text, and to W. Hillebrandt and S. Woosley for supporting his visit to MPA, Garching, in June 2008, where part of the manuscript was prepared. S.C.~Y. is supported by the DOE SciDAC Program (DOE DE-FC02-06ER41438).
\section{Introduction} The inverse kinematics (IK) problem in computational kinematics involves finding joint values of a manipulator for a specified position and orientation of its end-effector tool. The development of computer algebra systems has led to notable improvements in obtaining solutions to this problem. Chapelle and Bidaud \cite{chap} obtained a mathematical function that approximates joint values of general 6R manipulators through genetic programming. The same manipulator was studied by Wang et al.~\cite{wang} using Gröbner bases. The works of Husty et al.~\cite{ik6r} and Pfurner \cite{pfurner} gave an algebraic-geometric insight into the problem and addressed it using classical results in multi-dimensional geometry. Gan et al.~\cite{gan} solved the problem for the case of a 7-link 7R mechanism using dual quaternions and Dixon's resultant. Most of the developments mentioned above deal with manipulators having purely revolute joints, while manipulators with prismatic joints have not yet been fully explored. Joints of manipulators in industry are mostly prismatic or revolute. We provide a solution to the IK problem of some serial manipulators that contain prismatic joints, in particular 2RP3R, 2R2P2R, 3RP2R and 6R manipulators, using an approach similar to that of Husty and Pfurner \cite{ik6r,pfurner} but based on the algebra of dual quaternions. The solution turns out to be not as easy and straightforward as in the case of 6R manipulators. The rest of the paper is organized as follows. We start with some basic concepts in Section~2. Hyperplanes for the 2-chains are computed in Section~3, while linear spaces for the workspaces are computed in Section~4. The procedure for solving the IK problem is shown in Section~5 along with some examples, and conclusions are discussed in Section~6.
\section{Preliminaries} \subsection{Manipulator structure} A serial 6-chain manipulator with 2RP3R structure is a sequence of seven links connected by six joints, starting from the base: two revolute joints (2R), a prismatic joint (P) and three more revolute joints (3R). See Figure~\ref{fig:2rp3rwig}. Serial 6-chain manipulators with 2R2P2R, 3RP2R and 6R structures are defined analogously. \begin{figure} \centering \includegraphics[scale=0.4]{2rp3r_new.jpg} \caption{A 2RP3R Manipulator} \label{fig:2rp3rwig} \end{figure} For the analysis of this mechanism, we adopt the Denavit-Hartenberg (DH) convention for assigning coordinate frames \cite{spong}. The base frame $F_1$ is associated with the base link, in which the $z_1$-axis coincides with the first rotation axis, and the $x_1$- and $y_1$-axes are placed to form a right-handed coordinate frame. For $i=2,3,\ldots,6$, the $z_i$-axis is placed along the $i$th joint axis and the origin is at the intersection of the $z_i$-axis and the common normal between the $z_{i-1}$- and $z_i$-axes. The $x_i$-axis is set along the direction of this normal and the $y_i$-axis is set to form a right-handed frame $F_i$. The end-effector frame $F_7$ (or EE frame) has its $z_7$-axis placed along the $z_6$-axis, and the $x_7$- and $y_7$-axes are placed to form a right-handed frame. The displacement between two consecutive frames $F_{i}$ and $F_{i+1}$ can be described using four parameters: the \emph{rotation angle} $\theta_{i}$, \emph{offset} $d_{i}$, \emph{distance} $a_{i}$ and \emph{twist angle} $\alpha_{i}$, where $i=1,2,\ldots,6$. The rotation and twist angles can be parametrized by points in the real projective space $\PS^1(\R)$. For simplicity we will disregard half-rotations (rotations that are odd multiples of $\pi$), which allows us to parametrize the angles by half-angle tangents.
That is, we use the parameters $l_i := \tan\tfrac{\alpha_i}{2}$ and $v_i:= \tan\tfrac{\theta_i}{2}$, which gives an algebraic description of the manipulator's workspace. We refer to $v_1,v_2,d_3,v_4,v_5$ and $v_6$ as the \textit{joint variables} and to the rest of the parameters as the \textit{DH-parameters} of a 2RP3R manipulator. These variables and parameters are defined analogously for 2R2P2R, 3RP2R and 6R manipulators. Note that $a_6=l_6=0$. We assume $d_1=d_6=0$. Rigid transformations in $\SE$ are usually represented by homogeneous transformation matrices. In particular, the displacement between frames $F_{i}$ and $F_{i+1}$ consists of a rotation by $\theta_i$, a translation $(a_i,0,d_i)$ and a rotation by $\alpha_i$, and is represented by the matrix product $$B_i=\left( \begin{array}{cccc} \cos(\theta_i) & -\sin(\theta_i) & 0 & 0 \\ \sin(\theta_i) & \cos(\theta_i) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{cccc} 1 & 0 & 0 & a_i \\ 0 & \cos(\alpha_i) & -\sin(\alpha_i) & 0 \\ 0 & \sin(\alpha_i) & \cos(\alpha_i) & d_i \\ 0 & 0 & 0 & 1 \\ \end{array} \right)$$ for $i=1,2,\ldots,6.$ The kinematic equation of a 6-chain manipulator can be formulated as \begin{equation} B_1 B_2 \cdots B_6 = E \label{eq:ke} \end{equation} where $E$ is the matrix representing the displacement of the EE frame relative to the base frame. Given the DH-parameters of a manipulator and a specified pose $E$ of its end-effector, the inverse kinematics problem asks for values of the joint variables for which equation~\eqref{eq:ke} holds. In general, it is difficult to solve this problem with trigonometric functions because the system of equations is nonlinear. To overcome this hurdle, the equations may be expressed algebraically via the kinematic map and tangent half-angle substitutions (see the works of Husty~\cite{ik6r} and Pfurner~\cite{pfurner}).
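To make the matrix formulation concrete, the following Python sketch (our own illustration; the function names \texttt{dh\_matrix} and \texttt{forward\_kinematics} are not from the paper) builds each $B_i$ as the two-factor product above and composes the chain as in the kinematic equation.

```python
import math

def mat_mul(A, B):
    # product of two 4x4 homogeneous transformation matrices
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def dh_matrix(theta, d, a, alpha):
    # B_i as the product of the two matrices in the text:
    # a rotation about z by theta, followed by the translation (a, 0, d)
    # combined with a rotation about x by alpha
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    Rz = [[ct, -st, 0, 0], [st, ct, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    Tx = [[1, 0, 0, a], [0, ca, -sa, 0], [0, sa, ca, d], [0, 0, 0, 1]]
    return mat_mul(Rz, Tx)

def forward_kinematics(params):
    # params: list of (theta_i, d_i, a_i, alpha_i); returns E = B_1 B_2 ... B_n
    E = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, d, a, alpha in params:
        E = mat_mul(E, dh_matrix(theta, d, a, alpha))
    return E
```

For instance, three links with $\theta_i=\alpha_i=d_i=0$ and $a_i=1$ compose to a pure translation by $3$ along the $x$-axis.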
This allows one to express each rigid transformation $B_i$ above as a point in the projective space $\PS^7(\R)$. Moreover, products of rigid transformations are obtained by identifying them with dual quaternions. Indeed, the algebra of dual quaternions is enough to directly obtain algebraic equations for solving the IK problem. \subsection{Dual quaternions} We write \textit{quaternions} as $ p = p_0+\mbf{p} $ where $p_0 \in \R$ and $\mathbf{p} \in \R^3$. The set of quaternions $\H$ is an algebra over $\R$ where addition and multiplication are given by \begin{equation*}\begin{split} p_0+\mbf{p} + q_0+\mbf{q} &= p_0 +q_0 + \mathbf{p+q}, \textnormal{ and }\\ (p_0+\mbf{p}) (q_0+\mbf{q}) &= p_0 q_0 - \mathbf{p\cdot q}+ (p_0 \mathbf{q} + q_0 \mathbf{p} + \mathbf{p} \times \mathbf{q}) \end{split} \end{equation*} respectively, for any $p_0+\mbf{p},q_0+\mbf{q}\in\H$, and where $\cdot$ is the usual inner product in $\R^3$. Moreover, $\lambda (p_0+\mbf{p}) =(p_0+\mbf{p})\lambda= \lambda p_0+\lambda\mbf{p}$ for any $\lambda \in \R$. The real numbers can be embedded in $\H$ via the map $r\mapsto r+\mbf{0}$. The $\R$-algebra of \emph{dual quaternions}, denoted $\D$, consists of pairs of quaternions written as a formal sum $\sigma = p + \epsilon q$ where $p, q \in \H$. Operations in $\D$ are as follows: \begin{align*} (p + \epsilon q) + (s + \epsilon t) &= (p+s) + \epsilon (q+t)\\ (p + \epsilon q) (s + \epsilon t) &= ps + \epsilon (pt+qs)\\ \lambda(p + \epsilon q) &= \lambda p + \epsilon \lambda q \end{align*} for any $p + \epsilon q$, $s + \epsilon t \in \D$ and $\lambda\in \R$. Here, we take $\epsilon^2=0$ and $\epsilon\neq0$. Note that $\H$ can be embedded in $\D$ via the map $p\mapsto p+\epsilon 0$. The \emph{conjugate} of a quaternion $p=p_0+\mathbf{p}$ is $p^*:=p_0-\mathbf{p}$, and the conjugate of a dual quaternion $\sigma=p + \epsilon q=(p_0+\mathbf{p})+\epsilon(q_0+\mathbf{q})$ is $\sigma^* := p^* + \epsilon q^*$.
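The operations above translate directly into code. In the following Python sketch (an illustration we add; a quaternion is a tuple $(p_0,p_1,p_2,p_3)$ standing for $p_0+\mbf{p}$, and a dual quaternion is a pair of such tuples), \texttt{dqmul} implements $(p+\epsilon q)(s+\epsilon t)=ps+\epsilon(pt+qs)$, which encodes the rule $\epsilon^2=0$.

```python
def qmul(p, q):
    # quaternion product: (p0 + p)(q0 + q) = p0 q0 - p.q + (p0 q + q0 p + p x q)
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + q0*p1 + p2*q3 - p3*q2,
            p0*q2 + q0*p2 + p3*q1 - p1*q3,
            p0*q3 + q0*p3 + p1*q2 - p2*q1)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qconj(p):
    # conjugate p* = p0 - p
    return (p[0], -p[1], -p[2], -p[3])

def dqmul(s, t):
    # s = (p, q) stands for p + eps*q; eps^2 = 0 is built into the formula
    p, q = s
    u, v = t
    return (qmul(p, u), qadd(qmul(p, v), qmul(q, u)))

def dqconj(s):
    # (p + eps q)* = p* + eps q*
    p, q = s
    return (qconj(p), qconj(q))
```

The identities $ij=k$, $pp^*=p_0^2+\mbf{p}\cdot\mbf{p}$, and $\epsilon^2=0$ serve as quick sanity checks.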
It can be verified that $\sigma\sigma^*=\sigma^*\sigma$ is a nonnegative real number if and only if $p_0q_0+\mbf p \cdot \mbf q =0$. In this case, set $|\sigma|:=\sigma\sigma^*$. We consider the set $$\D_s:=\left\{\sigma\in\D \mid \sigma=p + \epsilon q \textnormal{ with } p\neq0 \textnormal{ and } \sigma\sigma^*\in \R_{\geq0}\right\}$$ which is a subgroup of the group of units of $\D$. The multiplicative inverse of $\sigma\in \D_s$ is $\sigma^{-1}=\tfrac{\sigma^*}{|\sigma|}$. Quaternions can be used to describe rotations in $\R^3$. For instance, a rotation by $\theta$ about a unit axis $\mbf{n}\in\R^3$ through the origin can be represented by the quaternion $\cos\tfrac{\theta}{2}+\sin\tfrac{\theta}{2}\mbf{n}$. Dual quaternions, in turn, can be used to describe rigid transformations in $\R^3$ \cite{selig}. For instance, the rigid transformation determined by a rotation $p\in\H$ and a translation $t=0+\mbf{t}\in\H$ can be represented by the dual quaternion $$ \sigma=p+\epsilon \tfrac{1}{2}tp=(\cos\tfrac{\theta}{2}+\sin\tfrac{\theta}{2}\mbf{n})+\epsilon (-\tfrac{1}{2}\sin \tfrac{\theta}{2} \mbf{t}\cdot\mbf{n}+ \tfrac{1}{2}\cos \tfrac{\theta}{2}\mbf{t}+\tfrac{1}{2}\sin \tfrac{\theta}{2} (\mbf t\times\mbf n)).\label{eq:dqSE} $$ \noindent Naive storage and multiplication of homogeneous transformation matrices are less efficient than for dual quaternions: naive matrix multiplication involves 122 elementary operations, while naive multiplication of dual quaternions requires 88 elementary operations. Dual quaternions have further advantages over matrices, for example in interpolation and in handling numerical errors (see \cite{markley}). The kinematic equation~\eqref{eq:ke} may now be expressed as \begin{equation*} \sigma_1\sigma_2\cdots \sigma_6=\sigma_E \label{eq:kineq} \end{equation*} where $\sigma_E$ is a dual quaternion representing the displacement of the EE frame $F_7$ relative to the base frame $F_1$.
The DH-parameters and joint variables of the manipulator are encoded in each $\sigma_i$. To solve the IK problem, we decompose the 6-chain into two parts -- the \textit{left} and \textit{right chains} -- consisting of the transformations that correspond to the left and right sides of the equation \begin{equation} \sigma_1\sigma_2\sigma_3=\sigma_E\sigma_6^{-1}\sigma_5^{-1}\sigma_4^{-1} \label{eq:kineqdec} \end{equation} respectively. Hence, the problem is reduced to solving for joint variables in equation~\eqref{eq:kineqdec} for which frame $F_4$ of the left chain coincides with frame $F_4$ of the right chain. \subsection{Workspaces} We identify elements of $\SE$ with points in the real projective space $\PS^7(\R)$ via dual quaternions of the form $\sigma=p+\epsilon\tfrac{1}{2}tp$. Nonzero scalar multiples of $\sigma$ represent the same element of $\SE$. Writing $\sigma=(x_0+\mbf{x})+\epsilon(y_0+\mbf{y})$, where $\mbf{x}=(x_1,x_2,x_3)$ and $\mbf{y}=(y_1,y_2,y_3)$, as an 8-tuple, it can be verified that $\sigma$ lies in the set $$\{(x_0:x_1:\cdots:y_3)\in\mathbb{P}^7(\R) \,\big{|}\, \sum_{i=0}^3 x_iy_i =0 \text{ and } \sum_{i=0}^3 x_i^2 \ne 0\}.$$ We call $(x_0:x_1:\dots:y_3)$ the \textit{Study parameters} of the rigid transformation. Thus we may also identify $\SE$ with the above subset of $\mathbb{P}^7(\R)$. In this identification, $\SE$ is the intersection of the Study quadric $S$, given by \begin{equation} x_0y_0 + x_1y_1+x_2y_2 +x_3y_3=0, \label{eq:study} \end{equation} with the complement of the linear space $x_0=x_1=x_2=x_3=0$ in $\mathbb{P}^7(\R)$. Recall that a linear space is the intersection of a number of hyperplanes. Consider the frame $F_4$ of a 2RP3R chain. Given the DH-parameters and a pose of the end-effector, the set of all possible poses of $F_4$ relative to $F_1$ realized by the left 2RP chain is called the \textit{left workspace}, while the set of all possible poses of $F_4$ relative to $F_1$ realized by the right 3R chain is called the \textit{right workspace}.
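As a numerical sanity check (our own addition; the helper names are not from the paper), the following Python sketch builds $\sigma=p+\epsilon\tfrac{1}{2}tp$ for a rotation about a unit axis followed by a translation, and verifies that its Study parameters satisfy the Study equation.

```python
import math

def qmul(p, q):
    # quaternion product, components ordered (scalar, i, j, k)
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + q0*p1 + p2*q3 - p3*q2,
            p0*q2 + q0*p2 + p3*q1 - p1*q3,
            p0*q3 + q0*p3 + p1*q2 - p2*q1)

def rigid_to_dq(theta, n, t):
    # sigma = p + eps*(1/2)*t*p with p = cos(theta/2) + sin(theta/2) n,
    # n a unit axis and t = 0 + t a pure quaternion (the translation)
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    p = (c, s * n[0], s * n[1], s * n[2])
    tq = (0.0, t[0], t[1], t[2])
    q = tuple(0.5 * x for x in qmul(tq, p))
    return p, q  # Study parameters (x0:x1:x2:x3:y0:y1:y2:y3) = p followed by q

def study_form(p, q):
    # left-hand side of the Study equation x0 y0 + x1 y1 + x2 y2 + x3 y3
    return sum(x * y for x, y in zip(p, q))
```

The dual part also matches the expanded form given above: its scalar component equals $-\tfrac{1}{2}\sin\tfrac{\theta}{2}\,\mbf{t}\cdot\mbf{n}$.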
Both workspaces are subsets of $\SE$ determined by the given DH-parameters. Just as in \cite{pfurner}, we build upon Selig's theory \cite{selig} that the workspace of a 2-chain lies in a linear 3-space, to describe the left and right workspaces as intersections of hyperplanes in $\PS^7(\R)$ with $\SE$. The hyperplane equations then serve as constraints on these workspaces. To solve the IK problem, we regard the workspace of a 3-chain as the workspace of a 2-chain parametrized by a joint. For instance, in a 2RP3R manipulator, the left workspace can be viewed as the kinematic image of a 2R-chain parametrized by $d_3$, while the right workspace is the kinematic image of a 2R-chain parametrized by, say, $v_6$. Some advantages of working in the projective setting are that (1) it allows us to geometrically analyze the dimensions of intersections of hyperplanes before solving the IK problem, and (2) we solve the IK problem using only basic techniques of linear algebra and algebraic geometry \cite{ik6r}. \section{Computing hyperplanes for the left chain} Let $\sigma_i=R_{z}(v_i)T_{z}(d_i) T_{x}(a_i) R_{x}(l_i)$ in $\SE$, where $R_z$ and $R_x$ are rotations about the $z$- and $x$-axes, and $T_z$ and $T_x$ are translations along the $z$- and $x$-axes, respectively. Note that $\sigma_i=\sigma_i(v_i)$ (a function of $v_i$) if the $i$th joint is revolute, and $\sigma_i=\sigma_i(d_i)$ if the $i$th joint is prismatic. Along the same axis, $R_{z}$ and $T_{z}$ commute; likewise, $R_{x}$ and $T_{x}$ commute.
We want to obtain the linear forms, parametrized by the joint variable $v_1$, which define the linear spaces containing the kinematic image of the left chain $$V_L:=\{\sigma_1(v_1)\sigma_2(v_2) \sigma_3(d_3) \mid v_1,v_2,d_3 \in \R \}.$$ To this end, we first compute the linear space that contains the kinematic image (leaving out the first joint) $$V_1:=\{R_{z}(v_2) T_{x}(a_2) R_{x}(l_2) T_{z}(d_3) \mid v_2,d_3\in \R \}$$ where the fixed transformations $\sigma_1 T_{z}(d_2)$ and $R_{z}(v_3) T_{x}(a_3) R_{x}(l_3)$ are removed (recall that we assume $d_1=0$ and that, except for the joint variables $v_1,v_2,d_3$, the other DH-parameters are fixed). Our aim is to find hyperplanes $$ax_0+bx_1+cx_2+dx_3+ey_0+fy_1+gy_2+hy_3=0$$ in $\PS^7(\R)$ whose intersection contains $V_1$. To do this, we substitute the Study parameters of $V_1$ into this equation, thereby obtaining a polynomial equation in $v_2$ and $d_3$, $$z_0v_2d_3+z_1v_2+z_2d_3+z_3=0,$$ with coefficients \begin{center} \begin{tabular}{l} $z_0=-4 e+4l_2f$\\ $z_1=8l_2c+8 d+4 a_2 g-4 a_2l_2 h$\\ $z_2=-4l_2g+4h$\\ $z_3=8 a+8 l_2b-4 a_2l_2e+4 a_2 f.$ \end{tabular} \end{center} \noindent Since $z_i=0$ for $i\in\left\{0,1,2,3\right\}$, the system of equations above can be written in the following matrix form: $$\left( \begin{array}{cccccccc} 0 & 0 & 0 & 0 & -2 & 2 l_2 & 0 & 0 \\ 0 & 0 & 4 l_2 & 4 & 0 & 0 & 2 a_2 & -2 a_2 l_2 \\ 0 & 0 & 0 & 0 & 0 & 0 & -2 l_2 & 2 \\ 4 & 4 l_2 & 0 & 0 & -2 a_2 l_2 & 2 a_2 & 0 & 0 \\ \end{array} \right) \left(\begin{array}{c} a\\ b \\ \vdots \\ h \end{array}\right)=\left(\begin{array}{c} 0\\ 0 \\ \vdots \\ 0 \end{array}\right)$$ \noindent Computing the kernel of the above $4\times8$ coefficient matrix $A$, we get four solutions for $(a:b:\ldots:h)$ in $\PS^7(\R)$. Note that in computing the kernel of $A$, we need to pay attention to the entries of $A$ that involve the DH-parameters, since these may vanish.
Hence, the linear 3-space defined by \begin{align*} -l_2 x_0+x_1&=0 \\ a_2 \left(l_2^2-1\right) x_0+2l_2 y_0+2y_1&=0 \\ a_2 \left(l_2^2-1\right) x_3+2y_2+2l_2 y_3&=0 \\ x_2-l_2 x_3&=0 \end{align*} contains $V_1$, and it lies in $S$ if and only if $l_2=\pm1$. From the above linear space, we need to obtain the linear 3-space, parametrized by the joint variable $v_1$, that contains $V_L$; we denote it by $T(v_1)$. This is done by the following change of variables (the operations are dual quaternion arithmetic, and we view $(x_0:x_1:\dots : y_3)$ on the right as a dual quaternion) \begin{equation}\label{full_linear_space} (x_0:x_1:\dots : y_3) \rightarrow (\sigma_1(v_1)T_z(d_2))^{-1}(x_0:x_1:\dots:y_3)(R_z(v_3) T_x(a_3) R_x(l_3))^{-1} \end{equation} where the inverse transformations are represented by the dual quaternion conjugates. A similar procedure can be applied to obtain the linear space parametrized by $d_3$ that also contains $V_L$, denoted $T(d_3)$. Here, the linear space that contains the kinematic image (leaving out the third joint) $$V_3:=\{R_{z}(v_1) T_{x}(a_1) R_{x}(l_1) R_{z}(v_2) \mid v_1,v_2\in\R \}$$ is defined by \begin{equation}\begin{split} a_1 l_1 x_0 +2y_0 &= 0 \\ -a_1 x_1 +2l_1 y_1 &=0 \\ -a_1 x_2 +2l_1 y_2 &= 0 \\ a_1 l_1 x_3 +2y_3 &= 0 \end{split}\label{eq:left3R}\end{equation} when $a_1$ and $l_1$ are non-zero. If $a_1=l_1=0$, then we have the projective line $$x_1=x_2=y_0=y_1=y_2=y_3=0.$$ The linear 3-space defined by equations~\eqref{eq:left3R} lies inside $S$ if and only if either $a_1$ or $l_1$ is 0. The linear space $T(d_3)$ parametrized by $d_3$ is then obtained by accounting for the fixed transformations and the parametrizing joint variable $d_3$, i.e.
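The four linear forms defining the 3-space containing $V_1$ can be verified numerically. The following Python sketch (our own check, not part of the paper) composes $R_z(v_2)\,T_x(a_2)\,R_x(l_2)\,T_z(d_3)$ as a dual quaternion and confirms that its Study parameters annihilate all four forms, with $l_2=\tan\tfrac{\alpha_2}{2}$.

```python
import math

def qmul(p, q):
    # quaternion product, components ordered (scalar, i, j, k)
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + q0*p1 + p2*q3 - p3*q2,
            p0*q2 + q0*p2 + p3*q1 - p1*q3,
            p0*q3 + q0*p3 + p1*q2 - p2*q1)

def dqmul(s, t):
    # (p + eps q)(u + eps v) = pu + eps(pv + qu)
    (p, q), (u, v) = s, t
    pv = qmul(p, v)
    qu = qmul(q, u)
    return (qmul(p, u), tuple(a + b for a, b in zip(pv, qu)))

def study_params_V1(theta2, a2, alpha2, d3):
    # dual quaternions of R_z(theta2), T_x(a2), R_x(alpha2), T_z(d3)
    Rz = ((math.cos(theta2/2), 0, 0, math.sin(theta2/2)), (0, 0, 0, 0))
    Tx = ((1, 0, 0, 0), (0, a2/2, 0, 0))
    Rx = ((math.cos(alpha2/2), math.sin(alpha2/2), 0, 0), (0, 0, 0, 0))
    Tz = ((1, 0, 0, 0), (0, 0, 0, d3/2))
    p, q = dqmul(dqmul(dqmul(Rz, Tx), Rx), Tz)
    return p + q  # (x0, x1, x2, x3, y0, y1, y2, y3)

def residuals(theta2, a2, alpha2, d3):
    # evaluate the four linear forms on the Study parameters of V_1
    x0, x1, x2, x3, y0, y1, y2, y3 = study_params_V1(theta2, a2, alpha2, d3)
    l2 = math.tan(alpha2/2)
    return (-l2*x0 + x1,
            a2*(l2**2 - 1)*x0 + 2*l2*y0 + 2*y1,
            a2*(l2**2 - 1)*x3 + 2*y2 + 2*l2*y3,
            x2 - l2*x3)
```

All four residuals vanish (up to rounding) for generic parameter values, as expected.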
by applying the following change of variables $$(x_0:x_1:\dots : y_3) \rightarrow (x_0:x_1:\dots:y_3)(T_z(d_2) T_x(a_2) R_x(l_2)\sigma_3(d_3))^{-1}.$$ The DH-parameter values for which both $T(v_1)$ and $T(d_3)$ lie in $S$ are $$\{a_1=0 \textnormal{ or } l_1=0\} \textnormal{ and } l_2=\pm1.$$ For convenience, we will assume that the given DH-parameters do not satisfy these conditions, so that $T(v_1)$ or $T(d_3)$ is not contained in $S$. When this assumption fails, we would still need to compute $T(v_2)$ and the DH-parameter values for which $T(v_2)$ lies in $S$. To save space, we have not included this case in this paper; a detailed discussion can be found in \cite{manongsong}. \vspace{5mm} The procedures described in this section can be applied analogously to compute linear spaces that contain the kinematic image of the 3R joint type with joint variables $v_1,v_2$ and $v_3$. We only need to verify that the linear space containing $$V_1:=\{R_{z}(v_2) T_{x}(a_2) R_{x}(l_2) R_{z}(v_3) \mid v_2,v_3\in\R\}$$ is the set of vanishing points of linear forms similar to equation~\eqref{eq:left3R}, but with $a_1$ replaced by $a_2$ and $l_1$ by $l_2$ (we assume $a_2$ and $l_2$ are non-zero). For RRR, the kinematic image $V_3$ is the same as for RRP, so equation~\eqref{eq:left3R} also describes the linear 3-space containing $V_3$. To obtain $T(v_1)$ or $T(v_3)$, we account for the fixed transformations and the parametrizing joint variables by a change of variables similar to equation~\eqref{full_linear_space}. This is also described in \cite{pfurner}. For simplicity we will assume that $a_6=d_6=l_6=0$ (otherwise the linear space is obtained by another easy change of variables taking these fixed transformations into account). \section{Computing hyperplanes for the right chain} Recall that $\sigma_E$ is the pose of the end-effector.
In order to obtain parametrized linear spaces that contain the kinematic image of the right chain $$V_R:=\{\sigma_E\sigma_6^{-1}(v_6)\sigma_5^{-1}(v_5) \sigma_4^{-1}(v_4) \mid v_4,v_5,v_6 \in \R \}$$ we do the following steps. \begin{enumerate}[1.] \item Consider the ``reverse joint type'' of the right chain. For instance, in a 3RP2R manipulator the reverse joint type of the right chain is RRP. \item Depending on the joint type, obtain the parametrized linear spaces $T(v_i)$, $i=1,3$, that contain its workspace. \item Apply the following parameter substitutions \begin{equation}\begin{aligned} &v_1\rightarrow-v_6,\quad a_1\rightarrow-a_5,\quad l_1\rightarrow-l_5\\ &v_2\rightarrow-v_5,\quad a_2\rightarrow-a_4,\quad l_2\rightarrow-l_4,\quad d_2\rightarrow-d_5\\ &v_3\rightarrow-v_4, \quad a_3\rightarrow 0, \qquad\, l_3\rightarrow 0,\quad\quad d_3\rightarrow-d_4.\\ \end{aligned}\label{eq:substitutions}\end{equation} \item Perform the change of variables $$(x_0:x_1:\dots : y_3) \rightarrow \sigma_E^* (x_0:x_1:\cdots:y_3).$$ \end{enumerate} In a 2RP3R manipulator, the parametrized linear spaces $T(v_4)$ and $T(v_6)$, obtained from $T(v_3)$ and $T(v_1)$ respectively, each contain $V_R$. Moreover, $T(v_4)$ lies in $S$ if and only if $a_5$ or $l_5$ is 0, while $T(v_6)$ lies in $S$ if and only if $a_4$ or $l_4$ is 0. For convenience, we assume that the given DH-parameters do not satisfy $$\{a_4=0 \textnormal{ or } l_4=0\} \textnormal{ and } \{a_5=0 \textnormal{ or } l_5=0\}$$ so that neither $T(v_4)$ nor $T(v_6)$ is contained in $S$. It is also possible to compute $T(v_5)$, but we do not include this case in this work (for more details, see \cite{manongsong}). Obtaining these parametrized linear spaces is an important step in the HuPf algorithm for solving the inverse kinematics problem of 2RP3R manipulators. In this setting, and with the information that we have now obtained, the IK problem reduces to a linear algebra problem. This will become clear in the next section.
\section{Solving the IK problem of 2RP3R manipulators} The main idea for the inverse kinematics of 2RP3R manipulators is to obtain points in the intersection of the left and right workspaces in a suitable ambient space (see the proof of Proposition~\ref{prop:finite}). This entails solving a system of nine constraint equations: eight linear forms and the Study equation. For the proposed IK procedure to work, both parametrized linear spaces must not lie in $S$. This means we need to check the given DH-parameters as follows: if $l_2\neq\pm1$, use $T(v_1)$, otherwise use $T(d_3)$; if $(a_4,l_4)\neq(0,0)$, use $T(v_6)$, otherwise use $T(v_4)$. Let $T(u)$ and $T(w)$ be the chosen parametrized linear spaces for the left and right workspaces, respectively (so $u\in\left\{v_1,d_3\right\}$ and $w\in\left\{v_4,v_6\right\}$). In the end, we want to obtain finitely many complex solutions to the problem (so that the real solutions are finite as well). We argue using complex solutions so that we can use basic results in classical algebraic geometry rather than rely on more sophisticated results in real algebraic geometry. \begin{proposition}\label{prop:finite} Let the DH-parameters of a general 2RP3R manipulator be given. If its set of complex IK solutions is finite, then the eight hyperplanes from $T(u)$ and $T(w)$ must be in \textit{general position}, i.e.\ the $8$ hyperplanes are described by linear forms over $\C[u,w]$ such that the coefficients of these linear forms are linearly independent vectors in an $8$-dimensional vector space over $\C(u,w)$. \end{proposition} \begin{proof} We regard $T(u)$ and $T(w)$ as subvarieties of $\PS^7(\C)\times \PS^1(\C)\times \PS^1(\C)$ (i.e.\ Segre subvarieties, see \cite[\S2\, pp.27]{harris}) and denote their intersection by $\mathcal W$. We argue by contradiction and assume that we cannot find eight hyperplanes from $T(u)$ and $T(w)$ in general position.
In particular, $\mathcal W$ will contain a parametrized family $$\bigcup_{\alpha,\beta\in \mathbb P^1} X(\alpha,\beta) \subset \PS^7\times \PS^1\times \PS^1$$ where $X(\alpha,\beta)$ is a finite set of points in $\PS^7$ belonging to a section of $\mathcal W$ with the linear space $\PS^7\times \{(\alpha,\beta)\}$. Since we assume that the set of IK solutions is finite, each fiber $X(\alpha,\beta)$ of the projection to $\PS^1\times \PS^1$ is also finite. Hence the canonical projection of $\mathcal W$ to $\PS^7$ contains a surface (a two-dimensional variety). This 2-dimensional projection has a non-empty intersection with the 6-dimensional Study quadric $S$ \cite[Chapter~I, Theorem~7.2]{harts}. In particular, this intersection contains a curve in $S$. However, this intersection yields an infinite subset of the set of solutions to the IK problem of the manipulator, a contradiction. \end{proof} The above proof makes use of an argument involving dimensions of varieties. A discussion of dimensions of varieties can be found in many introductory books on algebraic geometry (see for instance \cite[\S11]{harris}). \begin{remark}\label{rem:step2} We actually require that the projection of $\mathcal W$ in the proof of Proposition~\ref{prop:finite} does not lie in the Study quadric. This is used in \ref{step2}, where we choose a suitable seven of the eight linear forms. \end{remark} The following discussion describes our procedure for solving the IK problem based on the HuPf algorithm (we assume throughout that there is at least one real IK solution to a given end-effector pose): \begin{none}\label{step1} Consider the eight linear forms describing $T(u)$ and $T(w)$ as polynomials in eight variables (i.e.\ $x_0,x_1,\dots,y_3$) with coefficients in $\C[u,w]$. The coefficient matrix of this system of equations, with entries in $\C[u,w]$, must be non-singular for the procedure to work.
If the coefficient matrix is singular then, by the proof of Proposition~\ref{prop:finite}, there will be infinitely many solutions to the IK problem. Hence, we only proceed when it is nonsingular. \vspace{3mm} We choose seven of the eight linear forms and regard their vanishing sets as hyperplanes in $\PS^7(\C(u,w))$. Using linear algebra, we can easily solve for the intersection of these hyperplanes in $\PS^7(\C(u,w))$. We denote this point by $P(u,w)$, with coordinates $$P(u,w):=(x_0(u,w):\cdots:y_3(u,w)).$$ We may clear denominators and assume that the coordinates are in $\C[u,w]$ and not all are zero. The point $P$ will eventually give us all solutions to the pose of $F_4$ for the given EE transformation. \end{none} \begin{none}\label{step2} We now substitute the coordinates of $P$ into the quadratic form defining $S$. We obtain a bivariate polynomial $$f(u,w) := \sum_{i=0}^3 x_i(u,w)y_i(u,w)$$ in $\R[u,w]$. If this polynomial is identically $0$, then we choose another seven linear forms in \ref{step1}, recompute $P(u,w)$, and substitute into $S$ again. We assume we can find seven linear forms for which the polynomial is not identically $0$ (see Remark~\ref{rem:step2}); otherwise one would need a different algorithm to solve the inverse kinematics problem (this possibility was handled by \cite{pfurner} for 6R manipulators). In that case, one may still have finitely many solutions to the IK problem. Hence, we may assume that $f(u,w)$ is non-zero. Substituting the coordinates of $P$ into the unaccounted linear form (recall we have $8$ linear forms from $T(u)$ and $T(w)$ and we only used $7$ of them to find $P$) should yield a non-constant $g(u,w)\in \R[u,w]$. The reason $g$ is non-constant is that we have an IK solution, so a constant $g$ would imply $g\equiv 0$. However, $g\equiv 0$ implies that the eight hyperplanes from $T(u)$ and $T(w)$ are not in general position, and this contradicts the conclusion of Proposition~\ref{prop:finite}.
But this case was already eliminated in \ref{step1}. \end{none} \begin{none}\label{step3} Take the resultant (see \cite{cox}) of $f(u,w),g(u,w)\in \R[u,w]$ from \ref{step2} by viewing them as polynomials in $w$ over the ring $\R[u]$. The resultant is then itself a polynomial $r(u)\in \R[u]$. Since we are only considering solutions to the IK problem where the rotations are not an odd multiple of $\pi$, the solutions for the joint variables $u$ and $w$ will lie in the intersection of the plane curves defined by $f$ and $g$. The resultant $r$ cannot be identically $0$ because we have eliminated this possibility in \ref{step1} and \ref{step2}. Finally, the resultant is not a non-zero constant because we know there is a solution to the IK problem. The finitely many roots of $r$ (values for $u$ only) will yield the possible values for the joint variable $u$. In applications, only the real roots of $r$ are of interest. Substituting these real roots into $f$ and $g$ gives us pairs of polynomials in $\R[w]$, and their common real roots give the possible values of $w$ for a given real value of $u$. Thus we obtain all the real intersections of the plane curves defined by $f$ and $g$, respectively. The corresponding $P(u,w)$ for joint variables $u$ and $w$ need to lie in $\SE$. Namely, we discard all values from the possible pairs $(u,w)$ such that $$x_0(u,w)=x_1(u,w)=x_2(u,w)=x_3(u,w)=0.$$ \end{none} \begin{none} The points $P(u,w)$ computed so far are points in the intersection of the left and right workspaces for which the left and right chains may coincide at frame $F_4$. The pairs $(u,w)$ comprise two of the $6$ joint variables needed for the IK problem. For each $(u,w)$ solved in \ref{step3}, the other four joint values on the left and right chains can be computed as follows: \begin{enumerate}[1.] \item We first solve for the unknown joint value of the left chain that is not $v_2$. Say, if the unknown joint is $d_3$ (i.e.
$u=v_1$), then choose one linear form from $T(d_3)$ and solve for $d_3$ by substituting $P(u,w)$ (if the unknown joint variable is $v_1$, then we choose a linear form from $T(v_1)$). \item To solve for $v_2$, we need to first determine $T(v_2)$ (i.e.\ the linear space parametrized by $v_2$ containing the left workspace $V_L$). We perform the following steps: \begin{enumerate}[(a)] \item With the given DH-parameters, compute the Study parameters of $V_L$, i.e.\ compute $$\sigma_1(v_1)\sigma_2(v_2)\sigma_3(d_3)$$ considered as an element in $\PS^7(\C(v_1,v_2,d_3))$ where each coordinate is a polynomial in $\C[v_1,v_2,d_3]$. Suppose the Study parameters are $$(s_0(v_1,v_2,d_3):s_1(v_1,v_2,d_3):\cdots: s_7(v_1,v_2,d_3)).$$ \item Substitute the Study parameters ($s_0,s_1,\dots, s_7$) into \begin{equation}\label{eq:Tv2} (a+iv_2)x_0+(b+jv_2)x_1+(c+kv_2)x_2+\cdots +(g+ov_2)y_2+(h+pv_2)y_3 \end{equation} and rewrite it as a polynomial in $v_1,v_2$ and $d_3$ with coefficients $z_i$, $i=1,2,\ldots,12$. \item Create a $12\times 16$ matrix $B$ whose row $i$ consists of the coefficients of $a,b,\ldots,p$ in $z_i$. \item Determine an element of $\ker(B)$ such that, when its entries are substituted for $a,b,\dots,p$ in equation \eqref{eq:Tv2}, one obtains a linear form parametrized by $v_2$ (i.e.\ at least one of $i,j,\dots,p$ is non-zero). Such an element of $\ker(B)$ exists; otherwise there would be infinitely many solutions for $v_2$, whereas we assumed that the IK problem has only finitely many solutions. \item Substitute $P(u,w)$ into the obtained linear form and solve for $v_2$. \end{enumerate} \item The above steps can be carried out analogously for finding the unknown joint values of the right chain: in Item 1 we solve for the joint value that is not $v_5$, and in Item 2 we compute the Study parameters of $V_R$ and substitute them into an equation similar to \eqref{eq:Tv2} but linear in $v_5$. Thus one obtains a linear form parametrized by $v_5$.
\end{enumerate} \end{none} \begin{none} Finally, substitute all candidate real IK solutions, e.g.\ $(v_1,v_2,d_3,v_4,v_5,v_6)$ for the 2RP3R, into $\sigma_1\sigma_2\cdots\sigma_6$. The real IK solutions are those for which this product is a scalar multiple of $\sigma_E$. \end{none} The procedures in this section can be applied analogously to solve the IK of 2R2P2R, 3RP2R and 6R manipulators. For instance, for the 2R2P2R manipulator one solves joint values $(u,w)$ on the left and right chain, respectively. In particular, on the right chain, one chooses a linear form from $T(d_4)$ if $w=v_6$ and solves for $d_4$. Thus, the IK solutions are values for $(v_1,v_2,d_3,d_4,v_5,v_6)$. \section{Examples} Consider a 2RP3R manipulator with DH-parameters given in Table~\ref{tab:2rp3rDH}. The parametrized linear spaces $T(v_1)$ and $T(v_6)$ are not in the Study quadric. \begin{table}[htbp!] \centering \begin{tabular}{|c|c|c|c|c|} \hline $i$ & $\theta_i$(deg) & $d_i$ & $a_i$ & $\alpha_i$(deg)\\ \hline 1 & * & 0 & 0.1 & 90 \\ 2 & * & 0 & $-$0.425 & 0 \\ 3 & $0$ & * & $-$0.39225 & 0 \\ 4 & * & 0.10915 & 0.01 & 90 \\ 5 & * & 0.09465 & 0 & $-$90 \\ 6 & * & 0 & 0 & 0 \\ \hline \end{tabular} \caption{DH-parameters} \label{tab:2rp3rDH} \end{table} \vspace{3mm}\noindent Take a desired pose of the end-effector given by $$(190.335, 213.413, 9.36544, 164.774, -35.3968, -32.883, 74.4773,79.2444).$$ Since $l_2\neq\pm1$, choose $T(v_1)$ for the left chain. Since $a_5=0$, we have $T(v_4)\subset S$; hence we choose $T(v_6)$ for the right chain. We obtain 4 real solutions to the IK problem as shown in Table~\ref{tab:sols1} and Figure~\ref{fig1}. These solutions were obtained using Mathematica (code is available in~\cite{link}).
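The elimination at the heart of \ref{step3} can be reproduced in any computer algebra system. The following SymPy sketch uses made-up stand-ins for $f(u,w)$ and $g(u,w)$, not the actual polynomials of this example, purely to illustrate the resultant computation:

```python
import sympy as sp

u, w = sp.symbols('u w')

# made-up stand-ins for f(u,w) (the Study condition) and g(u,w)
# (the eighth linear form); NOT the polynomials of the manipulator above
f = u**2 + w**2 - 2
g = u*w - 1

# eliminate w: view f and g as polynomials in w over R[u]
r = sp.resultant(f, g, w)        # r(u) in R[u]
roots_u = sp.solve(r, u)         # candidate values of the first joint variable
```

For these stand-ins $r(u)=u^4-2u^2+1=(u^2-1)^2$, so $u=\pm1$; back-substituting each real root into $f$ and $g$ and taking their common real root gives the matching value $w=1/u$, exactly as in \ref{step3}.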
\begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|} \hline & Solution 1 & Solution 2 & Solution 3 & Solution 4\\ \hline $\theta_1$ &$-$16.1819 & 40.9555 & 60 & 79.2813 \\ $\theta_2$ &$-$70.8614 & $-$58.2515 & $-$70 & $-$71.455 \\ $d_3$ &0.0810177 & $-$0.123834 & $-$0.2 & $-$0.266949 \\ $\theta_4$ &81.6927 & 133.782 & 40 & 55.7351 \\ $\theta_5$ &$-$60.0253 & $-$9.67835 & 19 & 36.9289 \\ $\theta_6$ &32.9097 & $-$36.9601 & 67 & 51.0502 \\ \hline \end{tabular}\normalsize \caption{Real inverse kinematics solutions to the given 2RP3R manipulator} \label{tab:sols1} \end{table} \begin{figure}[htbp!]\centering \subfloat[Solution 1]{\fbox{\includegraphics[width=45mm]{2rp3r_s1}}}\hspace{1mm} \subfloat[Solution 2]{\fbox{\includegraphics[width=45mm]{2rp3r_s2}}}\\[-1.5mm] \subfloat[Solution 3]{\fbox{\includegraphics[width=45mm]{2rp3r_s3}}}\hspace{1mm} \subfloat[Solution 4]{\fbox{\includegraphics[width=45mm]{2rp3r_s4}}} \caption{Illustration of IK solutions to the 2RP3R manipulator} \label{fig1} \end{figure} \noindent Consider a 2R2P2R manipulator with given DH parameters in Table~\ref{tab:2r2p2r} and EE pose $$(-5.37543, 64.9811, -75.9243, 69.384, -59.0113, 6.15132, -22.5377, -34.995).$$ \begin{table}[ht]\centering \begin{tabular}{|c|c|c|c|c|} \hline $i$ & $\theta_i$(deg) & $d_i$ & $a_i$ & $\alpha_i$(deg) \\ \hline 1 & * & 0 & 0.2 & 23 \\ 2 & * & 0.3 & 0.2 & 23 \\ 3 & $-$45 & * & 0.3 & 45 \\ 4 & 71 & * & 0.4 & 35 \\ 5 & * & 0.3 & 0 & 20 \\ 6 & * & 0 & 0 & 0 \\ \hline \end{tabular} \caption{DH-parameters } \label{tab:2r2p2r} \end{table} \noindent The IK solutions can be computed from $T(v_1)$ and $T(v_6)$ since both do not lie in $S$. Four real solutions are obtained and are shown in Table~\ref{tab:solutionsO} and Figure~\ref{fig2}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|c|} \hline & Solution 1 & Solution 2 & Solution 3 & Solution 4 \\ \hline $\theta_1$& $-$4.2843 & 10. & 104.328 & 128.106 \\ $\theta_2$&70.5556 & 20. 
& $-$64.1116 & $-$72.0589 \\ $d_3$ &$-$0.378641 & 0.1 & 0.0119017 & $-$0.33362 \\ $d_4$ &0.639115 & $-$0.1 & 0.579609 & 1.03798 \\ $\theta_5$&$-$110.876 & 31. & 10.0365 & $-$35.978 \\ $\theta_6$&$-$167.772 & 55. & 107.717 & 157.911 \\ \hline \end{tabular} \normalsize \caption{Real inverse kinematics solutions to the given 2R2P2R manipulator }\label{tab:solutionsO} \end{table} \begin{figure}[!htbp]\centering \subfloat[Solution 1]{\fbox{\includegraphics[width=5cm]{2r2p2r_1}}}\hspace{1mm} \subfloat[Solution 2]{\fbox{\includegraphics[width=5cm]{2r2p2r_2}}}\\[-1.5mm] \subfloat[Solution 3]{\fbox{\includegraphics[width=5cm]{2r2p2r_3}}}\hspace{1mm} \subfloat[Solution 4]{\fbox{\includegraphics[width=5cm]{2r2p2r_4}}} \caption{Illustration of IK solutions to the 2R2P2R manipulator} \label{fig2} \end{figure} \section{Conclusion} A 2RP3R manipulator can be described as a composition of two rotations followed by a translation and three rotations in space. Computing the constraint varieties of such manipulators can be greatly simplified by considering the two 3-subchains namely 2RP and 3R. The parametrized linear spaces (linear sections of a Segre variety) describing the workspaces of these 3-chains are the key to the solution of the IK problem. Computing the intersection of a quasi-projective variety (identified with $\SE$) with the projection of a linear section of Segre varieties using elimination theory and linear algebra allows us to solve the IK problem algebraically. This algorithm can be applied to all general 6R/P manipulators with the same 3-subchain types (one takes the reverse joint type for the right 3-chain), namely: 2RP, 3R. Our algorithm differs from the original HuPf algorithm because it accounts for prismatic joints, and we provide a systematic and efficient way (via dual quaternions) of finding and choosing the proper parametrized linear spaces. Note that in this paper we only considered four different 6R/P manipulators. 
The methods discussed here can be modified so that they are applicable to other 6R/P manipulators; these will be dealt with in a forthcoming paper. \section*{Acknowledgements} J.~Capco was supported and funded by the Austrian Science Fund (FWF): Project P28349-N32 and W1214-N14 Project DK9. M.J.C.~Loquias and S.M.M.~Manongsong acknowledge the Office of the Chancellor of the University of the Philippines Diliman, through the Office of the Vice Chancellor for Research and Development, for funding support through the Outright Research Grant.
\section{Introduction} Investigation of the reaction fronts formed in the reaction-diffusion processes of the type A $+$ B $\to$ 0\ with initially separated species A and B has attracted much recent interest. This is because not only does the kinetics of such systems exhibit many surprising features \cite{G-R,Overview,HaimExperiment,Hav95,Koza96}, but they are also amenable to experimental studies \cite{HaimExperiment,Experiment,HaimKudowa}. A standard way to treat the initially separated problem analytically is to solve the partial differential equations \cite{G-R} \begin{equation} \label{GR} \begin{array}{rcl} \PT{\rho_{\mathrm{A}}} &=& D_{\mathrm{A}} \PXX{\rho_{\mathrm{A}}} - R \,, \\[2ex] \PT{\rho_{\mathrm{B}}} &=& D_{\mathrm{B}} \PXX{\rho_{\mathrm{B}}} - R \,, \end{array} \end{equation} with the initial state given by \begin{equation} \label{IniCond} \begin{array}{rcl} \rho_{\mathrm{A}}(x,t=0) &=& a_0 H(-x) \,,\\[1ex] \rho_{\mathrm{B}}(x,t=0) &=& b_0 H(x) \,, \end{array} \end{equation} where $\rho_{\mathrm{A}}(x,t)$ and $\rho_{\mathrm{B}}(x,t)$ are the mean local concentrations of A's and B's, $R$ is the macroscopic reaction rate, $H(x)$ denotes the Heaviside step function, and $a_0$, $b_0$, $D_{\mathrm{A}}$ and $D_{\mathrm{B}}$ are some constants related to the initial concentrations of species A and B and their diffusion coefficients, respectively. Equations (\ref{GR}) present the macroscopic approach to the problem. However, it has not yet been established how to relate the macroscopic reaction rate $R$ to the microscopic picture of the initially separated reaction-diffusion systems below or at the critical dimension $d_{\mathrm{c}}=2$. Dimensional \cite{CD-Steady,Krapivsky,CDC91} and renormalisation group analyses \cite{RG-Rapid,RG-Front,RG-Homo} lead to the conclusion that above $d_{\mathrm{c}}$ one can adopt the mean-field approximation $R=k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$, with $k$ being a constant.
For 2D systems one expects logarithmic corrections to the mean-field picture \cite{Krapivsky,CDC91,RG-Homo}. One-dimensional systems are usually studied by examining microscopic models in which, upon contact, the members of a pair A--B react with some probability $p$ \cite{CDC91,RG-Rapid,RG-Front,RG-Homo,Bstatic,Cornell95,LAHS3-8,Barkema,Argentina}. The analytical form of $R(x,t)$ was derived for $D_{\mathrm{B}} = 0$ \cite{Bstatic}, and for $D_{\mathrm{B}} > 0$ and $|x| \to \infty$ \cite{RG-Front}. There are, however, several techniques which enable one to derive much useful information from (\ref{GR}) even for $d\le d_{\mathrm{c}}$, i.e. when the explicit form of $R$ remains unknown. They concentrate on the asymptotic, long-time behaviour of the reaction-diffusion systems, and include the renormalisation group analysis \cite{RG-Rapid,RG-Front,RG-Homo,Barkema}, the scaling ansatz \cite{G-R,CDC91}, the quasistationary approximation \cite{CD-Steady,BenRedner}, and our approach of Ref.~\cite{Koza}.
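For orientation, the initial-value problem (\ref{GR})--(\ref{IniCond}) with the mean-field rate $R=k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$ is easy to integrate directly. The following explicit finite-difference sketch (all numerical values are our own illustrative choices, not taken from the references) produces the reaction front whose properties are analysed below:

```python
import numpy as np

# Explicit (FTCS) integration of eqs. (GR) with the mean-field rate
# R = k*rho_A*rho_B and the step initial data (IniCond).
Da = Db = k = a0 = b0 = 1.0
N, dx, dt, steps = 1000, 0.2, 0.008, 5000     # dt < dx**2/(2*Da) for stability
x = (np.arange(N) - N // 2) * dx
rho_a = np.where(x < 0, a0, 0.0)
rho_b = np.where(x >= 0, b0, 0.0)

def lap(f):
    """Second difference with zero-flux (Neumann) boundaries."""
    g = np.empty_like(f)
    g[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    g[0], g[-1] = f[1] - f[0], f[-2] - f[-1]
    return g / dx**2

for _ in range(steps):
    R = k * rho_a * rho_b
    rho_a = rho_a + dt * (Da * lap(rho_a) - R)
    rho_b = rho_b + dt * (Db * lap(rho_b) - R)

# the reaction front x_f(t) is where R attains its maximum
xf = x[np.argmax(k * rho_a * rho_b)]
```

For the symmetric parameter choice above the front stays at the origin; with asymmetric $D_{\mathrm{A}}$, $D_{\mathrm{B}}$, $a_0$, $b_0$ it drifts as $x_{\mathrm{f}}(t)\propto t^{1/2}$.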
According to the scaling ansatz \cite{G-R}, the long-time behaviour of the reaction-diffusion system inside the reaction layer can be described with the help of some scaling functions $S_{\mathrm{A}}$, $S_{\mathrm{B}}$ and $S_{\mathrm{R}}$ through \begin{eqnarray} \label{SA} \rho_{\mathrm{A}}(x,t) &=&\eta_{\mathrm{A}} t^{-\gamma_{\mathrm{A}}}S_{\mathrm{A}}\BRA{x - x_{\mathrm{f}}(t) \over w(t)}\,,\\[1ex] \label{SB} \rho_{\mathrm{B}}(x,t) &=& \eta_{\mathrm{B}} t^{-\gamma_{\mathrm{B}}}S_{\mathrm{B}}\BRA{x - x_{\mathrm{f}}(t) \over w(t)}\,,\\[1ex] \label{SR} R(x,t) &=& \eta_{\mathrm{R}} t^{-\beta} S_{\mathrm{R}}\BRA{x - x_{\mathrm{f}}(t) \over w(t)}\,, \end{eqnarray} where $x_{\mathrm{f}}(t)\propto t^{1/2}$ denotes the point at which the reaction rate $R$ attains its maximal value, $w(t) \propto t^\alpha \ll t^{1/2}$ is the width of the reaction zone, $\eta_{\mathrm{A}}$, $\eta_{\mathrm{B}}$ and $\eta_{\mathrm{R}}$ are some parameters independent of $x$ and $t$, and the exponents $\alpha$, $\beta$, $\gamma_{\mathrm{A}}$ and $\gamma_{\mathrm{B}}$ are some positive constants given, for $R \propto \rho_{\mathrm{A}}\rho_{\mathrm{B}}$ and nonzero diffusion constants, by $\alpha = \frac{1}{6}$, $\beta = \frac{2}{3}$ and $\gamma_{\mathrm{A}} = \gamma_{\mathrm{B}} = \frac{1}{3}$. The quasistationary approximation \cite{CD-Steady,BenRedner} consists in the assumption that at sufficiently long times the kinetics of the front is governed by two characteristic time scales. One of them, $\tau_{\mathrm{J}} \propto (\!{\d}\log\! J/ \!{\d} t)^{-1} \propto t$, controls the rate of change in the diffusive current $J$ of particles arriving at the reaction layer. The other one, $\tau_{\mathrm{F}} \propto w^2 \propto t^{2\alpha}$, is the equilibration time of the reaction front. If $\alpha < 1/2$ then $\tau_{\mathrm{F}}/\tau_{\mathrm{J}} \to 0$ as $t \to \infty$.
Therefore, as $t\to\infty$, in the vicinity of $x_{\mathrm{f}}$ the left-hand sides of (\ref{GR}) become negligibly small compared to other terms. Consequently, if $D_{\mathrm{A}}$ and $D_{\mathrm{B}}$ are both nonzero, the asymptotic form of $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ inside the reaction layer is governed by much simpler equations \begin{equation} \label{STAC} \begin{array}{rcl} D_{\mathrm{A}} \PXX{\rho_{\mathrm{A}}} &=& R \,,\\[2ex] D_{\mathrm{B}} \PXX{\rho_{\mathrm{B}}} &=& R \,, \end{array} \end{equation} which are to be solved with the boundary conditions \begin{equation} \label{BC} \begin{array}{rcl} \partial\rho_{\mathrm{A}}/\partial x \to -J_{\mathrm{A}}(t),\; \rho_{\mathrm{B}} \to 0 &\; \mbox{as}\;& x \to -\infty\,,\\[1ex] \partial\rho_{\mathrm{B}}/\partial x \to J_{\mathrm{B}}(t),\;\rho_{\mathrm{A}} \to 0 &\;\mbox{as}\;& x \to +\infty\,. \end{array} \end{equation} The most important feature of the quasistationary equations (\ref{STAC}) is that they depend only on $x$, with time $t$ being a parameter entering their solutions $\rho_{\mathrm{A}}(x,t)$ and $\rho_{\mathrm{B}}(x,t)$ only through the time-dependent boundary currents $J_{\mathrm{A}} = J_{\mathrm{B}}$ whose dependence on $t$, $D_{\mathrm{A}}$, $D_{\mathrm{B}}$, $a_0$ and $b_0$ has recently been derived analytically \cite{Koza}. In a recent paper \cite{Koza} we employed the quasistatic approximation to develop a new method of investigating the asymptotic kinetics of the initially separated reaction-diffusion systems. A peculiar feature of that approach is that it concentrates mainly on the properties of the system outside the reaction zone. In this way, without imposing any special restrictions on the form of $R$, it relates {\em exactly\/} many quantities of physical interest to the values of external parameters $a_0$, $b_0$, $D_{\mathrm{A}}$ and $D_{\mathrm{B}}$, which will enable us to investigate the limit $D_{\mathrm{B}}\to 0$ analytically.
In particular it was shown that there exist two limits, $C_{\mathrm{f}} = \lim_{t\to\infty} x_{\mathrm{f}}(t)/\sqrt{t}$ and $C_{\mathrm{J}} = \lim_{t\to\infty} J(t)\sqrt{t}$. Given $a_0$, $b_0$, $D_{\mathrm{A}}$ and $D_{\mathrm{B}}$, the value of $C_{\mathrm{f}}$ can be computed by solving \begin{equation} \label{cfeq} \Phi\!\left( \frac{-C_{\mathrm{f}}}{2\sqrt{D_{\mathrm{A}}}} \right) = \frac{a_0\sqrt{D_{\mathrm{A}}}}{b_0\sqrt{D_{\mathrm{B}}}} \: \Phi\!\left( \frac{C_{\mathrm{f}}}{2\sqrt{D_{\mathrm{B}}}} \right) \,, \end{equation} where \begin{equation} \label{defphi} \Phi (x) \equiv \SQR{1 - \erf{x}}\exp(x^{2})\,, \end{equation} and $\;\erf{x} \equiv 2\pi^{-1/2}\!\int_{0}^{x} \exp(-\eta^2) {\d}\eta\;$ is the error function \cite{Luke}. Then $C_{\mathrm{J}}$ can be calculated by solving \begin{equation} \label{CF} C_{\mathrm{f}} = 2\sqrt{D_{\mathrm{A}}}\ierfs{(a_0 - C_{\mathrm{A}})/C_{\mathrm{A}}} = 2\sqrt{D_{\mathrm{B}}}\ierfs{(C_{\mathrm{B}}-b_0)/C_{\mathrm{B}}} \end{equation} and \begin{equation} \label{CJ} C_{\mathrm{J}} = C_{\mathrm{A}}\sqrt{D_{\mathrm{A}}\over\pi}\exp\!\BRA{-\frac{C_{\mathrm{f}}^2}{4D_{\mathrm{A}}}} = C_{\mathrm{B}}\sqrt{D_{\mathrm{B}}\over\pi}\exp\!\BRA{-\frac{C_{\mathrm{f}}^2}{4D_{\mathrm{B}}}} \,, \end{equation} where $C_{\mathrm{A}}$ and $C_{\mathrm{B}}$ are some constants controlling the form of $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ outside the reaction zone; specifically, for $x \ll x_{\mathrm{f}} - w$ we have \begin{equation} \label{ra} \rho_{\mathrm{A}}(x,t) = a_0 - C_{\mathrm{A}}\SQR{ \erf{x / \!\sqrt{4D_{\mathrm{A}} t}} + 1}\,, \end{equation} and for $x \gg x_{\mathrm{f}} + w$ there is \begin{equation} \label{rb} \rho_{\mathrm{B}}(x,t) = b_0 + C_{\mathrm{B}} \SQR{\erf{ x / \!\sqrt{4D_{\mathrm{B}} t}} - 1}\,.
\end{equation} It was confirmed by several methods, including the renormalisation group analysis \cite{RG-Rapid,RG-Front}, numerical simulations \cite{J-E} and heuristic arguments \cite{Koza}, that the values of the exponents $\alpha$, $\beta$, $\gamma_{\mathrm{A}}$ and $\gamma_{\mathrm{B}}$ as well as the form of the scaling functions $S_{\mathrm{A}}$, $S_{\mathrm{B}}$ and $S_{\mathrm{R}}$ do not depend on $D_{\mathrm{A}}$, $D_{\mathrm{B}}$, $a_0$ and $b_0$ if the values of these parameters are nonzero. However, when one of the components is immobile (or `static'), the asymptotic kinetics of the reaction front can change dramatically. For example, in the mean-field approximation the width of the front converges to a stationary value, and the reaction rate at $x_{\mathrm{f}}$ decreases as $t^{-1/2}$, which corresponds to $\alpha = 0$ and $\beta = \frac{1}{2}$ \cite{Hav95,J-E}. We therefore have two asymptotic universality classes: one characteristic of the `dynamic' systems in which both components diffuse, and the `quasistatic' one observed if one of the diffusion constants is zero. Henceforth we shall assume $D_{\mathrm{A}}>0$, so the two asymptotic universality classes will be distinguished by determining whether $D_{\mathrm{B}}=0$ or $D_{\mathrm{B}}>0$. Although the peculiar kinetics of the systems with $D_{\mathrm{B}} = 0$ was noticed quite early, so far nearly all of the research has concentrated on the systems in which both of the diffusion constants $D_{\mathrm{A}}$ and $D_{\mathrm{B}}$ are nonzero. Closer inspection of the powerful methods developed in the last decade to investigate the reaction-diffusion systems reveals that only the scaling ansatz, which is a purely mathematical concept, can be trivially employed to investigate both kinds of systems.
However, since even this relatively simple approach has so far been carried out only for the `dynamic' problem \cite{G-R,CDC91}, the theories of the two asymptotic universality classes remain practically disconnected. Several factors have led to this situation. Theoretical results \cite{Bstatic} derived for the case $D_{\mathrm{B}} = 0$ utilized the basic property of such systems, the immobility of particles B, in a way that cannot be extended to systems with $D_{\mathrm{B}} > 0$. On the other hand, most of the fundamental techniques developed for the case $D_{\mathrm{B}}>0$ have been based on the quasistationary approximation, which requires that the ratio $J_{\mathrm{A}}/J_{\mathrm{B}}$ of two opposite currents of particles A and B entering the reaction zone should asymptotically go to 1. This condition cannot be met by the systems with $D_{\mathrm{B}}=0$, as in this case $J_{\mathrm{B}}(t) \equiv 0$. The aim of our paper is to unify our understanding of these two kinds of reaction-diffusion systems. The procedure we are using consists in a detailed examination of the case $D_{\mathrm{B}}=0$ and a comparison of the results with those already derived for $D_{\mathrm{B}}>0$. We study the systems with $D_{\mathrm{B}}=0$ by means of various methods and for arbitrary (positive) values of $a_0$, $b_0$, and $D_{\mathrm{A}}$. In particular we show the counterpart of the quasistationary approximation (\ref{STAC}) which should be used if $D_{\mathrm{B}} = 0$. We argue that the form of these equations, as well as their boundary conditions, determine the properties of the asymptotic universality classes. In subsequent sections we study these properties for the systems with $D_{\mathrm{B}}=0$ using the heuristic theory of Ref.~\cite{Koza}, the scaling ansatz, the mean-field approximation, and numerical analysis.
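As a practical aside, the transcendental equation (\ref{cfeq}) can be solved with a standard root finder once one notices that the function $\Phi$ of (\ref{defphi}) is exactly the scaled complementary error function, available in SciPy as erfcx. The sketch below (all parameter values are our own illustration) probes the small-$D_{\mathrm{B}}$ regime:

```python
import numpy as np
from scipy.special import erfcx        # erfcx(x) = exp(x**2)*erfc(x), i.e. Phi(x)
from scipy.optimize import brentq

def residual(cf, Da, Db, a0, b0):
    """Residual of eq. (cfeq):
    Phi(-cf/(2 sqrt(Da))) - (a0 sqrt(Da))/(b0 sqrt(Db)) * Phi(cf/(2 sqrt(Db)))."""
    lhs = erfcx(-cf / (2.0 * np.sqrt(Da)))
    rhs = (a0 * np.sqrt(Da)) / (b0 * np.sqrt(Db)) * erfcx(cf / (2.0 * np.sqrt(Db)))
    return lhs - rhs

# illustrative parameters: D_A = 1/2, a0 = b0 = 1, and a very small D_B
cf = brentq(residual, 1e-6, 5.0, args=(0.5, 1e-8, 1.0, 1.0))
print(cf)   # approaches the D_B -> 0 value of about 0.506
```

Using erfcx instead of computing $[1-\erf{x}]\exp(x^2)$ directly avoids overflow of the $\exp(x^2)$ factor for the large arguments that occur when $D_{\mathrm{B}}$ is small.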
Although some aspects of the systems with $D_{\mathrm{B}}=0$ turn out to be the same as those of the systems with $D_{\mathrm{B}}>0$, our analysis implies that these two cases should always be considered separately. \section{The limit $D_{\mathrm{B}} \to 0$} \label{LIMIT} Consider equations (\ref{cfeq}) -- (\ref{rb}) with the values of $D_{\mathrm{A}}$, $a_0$ and $b_0$ fixed at some positive values, and $D_{\mathrm{B}}$ going to 0. For physical reasons we expect that as $D_{\mathrm{B}}$ goes to 0, $C_{\mathrm{f}}$ converges to some nonzero limiting value, and so the argument of $\Phi$ on the right-hand side of (\ref{cfeq}) diverges to infinity. We can therefore use an asymptotic property of the error function, $x(1-\erf{x})\exp(x^2) \to 1/\sqrt{\pi}$ as $x\to\infty$ \cite{Luke}, and reduce (\ref{cfeq}) to \begin{equation} \label{cfeq0} \Phi\!\left( \frac{-C_{\mathrm{f}}}{2\sqrt{D_{\mathrm{A}}}} \right) = \frac{2 a_0 \sqrt{D_{\mathrm{A}}}}{\sqrt{\pi}b_0 C_{\mathrm{f}}} \,. \end{equation} Since $\Phi(x)$ diminishes monotonically from $\infty$ to $0$ as $x$ grows from $-\infty$ to $\infty$, the above equation has a unique, positive solution. Consequently, (\ref{CF}) and (\ref{CJ}) imply that $C_{\mathrm{J}}$ and $C_{\mathrm{A}}$ also converge to some positive values, but $C_{\mathrm{B}}$ rapidly diverges to infinity. One can now use (\ref{CF}), (\ref{CJ}) and (\ref{cfeq0}) to arrive, after some algebra, at an important relation, valid only if $D_{\mathrm{B}} = 0$: \begin{equation} \label{relcfcj} b_0 C_{\mathrm{f}} = 2C_{\mathrm{J}}\,. \end{equation} This equation has a very natural physical interpretation. On the one hand, the total number $M(t)$ of reactions occurring by time $t$ is asymptotically equal to \begin{equation} \int_0^t\! J_{\mathrm{A}}(\tau) {\d}\tau \,\sim\, \int_0^t\! C_{\mathrm{J}}/\sqrt{\tau}\,{\d}\tau \,\sim\, 2C_{\mathrm{J}}\sqrt{t}\,.
\end{equation} On the other hand, however, for $x \gg x_{\mathrm{f}} + w$ we have $\rho_{\mathrm{B}} \sim b_0$, and for $x \ll x_{\mathrm{f}} - w$ we expect $\rho_{\mathrm{B}} \sim 0$. Neglecting terms of order $b_0 w \ll b_0t^{1/2}$ we thus conclude that $M(t)$ can as well be estimated by \begin{equation} \int_{-\infty}^\infty [\rho_{\mathrm{B}}(x,0)-\rho_{\mathrm{B}}(x,t)] {\d} x \,\sim\, b_0 x_{\mathrm{f}}(t) \,\sim\, b_0 C_{\mathrm{f}} \sqrt{t} \,, \end{equation} which leads to (\ref{relcfcj}). Our theory is consistent with the numerical simulations of Larralde {\em et al} \cite{Bstatic}, who considered the one-dimensional system (i.e. for $d<d_{\mathrm{c}}$) with $D_{\mathrm{B}}=0$, $D_{\mathrm{A}} = 1/2$ and $a_0/b_0 = 1$. They found that for $t = 500$, 1000 and 5000 the value of $x_{\mathrm{f}}(t)$ was approximately equal to $11\pm 0.5$, $16\pm 0.5$ and $36\pm 0.5$, respectively, so that $x_{\mathrm{f}}/\sqrt{t} \approx$ $0.492\pm0.022$, $0.506\pm0.016$ and $0.509\pm0.007$, respectively. This is in excellent agreement with our equation (\ref{cfeq0}), whose numerical solution reads $C_{\mathrm{f}} \approx 0.5060$. Below, in Section \ref{NUMS}, we will also verify our theory using the mean-field approximation (i.e.~for $d > d_{\mathrm{c}}$). \section{General consequences of the scaling ansatz} \label{SectionSA} Let $D_{\mathrm{B}} =0$ and $D_{\mathrm{A}}$, $a_0$ and $b_0$ take on arbitrary (positive) values. Assume that the asymptotic solutions of the G\'alfi and R\'acz problem (\ref{GR}) in the long-time limit take on the scaling form (\ref{SA}) -- (\ref{SR}) with $\alpha < 1/2$. Inserting (\ref{SA}) and (\ref{SB}) into (\ref{GR}) and taking the limit $t\to\infty$ we find that at any $x$ such that $|x-x_{\mathrm{f}}|\ll t^{1/2}$ we have \begin{equation} \label{prop1} \PT{\rho_{\mathrm{A}}} \,\propto\, t^{-\gamma_{\mathrm{A}} - \alpha - 1/2} \quad\mbox{and}\quad D_{\mathrm{A}}\PXX{\rho_{\mathrm{A}}} \,\propto\, t^{-\gamma_{\mathrm{A}} -2\alpha}\,.
\end{equation} Hence, because $\alpha < 1/2$, in the limit $t\to\infty$ the term $\partial\rho_{\mathrm{A}}/\partial t$ becomes negligibly small compared to $D_{\mathrm{A}}\partial^2\rho_{\mathrm{A}}/\partial x^2$. This implies that for $|x-x_{\mathrm{f}}|\ll t^{1/2}$ the form of the scaling functions can be determined by solving \begin{equation} \label{QSTAC} \begin{array}{rcl} D_{\mathrm{A}} \PXX{\rho_{\mathrm{A}}} &=& R\,,\\[2ex] -\PT{\rho_{\mathrm{B}}} &=& R \,. \end{array} \end{equation} The appropriate boundary conditions for these equations read \begin{equation} \label{bcro} \begin{array}{rcl} \partial\rho_{\mathrm{A}}/\partial x \to -J_{\mathrm{A}}(t),\; \rho_{\mathrm{B}} \to 0 &\;\mbox{as}\;& x \to -\infty\,,\\[1ex] \rho_{\mathrm{A}} \to 0,\; \rho_{\mathrm{B}} \to b_0 &\;\mbox{as}\;& x \to +\infty\,, \end{array} \end{equation} where $J_{\mathrm{A}}(t) \propto t^{-1/2}$ \cite{Koza}. Equations (\ref{QSTAC}) are the general counterparts of the quasistatic approximation (\ref{STAC}) if $D_{\mathrm{A}}>0$ and $D_{\mathrm{B}}=0$. The boundary conditions (\ref{bcro}) determine the form of the boundary conditions for $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ except for a constant multiplier. We will take advantage of the fact that we are at liberty to multiply $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ by arbitrary constants (which can be compensated for by appropriate changes in $\eta_{\mathrm{A}}$ and $\eta_{\mathrm{B}}$) and assume that the boundary conditions for $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ read \begin{equation} \label{bcS} \begin{array}{rcl} S_{\mathrm{A}}'(z) \to -1,\; S_{\mathrm{B}}(z) \to 0 &\;\mbox{as}\;& z\to -\infty\,,\\[1ex] S_{\mathrm{A}} (z)\to 0,\; S_{\mathrm{B}}(z) \to 1 &\;\mbox{as}\;& z \to +\infty\,, \end{array} \end{equation} where the prime denotes the derivative with respect to $z$. Equations (\ref{bcS}) immediately imply that \begin{eqnarray} \label{rex1} \gamma_{\mathrm{B}} &=& 0\,,\\[1ex] \label{eb} \eta_{\mathrm{B}} &=& b_0\,. 
\end{eqnarray} The diffusive current $J_{\mathrm{A}}$ of particles A for $-\sqrt{D_{\mathrm{A}} t} \ll x \ll x_{\mathrm{f}}-w$ is asymptotically expected to be equal to $C_{\mathrm{J}}/\sqrt{t}$ \cite{Koza}. On the other hand, however, we can calculate it by inserting (\ref{SA}) into $J_{\mathrm{A}} = -D_{\mathrm{A}}\partial\rho_{\mathrm{A}}/\partial x$, which leads to \begin{eqnarray} \label{rex2} \gamma_{\mathrm{A}} + \alpha &=& 1/2\,,\\[1ex] \label{ea} \eta_{\mathrm{A}} &=& C_{\mathrm{w}} C_{\mathrm{J}} / D_{\mathrm{A}}\,, \end{eqnarray} where we denoted $C_{\mathrm{w}} \equiv \lim_{t\to\infty} w(t)/t^{\alpha}$. Upon inserting the scaling ansatz into the first of equations (\ref{QSTAC}) we come to $\gamma_{\mathrm{A}} + 2\alpha = \beta$. Combining it with (\ref{rex2}) we arrive at \begin{equation} \label{rex3} \beta - \alpha = 1/2\,. \end{equation} We thus see that the scaling ansatz imposes on the values of the scaling exponents three relations (\ref{rex1}), (\ref{rex2}) and (\ref{rex3}). Only the first of them takes on a different form if $D_{\mathrm{B}} >0$, as in that case, by symmetry, $\gamma_{\mathrm{A}} = \gamma_{\mathrm{B}}$ \cite{CDC91}. Equations (\ref{QSTAC}) imply that inside the reaction zone \begin{equation} \label{QSTAC2} D_{\mathrm{A}} \PXX{\rho_{\mathrm{A}}} + \PT{\rho_{\mathrm{B}}} = 0\,. \end{equation} Inserting into it the scaling ansatz (\ref{SA}) and (\ref{SB}), and carrying out our `standard' limiting procedure ($t\to\infty$ at any $x$ such that $|x-x_{\mathrm{f}}(t)|\ll t^{1/2}$) we conclude, after some algebra involving (\ref{relcfcj}), that $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ are related by a simple formula $ S_{\mathrm{A}}''(z) \,=\, S_{\mathrm{B}}'(z)$. Upon integrating this equation and using the boundary conditions (\ref{bcS}) to determine the integration constant we finally come to \begin{equation} \label{sasb} S_{\mathrm{A}}'(z) = S_{\mathrm{B}}(z) - 1\,.
\end{equation} Note that for $D_{\mathrm{B}}>0$ the scaling functions $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ satisfy an entirely different relation $S_{\mathrm{A}}(z) = S_{\mathrm{B}}(-z)$, which comes from the asymptotic symmetry of the system. Note also that relations (\ref{QSTAC2}) and (\ref{sasb}) are independent of $R$ and, consequently, of $d$. As a quick application of (\ref{sasb}) consider a one-dimensional system with $D_{\mathrm{B}}=0$. It is known \cite{Bstatic} that in this case $S_{\mathrm{B}} = \frac{1}{2}(1+\erf{z})$. So (\ref{sasb}) and (\ref{bcS}) lead to \begin{equation} S_{\mathrm{A}}(z) = \half \ierfc{z} \equiv \half \left[ z \erf{z} - z + \pi^{-1/2} \exp(-z^2) \right] \,. \end{equation} \section{The mean-field systems with $D_{\mathrm{B}} = 0$} \label{SMFA} Consider a system governed by (\ref{GR}) with $D_{\mathrm{A}} > 0$, $D_{\mathrm{B}} = 0$ and $R = k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$. This particular form of $R$ immediately implies $\gamma_{\mathrm{A}} + \gamma_{\mathrm{B}} = \beta$. Using now (\ref{rex1}), (\ref{rex2}) and (\ref{rex3}) we conclude that \begin{equation} \alpha = 0,\quad \beta = 1/2,\quad \gamma_{\mathrm{A}} = 1/2,\quad \mbox{and} \quad \gamma_{\mathrm{B}} = 0\,. \end{equation} These values are consistent with numerical simulations \cite{Hav95} and heuristic ar\-gu\-ments \cite{J-E}. It is easy to see that if length and time are measured in units of $\lambda= (D_{\mathrm{A}}/kb_0)^{1/2}$ and $\tau = 1/k a_0$, respectively, then the solutions of (\ref{QSTAC}) reduce to those obtained for the particular case $D_{\mathrm{A}}=a_0=b_0=k=1$. Thus, in investigating the mean-field reaction-diffusion systems, it is sufficient to examine in detail only a system with some convenient values of the material parameters $D_{\mathrm{A}}$, $a_0$, $b_0$ and $k$. The solutions for arbitrary values of these parameters can then easily be found by an appropriate choice of the units.
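The one-dimensional closed form quoted above can be checked directly against relation (\ref{sasb}): the derivative of $\half \ierfc{z}$ must equal $S_{\mathrm{B}}(z)-1$ for $S_{\mathrm{B}} = \frac{1}{2}(1+\erf{z})$. A minimal Python sketch of this consistency check (our illustration, not part of the original analysis; it uses only the standard library and a central finite difference):

```python
import math

def S_B(z):
    # Known d = 1 static-B scaling function: S_B = (1 + erf z) / 2
    return 0.5 * (1.0 + math.erf(z))

def S_A(z):
    # S_A = (1/2) ierfc(z) = (1/2) [z erf z - z + exp(-z^2) / sqrt(pi)]
    return 0.5 * (z * math.erf(z) - z + math.exp(-z * z) / math.sqrt(math.pi))

def check(zmin=-3.0, zmax=3.0, n=121, h=1e-5):
    """Largest deviation |S_A'(z) - (S_B(z) - 1)| on a uniform grid."""
    worst = 0.0
    for i in range(n):
        z = zmin + (zmax - zmin) * i / (n - 1)
        dSA = (S_A(z + h) - S_A(z - h)) / (2.0 * h)  # central difference
        worst = max(worst, abs(dSA - (S_B(z) - 1.0)))
    return worst

if __name__ == "__main__":
    print("max |S_A' - (S_B - 1)| =", check())
```

The deviation stays at the level of the finite-difference truncation error, many orders of magnitude below the function values, and the boundary conditions (\ref{bcS}) are recovered at large $|z|$.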
This rescaling property guarantees that any asymptotic length $l$ satisfies $l(D_{\mathrm{A}},a_0,b_0,k) = \lambda l(1,1,1,1)$. In particular, the asymptotic width of the reaction zone is given by \begin{equation} \label{w} w = \tilde{w}\sqrt{\frac{D_{\mathrm{A}}}{k b_0}}\,, \end{equation} where $\tilde{w}$ denotes the asymptotic width of the reaction zone in the system with $D_{\mathrm{A}}=a_0=b_0=k=1$. Numerical estimation of this parameter yields $\tilde{w}\approx 1.47$ (see the next section for more details). Essentially the same line of reasoning was used in Ref.~\cite{Koza} to show that in the mean-field system with $D_{\mathrm{B}}>0$ the asymptotic width of the reaction zone is given by $w = \W_1(D_{\mathrm{A}} D_{\mathrm{B}})^{1/3}(k C_{\mathrm{J}})^{-1/3}t^{1/6}$. However, in that paper it was incorrectly assumed that $\W_1~\equiv~1$. We estimated the correct value of $\W_1$ numerically, obtaining $\W_1\approx1.38$. Inserting now the scaling ansatz into (\ref{QSTAC}) and using $R = k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$ we conclude that $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ satisfy \begin{eqnarray} \label{eSA} S_{\mathrm{A}}'' &=& \tilde{w}^2 S_{\mathrm{A}} S_{\mathrm{B}}\,,\\[1ex] \label{eSB} S_{\mathrm{B}}' &=& \tilde{w}^2 S_{\mathrm{A}} S_{\mathrm{B}}\,. \end{eqnarray} Upon inserting (\ref{sasb}) into (\ref{eSA}) we arrive at the nonlinear differential equation for the mean-field scaling function $S_{\mathrm{A}}$ \begin{equation} \label{saeq} S_{\mathrm{A}}'' = \tilde{w}^2 S_{\mathrm{A}}(S_{\mathrm{A}}'+1)\,. \end{equation} We can use it to estimate the behaviour of $S_{\mathrm{A}}(z)$ and $S_{\mathrm{R}}(z)$ for $z \gg 1$. In this region we expect $|S_{\mathrm{A}}'| \ll 1$ and $S_{\mathrm{B}} \approx 1$, so (\ref{saeq}) reduces to $S_{\mathrm{A}}'' = \tilde{w}^2 S_{\mathrm{A}}$, which implies \begin{equation} \label{tailA} S_{\mathrm{R}}(z) \propto S_{\mathrm{A}}(z) \propto \exp(-\tilde{w} z) \,.
\end{equation} We can also investigate the tail of particles B which forms for $z\ll-1$. In this region we can assume $S_{\mathrm{A}}(z) \sim -z$, so that (\ref{eSB}) reduces to $S_{\mathrm{B}}' = -\tilde{w}^2 zS_{\mathrm{B}}$, which leads to \begin{equation} \label{tailB} S_{\mathrm{B}}(z) \propto \exp(-\half(\tilde{w} z)^2) \quad \mbox{and} \quad S_{\mathrm{R}}(z) \propto |z|\exp(-\half(\tilde{w} z)^2)\,. \end{equation} Thus, if $D_{\mathrm{B}} = 0$, the mean-field form of the scaling function $S_{\mathrm{R}}(z)$ is asymmetric, whereas for $D_{\mathrm{B}} > 0$ this function is always symmetric, which is most easily seen in the symmetric case $D_{\mathrm{A}} = D_{\mathrm{B}}$ and $a_0 = b_0$. We will now investigate some properties of the limit $D_{\mathrm{B}} \to 0$ which could not be analysed within the framework of the general theory presented in Section~\ref{LIMIT}. As we already showed in our previous paper \cite{Koza}, if $D_{\mathrm{B}}>0$ then the mean-field density of particles B at $x_{\mathrm{f}}$ is asymptotically proportional to $D_{\mathrm{B}}^{-2/3}t^{-1/3}$. As this quantity has to be less than $b_0$, we conclude that $D_{\mathrm{B}}^{-2/3} t^{-1/3} < \mbox{const}_1$. Therefore the time $t^*$ at which the mean-field system enters the long-time regime must satisfy \begin{equation} \label{time} t^* > (D_{\mathrm{B}})^{-2}\cdot\mbox{const}_2\,. \end{equation} Only for times satisfying this relation can one use the quasistatic approximation (\ref{STAC}). However, as $D_{\mathrm{B}}\to 0$, the right hand side of (\ref{time}) diverges to infinity. Consequently, as $D_{\mathrm{B}}$ goes to 0, $t^*$ diverges to infinity and in the limiting case $D_{\mathrm{B}} = 0$ the kinetics of the system can never be described with the quasistatic approximation equations (\ref{STAC}). 
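The tail formulae (\ref{tailA}) and (\ref{tailB}) can also be verified by integrating (\ref{saeq}) directly. The following Python sketch (our illustration, not the method used in the paper; a fourth-order Runge--Kutta integration seeded with the linearized tail at large $z$, with $\tilde{w}=1.47$ taken from the numerical estimate quoted above) recovers both the exponential tail of $S_{\mathrm{A}}$ for $z\gg 1$ and the Gaussian tail of $S_{\mathrm{B}}$ for $z\ll -1$:

```python
import math

W = 1.47            # reduced width w-tilde, taken from the numerical estimate above
DZ = 1e-3           # grid step; z decreases during the integration
Z_HI, Z_LO = 10.0, -8.0

def rhs(S, v):
    """Right-hand side of S_A'' = W^2 S_A (S_A' + 1) as a first-order system."""
    return v, W * W * S * (v + 1.0)

def integrate():
    """RK4 integration backward in z, seeded with S_A ~ exp(-W z) at z = Z_HI."""
    n = int(round((Z_HI - Z_LO) / DZ))
    S = math.exp(-W * Z_HI)
    v = -W * S                     # linearized tail: S_A' = -W S_A
    Ss, vs = [S], [v]
    h = -DZ
    for _ in range(n):
        k1S, k1v = rhs(S, v)
        k2S, k2v = rhs(S + 0.5 * h * k1S, v + 0.5 * h * k1v)
        k3S, k3v = rhs(S + 0.5 * h * k2S, v + 0.5 * h * k2v)
        k4S, k4v = rhs(S + h * k3S, v + h * k3v)
        S += h * (k1S + 2.0 * k2S + 2.0 * k3S + k4S) / 6.0
        v += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        Ss.append(S)
        vs.append(v)
    return Ss, vs

def tail_exponents():
    """Curvature of -ln S_B on the far B side and log-slope of ln S_A on the far A side."""
    Ss, vs = integrate()
    R = [s * (u + 1.0) for s, u in zip(Ss, vs)]     # S_R up to a constant factor
    ic = R.index(max(R))                            # grid index of the front centre z_c
    idx = lambda dz: ic + int(round(dz / DZ))       # z decreases with increasing index
    L = lambda i: -math.log(vs[i] + 1.0)            # -ln S_B, with S_B = S_A' + 1
    # second difference of -ln S_B around z_c - 2.5; should approach W^2
    curv = (L(idx(3.0)) - 2.0 * L(idx(2.5)) + L(idx(2.0))) / 0.25
    # change of ln S_A between z_c + 2 and z_c + 3; should approach -W
    slope = math.log(Ss[idx(-3.0)]) - math.log(Ss[idx(-2.0)])
    return curv, slope
```

The backward direction of integration is chosen so that the decaying-at-$+\infty$ mode is the numerically growing (hence stable) one. The curvature of $-\ln S_{\mathrm{B}}$ comes out close to $\tilde{w}^2 \approx 2.16$, and the log-slope of $S_{\mathrm{A}}$ close to $-\tilde{w}$ (with a visible finite-$z$ correction, since $S_{\mathrm{B}}$ has not yet fully saturated near the front), in agreement with (\ref{tailA}) and (\ref{tailB}).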
Although we derived these conclusions only within the mean-field approximation, it is reasonable to expect that $t^* \to \infty$ as $D_{\mathrm{B}}\to 0$ in any initially separated reaction-diffusion system. To summarize the differences between the two asymptotic universality classes, in Table~\ref{Tab1} we list their main properties in the mean-field approximation. The data for the case $D_{\mathrm{B}} > 0$ come from References \protect\cite{G-R}, \protect\cite{Koza} and \protect\cite{Linear}. \begin{table}[hbt] \caption{Comparison of the two asymptotic universality classes in the mean-field approximation.} \label{Tab1} \begin{center} \begin{tabular}{|c|c|c|} \hline & $D_{\mathrm{A}}>0$, $D_{\mathrm{B}}>0$ & $D_{\mathrm{A}}>0$, $D_{\mathrm{B}}=0$ \\ \hline Diff. eqs. for $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ & \parbox{4cm}{ \begin{center} $S_{\mathrm{A}}'' = S_{\mathrm{A}} S_{\mathrm{B}}$\\ $S_{\mathrm{B}}'' = S_{\mathrm{A}} S_{\mathrm{B}}$ \end{center} } & \parbox{4cm}{ \begin{center} $S_{\mathrm{A}}'' = \tilde{w}^2 S_{\mathrm{A}} S_{\mathrm{B}}$\\ $S_{\mathrm{B}}' = \tilde{w}^2 S_{\mathrm{A}} S_{\mathrm{B}}$ \end{center} } \\ Diff. 
equation for $S_{\mathrm{A}}$ & $S_{\mathrm{A}}'' = S_{\mathrm{A}}(S_{\mathrm{A}} + z)$& $S_{\mathrm{A}}'' = \tilde{w}^2 S_{\mathrm{A}}(S_{\mathrm{A}}'+1)$\\ \parbox{4cm}{ \begin{center} $S_{\mathrm{R}}(z)$, $z\gg w$\\ $S_{\mathrm{R}}(z)$, $z\ll -w$ \end{center} } & $z^{3/4}\exp(-\frac{2}{3}z^{3/2})$ & \parbox{4cm}{ \begin{center} $\exp(-\tilde{w} z)$\\ $|z|\exp(-\half(\tilde{w} z)^2)$ \end{center} } \\ $w(t)$ & $\W_1 (D_{\mathrm{A}} D_{\mathrm{B}} /k C_{\mathrm{J}})^{1/3}t^{1/6}$ & $\tilde{w} \sqrt{D_{\mathrm{A}}/kb_0}$ \\ $\alpha$ & 1/6 & 0\\ $\beta $ & 2/3 & 1/2\\ $\gamma_{\mathrm{A}}$ & 1/3 & 1/2\\ $\gamma_{\mathrm{B}}$ & 1/3 & 0\\ \hline \end{tabular} \end{center} \end{table} \renewcommand{\arraystretch}{1} \section{Numerical results} \label{NUMS} To check the theory presented in the previous sections for the case $D_{\mathrm{B}} = 0$, we solved numerically the partial differential equations (\ref{GR}) with the mean-field reaction rate $R=k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$, using the finite-difference FTCS (Forward Time Centred Space) method. We present the data obtained for $a_0 = 0.1$, $b_0=0.1$, $D_{\mathrm{A}} = 0.1$, and $k=0.02$. Other values of these parameters yielded similar results. First of all we verified the theory presented in section \ref{LIMIT}. In Fig.~1 we show the plot of $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ in the vicinity of $x_{\mathrm{f}}$ at $t=10^7$. The dotted line was computed from (\ref{ra}). It perfectly matches the numerical solutions up to $x\sim 700$, i.e.~outside the reaction layer. Actually, in the region $-1000 < x < 697$, the relative error is less than $7\!\cdot\!10^{-4}$. Also, the value of $C_{\mathrm{f}}$ computed from (\ref{cfeq0}) is 0.2263, whereas its numerical estimate $x_{\mathrm{f}}/\sqrt{t}$ for $t = 10^7$ is 0.2253. Next we investigated the scaling properties of the considered system.
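The FTCS scheme referred to above can be sketched in a few lines. The following Python fragment (an illustrative reimplementation with the quoted material parameters but a much smaller grid and far shorter times than the production runs; all names and the grid spacing $\Delta x = \Delta t = 1$ are our choices) integrates (\ref{GR}) with $R=k\rho_{\mathrm{A}}\rho_{\mathrm{B}}$ and $D_{\mathrm{B}}=0$, using reflecting ends; $D_{\mathrm{A}}\Delta t/\Delta x^2 = 0.1$ satisfies the FTCS stability bound of $1/2$:

```python
def ftcs_run(a0=0.1, b0=0.1, D=0.1, k=0.02, L=100, dt=1.0, steps=2000):
    """FTCS integration of  da/dt = D d2a/dx2 - k a b,  db/dt = -k a b
    on x = -L..L (dx = 1), with A initially on the left and static B on the right."""
    n = 2 * L + 1
    a = [a0 if i < L else 0.0 for i in range(n)]   # rho_A: x < 0
    b = [0.0 if i < L else b0 for i in range(n)]   # rho_B: x >= 0
    for _ in range(steps):
        rate = [k * a[i] * b[i] for i in range(n)]
        # discrete Laplacian with reflecting (zero-flux) ends
        lap = [a[max(i - 1, 0)] - 2.0 * a[i] + a[min(i + 1, n - 1)] for i in range(n)]
        a = [a[i] + dt * (D * lap[i] - rate[i]) for i in range(n)]
        b = [b[i] - dt * rate[i] for i in range(n)]
    return a, b

def front_index(a, b, k=0.02):
    """Grid index of the reaction-zone centre, located as the maximum of R = k a b."""
    R = [k * x * y for x, y in zip(a, b)]
    return R.index(max(R))
```

With reflecting ends the discrete Laplacian sums to zero, so the difference between the total amounts of A and B is conserved to machine precision, and the front position, measured as the maximum of $R$, drifts to the right, approaching the $C_{\mathrm{f}}\sqrt{t}$ law at longer times.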
As some aspects of this problem were already considered \cite{Hav95}, we present briefly only those results which are relevant to our paper. In Fig.~2 we show the log-log plot of $w(t)$. We can see that it initially grows as $\sqrt{t}$, which is the typical short-time behaviour \cite{Haim91}, but beginning from $t\approx10^3$ it quickly converges to a constant value. This enabled us to estimate $\tilde{w}\approx 1.47$. Finally, in Fig.~3 we present the scaling plot of $\sqrt{t}R \propto S_{\mathrm{R}}$ as a function of $z = (x-x_{\mathrm{f}})/w$, where we used (\ref{w}) with $\tilde{w} = 1.47$. The plots for $t=10^6$ and $10^7$ are practically indistinguishable. Note also two facts. First, for $z > 2$ the semilog plot of $\sqrt{t}R$ is linear in $z$, in accordance with (\ref{tailA}). Second, $S_{\mathrm{R}}(z)$ is discontinuous at $z=-x_{\mathrm{f}}/w$, which reflects the fact that $\rho_{\mathrm{B}}(x,t)$ is discontinuous at $x=0$. \begin{figure} \epsfysize 3.2in \epsfbox[0 160 567 670]{fig1.eps} \caption{The concentrations $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ of A's and B's in the vicinity of $x_{\mathrm{f}}\approx 713$ at $t=10^7$. The dotted line was computed from (\protect\ref{ra}).} \end{figure} \vfill \begin{figure} \epsfysize 3in \epsfbox[0 166 567 670]{fig2.eps} \caption{The log-log plot of the width of the reaction front $w$ as a function of time.} \end{figure} \clearpage \begin{figure}[thp] \epsfysize 3.6in \epsfbox[0 166 567 670]{fig3.eps} \caption{The scaling plot of $R\protect\sqrt{t}$ as a function of $z = (x-x_{\mathrm{f}})/w$.} \end{figure} \section{Summary and conclusions} We have investigated the long-time behaviour of the concentrations $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ of species A and B in initially separated diffusion-limited systems with $D_{\mathrm{B}} = 0$. All of our analysis was carried out for arbitrary values of $D_{\mathrm{A}}$, $a_0$ and $b_0$.
First we derived the formulae (\ref{cfeq0}) and (\ref{relcfcj}), which together with (\ref{ra}) describe the behaviour of $\rho_{\mathrm{A}}(x,t)$ outside the reaction zone. An interesting feature of these equations is that we expect them to be valid for any reaction-diffusion system of the type A $+$ B(static) $\to$ 0\ that exhibits the scaling behaviour with $\alpha < \frac{1}{2}$. Thus we conclude that many important properties of the reaction-diffusion systems with $D_{\mathrm{B}}=0$ depend only on the values of $a_0$, $b_0$ and $D_{\mathrm{A}}$, but not on the explicit form of $R$. This includes the location of the reaction zone centre (controlled by $C_{\mathrm{f}}$), the total reaction rate (controlled by $C_{\mathrm{J}}$), and the concentration profile of particles A outside the reaction zone (controlled by $C_{\mathrm{A}}$ and $C_{\mathrm{f}}$). A similar situation is observed in the systems with $D_{\mathrm{B}} > 0$ \cite{Koza}. Next we investigated general consequences of the scaling ansatz. We concluded that it fixes the value $\gamma_{\mathrm{B}} = 0$ and imposes relations (\ref{rex2}) and (\ref{rex3}) on the values of $\alpha$, $\beta$ and $\gamma_{\mathrm{A}}$. These relations are also valid for $D_{\mathrm{B}}>0$. We proved that the scaling functions $S_{\mathrm{A}}$ and $S_{\mathrm{B}}$ are related by a simple formula~(\ref{sasb}). We also concluded that for $D_{\mathrm{B}} = 0$ the asymptotic forms of $\rho_{\mathrm{A}}$ and $\rho_{\mathrm{B}}$ inside the reaction front can be derived from equations (\ref{QSTAC}), which are to be solved with time-dependent boundary conditions (\ref{bcro}). Thus these equations determine the asymptotic properties of the reaction front. We also examined in detail the properties of the reaction zone in the mean-field approximation.
In particular we determined the functional forms of $S_{\mathrm{A}}$, $S_{\mathrm{B}}$ and $S_{\mathrm{R}}$ far from $x_{\mathrm{f}}$ and the dependence of the width of the reaction zone on $k$, $a_0$, $b_0$ and $D_{\mathrm{A}}$. Our analysis showed that the main differences in the behaviour of the initially separated reaction-diffusion systems with $D_{\mathrm{B}} = 0$ and $D_{\mathrm{B}}>0$ arise from the fact that the term $\partial \rho_{\mathrm{B}}/\partial t$ can be neglected in (\ref{GR}) only if the corresponding diffusion constant $D_{\mathrm{B}}$ is nonzero. Therefore, depending on whether $D_{\mathrm{B}}$ is zero or not, the long-time behaviour of the reaction front is governed by entirely different partial differential equations. If $D_{\mathrm{B}} > 0$, then we use the usual quasistationary equations (\ref{STAC}); otherwise we must employ (\ref{QSTAC}). The different forms of these equations and their boundary conditions imply the different forms of their solutions and, consequently, different asymptotic properties of the two universality classes. Therefore the cases $D_{\mathrm{B}}=0$ and $D_{\mathrm{B}}>0$ should always be considered separately. There is, however, evidence that in some special cases the two asymptotic universality classes may be very much alike. Such a surprising conclusion follows from the extensive numerical simulations of the one-dimensional system carried out recently by Cornell \cite{Cornell95}. He considered a system with $D_{\mathrm{A}}=D_{\mathrm{B}}>0$ and concluded that $\alpha = 1/4$, $\beta = 3/4$, and that asymptotically $R$ is a Gaussian centred at $x_{\mathrm{f}}$ with its width growing as $t^\alpha$. The same results were derived analytically for the one-dimensional system with $D_{\mathrm{B}}=0$ by Larralde {\em et al} \cite{Bstatic}.
Note also that this form of $R$ --- or more generally, any $R$ that depends on $x$ and $t$ explicitly rather than through $\rho_{\mathrm{A}}(x,t)$ and $\rho_{\mathrm{B}}(x,t)$ --- uniquely determines the form of $\rho_{\mathrm{A}}(x,t)$. This comes from the fact that, whether or not $D_{\mathrm{B}} = 0$, the asymptotic forms of $\rho_{\mathrm{A}}$ and $R$ are related by the same differential equation $D_{\mathrm{A}}\partial^{2}\rho_{\mathrm{A}}(x,t)/\partial x^{2} = R(x,t)$ with the same boundary conditions. Therefore it is quite possible that the only difference between the one-dimensional systems with $D_{\mathrm{B}} = 0$ and $D_{\mathrm{B}} > 0$ lies in the form of $S_{\mathrm{B}}$ and the value of $\gamma_{\mathrm{B}}$. It should be stressed, however, that the asymptotic kinetics of one-dimensional systems with $D_{\mathrm{B}}>0$ has recently become a subject of controversy \cite{Cornell95,LAHS3-8,Barkema,CommentC,CommentALHS}, and further exploration of this topic is still required before final conclusions can be drawn. \vspace{1cm} \noindent {\bf Acknowledgments} \nopagebreak \\ \nopagebreak This work was supported by University of Wroc{\l}aw Grant No 2115/W/IFT/95.
\section{Introduction} A detailed and physically self-consistent modelling of post Main Sequence (MS) stellar evolution has been a challenging effort of theoretical astrophysics since the 1960s \citep[see, e.g.][and references therein for a documental overview of the relevant pioneering works since then]{iben67}. One main issue in this regard is that the post-MS lifetime of stars (in every mass range) is no longer governed by the nuclear timescale alone; different mechanisms intervene to strongly affect stellar structure, acting from ``inside out'' (e.g.\ convection) and from ``outside in'' (e.g.\ mass loss by stellar wind). This necessarily subjects any theoretical output to a preliminary empirical validation process, in which real observations of nearby resolved stellar systems are matched in order to suitably tune the wide range of free parameters potentially allowed by theory.\footnote{The study of Galactic globular cluster c-m diagrams is an illuminating example in this sense \citep[e.g.][]{rffp,chiosi}.} Any successful approach in this sense, however, suffers from evident limitations as soon as distant (unresolved) galaxies are taken into account in our analysis. By surveying different cosmic epochs and environmental conditions, in fact, one might need to set aside the local interpretative framework, typically constrained by the observation of low-mass metal-poor stars. To overcome this potentially insidious bias in the analysis of deep cosmological data, and considering that post-MS stars alone typically provide over 2/3 of a galaxy's total bolometric luminosity \citep{buzzoni95,buzzoni05}, it is of paramount importance to assess on a firmer basis the leading phenomena that constrain stellar evolution along its latest stages, like the horizontal branch (HB) and the asymptotic giant branch (AGB), among galaxies of different morphological type.
\section{Horizontal branch morphology and the ``UV-upturn'' phenomenon in elliptical galaxies} The so-called ``UV-upturn'', that is the rising ultraviolet emission shortward of 2000 \AA\ that sometimes features in the spectral energy distribution (SED) of elliptical galaxies and the bulges of spirals \citep{code79}, has long been a puzzling problem for such old galaxy environments dominated by stars of mass comparable to the Sun. \begin{figure} \centerline{ \psfig{file=buzzoni_fig1.ps,width=0.78\hsize,clip=} } \caption{The observed SED of the Virgo elliptical galaxy NGC~4649, one of the best examples of the ``UV upturn'' phenomenon. The ultraviolet rising branch, shortward of 2000~\AA, is matched with three black-body curves, for 20, 40 and 80,000~K, as labelled on the plot. It is evident that stars around 40,000~K should be the main contributors to the observed UV galaxy emission, supplying in total about 2\% of the galaxy bolometric luminosity.} \label{n4649} \end{figure} In fact, such an excess implies an important contribution of (long-lived) O-B stars, hotter than 30\,000 - 40\,000~K and providing in the most striking cases about 2\% of the galaxy bolometric luminosity (see Fig.~\ref{n4649}); this hot component has at times been identified with binaries, blue stragglers, blue HB stars, AGB {\it manqu\'e} stars, and post-AGB nuclei of planetary nebulae (PNe) \citep[see][for an updated review on this subject]{yiyoon04}. Spectroscopy \citep{brown97} and imaging \citep{brown00} of resolved c-m diagrams for stellar populations in local galaxies, like M32, have definitively established that this UV excess mostly arises from the hot tail of a broad temperature distribution of HB stars, further complemented, to a lesser extent, by a PN contribution.
As a blue HB morphology is more comfortably produced in old metal-poor globular clusters \citep[e.g.][]{rood73}, one should then admit that UV stars in ellipticals represent the $Z \ll Z_\odot$ tail of an ostensibly broad metallicity distribution peaked at much higher values, around the solar abundance.\footnote{A metal-rich chemical composition should be advocated for the star bulk in ellipticals given, for instance, a much stronger integrated Mg$_2$ Lick index for these galaxies, compared to Galactic globular clusters \citep[see Fig.~\ref{oconnell}, and an extensive discussion in][]{oconnell99}.} \begin{figure}[!t] \centerline{ \psfig{file=buzzoni_fig2.ps,width=0.78\hsize,clip=} } \caption{An illustrative sketch comparing UV-to-optical emission of Galactic globular clusters (vertical ellipse to the left) and elliptical galaxies (square-marker sequence to the right of the plot) after \citet{oconnell99}. The UV color is defined as $(15-V) = -2.5\,\log [F(1500{\rm \AA})/F(V)]$. Some relevant objects are labelled for both groups. The Mg$_2$ Lick index is taken as standard metallicity indicator on the x-axis. It is evident that UV-enhanced integrated colors can be reached at the two extremes of the metallicity scale, in case of a blue HB morphology for metal-poorer globulars and due to EHB stellar contribution in case of the most metal-rich (giant) ellipticals. See text for discussion. } \label{oconnell} \end{figure} On the other hand, hot metal-rich HB stars might also be naturally predicted, provided that stars approach the HB phase with a conveniently low-mass external envelope, compared to their inner Helium core mass \citep{dorman,dcruz96}. Figure~\ref{hbmorph} is an illustrative example in this sense, displaying a full set of HB models of solar metallicity (i.e.\ red, intermediate and very blue stars) and the corresponding post-HB evolutionary paths, based on the \citet{dorman} work.
So-called ``Extreme HB'' stars (EHB), to be associated with hot SdO/SdB spectral types, have actually been observed, for example in $\omega$~Cen \citep{dcruz00} and in some old Galactic open clusters as well, like NGC~6791 \citep{kaluzny92,buson06}. \begin{figure}[!t] \centerline{ \psfig{file=buzzoni_fig3.ps,width=0.64\hsize,clip=} } \caption{A selected set of HB evolutionary tracks with $Z = Z_\odot$ from \citet{dorman}. The starting track envelope roughly identifies the HB locus, with the values of three relevant stellar masses (namely 0.47, 0.52 and 0.90~M$_\odot$) labelled on the plot. Note the increasingly hotter HB temperature with decreasing stellar mass, with stars below $\sim 0.50$~M$_\odot$ occupying the EHB region of the diagram, pertinent to SdO/SdB spectral types, fully escaping the AGB phase (these are the so-called {\it ``AGB-manqu\'e''} stars) and directly fading along the white-dwarf (WD) cooling sequence. The reported value of $\sim 0.52$~M$_\odot$ is the threshold for post-HB stars to end up as PNe after completing AGB evolution and undergoing the thermal pulsing phase. The position of the Sun on the plot is located as a general reference for the reader. } \label{hbmorph} \end{figure} Though supplying a straightforward evolutionary framework for UV-en\-hanced elliptical galaxies, this hypothesis implies a quite delicate tuning of the core mass at the HB onset (as a result of RGB nuclear burning processes) and of the mass-loss efficiency (needed, at the same time, to suitably ``peel off'' the stellar envelope). As a consequence, one has to expect the UV-to-optical color to be a quite fragile and quickly evolving feature in the SED of elliptical galaxies \citep{park97}.
This is confirmed in Fig.~\ref{uvupturn}, where we track the back-in-time evolution of a 15 Gyr simple stellar population (SSP) of solar metallicity and intermediate HB morphology (such as to closely resemble the temperature distribution of stars in the M3 globular cluster), according to the \citet{buzzoni89} population synthesis code. Due to the presence of stars of increasingly higher mass at early epochs (giving rise to a red HB), one sees from the figure that the full UV burst disappears in about 3 Gyr, that is barely $\sim 20$\% of the galaxy's entire life. The UV-upturn can therefore fade by several magnitudes as the lookback time increases by a few Gyrs, making the effect in principle detectable at intermediate redshift ($z = 0.2-0.3$) \citep[][see also Ree et al, this conference]{brown03}. \section{Planetary Nebulae and the Initial-to-Final mass relation} Along the SSP evolution, a substantial fraction (up to 50\%) of star mass can be lost during the AGB phase via stellar wind. If the mass-loss process is strong enough, low-mass stars entering the AGB phase may fail to reach the critical C-O core mass threshold of about $M_{\rm core} \simeq 0.52$~M$_\odot$. This is the minimum mass for stars to fully complete AGB evolution and experience the so-called ``thermal-pulsing'' phase \citep[][see also Fig.~\ref{hbmorph} above]{dorman,blocker}. Along the thermal pulses, stars venture into the region of Mira variables and end up, through the so-called ``superwind'' phase, quickly ejecting their residual envelope and originating a PN \citep[see][for an exhaustive discussion of the process and its variants]{iben}. \begin{figure}[!t] \centerline{ \psfig{file=buzzoni_fig4.ps,width=0.68\hsize,clip=} } \caption{Integrated SED for a SSP of solar metallicity, Salpeter IMF and \citet{reimers} mass-loss parameter $\eta = 0.5$ after \citet{buzzoni89}.
Spectral evolution is computed at steps of 200~Myr from $t = 15.2$ to 14~Gyr, plus a further model at 12.5~Gyr, as labelled on the plot. A broad temperature distribution of HB stars (matching the observed M3 HB morphology) is assumed at $t = 15.0$~Gyr. Note the quick evolution of the UV excess between 1000 and 2500~\AA, which has already disappeared in the 12.5~Gyr model as a consequence of the corresponding reddening of the HB color distribution. A second and nearly steady minor UV bump can be recognized shortward of 1000~\AA\ due to the contribution of PN nuclei.} \label{uvupturn} \end{figure} Models indicate that the lack of a full AGB deployment (when $M_{\rm core} \la 0.52~M_\odot$) leads to a range of post-HB evolutionary paths (see, again, the sketch in Fig.~\ref{hbmorph}),\footnote{From the physical point of view, this would correspond to the He+H double-shell burning regime for low- and intermediate-mass stars.} as discussed in detail by \citet{greggio}. One relevant case in this regard is that of EHB objects, which evolve as {\it ``AGB-manqu\'e''} stars, thus fully escaping the PN ejection and fading directly along the high-temperature white-dwarf cooling sequence \citep{castellani92,castellani,dorman,dcruz96,yi}. Quite interestingly, therefore, the successful detection of PNe, even in distant unresolved galaxies, places a further constraint on the final mass of the composing stellar population. More specifically, the PN number density per unit galaxy luminosity, a parameter often referred to in the literature as the ``$\alpha$ ratio'' \citep{jacoby}, can directly be linked to the characteristic lifetime of the nebulae, the latter closely tracing the mass distribution of their nuclei \citep[see][for a full discussion]{buzzoni06}. On the basis of these arguments, PNe can eventually help constrain the initial-to-final mass relation (IFMR) also in extragalactic systems.
\begin{figure}[!t] \centerline{ \psfig{file=buzzoni_fig5.ps,width=0.70\hsize,clip=} } \caption{The initial-to-final mass relation according to different calibrations. The solid strip is the theoretical relation of \citet{iben} for a standard mass loss parameter $\eta$ in the range between 0.3 and 0.5, as labelled on the plot. Short- and long-dashed curves are the theoretical loci for stars to set on the AGB thermal pulsing phase ($M_{\rm TP}$), according to \citet{iben} (IR83) and \citet{ww94} (WW94). Finally, big squares and the solid curve report the \citet{weidemann} (W00) empirical relation based on the mass estimates of white dwarfs in Galactic open clusters.} \label{mfin} \end{figure} For our Galaxy, the IFMR can be derived empirically from the observation of white dwarfs in nearby open clusters (like the Hyades or Praesepe) of known age (as obtained, for instance, by the isochrone-fitting method applied to the cluster c-m diagram). In an exhaustive study of the available observational database, \citet{weidemann} found evidence that white-dwarf masses, for low- and intermediate-mass stars, closely match the theoretical core masses expected at the beginning of the thermal-pulsing AGB. This claim is illustrated in Fig.~\ref{mfin}, where we compare the \citet{weidemann} IFMR with the \citet{ww94} updated set of AGB stellar tracks for Pop I stars, and with the original analytical relation for the thermal-pulsing core mass of intermediate-mass stars by \citet{iben}. It is clear from Fig.~\ref{mfin} that, for a standard range of the \citet{reimers} mass-loss parameter pertinent to Galactic globular clusters \citep[namely $\eta \simeq 0.4 \pm 0.1$, according to][]{fusipecci}, the \citet{iben} theoretical IFMR predicts unreliably high final masses for young ($t \la 2$~Gyr) SSPs, requiring a value of $\eta \gg 1$ to match the \citet{weidemann} empirical relation.
\begin{figure}[!t] \centerline{ \psfig{file=buzzoni_fig6a.ps,width=0.53\hsize,clip=} \psfig{file=buzzoni_fig6b.ps,width=0.47\hsize,clip=} } \caption{{\it Left panel:} the observed relationship between the PN luminosity-specific rate $\alpha$ and the integrated ultraviolet-to-optical emission (as defined in Fig.~\ref{oconnell}) for a sample of elliptical galaxies in the Local Group and in the Virgo, Leo and Fornax clusters, according to \citet{buzzoni06}. The relevant cases of outliers like NGC~205 (star-forming) and NGC~1316 (merger elliptical) are singled out in the plot. {\it Right panel:} the corresponding relationship in place among the \citet{cantiello03} sample of quiescent ellipticals vs.\ the \citet{tsch} effective magnitudes in the H and K infrared bands. The most UV-enhanced galaxies display the faintest infrared effective magnitudes, indicative of a less developed AGB. } \label{alfauv} \end{figure} As far as external galaxies are concerned, the Local Group represents a natural benchmark to assess the IFMR through the $\alpha$ ratio estimated from deep PN surveys. Quite surprisingly, in spite of the extreme variety of star formation histories among local galaxies, \citet{buzzoni06} have demonstrated that observations support a fairly constant value of $\alpha$, with an average PN rate per unit galaxy luminosity between 1 and 6~PNe per $10^7$~L$_{\odot}$ among systems representative of the whole late-type Hubble morphological sequence. Such a value corresponds to a quite narrow range of final stellar masses, about 0.60-0.65~$M_\odot$. Even in the case of Local Group member galaxies, therefore, the mass-loss scenario supported by the PN observations better agrees with the \citet{weidemann} IFMR, which implies a substantially stronger mass loss for intermediate- and high-mass stars compared to the standard scenario for Pop II stars as in Galactic globular clusters.
\section{UV evolution and AGB connection in elliptical galaxies} One important consequence of the {\it ``AGB-manqu\'e''} evolution of EHB stars is that a tight and {\it inverse} relationship must be in place between the most UV-enhanced and the most PN-poor stellar systems, especially among elliptical galaxies. A prevailing fraction of hot (low-mass) HB stars in the galaxy stellar population, in fact, can be strongly favoured by a more efficient mass loss (at least along the RGB evolution), and this scenario, by itself, also plays against any full AGB deployment. Therefore, if this is the case, few stars (if any) can eventually reach the AGB and feed the PN production channel. As shown in Fig.~\ref{alfauv} (left panel), this correlation is actually displayed between the PN luminosity-specific rate $\alpha$ and the $(1550-V)$ color for elliptical galaxies in the Virgo and Fornax clusters, and in the Leo group, after \citet{buzzoni06}. The sense is that more massive metal-rich systems (traced by a higher velocity dispersion $\sigma_v$ and a stronger integrated Lick Mg$_2$ index) display at the same time a stronger UV-upturn {\it and} a poorer PN population per unit galaxy luminosity. The \citet{buzzoni06} relationship settles an old (and so far unexplained) piece of empirical evidence, namely the trend of the PN rate, seen to decrease among the reddest ellipticals \citep{hui}. As $\alpha$ can actually be considered an indirect probe of the galaxy AGB extension above the thermal-pulse threshold, this would naturally lead one to expect some correlation of the PN luminosity-specific rate with galaxy infrared colors or, even better, with more sensitive and unbiased tracers of the cool galaxy stellar component. This is, for instance, the case of the infrared effective magnitudes, as derived from the surface-brightness fluctuation method of \citet{tsch}.
Again, our expectation is fully confirmed by a study of the Virgo and Leo elliptical sample (Buzzoni \& Gonz\'alez-L\'opezlira, in preparation), as shown in the right panel of Fig.~\ref{alfauv}, based on the \citet{cantiello03} compilation database.
\section{Introduction} Entanglement plays a key role in the main tasks of quantum information\cite{Nielsen00, Amico07}. In practice, entangled qubits need to be accessed individually for measurements. Consequently, they are well separated in space. Recently, long-distance entanglement \cite{Venuti06, Hartmann06} has attracted much attention in the field of quantum information processing. A selected pair of distant qubits can retain a sizable amount of entanglement at zero temperature if they are weakly coupled to certain spin models. Because spin chains can serve as an efficient communication channel for quantum teleportation \cite{Bowen01} and state transfer \cite{Bose03}, these models have been extensively studied. In many schemes \cite{Ferreira08, Zhu08}, spin-$\frac 12$ Heisenberg chains act as the medium for the generation of quantum entanglement when the chain is kept in its ground state. It has been found that the long-distance entanglement decreases and eventually vanishes with the length of gapless spin chains \cite{Venuti07}. As an appealing spin model, the spin-$1$ chain exhibits a massive, gapped ground state, which can be realized by confining an $S=1$ spinor condensate \cite{Demler02, Yip03} in optical lattices. Quantum communication in the spin-$1$ chain has been investigated in \cite{Sanpera07}. Here we expect that long-distance entanglement can also be generated by spin-$1$ chains and show a scaling property different from that of spin-$\frac 12$ chains. In realistic optical lattices, thermal decoherence at finite temperature is unavoidable \cite{Hofstetter06}. Therefore, it is of fundamental importance to study the impact of thermal noise on the long-distance entanglement. Using the long-distance entangled state as the channel, we also propose a standard scheme of quantum teleportation.
In this report, the thermal entanglement between a pair of distant qubits is demonstrated when they are weakly coupled to a general isotropic spin-$1$ chain with bilinear-biquadratic interactions at finite low temperatures. To study the decoherence, the effective Hamiltonian between two distant sites is analytically obtained by the Fr\"{o}hlich transformation \cite{Frohlich1952, Nakajima1953} in Sec. II. The scaling property of the effective coupling is also obtained by the exact diagonalization method. The effects of the temperature and of the relative strength of the biquadratic interactions are considered. In Sec. III, we draw on the master equation to investigate the decay of the long-distance entanglement. A protocol for quantum teleportation is put forward. Finally, a short discussion concludes the paper. \section{The effective Hamiltonian at finite low temperatures} In an optical lattice, a selected pair of two-level atoms $A$ and $B$ can weakly interact with the two open ends of a spin-$1$ chain. At finite low temperatures, the whole quantum system is in a thermal equilibrium state. To study the time evolution of quantum states, the total Hamiltonian can be expressed as \begin{equation} H=H_{0}+H_{I}=H_{q}+H_{c}+H_{I}, \end{equation} where \begin{equation} H_{q}=\omega(s_A^z+s_B^z), \end{equation} describes the intrinsic Hamiltonian of the two distant atoms, \begin{equation} H_{c}=J\sum_{i=1}^{L-1}[\cos \theta(\vec{S}_{i}\cdot \vec{S}_{i+1})+\sin \theta (\vec{S}_{i}\cdot \vec{S}_{i+1})^2], \end{equation} is the Hamiltonian of the general isotropic spin-$1$ chain with even length $L$, and \begin{equation} H_{I}=J_{p}(\vec{s}_A \cdot \vec{S}_1+\vec{S}_L \cdot \vec{s}_B) \end{equation} denotes the weak interaction between the two distant atoms and the open ends of the chain. Here $\vec{s}_{A(B)}=\frac 12\sigma_{A(B)}$ and $\vec{S}_i$ refer to the spin operators of the distant atoms and of the $i$th site of the chain, respectively.
The parameter $\omega$ describes the transition energy from the ground state to the excited one for each atom, and $J\cos \theta$ ($J\sin \theta$) gives the strength of the bilinear (biquadratic) coupling. As is well known, the energy spectrum of the spin model is determined by the angle $\theta$ \cite{Affleck87}. In the following, the biquadratic coupling is restricted to $|\theta|<\tan^{-1}\frac 13$, so that the ground state of $H_c$ is a total singlet $|\phi_0\rangle$ with energy $\epsilon_0$ and the first excited states are the degenerate triplet states $|\phi_1^{\lambda=0,1,2}\rangle$ with energy $\epsilon_1$. Here the energy gap $\Delta=\epsilon_1-\epsilon_0$ is the famous Haldane gap. In general, the thermal equilibrium state is $\rho_c(T)=\sum_i\frac{e^{-\epsilon_i/T}}{Z}|\phi_i\rangle\langle \phi_i|$, where $\epsilon_i$ is the $i$-th eigenvalue of $H_c$ and $|\phi_i\rangle$ is the corresponding eigenstate. When the low temperatures satisfy $kT<\Delta$, the components of the ground state and the first excited states become dominant in the thermal equilibrium state. The lower the temperature, the more reliable this truncation to the lowest states becomes. The approximate expression of the thermal state is then $\rho_c(T)\simeq \frac {e^{-\epsilon_0/T}}{Z}\left (|\phi_0\rangle\langle \phi_0|+e^{-\Delta/T}\sum_{\lambda}|\phi_1^{\lambda}\rangle\langle \phi_1^{\lambda}|\right )$, where $Z\simeq e^{-\epsilon_0/T}+3e^{-\epsilon_1/T}$ is the partition function. For convenience, the Planck constant $\hbar$ and the Boltzmann constant $k$ are set to one. The Fr\"{o}hlich transformation \cite{Frohlich1952, Nakajima1953} is widely used in condensed matter physics; recently, this method has also been applied in quantum information processing \cite{Li05}.
As a second-order perturbation \cite{Frohlich1952, Nakajima1953}, the effective Hamiltonian of the whole system is $H_{eff}\approx H_{0}+\frac 12[\hat{S},H_{I}]$, where the anti-Hermitian operator $\hat{S}$ satisfies the relation $[H_0,\hat{S}]=H_I$; the elements of this matrix are given by $\langle \phi_i^{m}|\hat{S}|\phi_j^{n}\rangle=\frac {\langle \phi_i^{m}|H_{I}|\phi_j^{n}\rangle}{\epsilon_{i}-\epsilon_{j}}, (i\neq j)$, and the diagonal ones are zero for $m=n,i=j$ \cite{Li05}. Here $|\phi_{i}^{m}\rangle(m=0,1,\cdots,d_i-1)$ is the energy eigenstate of $H_{c}$ with the corresponding energy $\epsilon_{i}$, and $d_i$ is the degree of degeneracy. In the case $J_p\ll J$ at low temperatures, the spin-$1$ chain is in the state $\rho_c$ and the effective Hamiltonian between the two distant atoms is obtained as \begin{equation} H_{eff}^{A,B}=\mathrm{Tr_c}\left \{H_{0} \rho_c +\frac 12[\hat{S},H_{I}]\rho_c \right \}, \end{equation} where $\mathrm{Tr_c}$ denotes the trace over the complete energy space of $H_c$. To simplify the calculation, we write the matrix elements as $ \langle \phi_k^{m}|S^{\alpha}_{i}|\phi_l^{n}\rangle=\tau^{km,ln}_{i,\alpha}=\tau_{i,\alpha}$, where the spin operator $S^{\alpha}=S^{\pm},S^z$. Due to the rotational invariance, one finds that $\sum_{k\neq l,m,n}\tau_{i,\alpha}\tau^{*}_{j,\beta}/(\epsilon_{k}-\epsilon_{l})=\Omega^{l,\alpha}_{i,j}\delta_{\alpha,\beta}$ for $l=0,1$. Here the sum always vanishes if $\alpha \neq \beta$, and the values $\Omega^{l,\pm}_{i,j}=2\Omega^{l,z}_{i,j}$ are real. As a consequence, the effective Hamiltonian can be simplified to the isotropic Heisenberg form \begin{equation} H_{eff}^{A,B}=J_{eff}\vec{s}_A \cdot \vec{s}_B+H_q+C. \end{equation} Here the constant $C$ is irrelevant to the long-distance entanglement. The effective Heisenberg coupling $J_{eff}=-\frac {2J_{p}^2e^{-\epsilon_0/T}}{Z}(\Omega_{1,L}^{0,z}+e^{-\Delta/T}\Omega_{1,L}^{1,z})$ depends closely on the energy spectrum of $H_c$.
By means of the exact diagonalization method, the scaling property of $J_{eff}$ at finite low temperatures is demonstrated in Fig. 1. The values rise almost exponentially and saturate at a steady value as the length of the chain increases. According to \cite{Venuti07}, the effective coupling $J_{eff}$ is mainly determined by the singlet-triplet gap of the whole system $H$. From the numerical results of \cite{White93}, the gap of $H$ for $L\sim20$ is already close to its asymptotic value. Therefore, the values of $J_{eff}$ saturate rapidly with increasing length. This means that an effective coupling survives at finite low temperatures even when the distant sites are taken infinitely far apart. Notice that the parameter $\Omega^{l,\alpha}_{i,j}$ must be calculated from all eigenvectors of $H_c$, while $\rho_c$ is approximately expressed in the singlet-triplet subspace. For the simplest example, $L=2$, the Hamiltonian $H_c$ can be expanded as \begin{equation} H_c=\sum_{\lambda=0}^{2}\epsilon_{\lambda}\hat{P}_{\lambda} \end{equation} where the projectors $\hat{P}_{\lambda}=\sum_{S^z_{tot}}|S_{tot}=\lambda,S^z_{tot}\rangle \langle S_{tot}=\lambda,S^z_{tot}|$ and $S^z_{tot}=-\lambda,\cdots,\lambda$. For very small $|\theta|<\tan^{-1}\frac 13$, the energy spectrum is given by the ground energy $\epsilon_0=-2J(\cos\theta-2\sin \theta)$, the first excited one $\epsilon_1=-J(\cos\theta-\sin \theta)$ and the second $\epsilon_2=J(\cos\theta+\sin \theta)$. Thus the effective coupling can be written analytically as \begin{equation} J_{eff}=\frac {e^{-\epsilon_0/T}}{3Z}\left (\frac {4J_p^2-4J_p^2e^{-\Delta/T}}{\epsilon_1-\epsilon_0}-\frac {5J_p^2e^{-\Delta/T}}{\epsilon_2-\epsilon_1} \right ). \end{equation} It is necessary to consider the effects of the temperature and of the relative strength of the biquadratic coupling $\theta$ on the effective coupling. From Fig. 2, it is seen that the values of $J_{eff}$ increase slightly with $\theta$.
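As a cross-check of Eq.(8), the analytic $L=2$ effective coupling can be evaluated numerically. The following sketch (the function name and default parameters are our own, purely illustrative choices) reproduces the zero-temperature limit $J_{eff}\to 4J_p^2/[3(\epsilon_1-\epsilon_0)]$ and the slight increase of $J_{eff}$ with $\theta$:

```python
import math

def j_eff_dimer(theta, T, J=1.0, Jp=0.1):
    """Effective coupling J_eff of Eq.(8) for the L = 2 spin-1 dimer.

    theta : bilinear-biquadratic angle, |theta| < arctan(1/3)
    T     : temperature (hbar = k = 1, as in the text)
    """
    # Energy spectrum of the two-site chain for small |theta|
    e0 = -2 * J * (math.cos(theta) - 2 * math.sin(theta))   # total singlet
    e1 = -J * (math.cos(theta) - math.sin(theta))           # triplet
    e2 =  J * (math.cos(theta) + math.sin(theta))           # quintet
    gap = e1 - e0                                           # singlet-triplet gap
    Z = math.exp(-e0 / T) + 3 * math.exp(-e1 / T)           # truncated partition function
    boltz = math.exp(-gap / T)
    return (math.exp(-e0 / T) / (3 * Z)) * (
        (4 * Jp**2 - 4 * Jp**2 * boltz) / (e1 - e0)
        - 5 * Jp**2 * boltz / (e2 - e1))
```

At $T\to 0$ the Boltzmann factor of the triplet vanishes, and only the first term survives, as stated above.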
For even chain lengths, the parameters $\Omega^{l,\alpha}_{1,L}(l=0,1)$ are negative. In accordance with Eq.(8), the values of $J_{eff}$ are enhanced slightly because a larger angle $\theta$ leads to a smaller energy gap $\Delta$. At low temperature, the effective coupling is mainly determined by the first term, $\frac {2J_{p}^2e^{-\epsilon_0/T}}{Z}|\Omega_{1,L}^{0,z}|=\frac {2J_{p}^2|\Omega_{1,L}^{0,z}|}{1+3e^{-\Delta/T}}$, which decreases with increasing temperature. \section{Decoherence of entanglement in thermal noise} The state of the two distant atoms, $\rho^{A,B}$, can be obtained by tracing out the variables of the chain from the thermal state of the whole system. If the temperature satisfies $kT\ll \Delta$, we do not expect real excitations of the spin chain to be present \cite{Ferreira08}. Only the subspace of the states described by $H^{A,B}_{eff}$ will be populated, and we can then calculate the correlations between the two atoms using $\rho^{A,B}=e^{-H^{A,B}_{eff}/T}/Z_q$, where $Z_q=\mathrm{Tr}[e^{-H^{A,B}_{eff}/T}]$. When the two distant sites are simultaneously coupled to the chain, the thermal state $\rho^{A,B}$ can be generated. In accordance with \cite{Wang02,Bayat05,Nielsen,Arnesen01,Wang01}, the concurrence of $\rho^{A,B}$ can be written as $C=\frac 1{Z_q} \max \{0, e^{3J_{eff}/4T}-3e^{-J_{eff}/4T} \}$. Therefore, thermal entanglement exists if the effective coupling satisfies $\frac {J_{eff}}{T}>\ln3$. In practice, local operations on the two distant entangled atoms are needed. It is reasonable to assume that the two atoms are coupled to their local thermal reservoirs $E_A,E_B$. According to \cite{Yu04}, the two independent reservoirs lead to local decoherence of the entanglement. Suppose that the initial state at $t=0$ is $\rho_{tot}=\rho^{A,B}\otimes (|0_{E_A}0_{E_B}\rangle\langle0_{E_A}0_{E_B}|)$ where $|0_{E_A}0_{E_B}\rangle$ denotes the vacuum state of the two local reservoirs.
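Before turning to the dissipative dynamics, the thermal-concurrence formula and the threshold $J_{eff}/T>\ln 3$ quoted above can be checked with a few lines of code (a sketch assuming $\omega=0$, so that only the exchange part of $H^{A,B}_{eff}$ matters; the function name is ours):

```python
import math

def thermal_concurrence(Jeff, T):
    """Concurrence of the thermal state of H = Jeff * s_A . s_B (omega = 0
    assumed): singlet energy -3*Jeff/4, triplet energy +Jeff/4."""
    Zq = math.exp(3 * Jeff / (4 * T)) + 3 * math.exp(-Jeff / (4 * T))
    return max(0.0, (math.exp(3 * Jeff / (4 * T))
                     - 3 * math.exp(-Jeff / (4 * T))) / Zq)
```

The entanglement switches on exactly at $J_{eff}=T\ln 3$ and tends to the pure-singlet value $C=1$ as $T\to 0$, in agreement with the condition above.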
The evolution of the quantum state of atoms $A$ and $B$ is given by the master equation \begin{equation} \dot{\rho}(t)=-i[H_{eff},\rho]+\hat{L}(\rho), \end{equation} where the Lindblad operator \begin{align} \hat{L}(\rho)=\sum_{i=A,B} &(\bar{n}_i+1)\Gamma_i(2\sigma_i^{-}\rho\sigma_i^{+}-\rho\sigma_i^{+}\sigma_i^{-}-\sigma_i^{+}\sigma_i^{-}\rho)& \nonumber \\ &+\bar{n}_i\Gamma_i(2\sigma_i^{+}\rho\sigma_i^{-}-\rho\sigma_i^{-}\sigma_i^{+}-\sigma_i^{-}\sigma_i^{+}\rho).& \end{align} Here $\bar{n}_i=\bar{n}$ is the mean excitation number of the thermal reservoir and $\Gamma_i=\Gamma$ is the rate of spontaneous emission for each atom. If one of the two weak couplings $J_p$ is turned off after the preparation of the long-distance entanglement, the effective Hamiltonian of the two atoms becomes $H_{eff}=H_q+C^{'}_{eff}$, which means there is no mutual interaction between the atoms. In this case, the evolution of $\rho(t)$ can be described by a completely positive trace-preserving map \cite{Aolita08}. For a general two-qubit mixed state $\rho(0)=\sum_{kl,mn}a_{mn,kl}|kl\rangle_{AB}\langle mn|$, the evolved state can be written as $\rho(t)=\sum_{kl,mn}\sum_{j,j'}a_{mn,kl}(K_{Aj}|k\rangle_A\langle m|K^{\dag}_{Aj})\otimes(K_{Bj'}|l\rangle_B\langle n|K^{\dag}_{Bj'}) $ where the Kraus operators are $K_{i0}=\sqrt{\frac {\bar{n}+1}{2\bar{n}+1}}(|g\rangle_i\langle g|+\sqrt{1-p}|e\rangle_i\langle e|)$, $K_{i1}=\sqrt{\frac {(\bar{n}+1)p}{2\bar{n}+1}}|g\rangle_i\langle e|$, $K_{i2}=\sqrt{\frac {\bar{n}}{2\bar{n}+1}}(\sqrt{1-p}|g\rangle_i\langle g|+|e\rangle_i\langle e|)$ and $K_{i3}=\sqrt{\frac {\bar{n}p}{2\bar{n}+1}}|e\rangle_i\langle g|$. Here $|g(e)\rangle_i$ is the ground (excited) state of atom $i=A,B$ and $p(t)=1-e^{-\frac {\Gamma(2\bar{n}+1)t}{2}}$ is the probability of the atom exchanging a quantum with the reservoir.
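The four Kraus operators above form a generalized amplitude-damping channel. A quick numerical check (our own sketch, with the basis ordering $|g\rangle=(1,0)^T$, $|e\rangle=(0,1)^T$) confirms the completeness relation $\sum_j K_j^\dagger K_j=I$ and the thermal steady state reached at $p=1$:

```python
import numpy as np

def kraus_ops(nbar, p):
    """Kraus operators K_{i0}..K_{i3} of the text, for one atom coupled to a
    thermal reservoir with mean excitation number nbar and jump probability p."""
    g = np.array([1.0, 0.0])
    e = np.array([0.0, 1.0])
    N = 2 * nbar + 1
    K0 = np.sqrt((nbar + 1) / N) * (np.outer(g, g) + np.sqrt(1 - p) * np.outer(e, e))
    K1 = np.sqrt((nbar + 1) * p / N) * np.outer(g, e)
    K2 = np.sqrt(nbar / N) * (np.sqrt(1 - p) * np.outer(g, g) + np.outer(e, e))
    K3 = np.sqrt(nbar * p / N) * np.outer(e, g)
    return [K0, K1, K2, K3]

def apply_channel(rho, kraus):
    """rho -> sum_j K_j rho K_j^dagger (trace preserving by completeness)."""
    return sum(K @ rho @ K.T.conj() for K in kraus)
```

For $p\to 1$ the map drives any input to the thermal populations $(\bar n+1)/(2\bar n+1)$ and $\bar n/(2\bar n+1)$, consistent with the master equation above.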
The density matrix of the quantum state at any time is expanded in the Hilbert space of $\{|gg\rangle_{AB},|ge\rangle_{AB},|eg\rangle_{AB},|ee\rangle_{AB}\}$ \begin{equation} \rho(t)=\frac {1}{Z_q}\left(\begin{array}{cccc} u&0&0&0\\ 0&x&y&0\\ 0&y&x&0\\ 0&0&0&v \end{array}\right). \end{equation} The elements of $\rho(t)$ are expressed by $u=(1-a)^2e^{-(\frac {J_{eff}}{4}-\omega)/T}+a^2e^{-(\frac {J_{eff}}{4}+\omega)/T}+a(1-a)(e^{-\frac {J_{eff}}{4T}}+e^{\frac {3J_{eff}}{4T}})$, $v=(1-a)^2e^{-(\frac {J_{eff}}{4}+\omega)/T}+a^2e^{-(\frac {J_{eff}}{4}-\omega)/T}+a(1-a)(e^{-\frac {J_{eff}}{4T}}+e^{\frac {3J_{eff}}{4T}})$, $x=a(1-a)(e^{-(\frac {J_{eff}}{4}+\omega)/T}+e^{-(\frac {J_{eff}}{4}-\omega)/T})+\frac 12[(1-a)^2+a^2](e^{-\frac {J_{eff}}{4T}}+e^{\frac {3J_{eff}}{4T}})$ and $y=\frac {1-p}{2}(e^{-\frac {J_{eff}}{4T}}-e^{\frac {3J_{eff}}{4T}})$ where $a=\frac {\bar{n}p}{2\bar{n}+1}$. The concurrence \cite{Wang02,Bayat05,Nielsen,Arnesen01,Wang01} is used to evaluate the long distant entanglement \begin{equation} C=\frac 2{Z_q}\max \{0, |y|-\sqrt{uv} \}. \end{equation} On the other hand, it is assumed that the two atoms directly interact with each other in the form of the Hamiltonian given by Eq.(6). In this case, the analytical solution of the master equation is tedious. The expression of the density matrix of quantum states is also similar to that of Eq.(11). The decoherence of the thermal entanglement in two cases can be illustrated by Fig. 3(a). It is seen that the entanglement of two qubits without mutual interactions is decreased much more slowly than that of two directly interacting qubits. This point demonstrates that the decoherence time for long distant entanglement is so long as to be useful for the implementation of solid-state quantum computation. The standard teleportation through the mixed states can be regarded as a general depolarising channel \cite{Bowen01}. 
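The depolarising-channel form of the standard teleportation map, $\rho_{out}=\sum_i \mathrm{Tr}[E^i\rho]\,\sigma^i\rho_{in}\sigma^i$ with $E^0=|\psi^-\rangle\langle\psi^-|$ and $E^i=(\sigma^i\otimes I)E^0(\sigma^i\otimes I)$, can be sketched directly (our own illustration, with $\sigma^i$ taken to act on the first qubit): a perfect singlet channel must return the input state, while a maximally mixed channel fully depolarizes it.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

# Bell state |psi^-> = (|ge> - |eg>)/sqrt(2), basis order |gg>,|ge>,|eg>,|ee>
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
E0 = np.outer(psi_m, psi_m.conj())
projectors = [np.kron(s, I2) @ E0 @ np.kron(s, I2) for s in paulis]

def teleport(rho_channel, rho_in):
    """Output of standard teleportation through a two-qubit channel state."""
    return sum(np.trace(P @ rho_channel).real * (s @ rho_in @ s.conj().T)
               for P, s in zip(projectors, paulis))
```

The fidelity of the output against a pure input is then $F=\mathrm{Tr}[\rho_{out}\rho_{in}]$, as used below.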
An arbitrary unknown quantum state $|\Psi\rangle=\cos \frac {\theta}2|g\rangle+\sin \frac {\theta}2e^{i\varphi}|e\rangle,(0\leq\theta\leq \pi,0\leq\varphi\leq 2\pi)$ is destroyed and its replica appears at the remote place after applying a Bell measurement and the corresponding local operations. When the single-qubit state $\rho_{in}=|\Psi\rangle \langle \Psi|$ is teleported via the noisy channel $\rho$ of Eq.(11), the output state $\rho_{out}$ is written as \begin{equation} \rho_{out}=\sum_i \mathrm{Tr}[E^i\rho]\sigma^i\rho_{in}\sigma^i. \end{equation} In the above equation, $i=0,x,y,z$ and the projectors are $E^{0}=|\psi^{-}\rangle\langle\psi^{-}|,E^{i}=\sigma^iE^0\sigma^i$, where the Bell state $|\psi^{-}\rangle=\frac 1{\sqrt 2}(|ge\rangle-|eg\rangle)$. According to \cite{Nielsen00}, the fidelity for a pure input state is $F=\{\mathrm{Tr}[\sqrt{(\rho_{in})^{1/2}\rho_{out}(\rho_{in})^{1/2}}]\}^{2}=\mathrm{Tr}[\rho_{out}\rho_{in}]$. The average fidelity of this teleportation is given by \begin{equation} F_{A}=\frac {\displaystyle \int_{0}^{2\pi}d\phi\!\int_{0}^{\pi} F\,\sin\theta\,d\theta} {4\pi}=\frac 16+\frac {3x-2y}{3Z_q}. \end{equation} The effect of thermal noise on the average fidelity of the standard teleportation is illustrated in Fig. 3(b). It is shown that the average fidelity of quantum teleportation under thermal decoherence remains larger than $2/3$ up to a certain time. This means that quantum teleportation via the channel of the long-distance entangled state outperforms classical communication within that finite time span. In the presence of thermal noise, quantum teleportation using the long-distance thermal entangled state as the channel is also better than that using the thermal entangled state of two directly interacting qubits. \section{Discussion} The long-distance thermal entanglement can be obtained when two atoms weakly interact with an isotropic spin-$1$ chain at finite low temperatures.
For such massively gapped quantum systems, the effective coupling approaches its saturation value exponentially with the length of the spin chain. Under the influence of thermal noise, the entanglement of two distant qubits without mutual interactions decays much more slowly. This demonstrates that the resource of long-distance entanglement can be used for quantum information processing. We also suggest an efficient scheme of standard teleportation via the channel of long-distance entanglement. \section{Acknowledgement} X.H. was supported by the Initial Project of Research in SUST and the National Natural Science Foundation of China No. 10774108.
\section{Introduction} IGR J19294+1816 was discovered by the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) with the IBIS/ISGRI camera at R.A.=292.42 deg and Dec=+18.28 deg ($\pm$3' at 68\%, J2000) on 2009 March 27 \citep{2009ATel.1997....1T}. Follow-up analysis of the Swift archival data by \citet{2009ATel.1998....1R} led them to the conclusion that the source Swift J1929.8+1821, observed on 2007 December 9 and 13, is the same as IGR J19294+1816, with an improved position at J2000, RA= 19h 29m 55.9s \& Dec=+18deg 18' 39" ($\pm$ 3.5" at 90\%). They detected a periodicity of 12.4 seconds from the power density spectrum, which shows a feature at $8.04_{-0.05}^{+0.02}\times10^{-2}$ Hz in the Swift/XRT data. Their analysis of the timing and spectral features suggested the source to be an accreting pulsar. \cite{2009ATel.2002....1S} confirmed IGR J19294+1816 to be an accreting pulsar with the detection of 12.44 second pulsations from the source using the Rossi X-ray Timing Explorer (RXTE)/Proportional Counter Array (PCA) observation of 2009 March 31. \cite{2009ATel.2008....1C} suggested the source to be a Be/X-ray binary system based on its position in the pulse period vs orbital period plot of \citet{1986MNRAS.220.1047C}. \cite{2009A&A...508..889R} identified an infrared counterpart of the source. From studies of infrared magnitudes dereddened with different values of the interstellar absorption, they estimated the source to be at a distance of d$\gtrsim$8 kpc. They suggested that the source could possibly be a supergiant fast X-ray transient instead of a Be/X-ray transient because of short ($\sim$ 2000-3000 s) and intense flares that are more typical of supergiant fast X-ray transients. However, during the 2010 outburst, the source showed a very smooth and gradual change in flux.
Further, the spectral parameters during the two months spanned by the Swift observations, along with its short spin period, seem to confirm that the source is a Be/X-ray transient \citep{2011A&A...531A..65B}. In this paper, we report the timing and spectral properties of the source during the decay phases of the 2009 and 2010 outbursts as observed by the RXTE/PCA. Our spectral analysis reveals the first ever detection of a Cyclotron Resonance Scattering Feature (CRSF) in the source. \section{Data Analysis} RXTE/PCA \citep{2006ApJS..163..401J} data of the source were obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC) data archive (http://heasarc.gsfc.nasa.gov), the details of which are presented in Table \ref{tbl-1}. The observed spectral parameters are presented in Tables \ref{tbl-2} and \ref{tbl-3}. Among the 5 Proportional Counter Units (PCUs) sensitive in the 3-60 keV range, only the PCU 2 data (from all the layers) are reported in this paper, as it was the only PCU consistently covering all the observations. The X-ray spectra were extracted using Standard-2 mode data with 16 second binning, and the lightcurves were extracted from the event mode data. The data were filtered to remove the stretches of observation affected by South Atlantic Anomaly passages and to retain the stretches for which the Earth elevation was $>$ 10$^\circ$ \& the pointing offset was $<$ 0.02$^\circ$. The faint background model was used for the background estimation in the spectral analysis, except for the observation on MJD 55499.46 (Obs.id.: 95438-01-01-00), when the source was bright enough to warrant the bright background model. Background and dead-time corrections were applied to the spectra, while barycentric corrections were performed, using the ftools task `fxbary', for all the timing analyses reported here.
The source lies in the Galactic plane, where flux from the Galactic ridge needs to be included in the spectral modeling \citep{1998ApJ...505..134V}. We have strictly followed the recipe of \citet{2009A&A...508..889R} in adding the Galactic ridge spectrum to the instrumental background spectrum. The data reduction and analysis were carried out using \texttt{HEASOFT}\footnote{http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/}, which consists of (chiefly) \texttt{FTOOLS} for general data extraction and analysis, \texttt{XRONOS} \citep{1992EMIS..59...59S} for the timing analysis and \texttt{XSPEC} \citep{1996ASPC..101...17A} for the spectral analysis. The source was observed for a total of ten PCA pointings, five during the 2009 outburst (MJD 54921.32-54925.83) and five during the 2010 outburst (MJD 55499.46-55507.24). As evident from the flux evolution in Table \ref{tbl-2}, on both occasions the observations are from the decay phase of the outburst. The flux values, measured from the extracted wide-band (3-60 keV) spectra, are comparatively lower during the 2009 outburst (Table \ref{tbl-2}). Details of the timing and spectral analysis are discussed in the following subsections. \subsection{Timing properties} The daily averaged ASM lightcurve of IGR J19294+1816 from MJD 50087 to MJD 55906 in the 1.5-12 keV energy range is shown in Figure \ref{fig_1}(a). PCA observations during the 2009 and 2010 outbursts are indicated by arrows. The lightcurve shows the presence of small flares every $\sim$ 350 days. Figure \ref{fig_1}(b) presents the 2009 outburst of the source starting from MJD 54796 to MJD 55100, covering 298 days, which is the e-folding time of the outburst. At the onset of the outburst on MJD 54796 the ASM count rate was 2.12$\pm$0.76 counts/s. PCA observed the source in the decay phase of the 2009 outburst on five occasions from MJD 54921.32 to MJD 54925.82, during which the average PCA flux varied from 6.19$\pm$0.15 counts/s to 3.71$\pm$0.13 counts/s.
Similarly, Figure \ref{fig_1}(c) shows the ASM lightcurve of the 2010 outburst, which started from MJD 55490 (ASM flux = 0.61$\pm$0.34 counts/s). This outburst decayed comparatively faster, in $\sim$ 29 days, and the ASM count rate declined to 0.14$\pm$0.90 counts/s on MJD 55522. The PCA observations were made on 5 occasions, starting from MJD 55499.46 to MJD 55507.24, during which the average PCA flux varied from 21.22$\pm$0.87 counts/s to 6.80$\pm$0.13 counts/s. Figure \ref{fig_1}(d) shows the Swift/BAT hard X-ray transient monitor daily averaged lightcurve \citep{2013ApJS..209...14K} of IGR J19294+1816 in the 15-50 keV energy band during MJD 54831-55559. \begin{figure} \figurenum{1} \includegraphics[scale=0.5,angle=-90]{fig1a.eps} \includegraphics[scale=0.5,angle=-90]{fig1b.eps} \caption{Top panel (a) shows the ASM lightcurve of the source IGR J19294+1816 in the 1.5-12 keV energy range. Panel (b) shows the faint, long 2009 outburst profile of the ASM lightcurve and panel (c) shows the 2010 outburst of the source. Bottom panel (d) shows the 2009 and 2010 lightcurve of the source observed in 15-50 keV by the Swift/BAT hard X-ray transient monitor from MJD 54831-55559. The PCA pointed mode observations are indicated with arrows in all the panels. \label{fig_1}} \end{figure} Lightcurves with 0.01 sec binning were used to generate the power density spectra (PDS) (using the ftools task `powspec') from all the 10 PCA observations. The lightcurves were divided into stretches of 16384 bins per interval. The PDS from all the segments were averaged to produce the final PDS for each observation. The PDS of the source exhibits a continuum that is best fit by a power law in the frequency range 5 mHz to 50 Hz, and in addition there is a strong peak at $\sim$ $0.0803\pm0.0021$ Hz attributed to the pulsation of the source. The error values are obtained following the standard procedure in XRONOS. A Lorentzian model component best fits the observed pulsation peak and its harmonic, whenever present.
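The segment-averaged PDS construction described above (0.01 s bins, 16384-bin stretches, periodograms averaged per observation) can be sketched as follows. This is our own illustration of the procedure, not the actual `powspec' reduction, and the usual Leahy normalization is omitted for simplicity:

```python
import numpy as np

def averaged_pds(counts, dt=0.01, seg=16384):
    """Average the periodograms of consecutive `seg`-bin stretches of a
    lightcurve sampled every `dt` seconds (schematic 'powspec'-like PDS)."""
    nseg = len(counts) // seg
    freqs = np.fft.rfftfreq(seg, d=dt)
    power = np.zeros(len(freqs))
    for k in range(nseg):
        x = counts[k * seg:(k + 1) * seg]
        x = x - x.mean()                      # remove the DC level per segment
        power += np.abs(np.fft.rfft(x)) ** 2
    return freqs, power / max(nseg, 1)
```

For a lightcurve pulsed at 12.44 s, the averaged PDS peaks at the Fourier frequency nearest to $1/12.44 \approx 0.0804$ Hz, as seen in Figure \ref{fig_2}.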
During the 2009 outburst the pulsations were detected clearly on three occasions and marginally on one occasion, on MJD 54921.70, while no pulsation peak was detected at the lowest flux level on MJD 54925.83. During the 2010 outburst prominent pulsation peaks were detected in the PDS of all 5 observations. Details of the parameters pertaining to the pulsation are presented in Table \ref{tbl-1}. The PDS obtained from the lightcurves of the 5 observations of the 2009 outburst are shown in the left column of panels of Figure \ref{fig_2} and those from the 2010 outburst are shown in the right column of panels of the same figure. The PDS corresponding to the highest flux value is shown in the top panel, with the PDS corresponding to progressively lower flux values (Table \ref{tbl-2}) shown in the lower panels. On occasions where the total flux (3-60 keV) is higher than 2.12 $\times$ 10$^{-10}$ ergs cm$^{-2}$ s$^{-1}$, a second peak at $\sim$ $0.160\pm0.004$ Hz, representing the first harmonic of the pulsation peak, is observed. \begin{figure} \figurenum{2} \includegraphics[scale=0.4,angle=-90]{fig2.eps} \caption{The power density spectra (PDS) of the source IGR J19294+1816 from the 2009 (left panel a) and 2010 (right panel b) RXTE PCA observations. The PDS are arranged from highest to lowest flux in both outbursts. A pulsation peak at $\sim$ $0.0803{\pm0.0021}$ Hz is observed in the PDS except for the lowest flux on MJD 54925.83 during the 2009 outburst. The harmonic of the pulse peak at $\sim$ $0.160\pm0.004$ Hz is detected, along with the pulse period, during the 2010 outburst on MJD 55499.46, 55502.53 \& 55501.16. \label{fig_2}} \end{figure} A better estimate of the pulsation period is obtained by the $\chi^{2}$ maximization method after folding the lightcurves around an approximate period $\sim$ 12.44 s (as obtained from the PDS) using the ftools task `efsearch'.
The lightcurves, binned at 5 ms, were folded at 25000 trial periods around 12.44 sec with a resolution of 1 ms and 32 phase bins per period. The resulting peak was fit with a Gaussian model whose width provided the error on the observed periodicity. The results of this search for the pulsation periodicity from all the observations of the two outbursts in 2009 and 2010 are presented in Figure \ref{fig_3}. A prominent peak corresponding to a pulse period of $\sim$ 12.44 sec is observed in the 2-60 keV energy band of the source except for MJD 54925.83. The best estimated pulse periods of all observations are tabulated in Table \ref{tbl-1}. Pulse profiles for each PCA observation were generated by folding the lightcurve, using the ftool 'efold', over the exact pulsation period obtained above. The pulse profiles were generated with 8 phase bins per period. Energy dependent pulse profiles were also generated in the ranges of 2-7 keV, 7-15 keV, 15-25 keV, 25-60 keV and overall 2-60 keV, using the best estimated periods corresponding to the different observations. Energy dependent pulse profiles of all ten PCA observations are shown in Figure \ref{fig_4}. We observed single peaked pulse profiles in 2-7 keV, 7-15 keV, 15-25 keV and 2-60 keV in 9 of the observations (Figure \ref{fig_4}). Although the pulsation was not detected in one case (MJD 54925.83), the pulse profile was obtained by folding the lightcurve at 12.44 sec. This pulse profile was then used to obtain the pulse fraction of the emitted radiation using the traditional definition of the pulse fraction, \(\tfrac{C_{max}-C_{min}}{C_{max}+C_{min}}\) \citep{1998ApJ...508..328H}. The evolution of the pulse fraction with the flux of the source in different energy bands, for both outbursts, is shown in Figure \ref{fig_5}.
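The epoch-folding period search and the pulse-fraction estimate described above can be illustrated with a small sketch (our own mock-up of the `efsearch'/`efold' steps; the grid of trial periods and the merit function are simplified relative to the actual ftools runs):

```python
import numpy as np

def fold(times, counts, period, nbins=32):
    """Mean count rate in each of `nbins` phase bins (as with 'efold')."""
    idx = ((times / period) % 1.0 * nbins).astype(int)
    return np.array([counts[idx == k].mean() for k in range(nbins)])

def chi2_search(times, counts, trial_periods, nbins=32):
    """Epoch-folding search: the chi^2 of the folded profile against a
    constant level is maximized at the true pulse period (as with 'efsearch')."""
    var = counts.var()
    best_p, best_stat = None, -np.inf
    for p in trial_periods:
        prof = fold(times, counts, p, nbins)
        stat = np.sum((prof - counts.mean()) ** 2) / var
        if stat > best_stat:
            best_p, best_stat = p, stat
    return best_p

def pulse_fraction(profile):
    """Traditional pulse fraction (C_max - C_min) / (C_max + C_min)."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())
```

For a mock sinusoidal pulsar at 12.44 s with a 20% modulation, the search recovers the period and the folded profile returns the expected pulse fraction.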
The pulse fraction shows a logarithmically increasing trend with increasing flux in the energy ranges of 2-7 keV, 7-15 keV, 15-25 keV and 2-60 keV, which is a commonly reported pattern for accretion powered pulsars \citep{2004MNRAS.349..173I, 2009MNRAS.395.1662I}. In the 25-60 keV energy band the pulse fraction is very low for all the observations (Figures \ref{fig_4}, \ref{fig_5}). This results in the overall 2-60 keV pulse profile having a comparatively lower pulse fraction (Figure \ref{fig_5}) in all nine observations as compared to the soft X-ray emission below 25 keV. Evidently, from Figures \ref{fig_2}, \ref{fig_3}, \ref{fig_4} and \ref{fig_5}, the pulsation is not detected for MJD 54925.83 in the total 2-60 keV energy band, when the flux was at its lowest. Nevertheless, during this observation, the emission in the 2-7 keV and 7-15 keV ranges does exhibit a weak single peaked pulse profile when the lightcurve is folded with the average periodicity of 12.44 sec. \begin{table} \scriptsize \begin{center} \caption{Best period search results of IGR J19294+1816.\label{tbl-1}} \begin{tabular}{cccclccc} \tableline Observation ID & MJD & Date & Exposure &\multicolumn{2}{r}{Power Density Spectra} & Efsearch &Pulse \\ \cline{5-6} & & &&Pulsation&Harmonic &Spin Period &Fraction \\ & & & (sec)&Peak (Hz)&Peak (Hz)& (Sec)&(\%) \\ \tableline 94103-01-01-00 &54921.32 &2009-03-31~07:42:19.6 &2533&$0.081^{+0.002}_{-0.001}$&--- &12.44$\pm$0.02 &7.38$\pm$0.47\\ 94103-01-01-01 &54921.70 &2009-03-31~16:53:26.6 &3354&$0.079^{+0.058}_{-0.038}$&--- &12.45$\pm$0.02 &3.48$\pm$0.64\\ 94103-01-01-03 &54922.82 &2009-04-01~19:37:24.1 &3371&$0.078^{+0.005}_{-0.004}$&--- &12.44$\pm$0.02 &6.91$\pm$0.43\\ 94103-01-01-02 &54923.73 &2009-04-02~17:36:24.8 &1448&$0.078^{+0.001}_{-0.001}$&--- &12.44$\pm$0.03 &4.54$\pm$0.71\\ 94103-01-02-00 &54925.83 &2009-04-04~19:52:11.4 &3335&---&--- &12.44 &1.01$\pm$0.62\\ 95438-01-01-00 &55499.46 &2010-10-30~11:03:58.2
&9781&$0.0805^{+0.0004}_{-0.0001}$&$0.160^{+0.002}_{-0.001}$ &12.45$\pm$0.01 &21.12$\pm$0.21 \\ 95438-01-02-00 &55501.16 &2010-11-01~03:48:31.7 &2935&$0.079^{+0.001}_{-0.001}$&$0.158^{+0.002}_{-0.003}$ &12.44$\pm$0.02 &15.12$\pm$0.60\\ 95438-01-02-01 &55502.53 &2010-11-02~12:43:54.3 &2831&$0.079^{+0.001}_{-0.002}$&$0.161^{+0.001}_{-0.001}$ &12.45$\pm$0.07 &18.77$\pm$0.42 \\ 95438-01-03-00 &55505.16 &2010-11-05~03:54:04.3 &2328&$0.080^{+0.002}_{-0.001}$&--- &12.45$\pm$0.02 &9.37$\pm$0.72\\ 95438-01-03-01 &55507.24 &2010-11-07~05:44:38.1 &3583&$0.080^{+0.001}_{-0.001}$&--- &12.44$\pm$0.01 &7.87$\pm$0.42\\ \tableline \end{tabular} \end{center} \end{table} \begin{figure} \figurenum{3} \epsscale{0.85} \includegraphics[scale=0.5,angle=-90]{fig3.eps} \caption{The best pulse period of the source IGR J19294+1816 estimated using ftool "efsearch" on 10 PCA observations during 2009 (left panel) \& 2010 (right panel) of the source IGR J19294+1816. Panels are arranged with highest to lowest fluxes in both the outbursts.\label{fig_3}} \end{figure} \begin{figure} \figurenum{4} \epsscale{0.80} \includegraphics[scale=0.28,angle=-90]{fig4.eps} \caption{The evolution of pulse profile of all the 10 observations RXTE/PCA pointings during the 2009 and 2010 outburst of the source IGR J19294+1816 in panels (a) 2-7 keV, (b) 7-15 keV, (c) 15-25 keV, (d) 25-60 keV and (e) 2-60 keV energy bands with clear detection of pulsation folded with their estimated pulse periods and with 8 phasebins/period. \label{fig_4}} \end{figure} \begin{figure} \figurenum{5} \includegraphics[scale=0.5,angle=-90]{fig5.eps} \caption{The variation of pulse fraction in different energy bands (a) 2-7 keV, (b) 7-15 keV, (c) 15-25 keV, (d) 25-60 keV and (e) 2-60 keV with flux of all the observations (total ten PCA pointings) during 2009 and 2010 outburst of the source IGR J19294+1816. Solid line shows the logarithmic fit to the data. 
\label{fig_5}} \end{figure} \begin{table} \begin{center} \caption{Observed spectral parameters of IGR J19294+1816.\label{tbl-2}} \begin{tabular}{llcccc} \tableline & phabs (nH)& Spectral & Flux (3-60 keV)&Powerlaw Flux& Iron line Flux \\ MJD & $10^{22}$ atoms cm$^{-2}$&Index ($\Gamma$)&($10^{-10}$ erg cm$^{-2}$ s$^{-1}$)&($10^{-10}$ erg cm$^{-2}$ s$^{-1}$)&($10^{-12}$ erg cm$^{-2}$ s$^{-1}$) \\ \hline 54921.32 & 0.32$^{+1.83}_{-0.32}$ & $1.51^{+0.14}_{-0.10}$ & 1.48$\pm$0.04& 1.47$\pm$0.04&1.48$\pm$0.57 \\ 54921.70 & 0.28$^{+1.23}_{-0.28}$ & $1.95^{+0.57}_{-1.66}$ & 0.56$\pm$0.02& 0.60$\pm$0.02&1.05$\pm$0.50 \\ 54922.82 & 0.97$^{+1.69}_{-0.97}$ & $1.60^{+0.15}_{-0.13}$ & 1.03$\pm$0.02& 1.37$\pm$0.04&1.63$\pm$0.52 \\ 54923.73 & 0.13$^{+3.19}_{-0.13}$ & $1.78^{+0.27}_{-0.15}$ & 0.72$\pm$0.04& 0.76$\pm$0.04&1.23$\pm$0.71 \\ 54925.83 & 0.80$^{+0.79}_{-0.80}$ & $2.07^{+0.14}_{-0.12}$ & 0.51$\pm$0.02& 0.51$\pm$0.02&1.03$\pm$0.48 \\ 55499.46 & 2.9$^{+0.4}_{-0.6}$ & $1.23^{+0.03}_{-0.05}$ & 5.39$\pm$0.03& 8.05$\pm$0.05&2.17$\pm$0.44 \\ 55501.16 & 4.1$^{+1.8}_{-1.8}$ & $1.22^{+0.11}_{-0.12}$ & 2.12$\pm$0.03& 4.05$\pm$0.06&1.51$\pm$0.55 \\ 55502.53 & 3.2$^{+1.5}_{-1.3}$ & $1.19^{+0.13}_{-0.10}$ & 2.65$\pm$0.03& 5.63$\pm$0.08&2.13$\pm$0.74 \\ 55505.16 & 1.7$^{+2.0}_{-1.6}$ & $1.35^{+0.16}_{-0.21}$ & 1.44$\pm$0.04& 2.21$\pm$0.06&1.51$\pm$0.64 \\ 55507.24 & 2.8$^{+1.1}_{-1.4}$ & $1.76^{+0.07}_{-0.10}$ & 1.14$\pm$0.03& 1.39$\pm$0.03&0.77$\pm$0.52 \\ \tableline \end{tabular} \end{center} \end{table} \begin{table} \small \begin{center} \caption{Observed best-fit cyclotron line and iron line parameters for all 10 observations of IGR J19294+1816.\label{tbl-3}} \begin{tabular}{ccccccccc} \tableline MJD& \multicolumn{3}{c}{Cyclabs}& \multicolumn{2}{c}{Iron line}& $\chi^{2}$ (dof) & $\chi^{2}$ (dof)& $\chi^{2}$ (dof) \\ & Energy &Depth&Width&Energy&norm& & without & without \\ \cline{2-4} \cline{5-6} & E$_{cycl}$ (keV)&D$_f$&W$_f$ (keV)&E$_{Fe}$ (keV) &($\times$10$^{-4}$) &&cyclabs&Fe line \\
\hline 54921.32 & 35.5 & $0.01^{+3.11}_{-0.01}$ &5.45 &$6.40^{+0.27}_{-0.25}$&$1.4^{+0.7}_{-0.7}$ &51.7 (84)&51.7 (85)&64.7 (86) \\ 54921.70 & 35.5 & $0.1^{+5.0}_{-0.1}$ &5.45 &$6.55^{+0.30}_{-0.30}$&$1.0^{+0.5}_{-0.6}$ &68.6 (82)&68.4 (85) &79.6 (84) \\ 54922.82 & 35.5 & $2^{+5}_{-2}$ &5.45 &$6.21^{+0.28}_{-0.24}$&$1.7^{+0.6}_{-0.7}$ &68.5 (84) &71.3 (85) &86.4 (86) \\ 54923.73 & 35.5 & $0.5^{+9.1}_{-0.5}$ &5.45 &$6.50^{+0.27}_{-0.28}$&$1.2^{+0.8}_{-0.8}$ &59.7 (85) &59.7 (85) &65.8 (86) \\ 54925.83 & 35.5 & --- &5.45 &$6.65^{+0.12}_{-0.17}$&$0.96^{+0.29}_{-0.30}$&62.2 (82) &62.3 (85) &71.8 (86) \\ \textbf{55499.46} & 35.5$^{+2.1}_{-1.7}$ & $2.10^{+2.0}_{-0.8}$ &$5.45^{+3.10}_{-1.98}$&$6.40^{+0.10}_{-0.15}$&$2.1^{+0.6}_{-0.4}$ &77.9 (82) &\textbf{216.0 (85)} &122.6 (84) \\ 55501.16 & 38$^{+22}_{-4}$ & $3.5^{+2.1}_{-1.6}$ &5.45 &$6.38^{+0.21}_{-0.24}$&$1.5^{+0.7}_{-0.8}$ &82.6 (82) &114.8 (84) &94.2 (84) \\ 55502.53 & 41$^{+20}_{-9}$ & $4.2^{+2.6}_{-2.0}$ &5.45 &$6.31^{+0.24}_{-0.24}$&$2.1^{+0.9}_{-1}$ &68.1 (84)&102.7 (86) &81.4 (86) \\ 55505.16 & 35$^{+14}_{-6}$ & $2.9^{+5.5}_{-2.6}$ &5.45 &$6.15^{+0.50}_{-0.41}$&$1.5^{+0.8}_{-0.8}$ &58.4 (84)&62.3 (86) &67.7 (86) \\ 55507.24 & 35.5 & $1.1^{+5.0}_{-1.1}$ &5.45 &$6.62^{+0.23}_{-0.29}$&$0.73^{+0.35}_{-0.38}$ &59.8 (83)&60.5 (84) &63.7 (85) \\ \tableline \end{tabular} \flushleft{All values quoted without errors are frozen. All uncertainties are quoted at the 90\% confidence level.} \end{center} \end{table} \subsection{Spectral properties} The 3-60 keV wide-band continuum X-ray spectrum of the source is fit by a simple `power law' model (Table \ref{tbl-2}), with an absorption component for the Galactic interstellar medium parameterized by the spectral model component `phabs' (available in the XSPEC package), which expresses the absorption as an effective equivalent hydrogen column density (in units of 10$^{22}$ atoms/cm$^{2}$) using the photoelectric absorption cross sections of \citet{1992ApJ...400..699B}.
In addition, we used a Gaussian line to account for the statistically significant presence of the 6.4 keV iron fluorescence line produced in the accretion process. The width of the iron line is fixed at 0.01 keV. On MJD 55499.46, corresponding to the maximum value of the measured flux, the fit to the simple model yielded a high chi-square, and the residuals indicated a possible absorption feature at $\sim$35 keV (MJD 55499.46 is indicated in bold font in Table \ref{tbl-3}). The spectrum of this particular observation is shown in Figure \ref{fig_6}. The top panel shows the ratio of the spectrum to the best-fit powerlaw model; the residual absorption feature is best modeled by the `cyclabs' model component, which signifies the physical presence of a cyclotron resonance scattering feature (CRSF). The middle panel of Figure \ref{fig_6} depicts the unfolded spectrum from \texttt{XSPEC}. The bottom panel shows the model that best fits the spectrum of this observation. For consistency we have used the same model, phabs*(powerlaw + gaussian)*cyclabs in XSPEC terminology, to fit all ten observations during the 2009 and 2010 outbursts, i.e., including both the iron line and the cyclotron line. The folded energy spectra of IGR J19294+1816 obtained with the five PCA/RXTE observations each during the 2009 and 2010 outbursts, along with the best-fit model indicated by a solid line, are shown in Figures \ref{fig_7} and \ref{fig_8} respectively. The bottom panel of each spectrum in Figures \ref{fig_7} and \ref{fig_8} gives the residuals to the best-fit model. To estimate the flux and its error in the 3-60 keV range, an additional model component `cflux' (a convolution model in XSPEC) is used to obtain the overall flux as well as the flux pertaining to the individual model components, viz., the unabsorbed powerlaw, the Gaussian line, etc. The best-fit values of the model parameters and the fluxes obtained from the spectra are tabulated in Tables \ref{tbl-2} and \ref{tbl-3}.
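For readers who wish to visualize the shape of the fitted continuum, the additive and multiplicative pieces of the model can be sketched in a few lines of Python. This is a simplified stand-in, not the XSPEC implementation: interstellar absorption (phabs) is omitted, the power-law normalization $K$ is arbitrary, and only the fundamental term of the cyclabs pseudo-Lorentzian profile is kept, with the MJD 55499.46 best-fit values as defaults.

```python
import math

def powerlaw(E, K, Gamma):
    # Photon power law: K * E^-Gamma (E in keV; K is an arbitrary normalization)
    return K * E ** (-Gamma)

def iron_line(E, norm, E_Fe=6.4, sigma=0.01):
    # Narrow Gaussian emission line; width frozen at 0.01 keV as in the fits
    return norm * math.exp(-0.5 * ((E - E_Fe) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def cyclabs_fundamental(E, depth, E_cyc=35.5, width=5.45):
    # Fundamental cyclotron absorption term (pseudo-Lorentzian profile);
    # at E = E_cyc the transmission reduces to exp(-depth)
    return math.exp(-depth * (width * E / E_cyc) ** 2 / ((E - E_cyc) ** 2 + width ** 2))

def model(E, K=1.0, Gamma=1.23, norm=2.1e-4, depth=2.10):
    # (powerlaw + gaussian) * cyclabs, without the phabs factor
    return (powerlaw(E, K, Gamma) + iron_line(E, norm)) * cyclabs_fundamental(E, depth)
```

At the line centre the cyclabs factor suppresses the continuum by $e^{-D_f} \approx 0.12$ for $D_f = 2.10$, which illustrates why the feature only becomes measurable when the continuum is bright.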
In Table \ref{tbl-3} the values of $\chi^{2}$ and degrees of freedom obtained without the inclusion of the `cyclabs' model component used for modeling the CRSF are also reported. We observed that the inclusion of the cyclotron line model in the spectra of the three bright observations (MJD 55499.46, 55501.16 and 55502.53) during the 2010 outburst yielded a significant improvement in $\chi^{2}$ per degree of freedom. Similarly, the values of $\chi^{2}$ and degrees of freedom without the inclusion of the Gaussian model are also reported in Table \ref{tbl-3}. This provides an estimate of the significance of the respective model components for each spectrum. \begin{figure} \figurenum{6} \includegraphics[scale=0.65,angle=-90]{fig6.eps} \caption{The spectrum of the observation on MJD 55499.46, Obs. Id.: 95438-01-01-00. The absorption feature at 35.5$^{+2.1}_{-1.7}$ keV and the Fe line at 6.40$^{+0.10}_{-0.15}$ keV are the two prominent features of the observation on this particular day.\label{fig_6}} \end{figure} \begin{figure} \figurenum{7} \includegraphics[scale=0.3,angle=-90]{fig7a.eps} \includegraphics[scale=0.3,angle=-90]{fig7b.eps} \includegraphics[scale=0.3,angle=-90]{fig7c.eps} \includegraphics[scale=0.3,angle=-90]{fig7d.eps} \includegraphics[scale=0.3,angle=-90]{fig7e.eps} \caption{Energy spectra of IGR J19294+1816 obtained with the 5 PCA/RXTE observations (MJD 54921.32, 54921.70, 54922.82, 54923.73 \& 54925.83) during the 2009 outburst of the source, along with the best-fit model phabs*(powerlaw + gaussian)*cyclabs indicated by the solid line.
The bottom panel shows the residuals to the best-fit model for each observation.\label{fig_7}} \end{figure} \begin{figure} \figurenum{8} \includegraphics[scale=0.3,angle=-90]{fig8a.eps} \includegraphics[scale=0.3,angle=-90]{fig8b.eps} \includegraphics[scale=0.3,angle=-90]{fig8c.eps} \includegraphics[scale=0.3,angle=-90]{fig8d.eps} \includegraphics[scale=0.3,angle=-90]{fig8e.eps} \caption{Energy spectra of IGR J19294+1816 obtained with the 5 PCA/RXTE observations (MJD 55499.46, 55501.16, 55502.53, 55505.16 \& 55507.24) during the 2010 outburst of the source, along with the best-fit model phabs*(powerlaw + gaussian)*cyclabs indicated by the solid line. The bottom panel shows the residuals to the best-fit model for each observation.\label{fig_8}} \end{figure} \begin{figure} \figurenum{9} \includegraphics[scale=0.5,angle=-90]{fig9.eps} \caption{The variation of the spectral parameters, (a) hydrogen column density (in units of $10^{22}$ atoms cm$^{-2}$), (b) photon index ($\Gamma$), (c) powerlaw flux (in units of erg cm$^{-2}$ s$^{-1}$), (d) iron line normalization, (e) iron line flux (in units of erg cm$^{-2}$ s$^{-1}$), and (f) cyclotron line depth, with the 3-60 keV absorbed total flux from the 10 observations. Solid lines indicate linear fits in panels (a), (c), (d) and (e). The solid line in panel (b) shows the logarithmic trend used to fit the spectral correlation. \label{fig_9}} \end{figure} The variation of the different spectral parameters with the total flux in the 3-60 keV energy range is shown in Figure \ref{fig_9}. We computed Pearson's correlation coefficient to quantify the strength of the relation between each spectral parameter and the total flux. There seems to be an increase in the absorption in the source, as parameterized by the nH value (Figure \ref{fig_9}a), with increasing flux, with a correlation coefficient of 0.59 (the obvious assumption being that the properties of the ISM do not change in tandem with the source).
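The correlation coefficients quoted here and below can be reproduced directly from the values in Tables \ref{tbl-2} and \ref{tbl-3}; a minimal sketch of the standard Pearson formula (generic code, not tied to any particular analysis package):

```python
import math

def pearson_r(x, y):
    # Pearson's linear correlation coefficient of two equal-length samples:
    # covariance divided by the product of the standard deviations
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

Feeding the ten 3-60 keV total-flux values against, e.g., the nH column yields the coefficients discussed in the text.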
Figure \ref{fig_9}(b) shows spectral hardening of the photon index ($\Gamma$) with increasing 3-60 keV flux, with a relatively strong anti-correlation (Pearson's coefficient of -0.72). A logarithmic function is used to fit the trend of the spectral index variation with flux. This anti-correlation suggests that the source is in the horizontal branch of the hardness-intensity diagram \citep{reig2013}, further strengthening the hypothesis that the binary system is of the Be/X-ray binary type. The flux in the `powerlaw' spectral component (Figure \ref{fig_9}c), which is essentially the unabsorbed flux for this source, is consistently higher by about 0.51$\pm$0.02$\times$$10^{-10}$ erg cm$^{-2}$ s$^{-1}$ in nearly all observations and increases linearly with the total flux of the source, with a strong positive correlation of 0.96. Figure \ref{fig_9}(d) shows a linear increase of the iron line normalization with the total flux; the correlation coefficient is 0.76. The iron line flux also correlates positively with the total flux, with a coefficient of 0.82 (Figure \ref{fig_9}e). The energy of the fluorescent iron K$\alpha$ line, produced by reprocessing of the hard X-ray continuum in relatively cool matter, does not vary with flux, as observed from Table \ref{tbl-3}. The CRSF parameters were not significant in all the observations, and hence a correlation test was not possible for this most important spectral feature. \section{Results \& Conclusion} In this work we have studied the timing and spectral properties of the Be/X-ray binary IGR J19294+1816 using RXTE/PCA data. RXTE/PCA observed the evolution of the source in the decaying phase of both the 2009 and 2010 outbursts. The overall spectral and timing features are very similar during these two outbursts, suggesting that the physical processes in the accretion region during the two outbursts, separated by $\sim 600$ days, are similar.
We observed that the significance of the detection of the pulsation peak decreases with decreasing flux. Furthermore, the pulse fraction shows a logarithmically increasing trend with flux in all the energy ranges (as seen in Figure \ref{fig_5}). The spectral study reveals the detection of a CRSF, for the first time in the source IGR J19294+1816, at 35.5$^{+2.1}_{-1.7}$ keV. In addition, an iron line at 6.40$^{+0.10}_{-0.15}$ keV is also found to be present in the source. Cyclotron absorption features originate in the X-ray spectrum from the resonant scattering of photons off electrons in quantized Landau levels in the presence of a strong magnetic field. The presence of a cyclotron absorption line enables a direct measurement of the magnetic field of the pulsar through the following relation \citep[see page 471 of][]{Accretion_1}: \begin{equation} E_c = 11.6\,B_{12}\,(1+z)^{-1}~\mathrm{keV}, \label{eq1} \end{equation} where $E_c$ is the energy of the cyclotron absorption line, $z$ is the gravitational redshift at the neutron star surface and $B_{12}$ is the magnetic field in units of $10^{12}$ Gauss. Typically, the value of $z$ is 0.35 for neutron stars with masses in the range $1.4-2 M_{\odot}$ \citep{2002Natur.420...51C, 2014RMxAA..50..103Z}. Hence, using the value $E_c = 35.5$ keV (corresponding to the most significant detection of the CRSF), the magnetic field obtained is $B = 4.13\times10^{12}$ Gauss. The detection of the CRSF at lower flux values is marginal at best; as a result, there is no clear correlation between the energy of the cyclotron line and the X-ray luminosity (Table \ref{tbl-3}), similar to sources like 1A 0535+262 \citep{2007ESASP.622..471C}.
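As an arithmetic check, Eq.~(\ref{eq1}) can be inverted for the field strength. The snippet below is a trivial illustration (the function name is ours, not from any pipeline), using the adopted redshift $z = 0.35$:

```python
def b_field_12(E_c_keV, z=0.35):
    # Invert E_c = 11.6 * B12 / (1 + z)  ->  B12 in units of 10^12 Gauss
    return E_c_keV * (1.0 + z) / 11.6

# CRSF at 35.5 keV gives B ~ 4.13e12 Gauss
print(round(b_field_12(35.5), 2))  # prints 4.13
```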
Although there is a mild positive correlation (coefficient of 0.41) between the cyclotron line depth and the total flux (note that in most cases the energy of the CRSF is frozen; see Table \ref{tbl-3}), it is of little consequence, as the errors on the best-fit values show that a statistically significant detection of the CRSF occurs only at the highest flux value. Since the powerlaw hardens as the flux increases, one factor favoring detection of the CRSF at high flux is that the comparatively harder spectrum provides a better continuum baseline above the background noise, enabling a statistically significant detection of the CRSF. \section{Acknowledgment} This research has made use of data obtained through the HEASARC Online Service, provided by the NASA/GSFC, in support of NASA High Energy Astrophysics Programs. JR and PCA acknowledge the fellowship and the funding provided by the National Academy of Sciences, India (NASI). MC and JR acknowledge many discussions with, and suggestions from, Professor A. R. Rao, TIFR, Mumbai, India. We also thank the referee for the detailed and valuable suggestions which considerably improved the manuscript. \bibliographystyle{apj}
\section{\label{sec:level1}First-level heading:\protect\\ The line break was forced \lowercase{via} \textbackslash\textbackslash} This sample document demonstrates proper use of REV\TeX~4.1 (and \LaTeXe) in manuscripts prepared for submission to AIP journals. Further information can be found in the documentation included in the distribution or available at \url{http://authors.aip.org} and in the documentation for REV\TeX~4.1 itself. When commands are referred to in this example file, they are always shown with their required arguments, using normal \TeX{} format. In this format, \verb+#1+, \verb+#2+, etc. stand for required author-supplied arguments to commands. For example, in \verb+\section{#1}+ the \verb+#1+ stands for the title text of the author's section heading, and in \verb+\title{#1}+ the \verb+#1+ stands for the title text of the paper. Line breaks in section headings at all levels can be introduced using \textbackslash\textbackslash. A blank input line tells \TeX\ that the paragraph has ended. \subsection{\label{sec:level2}Second-level heading: Formatting} This file may be formatted in both the \texttt{preprint} (the default) and \texttt{reprint} styles; the latter format may be used to mimic final journal output. Either format may be used for submission purposes; however, for peer review and production, AIP will format the article using the \texttt{preprint} class option. Hence, it is essential that authors check that their manuscripts format acceptably under \texttt{preprint}. Manuscripts submitted to AIP that do not format correctly under the \texttt{preprint} option may be delayed in both the editorial and production processes. The \texttt{widetext} environment will make the text the width of the full page, as on page~\pageref{eq:wideeq}. (Note the use of the \verb+\pageref{#1}+ command to get the page number right automatically.) The width-changing commands only take effect in \texttt{twocolumn} formatting.
It has no effect if \texttt{preprint} formatting is chosen instead. \subsubsection{\label{sec:level3}Third-level heading: Citations and Footnotes} Citations in text refer to entries in the Bibliography; they use the commands \verb+\cite{#1}+ or \verb+\onlinecite{#1}+. Because REV\TeX\ uses the \verb+natbib+ package of Patrick Daly, its entire repertoire of commands is available in your document; see the \verb+natbib+ documentation for further details. The argument of \verb+\cite+ is a comma-separated list of \emph{keys}; a key may consist of letters and numerals. By default, citations are numerical; \cite{feyn54} author-year citations are an option. To give a textual citation, use \verb+\onlinecite{#1}+: (Refs.~\onlinecite{witten2001,epr,Bire82}). REV\TeX\ ``collapses'' lists of consecutive numerical citations when appropriate. REV\TeX\ provides the ability to properly punctuate textual citations in author-year style; this facility works correctly with numerical citations only with \texttt{natbib}'s compress option turned off. To illustrate, we cite several together \cite{feyn54,witten2001,epr,Berman1983}, and once again (Refs.~\onlinecite{epr,feyn54,Bire82,Berman1983}). Note that, when numerical citations are used, the references are sorted into the same order in which they appear in the bibliography. A reference within the bibliography is specified with a \verb+\bibitem{#1}+ command, where the argument is the citation key mentioned above. \verb+\bibitem{#1}+ commands may be crafted by hand or, preferably, generated by using Bib\TeX. The AIP styles for REV\TeX~4 include Bib\TeX\ style files \verb+aipnum.bst+ and \verb+aipauth.bst+, appropriate for numbered and author-year bibliographies, respectively. REV\TeX~4 will automatically choose the style appropriate for the document's selected class options: the default is numerical, and you obtain the author-year style by specifying a class option of \verb+author-year+.
This sample file demonstrates a simple use of Bib\TeX\ via a \verb+\bibliography+ command referencing the \verb+aipsamp.bib+ file. Running Bib\TeX\ (in this case \texttt{bibtex aipsamp}) after the first pass of \LaTeX\ produces the file \verb+aipsamp.bbl+ which contains the automatically formatted \verb+\bibitem+ commands (including extra markup information via \verb+\bibinfo+ commands). If not using Bib\TeX, the \verb+thebibliography+ environment should be used instead. \paragraph{Fourth-level heading is run in.}% Footnotes are produced using the \verb+\footnote{#1}+ command. Numerical style citations put footnotes into the bibliography\footnote{Automatically placing footnotes into the bibliography requires using BibTeX to compile the bibliography.}. Author-year and numerical author-year citation styles (each for its own reason) cannot use this method. Note: due to the method used to place footnotes in the bibliography, \emph{you must re-run BibTeX every time you change any of your document's footnotes}. \section{Math and Equations} Inline math may be typeset using the \verb+$+ delimiters. Bold math symbols may be achieved using the \verb+bm+ package and the \verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and Blackboard (or open face or double struck) characters should be typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands respectively. Both are supplied by the \texttt{amssymb} package. For example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and \verb+$\mathfrak{G}$+ gives $\mathfrak{G}$. In \LaTeX\ there are many different ways to display equations, and a few preferred ways are noted below. Displayed math will center by default. Use the class option \verb+fleqn+ to flush equations left.
Below we have numbered single-line equations, the most common kind: \begin{eqnarray} \chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2} \left( \begin{array}{c} |{\bf p}|+p_z\\ p_x+ip_y \end{array}\right)\;, \\ \left\{% \openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}% \right\}% \label{eq:one}. \end{eqnarray} Note the open one in Eq.~(\ref{eq:one}). Not all numbered equations will fit within a narrow column this way. The equation number will move down automatically if it cannot fit on the same line with a one-line equation: \begin{equation} \left\{ ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2}% \right\}. \end{equation} When the \verb+\label{#1}+ command is used [cf. input for Eq.~(\ref{eq:one})], the equation can be referred to in text without knowing the equation number that \TeX\ will assign to it. Just use \verb+\ref{#1}+, where \verb+#1+ is the same name that was used in the \verb+\label{#1}+ command. Unnumbered single-line equations can be typeset using the \verb+\[+, \verb+\]+ format: \[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \] \subsection{Multiline equations} Multiline equations are obtained by using the \verb+eqnarray+ environment. Use the \verb+\nonumber+ command at the end of each line to avoid assigning a number: \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} \delta_{\sigma_1,-\sigma_2} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_jl_i\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1), \end{eqnarray} \begin{eqnarray} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\nonumber \\ & &\times \left( \sum_{i<j}\right) \sum_{\text{perm}} \frac{1}{S_{12}} \frac{1}{S_{12}} \sum_\tau c^f_\tau~. \end{eqnarray} \textbf{Note:} Do not use \verb+\label{#1}+ on a line of a multiline equation if \verb+\nonumber+ is also used on that line. Incorrect cross-referencing will result.
Notice the use of \verb+\text{#1}+ for using a Roman font within a math environment. To set a multiline equation without \emph{any} equation numbers, use the \verb+\begin{eqnarray*}+, \verb+\end{eqnarray*}+ format: \begin{eqnarray*} \sum \vert M^{\text{viol}}_g \vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2} (N^2-1)\\ & &\times \left( \sum_{i<j}\right) \left( \sum_{\text{perm}}\frac{1}{S_{12}S_{23}S_{n1}} \right) \frac{1}{S_{12}}~. \end{eqnarray*} To obtain numbers not normally produced by the automatic numbering, use the \verb+\tag{#1}+ command, where \verb+#1+ is the desired equation number. For example, to get an equation number of (\ref{eq:mynum}), \begin{equation} g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow q^+g^+g^+ \dots ~. \tag{2.6$'$}\label{eq:mynum} \end{equation} A few notes on \verb=\tag{#1}=. \verb+\tag{#1}+ requires \texttt{amsmath}. The \verb+\tag{#1}+ must come before the \verb+\label{#1}+, if any. The numbering set with \verb+\tag{#1}+ is \textit{transparent} to the automatic numbering in REV\TeX{}; therefore, the number must be known ahead of time, and it must be manually adjusted if other equations are added. \verb+\tag{#1}+ works with both single-line and multiline equations. \verb+\tag{#1}+ should only be used in exceptional cases; do not use it to number all equations in a paper.
Enclosing single-line and multiline equations in \verb+\begin{subequations}+ and \verb+\end{subequations}+ will produce a set of equations that are ``numbered'' with letters, as shown in Eqs.~(\ref{subeq:1}) and (\ref{subeq:2}) below: \begin{subequations} \label{eq:whole} \begin{equation} \left\{ abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta \frac{1\sum^{a}_{b}}{A^2} \right\},\label{subeq:1} \end{equation} \begin{eqnarray} {\cal M}=&&ig_Z^2(4E_1E_2)^{1/2}(l_i^2)^{-1} (g_{\sigma_2}^e)^2\chi_{-\sigma_2}(p_2)\nonumber\\ &&\times [\epsilon_i]_{\sigma_1}\chi_{\sigma_1}(p_1).\label{subeq:2} \end{eqnarray} \end{subequations} Putting a \verb+\label{#1}+ command right after the \verb+\begin{subequations}+ allows one to reference all the equations in a subequations environment. For example, the equations in the preceding subequations environment were Eqs.~(\ref{eq:whole}). \subsubsection{Wide equations} The equation that follows is set in a wide format, i.e., it spans across the full page. The wide format is reserved for long equations that cannot be easily broken into four lines or less: \begin{widetext} \begin{equation} {\cal R}^{(\text{d})}= g_{\sigma_2}^e \left( \frac{[\Gamma^Z(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^Z(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right) + x_WQ_e \left( \frac{[\Gamma^\gamma(3,21)]_{\sigma_1}}{Q_{12}^2-M_W^2} +\frac{[\Gamma^\gamma(13,2)]_{\sigma_1}}{Q_{13}^2-M_W^2} \right)\;. \label{eq:wideeq} \end{equation} \end{widetext} This is typed to show the output is in wide format. (Since there is no input line between \verb+\equation+ and this paragraph, there is no paragraph indent for this paragraph.) \section{Cross-referencing} REV\TeX{} will automatically number sections, equations, figure captions, and tables. In order to reference them in text, use the \verb+\label{#1}+ and \verb+\ref{#1}+ commands. To reference a particular page, use the \verb+\pageref{#1}+ command.
The \verb+\label{#1}+ should appear in a section heading, within an equation, or in a table or figure caption. The \verb+\ref{#1}+ command is used in the text where the citation is to be displayed. Some examples: Section~\ref{sec:level1} on page~\pageref{sec:level1}, Table~\ref{tab:table1},% \begin{table} \caption{\label{tab:table1}This is a narrow table which fits into a text column when using \texttt{twocolumn} formatting. Note that REV\TeX~4 adjusts the intercolumn spacing so that the table fills the entire width of the column. Table captions are numbered automatically. This table illustrates left-aligned, centered, and right-aligned columns. } \begin{ruledtabular} \begin{tabular}{lcr} Left\footnote{Note a.}&Centered\footnote{Note b.}&Right\\ \hline 1 & 2 & 3\\ 10 & 20 & 30\\ 100 & 200 & 300\\ \end{tabular} \end{ruledtabular} \end{table} and Fig.~\ref{fig:epsart}. \section{Figures and Tables} Figures and tables are typically ``floats''; \LaTeX\ determines their final position via placement rules. \LaTeX\ isn't always successful in automatically placing floats where you wish them. Figures are marked up with the \texttt{figure} environment, the content of which imports the image (\verb+\includegraphics+) followed by the figure caption (\verb+\caption+). The argument of the latter command should itself contain a \verb+\label+ command if you wish to refer to your figure with \verb+\ref+. Import your image using either the \texttt{graphics} or \texttt{graphicx} packages. These packages both define the \verb+\includegraphics{#1}+ command, but they differ in the optional arguments for specifying the orientation, scaling, and translation of the figure. Fig.~\ref{fig:epsart}% \begin{figure} \includegraphics{fig_1}% \caption{\label{fig:epsart} A figure caption.
The figure captions are automatically numbered.} \end{figure} is small enough to fit in a single column, while Fig.~\ref{fig:wide}% \begin{figure*} \includegraphics{fig_2}% \caption{\label{fig:wide}Use the \texttt{figure*} environment to get a wide figure, spanning the page in \texttt{twocolumn} formatting.} \end{figure*} is too wide for a single column, so instead the \texttt{figure*} environment has been used. The analog of the \texttt{figure} environment is \texttt{table}, which uses the same \verb+\caption+ command. However, you should type your caption command first within the \texttt{table}, instead of last as you did for \texttt{figure}. The heart of any table is the \texttt{tabular} environment, which represents the table content as a (vertical) sequence of table rows, each containing a (horizontal) sequence of table cells. Cells are separated by the \verb+&+ character; the row terminates with \verb+\\+. The required argument for the \texttt{tabular} environment specifies how data are displayed in each of the columns. For instance, a column may be centered (\verb+c+), left-justified (\verb+l+), right-justified (\verb+r+), or aligned on a decimal point (\verb+d+). (Table~\ref{tab:table4}% \begin{table} \caption{\label{tab:table4}Numbers in columns Three--Five have been aligned by using the ``d'' column specifier (requires the \texttt{dcolumn} package). Non-numeric entries (those entries without a ``.'') in a ``d'' column are aligned on the decimal point. Use the ``D'' specifier for more complex layouts. } \begin{ruledtabular} \begin{tabular}{ccddd} One&Two&\mbox{Three}&\mbox{Four}&\mbox{Five}\\ \hline one&two&\mbox{three}&\mbox{four}&\mbox{five}\\ He&2& 2.77234 & 45672. & 0.69 \\ C\footnote{Some tables require footnotes.} &C\footnote{Some tables need more than one footnote.} & 12537.64 & 37.66345 & 86.37 \\ \end{tabular} \end{ruledtabular} \end{table} illustrates the use of decimal column alignment.)
Extra column-spacing may be specified as well, although REV\TeX~4 sets this spacing so that the columns fill the width of the table. Horizontal rules are typeset using the \verb+\hline+ command. The doubled (or Scotch) rules that appear at the top and bottom of a table can be achieved by enclosing the \texttt{tabular} environment within a \texttt{ruledtabular} environment. Rows whose columns span multiple columns can be typeset using \LaTeX's \verb+\multicolumn{#1}{#2}{#3}+ command (for example, see the first row of Table~\ref{tab:table3}).% \begin{table*} \caption{\label{tab:table3}This is a wide table that spans the page width in \texttt{twocolumn} mode. It is formatted using the \texttt{table*} environment. It also demonstrates the use of \textbackslash\texttt{multicolumn} in rows with entries that span more than one column.} \begin{ruledtabular} \begin{tabular}{ccccc} &\multicolumn{2}{c}{$D_{4h}^1$}&\multicolumn{2}{c}{$D_{4h}^5$}\\ Ion&1st alternative&2nd alternative&1st alternative &2nd alternative\\ \hline K&$(2e)+(2f)$&$(4i)$ &$(2c)+(2d)$&$(4f)$ \\ Mn&$(2g)$\footnote{The $z$ parameter of these positions is $z\sim\frac{1}{4}$.} &$(a)+(b)+(c)+(d)$&$(4e)$&$(2a)+(2b)$\\ Cl&$(a)+(b)+(c)+(d)$&$(2g)$\footnote{This is a footnote in a table that spans the full page width in \texttt{twocolumn} mode. It is supposed to set on the full width of the page, just as the caption does. } &$(4e)^{\text{a}}$\\ He&$(8r)^{\text{a}}$&$(4j)^{\text{a}}$&$(4g)^{\text{a}}$\\ Ag& &$(4k)^{\text{a}}$& &$(4h)^{\text{a}}$\\ \end{tabular} \end{ruledtabular} \end{table*} The tables in this document illustrate various effects. Tables that fit in a narrow column are contained in a \texttt{table} environment. Table~\ref{tab:table3} is a wide table, therefore set with the \texttt{table*} environment. Lengthy tables may need to break across pages. A simple way to allow this is to specify the \verb+[H]+ float placement on the \texttt{table} or \texttt{table*} environment.
Alternatively, using the standard \LaTeXe\ package \texttt{longtable} gives more control over how tables break and allows headers and footers to be specified for each page of the table. An example of the use of \texttt{longtable} can be found in the file \texttt{summary.tex} that is included with the REV\TeX~4 distribution. There are two methods for setting footnotes within a table (these footnotes will be displayed directly below the table rather than at the bottom of the page or in the bibliography). The easiest and preferred method is just to use the \verb+\footnote{#1}+ command. This will automatically enumerate the footnotes with lowercase roman letters. However, it is sometimes necessary to have multiple entries in the table share the same footnote. In this case, create the footnotes using \verb+\footnotemark[#1]+ and \verb+\footnotetext[#1]{#2}+. \texttt{\#1} is a numeric value. Each time the same value for \texttt{\#1} is used, the same mark is produced in the table. The \verb+\footnotetext[#1]{#2}+ commands are placed after the \texttt{tabular} environment. Examine the \LaTeX\ source and output for Tables~\ref{tab:table1} and \ref{tab:table2}% \begin{table} \caption{\label{tab:table2}A table with more columns still fits properly in a column. Note that several entries share the same footnote. 
Inspect the \LaTeX\ input for this table to see exactly how it is done.} \begin{ruledtabular} \begin{tabular}{cccccccc} &$r_c$ (\AA)&$r_0$ (\AA)&$\kappa r_0$& &$r_c$ (\AA) &$r_0$ (\AA)&$\kappa r_0$\\ \hline Cu& 0.800 & 14.10 & 2.550 &Sn\footnotemark[1] & 0.680 & 1.870 & 3.700 \\ Ag& 0.990 & 15.90 & 2.710 &Pb\footnotemark[2] & 0.450 & 1.930 & 3.760 \\ Au& 1.150 & 15.90 & 2.710 &Ca\footnotemark[3] & 0.750 & 2.170 & 3.560 \\ Mg& 0.490 & 17.60 & 3.200 &Sr\footnotemark[4] & 0.900 & 2.370 & 3.720 \\ Zn& 0.300 & 15.20 & 2.970 &Li\footnotemark[2] & 0.380 & 1.730 & 2.830 \\ Cd& 0.530 & 17.10 & 3.160 &Na\footnotemark[5] & 0.760 & 2.110 & 3.120 \\ Hg& 0.550 & 17.80 & 3.220 &K\footnotemark[5] & 1.120 & 2.620 & 3.480 \\ Al& 0.230 & 15.80 & 3.240 &Rb\footnotemark[3] & 1.330 & 2.800 & 3.590 \\ Ga& 0.310 & 16.70 & 3.330 &Cs\footnotemark[4] & 1.420 & 3.030 & 3.740 \\ In& 0.460 & 18.40 & 3.500 &Ba\footnotemark[5] & 0.960 & 2.460 & 3.780 \\ Tl& 0.480 & 18.90 & 3.550 & & & & \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Here's the first, from Ref.~\onlinecite{feyn54}.} \footnotetext[2]{Here's the second.} \footnotetext[3]{Here's the third.} \footnotetext[4]{Here's the fourth.} \footnotetext[5]{And etc.} \end{table} for an illustration. All AIP journals require that the initial citation of figures or tables be in numerical order. \LaTeX's automatic numbering of floats is your friend here: just put each \texttt{figure} environment immediately following its first reference (\verb+\ref+), as we have done in this example file. \begin{acknowledgments} We wish to acknowledge the support of the author community in using REV\TeX{}, offering suggestions and encouragement, testing new versions, \dots. \end{acknowledgments} \section*{Data Availability Statement} AIP Publishing believes that all datasets underlying the conclusions of the paper should be available to readers. 
Authors are encouraged to deposit their datasets in publicly available repositories or present them in the main manuscript. All research articles must include a data availability statement stating where the data can be found. In this section, authors should add the respective statement from the chart below based on the availability of data in their paper. \begin{center} \renewcommand\arraystretch{1.2} \begin{tabular}{| >{\raggedright\arraybackslash}p{0.3\linewidth} | >{\raggedright\arraybackslash}p{0.65\linewidth} |} \hline \textbf{AVAILABILITY OF DATA} & \textbf{STATEMENT OF DATA AVAILABILITY}\\ \hline Data available on request from the authors & The data that support the findings of this study are available from the corresponding author upon reasonable request. \\\hline Data available in article or supplementary material & The data that support the findings of this study are available within the article [and its supplementary material]. \\\hline Data openly available in a public repository that issues datasets with DOIs & The data that support the findings of this study are openly available in [repository name] at http://doi.org/[doi], reference number [reference number]. \\\hline Data openly available in a public repository that does not issue DOIs & The data that support the findings of this study are openly available in [repository name], reference number [reference number]. \\\hline Data sharing not applicable – no new data generated & Data sharing is not applicable to this article as no new data were created or analyzed in this study. \\\hline Data generated at a central, large scale facility & Raw data were generated at the [facility name] large scale facility. Derived data supporting the findings of this study are available from the corresponding author upon reasonable request. 
\\\hline Embargo on data due to commercial restrictions & The data that support the findings will be available in [repository name] at [DOI link] following an embargo from the date of publication to allow for commercialization of research findings. \\\hline Data available on request due to privacy/ethical restrictions & The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to [state restrictions such as privacy or ethical restrictions]. \\\hline Data subject to third party restrictions & The data that support the findings of this study are available from [third party]. Restrictions apply to the availability of these data, which were used under license for this study. Data are available from the authors upon reasonable request and with the permission of [third party]. \\\hline \end{tabular} \end{center} \section{Experiment} Experiments were performed at the SXR instrument of the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory. The experimental setup is described in detail in Ref.~\onlinecite{Higley2016}. X-ray absorption spectra were measured in transmission. The incident x-ray intensity was measured via the x-ray fluorescence from a Si$_3$N$_4$ membrane placed in the beam before the sample and detected with a microchannel plate (MCP). The transmitted x-ray intensity behind the sample was recorded by a fast charge-coupled device (CCD) detector. XAS spectra over the L$_3$ absorption edge corresponding to $2p_{3/2} \to 3d$ transitions were acquired by varying the x-ray energy via the LCLS electron beam energy. A 250~meV x-ray bandwidth was selected by the beamline monochromator using a 100 lines per mm grating, resulting in an effective resolving power of 3000 at 780~eV\cite{Heimann2011}.
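As a simple consistency check (our own arithmetic, not a number taken from Ref.~\onlinecite{Heimann2011}), the quoted bandwidth and photon energy correspond to a resolving power of

```latex
\[
  \frac{E}{\Delta E} \;=\; \frac{780~\mathrm{eV}}{0.25~\mathrm{eV}}
  \;\approx\; 3100,
\]
```

consistent with the effective value of 3000 quoted above.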
Circularly polarized x-ray pulses were produced using the ``Delta'' afterburner undulator\cite{Lutman2016}, enabling the measurement of XAS and XMCD spectra by alternating the magnetic field saturating the sample along the beam direction and computing the sum and difference for XAS and XMCD, respectively. Time-resolved XAS and XMCD data were acquired by scanning the time delay between the 50~fs full width at half maximum (FWHM) x-ray probe pulse and the 60~fs FWHM pump laser at a central wavelength of 798~nm. The data were corrected for timing jitter between the pump and probe pulses by measuring the arrival time of the electron pulses via the so-called phase cavity.\cite{Glownia2010} Slower timing drifts on the few-minute scale were corrected using a cross-correlation-based time delay estimation method, as detailed in the supplementary information. The pump laser was focused on the sample to a spot size of 190$\times$150~$\mu$m$^2$ FWHM, giving a fluence of $\mathcal{F}$ = 35~mJ/cm$^{2}$. The x-ray spot size was 50$\times$50~$\mu$m$^2$ FWHM and the x-ray fluence was below 5~mJ/cm$^{2}$. A [Co(6\AA)\slash Pd(6\AA)]$_{38}$ multilayer sample capped with a Pd(20\AA) layer and grown onto a 100~nm Si$_3$N$_4$ membrane with a Ta(10\AA)\slash Pd(30\AA) buffer layer was used in the measurements described below. The sample was grown by DC magnetron sputtering, with fabrication details given in the supplementary information. Prior to the LCLS experiments the sample was characterized at beamline 4.0.2 of the Advanced Light Source (ALS) using XAS and XMCD measurements, where sum-rules analysis confirmed that the magnetic properties of the multilayer are consistent with previously published work, as discussed in the supplementary information. Conceptually, the experiment is depicted in the schematic shown in Fig.~\ref{fig:Fig1}.
In an itinerant strong ferromagnet such as Co in [Co/Pd] multilayers, the density of states (DOS) can be separated into completely occupied majority (spin ``up'') and partially occupied minority spin (spin ``down'') channels, which are shifted in energy by the exchange splitting. At the L$_3$ absorption edge, valence hole states are probed via $2p_{3/2} \to 3d$ core-valence transitions. In the ground state all electronic states up to the Fermi level E$_\mathrm{F}${} are occupied. As the pump laser pulse excites the $3d$ electronic system by promoting electrons from below to above the Fermi level, transient XAS can detect the additional hole states below E$_\mathrm{F}${}. XAS transitions into states above E$_\mathrm{F}${}, however, are reduced by the laser-excited transient electron population in these states. Exactly at E$_\mathrm{F}${} no XAS changes should be observed. Laser-excited holes below E$_\mathrm{F}${} are thought to lead to demagnetization in strong ferromagnets via spin-flip scattering events,\cite{Carva2011, Carva2013} where an electron from the majority spin fills the hole in the minority spin, as depicted in Fig.~\ref{fig:Fig1}(a). This flipped spin could then decay into spin waves as illustrated in Fig.~\ref{fig:Fig1}(b), inducing a band mirroring in the nearby atoms where the quantization axis has now changed. By using time-resolved XAS and XMCD we aim at uncovering the timescales and energies of the different processes involved. \begin{figure} \includegraphics[width=\figurewidth]{NFig1.pdf} \caption{\label{fig:Fig1} (Color online) (a) Schematic of the experiment where the unoccupied $3d$ spin-resolved density of states (DOS) are probed by $2p$ core-level absorption spectroscopy. Upon excitation by a femtosecond laser pulse, electrons are promoted from below to above the Fermi level, E$_\mathrm{F}${}, in a spin-conserving process (purple arrow).
In a strong ferromagnet such as [Co\slash{}Pd]{}, spin relaxation can only occur below E$_\mathrm{F}${} by a hole spin-flip (green arrow). (b) After the localized hole spin-flip excitation, spin waves are generated and correspondingly the spin-resolved DOS are partially mirrored.} \end{figure} Fig.~\ref{fig:dXAS-spectra} shows transient XAS and XMCD of laser-induced holes below and above the Fermi level. XAS (Fig.~\ref{fig:dXAS-spectra}(a)) and XMCD spectra (Fig.~\ref{fig:dXAS-spectra}(b)) were measured at a fixed time delay of 0.4~ps{} at the Co L$_3$ edge. The pump-induced changes are shown as green symbols and shading. While the change in XMCD appears to be mostly a homogeneous reduction at all photon energies, the change in XAS clearly displays a derivative-like shape with a zero crossing at an x-ray energy of 777.2~eV (see top axis of Fig.~\ref{fig:dXAS-spectra}) as indicated by the dashed vertical line. At lower x-ray energy the XAS signal is increased, as expected for fs laser-induced hole states. At higher energy XAS transitions into previously unoccupied states are blocked by laser-excited electrons, leading to the observed intensity reduction. It is, therefore, possible to identify 777.2~eV as the position of the Fermi level (see bottom axis of Fig.~\ref{fig:dXAS-spectra}).\cite{Oppeneer2004} \begin{figure} \includegraphics[width=\figurewidth]{NFig2.pdf} \caption{\label{fig:dXAS-spectra} (Color online) Pumped, unpumped and their differences in (a) XAS and (b) XMCD at the Co L$_3$ edge at a delay of 0.4~ps{}. The vertical dashed line indicates the position of the observed zero crossing at the Fermi level E$_\mathrm{F}${}. The photon energy is shown on the top axis, while the energy with respect to the Fermi level is shown on the bottom. The differences (pumped-unpumped) are shown on a separate vertical axis on the right. For the XMCD difference the sign was reversed to ease visual comparison with the unpumped XMCD profile.
} \end{figure} \begin{figure} \includegraphics[width=\figurewidth]{NFig3.pdf} \caption{\label{fig:L3waterfall} (Color online) Time-resolved change in the state-resolved (a) charge $\Delta$N and (b) relative polarization change P(t)/P$_0$ around E$_\mathrm{F}${} at the L$_3$ edge. In (b), the data are shifted vertically for clarity and the gray dashed curves are the fit at E-E$_\mathrm{F}${} = 0.88~eV, as explained in the text. } \end{figure} Fig.~\ref{fig:L3waterfall} displays time-delay traces obtained for various state energies relative to the Fermi level, E-E$_\mathrm{F}${}. Below E$_\mathrm{F}${} the curves display initial increases in the XAS intensity followed by subsequent decays on timescales longer than several 100~fs. Above E$_\mathrm{F}${} the transient XAS changes are negative, while directly at E$_\mathrm{F}${} a more complex behavior emerges. In the following we describe these observations in terms of changes in the hole population, $\Delta$N, for states at energy E-E$_\mathrm{F}${}. The small contribution due to spin-orbit coupling to the state-resolved XAS intensity \cite{Wu1994,Ebert1996} will be neglected here. It is important to emphasize that $\Delta$N can also include time-dependent changes in the electronic structure.\cite{Stamm2007} It is apparent in Fig.~\ref{fig:dXAS-spectra}(a) that such electronic structure changes indeed occur. For instance, at E-E$_\mathrm{F}${} near 4~eV, i.e. much higher than the pump photon energy, the observed variations of $\Delta$N are unlikely to be caused by the population dynamics of electrons in these states. The curves for $\Delta$N in Fig.~\ref{fig:L3waterfall}(a) were fitted with a double exponential to describe an excitation and a relaxation process. The fit parameters are summarized in Table II in the supplementary information. Far above and below the Fermi level, the initial rise times of $\Delta$N are essentially determined by the length of the pump pulses.
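The exact parametrization is given in the supplementary material; purely for illustration, a double exponential with separate excitation and relaxation time constants can be written as

```latex
\[
  \Delta N(t) \;\propto\;
  \left(1 - e^{-t/\tau_{\mathrm{exc}}}\right)\,
  e^{-t/\tau_{\mathrm{rel}}}\,\Theta(t),
\]
```

where $\tau_{\mathrm{exc}}$, $\tau_{\mathrm{rel}}$ and the step function $\Theta$ are our notation for this sketch, and convolution with the finite pump pulse duration is neglected.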
The subsequent decay time scales are shorter further away from the Fermi level, as one would expect from Fermi-liquid behavior of the electronic system. The XMCD spectra can be described as being proportional to the product of the state-dependent population, $N$, and a polarization term, $P$. The latter contains both spin and orbital polarization, with the orbital contribution being significantly smaller than the spin polarization, as shown in the sum-rule analysis detailed in the supplementary information.\cite{Wu1994,Ebert1996} Similar to conventional sum-rule analysis of time-resolved XMCD spectra, the magnetic dipole term can be neglected for our polycrystalline samples \cite{Stamm2010,Boeglin2010} and was found to be negligible in similar samples.\cite{Guo1995} Using the results for $\Delta$N from Fig.~\ref{fig:L3waterfall}(a) we can separate the state polarization from state-dependent charge dynamics. The time-resolved polarization dynamics, normalized to the ground-state polarization of the respective states, are shown in Fig.~\ref{fig:L3waterfall}(b). The individual experimental results (symbols) are shifted vertically for clarity and are compared to exponential polarization decays that include the magnitude of the decay, $\Delta$P, the decay time constant, $\tau$, and a delayed demagnetization onset, $\Delta$t, as fit parameters. All parameters are summarized in Table II in the supplementary information. To highlight the difference between the curves below and above the Fermi level, the same relative polarization dynamics for E-E$_\mathrm{F}${} = 0.88~eV is shown as a dashed gray curve together with each trace. This allows two visual observations. Firstly, the amount of demagnetization is not the same at each value of E-E$_\mathrm{F}${}. Clearly, the demagnetization is significantly stronger below E$_\mathrm{F}${} than above. Secondly, there is a time delay, $\Delta$t, apparent in the response, with faster dynamics for states below E$_\mathrm{F}${}.
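A minimal form consistent with the three fit parameters listed above (an illustrative sketch in our own notation, not necessarily the exact model of the supplementary material) is

```latex
\[
  \frac{P(t)}{P_0} \;=\; 1 \;-\; \frac{\Delta P}{P_0}
  \left(1 - e^{-(t-\Delta t)/\tau}\right)\Theta(t-\Delta t).
\]
```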
The complete fitting model and the analysis of the uncertainties on the fitted values are presented in Table II in the supplementary information. In Fig.~\ref{fig:t0} the relative change in polarization $\Delta$P/P$_0$ and the time lag $\Delta$t are shown as a function of E-E$_\mathrm{F}${}. We stress here that this time lag $\Delta$t is determined by the delayed apparent response of the relative change in polarization $\Delta$P/P$_0$ with respect to the charge dynamics $\Delta$N, for the same given photon energy. It can also be visualized by comparing the relative change in polarization $\Delta$P/P$_0$ at different photon energies, but this is not how we extracted it. \begin{figure} \includegraphics[width=\figurewidth]{NFig4.pdf} \caption{\label{fig:t0} (Color online) Fitted relative change in polarization $\Delta$P/P$_0$ and time lag $\Delta$t in the polarization response as a function of E-E$_\mathrm{F}${} at the L$_3$ edge. } \end{figure} The data we presented in this letter conclusively demonstrate vastly different magnetization dynamics above and below the Fermi level for the Co $3d$ levels in [Co\slash{}Pd]{} multilayers. Below E$_\mathrm{F}${}, the ultrafast drop in magnetic polarization is up to 32$\%$ larger than above (see Fig.~\ref{fig:t0}). This is clearly outside any experimental uncertainty, as demonstrated in Fig.~\ref{fig:L3waterfall}(b). Moreover, the onset of the polarization dynamics occurs simultaneously with the charge dynamics, i.e. $\Delta$t = $0\pm10$~fs. This is the behavior expected for individual electrons/holes being scattered between different electronic states as depicted in Fig.~\ref{fig:Fig1}(a).
This also leads to Stoner excitations where electrons/holes are scattered between the spin-up and spin-down states.\cite{Turgut2016} In strong ferromagnets such as [Co\slash{}Pd]{} multilayers, spin-flip scattering can only occur for states below E$_\mathrm{F}${} where spin-up and spin-down states are hybridized via spin-orbit coupling.\cite{Carva2013} The same Elliott-Yafet-type spin-flip scattering processes are thought to also transfer spin angular momentum to the lattice.\cite{Carva2011, Carva2013, Dornes2019} However, for ultrafast demagnetization to occur, the flipped spins of individual electrons/holes need to be transferred to the whole electronic system. This usually takes place via the formation of collective spin excitations, i.e. spin waves. In the Heisenberg model, spin waves lead to slight changes of the atomic spin quantization axis and result in a mixing of spin-up and spin-down states, as observed in photoemission spectroscopy.\cite{Eich2017} This situation is depicted in Fig.~\ref{fig:Fig1}(b). Since the formation of spin waves takes time,~\cite{Mathias2012} we expect a characteristic time delay relative to the instantaneous demagnetization of individual electrons/holes. We assign the observed delayed onset of demagnetization above E$_\mathrm{F}${} of $\Delta$t = $35\pm10$~fs (see Fig.~\ref{fig:t0}) to this effect. This is observable above the Fermi level since there the majority of unoccupied states reflects the atomic magnetic moments.\cite{Carra1993, Thole1992} In summary, taking advantage of the high FEL brightness and the improved I$_0$ normalization scheme, we were able to show that time-resolved XAS and XMCD spectroscopy can provide detailed information on the microscopic mechanisms at play during ultrafast laser excitation. In particular, we report on different dynamics of the spin system below and above the Fermi level. This is manifested by both a 32\% larger change in spin dynamics below the Fermi level and a $35\pm10$~fs delayed response above it.
Both of these effects suggest a scenario for a strong ferromagnet in which spin flips occur preferentially below the Fermi level, where spin-up and spin-down states are hybridized. Moreover, we report initial evidence of effects beyond a simple electronic redistribution and demagnetization, with changes in XAS observed 4~eV above the Fermi level, suggestive of band-structure dynamics. With ever-improved normalization schemes and higher-repetition-rate FELs, transient near-edge soft x-ray spectroscopy promises to be a valuable tool for understanding out-of-equilibrium phenomena. \section*{Supplementary Material} See supplementary material for the sample preparation, sum-rules analysis, timing drift correction, pump and probe absorption profiles in the sample and, finally, the complete fitting model with uncertainty estimation. \begin{acknowledgments} L.L.G. acknowledges the Volkswagen-Stiftung for the financial support through the Peter-Paul-Ewald Fellowship. Work at SLAC and the operation of LCLS are supported by the U.S. Department of Energy, Office of Science. \end{acknowledgments} \section*{Data Availability Statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction and related works} The use of processors based on multi- and many-core architectures is a common option in High Performance Computing (HPC). Several variants of these processors exist, differing mainly in the number and architecture of the cores integrated in a single silicon die. Conventional CPUs integrate tens of fat cores sharing a large on-chip cache. Fat cores include several levels of caches and complex control structures, able to perform hardware optimization techniques (branch speculation, instruction scheduling, register renaming, etc.). Vector instructions are also supported by these cores, with a moderate level of data parallelism: 2 to 4 vector elements are processed by one vector instruction. This architecture is reasonably efficient for many types of regular and non-regular applications and delivers a level of performance of the order of hundreds of GigaFlops per processor. On the other side of the spectrum we have Graphics Processing Units (GPUs), available as accelerator boards attached to conventional CPUs. GPUs integrate thousands of slim cores able to efficiently support regular streams of computation, and deliver performance of the order of several TeraFlops. GPUs are extremely aggressive in terms of data parallelism, implementing vector units with large vector sizes (16 and 32 words are presently available options). Midway between these two architectures, we have the Intel {\em Many Integrated Cores} (MIC) architecture based on several tens of slim cores. In this case, cores are similar to their fat counterparts, but their design has been simplified by removing many hardware control structures (instruction scheduler, register renaming, etc.) and adopting wider vector units, able to process up to 4 or 8 vector elements in parallel.
Large-scale computing centers today have not reached a common consensus on the ``best'' processor option for HPC systems, also because system choices are driven not only by application performance, but also by cost of ownership and energy aspects, which are becoming increasingly critical parameters\cite{villa14}. Several computing centers adopt machines based on GPUs, but others prefer to stay on more traditional CPUs, offering a lower peak performance but better computing efficiency for a wider range of applications. In this scenario, the development of applications would greatly benefit from the availability of a single code version, written in an appropriate programming framework, able to offer portability, in terms of code and performance, across several present and possibly future state-of-the-art processor architectures. A single code version, portable across several architectures, is of great convenience in particular for scientific applications, where code changes and development iterations are very frequent, so keeping several architecture-specific code versions up-to-date is a tedious and error-prone effort\cite{se4hpcs15,scaleff-maintport-review}. Directive-based programming models go exactly in this direction, abstracting parallel programming to a descriptive level, as opposed to a prescriptive level, where programmers must specify how the code should be mapped onto the target machine. OpenMP\cite{openmp} and OpenACC\cite{openacc} are among the most common such programming models, already used by a wide scientific community. Both are based on directives: OpenMP was introduced to manage parallelism on traditional multi-core CPUs, while OpenACC is mainly used to target GPUs (although designed to be architecture agnostic)\cite{Wienke2014812}.
These two frameworks are in fact converging and extending their scope to cover a large subset of HPC applications and architectures: OpenMP version 4 has been designed to support also accelerators, while compilers supporting OpenACC (such as the PGI compiler\cite{PGIref}) are starting to use directives also to target multi-core CPUs. In this work we describe the implementation of a Lattice QCD (LQCD) Monte Carlo code designed to be portable and efficient across several architectures. LQCD simulations represent a typical and well-known HPC grand challenge, with physics results strongly limited by available computational resources\cite{Bernard:2002pd,bilardi}; over the years, several generations of parallel machines, optimized for LQCD, have been developed\cite{ape,apecise,qcdoc,qpacepower,bgq}, while the development of LQCD codes running on many-core architectures, in particular GPUs, has seen large efforts in the last 10 years\cite{videogame,Barros:2008rd,quda,Cardoso:2010di,Chiu:2011dz,cudacode,bach1,bach2}. Our goal is to have just one code able to run on several processors without any major code changes, and possibly with roughly the same level of efficiency, looking for an acceptable trade-off between portability and efficiency\cite{scaleff-maintport-review,perfport-directives}. As a programming model we have selected OpenACC, as it currently has wider compiler support, in particular targeting NVIDIA GPUs, which are widely used in HPC clusters and commonly used for scientific computations. OpenACC has been successfully used to port and run other scientific codes, such as Lattice Boltzmann applications\cite{blair15,jiri,ccpe16} in computational fluid dynamics, showing a good level of code and performance portability on several architectures. Migrating our code to OpenMP4, if needed once compiler support becomes more mature, is expected to require only limited additional effort.
We have developed a code with all the key features for state-of-the-art simulations of QCD with dynamical fermions. Using this code as a user test case, we assess: i) if it is possible to write the code in such a way that the most computationally critical kernels can be executed on accelerators, as in previous CUDA implementations\cite{cudacode}; ii) how many of the presently available multi- and many-core architectures can actually be used; iii) how efficient these codes are, and in particular what price must be paid in terms of performance with respect to a code written and optimized for a specific architecture (e.g., using CUDA for GPUs). We believe that our work is a non-trivial step forward in the development of a fully portable production-grade LQCD Monte Carlo code, using the OpenACC programming model. An earlier paper\cite{pushan16} presented tests of selected portions of an OpenACC LQCD implementation on Fermi and K20 NVIDIA GPUs, comparing performances with an OpenMP implementation for CPUs. Similarly, in a preliminary study\cite{se4hpcs15}, we compared the performance of selected kernels of a full simulation, written in OpenACC, with an equivalent CUDA implementation, on a K20 NVIDIA GPU. In this work, we extend the use of OpenACC in several new directions: i) we show the portability of a complete implementation across several architectures; ii) we show performance figures for the same OpenACC code on a variety of multi- and many-core processors, including the most recent GPUs like the K80 and the recently released P100; iii) we compare results with a previous implementation of the same full application written in CUDA\cite{cudacode}.
The remainder of the paper is organized as follows: in Section~\ref{simalg} we give a brief introduction to LQCD and to the main computational aspects of our application; in Section~\ref{hpctrend} we highlight recent developments in HPC hardware and programming tools; in Section~\ref{implementation} we describe the OpenACC implementation of our code; in Section~\ref{results} we analyze our results; finally, Section~\ref{conclusions} contains our concluding remarks. \section{Numerical challenges of Lattice QCD}\label{simalg} Quantum Chromodynamics (QCD) is the quantum field theory that describes strong interactions in the Standard Model of particle physics. It is a non-abelian gauge theory, based on the $SU(3)$ group (the ``color'' group), describing the interactions of six different species (``flavors'') of quarks, mediated by $8$ vector bosons, the ``gluons''. In principle QCD is no different from the theory that describes other sectors of the Standard Model (i.e. the electroweak interaction); however, strong interactions are indeed strong, i.e. the coupling constant of QCD is generically not small. Asymptotic freedom ensures that the coupling constant gets smaller and smaller as the energy scale increases (a summary of experimental results is available in \S9.4 of the Particle Data Group review\cite{pdg}), but a wealth of interesting phenomena take place for energies well below the perturbative regime; a systematically improvable computational scheme, that does not rely on the smallness of the coupling constant, is needed to study this phenomenology from first principles. Lattice QCD provides such a scheme. LQCD uses the Feynman path-integral quantization and approximates the infinite-dimensional path-integral by a finite-dimensional integral: continuous space-time is replaced by a finite lattice of sizes $L_t$, $L_x$, $L_y$, $L_z$ and lattice spacing $a$.
In order to maintain gauge invariance, the variables $U_{\mu}(n)$ associated with the gauge fields are elements of the $SU(3)$ group and live on the links of the lattice; the quark fields $\psi(n)$ live on the lattice sites and transform under the gauge group as $3$-dimensional complex vectors\cite{Wilson:1974sk}. The fundamental problem of LQCD is the evaluation of expectation values of given functions of the fields, $O[U]$, that is integrals of the form \begin{eqnarray}\label{eq:pathint} \langle \hat{O}\rangle =\frac{1}{Z}\int \mathscr{D}U O[U]\det(M[U])e^{-S_g[U]}\ , \quad Z =\int \mathscr{D}U \det(M[U])e^{-S_g[U]}\ ; \end{eqnarray} the exponent $S_g$ is the discretization of the action of the gauge fields (usually written as a sum of traces of products of $U_{\mu}(n)$ along closed loops) and $\det(M)$ describes the gluon-quark interaction. Here, $M[U]$ is a large and sparse structured matrix (i.e. containing both space-time and color indexes) which is the discretization of the continuum fermion operator $M \sim m\, {\rm I} + D$ where $m$ is the fermion mass, multiplying the identity operator, and $D$ is the Dirac operator, which is constructed in terms of covariant derivatives. The integral in $\mathscr{D}U$ extends over all the $U_{\mu}(n)$ variables on the lattice using the Haar measure of $SU(3)$. Eq.~(\ref{eq:pathint}) refers to a single quark species (flavor); in the realistic case of multiple flavors\footnote{At present, we have experimental evidence of 6 different flavors in Nature, usually named with the letters $u$, $d$, $s$, $c$, $b$, $t$ and ordered by increasing quark mass. In a realistic simulation, one usually takes into account the first 3 (or 4, at most) flavors, since the heaviest species give a negligible contribution to the low-energy dynamics of the theory.}, one has to introduce a separate determinant for each flavor.
This formulation makes contact with a standard problem in statistical mechanics: importance sampling of the distribution $\det(M[U])e^{-S_g[U]}$. What is non-standard is the form of this distribution and in particular the presence of the determinant. The best strategy devised so far to cope with this problem is to introduce the so-called pseudofermion fields\cite{Weingarten:1980hx} $\phi$ and rewrite the integral as follows: \begin{eqnarray}\label{eq:pseudofermions} \int \mathscr{D}U O[U]\det(M[U]) e^{-S_g[U]}\propto \int \mathscr{D}U \mathscr{D}\phi \, O[U]\exp\left(-S_g[U]-\phi^{\dag} M[U]^{-1}\phi\right)\ ; \end{eqnarray} the action is still a non-local function of the field variables, but the computational burden required for the solution of a large sparse linear system is much lower than the one needed for the computation of its determinant. The explicit form of $S_g[U]$ and $M[U]$ is not fully determined, as these functions only have the constraint to go over to the correct continuum limit as the lattice spacing goes to zero. Much in the same way as several discretization schemes exist for the numerical solution of a partial differential equation, several discretization schemes of the QCD action exist. In this paper we consider a specific state-of-the-art discretization, the tree-level Symanzik improved action\cite{Weisz:1982zw,Curci:1983an} for the gauge part and the stout-improved\cite{Morningstar:2003gk} ``staggered'' action for the fermion part. Staggered actions have a residual degeneracy, that has to be removed by taking the $4$th root of the determinant. So, \Eqref{eq:pseudofermions} becomes in the staggered case \begin{eqnarray}\label{eq:rooting} \int \mathscr{D}U \mathscr{D}\phi \, O[U]\exp\big(-S_g[U]-\phi^{\dag} M[U]^{-1/4}\phi\big)\ .
\end{eqnarray} \subsection{Why LQCD is a computational grand challenge} \label{grand_challenge_section} The physical system that one would like to simulate by the lattice box has a characteristic physical length $\xi$, which is of the order of $10^{-15}$ m. In order to reduce systematic effects related to discretization and to the finite box size, one would like the lattice spacing $a$ to be much smaller, and the box size $L a$ much larger, than $\xi$, i.e. $a \ll \xi \ll La$. Making the reasonable approximation that $\ll$ translates into one order of magnitude means that the number of sites in each direction should be $\simeq10^2$; the corresponding fermion matrix, considering also internal (e.g., color) indexes, has a dimension slightly exceeding $10^8 \times 10^8$; note that it is a sparse matrix, since the discretization of the Dirac operator $D$ connects only neighbor lattice sites. In finite temperature simulations the size of the lattice is typically smaller, since in that case the temporal direction is shortened and equal to the inverse of the temperature, $1/T$. The most computationally demanding task in the typical LQCD algorithm is the solution of a linear system involving the fermion matrix $M$. The numerical difficulty of this problem is fixed by the condition number of $M$, hence, since the highest eigenvalue is typically $O(1)$, by the smallest eigenvalue of $M$. Here the physical properties of QCD play a significant role: the eigenvalues of the Dirac operator are dense around zero, a property related to the so-called {\em spontaneous breaking of chiral symmetry}, so the smallest eigenvalue is set by $a m$, where $m$ is the quark mass. Since Nature provides us with two quark flavors ($u$ and $d$ quarks) whose mass is significantly lower (by two orders of magnitude) than other energy scales of the theory, values of $a m$ are typically very small, resulting in a bad condition number ($\kappa\gtrsim 10^5$ being a typical value).
Also regarding this aspect, the situation becomes better when one is interested in the regime of very high temperatures, since in that case the spontaneous breaking of chiral symmetry disappears, the minimum eigenvalue of $D$ is non-zero, and the condition number significantly improves. \subsection{Numerical algorithms for LQCD} \label{algorithm_section} In LQCD, the usual local updates adopted in statistical mechanics scale badly with the volume, as the action of \Eqref{eq:pseudofermions} is non-local. This problem is partly solved by the Hybrid Monte Carlo (HMC) algorithm\cite{Duane:1987de}; in HMC we associate fake conjugate momenta -- entering quadratically in the action -- to each degree of freedom of the system. For an $SU(3)$ gauge theory, the momenta conjugate to the link variables are again $3 \times 3$ matrices $H_\mu(n)$ associated to each link of the lattice, this time living in the group algebra (hence Hermitian and traceless). Eq.~(\ref{eq:rooting}) is rewritten as \begin{eqnarray}\label{eq:rooting2} \int \mathscr{D}U \mathscr{D}\phi \mathscr{D} H O[U]\exp\left(-\frac{1}{2} H^2 -S_g[U]-\phi^{\dag} M[U]^{-1/4}\phi\right)\ , \end{eqnarray} where the momenta term is a shorthand to indicate the sum of $- {\rm Tr} (H_\mu(n)^2)/2$ over the whole lattice. The update then proceeds as follows: \begin{enumerate} \item random Gaussian initial momenta $H$ and pseudofermions $\phi$ are generated; \item starting from the initial configuration and momenta $(U,H)$, a new state $(U', H')$ is generated by integrating the equations of motion; \item the new state $(U', H')$ is accepted with probability $\min(1, e^{-\Delta S})$, where $\Delta S$ is the change of the total (i.e. including the momenta) action.
\end{enumerate} Step 2 is an unphysical evolution in a fictitious time and, under mild conditions on the numerical integration of the equations of motion, the resulting update can be shown to satisfy the detailed balance principle\cite{Duane:1987de,KennedyLec}, so it provides a stochastically exact way to estimate the integral in \Eqref{eq:pseudofermions}. The most time-consuming steps of the update are the ones that involve the non-local term in the exponent of \Eqref{eq:pseudofermions}. In particular, the most time consuming single step of the whole algorithm is the solution of a linear system \begin{equation}\label{eq:lineq} M[U]\varphi=b \, . \end{equation} This calculation is needed to compute the forces appearing in the equations of motion and also to evaluate $\Delta S$, and one usually resorts to Krylov solvers. In the case of staggered fermions, corresponding to \Eqref{eq:rooting}, it is customary to use the so-called Rational HMC (RHMC) algorithm\cite{Clark:2004cp,Clark:2006fx,Clark:2006wp}, in which the algebraic matrix function appearing in \Eqref{eq:rooting} is approximated to machine precision by a rational function. In this case one replaces \Eqref{eq:lineq} by $r$ equations ($r$ is the order of the approximation adopted) \begin{equation}\label{eq:shlineq} (M[U]+\sigma_i)\varphi_i=b\ , \quad i\in\{1,\ldots,r\}\ , \end{equation} where the real numbers $\sigma_i$ are the poles of the rational approximation. These equations can again be solved by using Krylov methods: by exploiting the shift-invariance of the Krylov subspace it is possible to write efficient algorithms that solve all the equations appearing in (\ref{eq:shlineq}) at the same time, using at each iteration only one matrix-vector product\cite{Jegerlehner:1996pm,Simoncini}.
For most of the discretizations adopted in QCD (and in particular for the one we use), the matrix $M[U]$ can be written in block form \begin{equation}\label{eq:diracblock} M=m\,I+\left(\begin{array}{cc} 0 & D_{oe} \\ D_{eo} & 0 \end{array}\right), \qquad D_{oe}^{\dag}=-D_{eo}\ ; \end{equation} the matrices $D_{oe}$ and $D_{eo}$ connect only sites of opposite parity. It is thus convenient to use an even/odd preconditioning\cite{DeGrandDeTarBook,DeGrand:1990dk}; in this case, \Eqref{eq:lineq} is replaced by: \begin{equation}\label{eq:lineqeo} (m^2\, I-D_{eo}D_{oe})\varphi_{e}=b_{e}; \end{equation} $\varphi_e$ is defined only on even sites and the matrix is positive definite, since by \Eqref{eq:diracblock} one has $-D_{eo}D_{oe}=D_{oe}^{\dag}D_{oe}$, which is positive semi-definite; we can therefore use the simplest of the Krylov solvers: the conjugate gradient (or its shifted counterpart). Over the years, many improvements of this basic scheme have been developed; these are instrumental in reducing the computational cost of actual simulations but their implementation is straightforward, once the basic steps of the ``naive'' code are ready. For this reason we will not discuss in the following the details of multi-step integrators\cite{Sexton:1992nu,Urbach:2005ji}, improved integrators\cite{Omelyan02,Omelyan03,Takaishi:2005tz}, multiple pseudofermions\cite{Clark:2006fx} or the use of different rational approximations and stopping residuals in different parts of the HMC\cite{Clark:2006wp}, even though our code uses all these improvements. \subsection{Data structures and computational challenges} Our most important data structures are the collection of all gauge variables $U_{\mu}(n)$ (elements of the group of $SU(3)$ matrices, one for each link of the four-dimensional lattice) and of the pseudofermion fields $\phi(n)$ ($3-$dimensional complex vectors, one for each even site of the lattice when using the even/odd preconditioning).
We also need many derived and temporary data structures, such as: \begin{enumerate} \item the configurations corresponding to different stout levels ($U_{\mu}^{(k)}(n)$, again $SU(3)$ matrices), used in the computation of the force (typically less than five stout levels are used) and the momenta configuration (which are $3 \times 3$ Hermitian traceless matrices); \item some auxiliary structures needed to compute the force acting on the gauge variables, like the so called ``staples'' $\Sigma_{\mu}^{(k)}(n)$ and the $\Gamma_{\mu}(n)$ and $\Lambda_{\mu}(n)$ matrices\cite{Morningstar:2003gk}; $\Sigma_{\mu}^{(k)}(n)$ and $\Gamma_{\mu}(n)$ are generic $3\times 3$ complex matrices and $\Lambda_{\mu}(n)$ are $3\times 3$ Hermitian traceless matrices; \item the solutions $\varphi_i$ of \Eqref{eq:shlineq} and some auxiliary pseudofermion-like structure needed in the Krylov solver. \end{enumerate} At the lowest level, almost all functions repeatedly multiply two $3\times 3$ complex matrices (e.g., in the update of the gauge part), or a $3\times 3$ complex matrix and a $3-$dimensional complex vector (e.g., in the Krylov solver) or compute dot products and linear combinations of complex $3-$vectors. All these operations have low computational intensity, so it is convenient to compress as much as possible all basic structures by exploiting their algebraic properties. The prototypical example is $U_{\mu}(n)$: one only stores the first two rows of the matrix and recovers the third one on the fly as the complex conjugate of the wedge product of the first two rows\cite{DeForcrand:1986inu}. This overhead is negligible with respect to the gain induced, at least for GPUs, by the reduction of the memory transfer\cite{Clark:2009wm,JooPhi}\footnote{A priori it would be possible to do even better, i.e. to store just $8$ real numbers, but in this case the reconstruction algorithm presents some instabilities\cite{Clark:2009wm}.}. 
At a higher level, the single most time-consuming function is the Krylov solver, which may take $40$--$80\%$ of the total execution time of a realistic simulation (depending e.g. on the value of the temperature) and consists basically of repeated applications\footnote{typically $10^2\div 10^3$ iterations are needed to reach convergence, depending on the temperature.} of the $D_{oe}$ and $D_{eo}$ matrices defined in \Eqref{eq:diracblock}, together with some linear algebra on the pseudofermion vectors (basically \emph{zaxpy}-like functions). An efficient implementation of $D_{eo}$ and $D_{oe}$ multiplies is then of paramount importance, the effectiveness of this operation being often taken as a key figure of merit in the LQCD community. \section{Current trends in HPC} \label{hpctrend} There is a clear trend in high-performance computing (HPC) to adopt multi-core processors and accelerator-based platforms. Typical HPC systems today are clusters of computing nodes interconnected by fast low-latency communication networks, e.g. InfiniBand. Each node typically has two standard multi-core CPUs, each attached to one or more accelerators, either Graphics Processing Units (GPUs) or many-core systems. Recent development trends see a common path to performance for CPUs and accelerators, based on an increasing number of independent cores and on wider vector processing facilities within each core. In this common landscape, accelerators offer additional computing performance and better energy efficiency by further pushing the granularity of their data paths and using a larger fraction of their transistors for computational data paths, as opposed to control and memory structures.
As a consequence, even if CPUs are more tolerant of intrinsically unstructured and irregular codes, in both classes of processors computing efficiency goes through careful exploitation of the parallelism available in the target applications, combined with a regular and (almost) branch-free scheduling of operations. This remark supports our attempt to write just one LQCD code which is not only portable, but also efficiency-portable across a large number of state-of-the-art CPUs and accelerators. In this paper we consider Intel multi-core CPUs, and NVIDIA and AMD GPUs, commonly used today by many scientific HPC communities. \tablename~\ref{tab:architecture} summarizes some key features of the systems we have used\cite{kepler,pascal,hawaii,haswell}, that we describe very briefly in the following. \begin{table} \tbl{ Selected hardware features of some of the processors used in this work: the Xeon-E5 systems are two recent multi-core CPUs based on the Haswell and Broadwell architecture, the K80 GPU is based on the {\em Kepler} architecture while the P100 GPU adopts the {\em Pascal} architecture. The FirePro W9100 is an AMD GPU, based on the Hawaii architecture. }{ \centering \resizebox{\textwidth}{!}{ \begin{tabular}{lrrrlrr} \toprule & Xeon E5-2630 v3 & Xeon E5-2697 v4 & K80-GK210 & & P100 & FirePro W9100 \\ \midrule Year & 2014 & 2016 & 2014 & & 2016 & 2014 \\ Architecture & Haswell & Broadwell & Kepler & & Pascal & Hawaii \\ \#physical-cores / SMs & 8 & 18 & 13 & \hspace{-1em} $\times$ 2 & 56 & 44 \\ \#logical-cores / CUDA-cores & 16 & 36 & 2496 & \hspace{-1em} $\times$ 2 & 3584 & 2816 \\ Nominal Clock (MHz) & 2400 & 2300 & 562 & & 1328 & 930 \\ Nominal DP performance (Gflops) & $\approx 300$ & $\approx 650$ & 935 & \hspace{-1em} $\times$ 2 & 4759 & 2620 \\ \midrule LL cache (MB) & 20 & 45 & 1.68 & & 4 & 1.00 \\ Total memory supported (GB) & 768 & 1540 & 12 & \hspace{-1em} $\times$ 2 & 16 & 16 \\ Peak mem.
BW (ECC-off) (GB/s) & 69 & 76.8 & 240 & \hspace{-1em} $\times$ 2 & 732 & 320 \\ \bottomrule \end{tabular} } \label{tab:architecture} } \end{table} Intel Xeon-E5 architectures are conventional x86 multi-core architectures. We have used two generations of these processors, differing in the number of cores and in the amount of integrated last-level cache. Performance in both cases relies on the ability of the application to run on all cores and to use 256-bit vector instructions. NVIDIA GPUs are also multi-core processors. A GPU hosts several Streaming Multiprocessors (SM), which in turn include several (depending on the specific architecture) compute units called CUDA-cores. At each clock-cycle SMs execute multiple warps, i.e. groups of 32 CUDA-threads, which are executed in {\em Single Instruction Multiple Threads} (SIMT) fashion. SIMT is similar to SIMD execution but more flexible, e.g. different CUDA-threads of a SIMT-group are allowed to take different branches of the code, although at a performance penalty. Each CUDA-thread has access to its own copy of the registers, and context switches have almost zero cost. This structure has remained stable across several generations with minor improvements. The NVIDIA K80 has two GK210 GPUs; each GPU has 13 {\em Next Generation} Streaming Multiprocessors (SMX) running at a base frequency of $562$ MHz that can be increased to $875$ MHz under specific conditions of workload and power. The corresponding aggregate peak performance of the two GK210 units is then $1.87$ and $2.91$ TFlops in double precision. The peak memory bandwidth is $240$~GB/s, considerably higher than that of Xeon-E5 CPUs. The GP100 GPU, based on the Pascal architecture, has recently become available. It has 56 streaming multiprocessors running at a base frequency of $1.3$~GHz that can be increased to $1.48$~GHz, delivering a peak double-precision performance of $4.76$ and $5.30$~Tflops.
Peak memory bandwidth has been increased to $732$~GB/s. The AMD GPUs are conceptually similar to NVIDIA GPUs. The AMD FirePro W9100 has 44 compute units, each one with 64 stream processors, running at $930$~MHz. This board delivers a peak double-precision performance of $2.6$~Tflops, and has a peak memory-bandwidth of $320$~GB/s. Native programming models, commonly used for the systems shown in \tablename~\ref{tab:architecture}, differ in several aspects. For Xeon-E5 CPUs, the most common models are OpenMP and MPI. Both models support core-parallelism, running one thread or one MPI process per logical core. Moreover, OpenMP is a directive-based programming model that makes it possible to exploit vector-parallelism by properly annotating the for-loops that can be parallelized\cite{openmp}. On GPUs, the native programming model is strongly based on data-parallel models, with one thread typically processing one element of the application data domain. This helps exploit all available parallelism of the algorithm and hide latencies by switching among threads waiting for data coming from memory and threads ready to run. The native language is CUDA-C for NVIDIA GPUs and OpenCL for AMD systems. Both languages have a very similar programming model but use a slightly different terminology; for instance, in OpenCL the CUDA-thread is called work-item, the CUDA-block work-group, and the CUDA-kernel is a device program. A CUDA-C or OpenCL program consists of one or more functions that run either on the host, a standard CPU, or on a GPU. Functions that exhibit no (or limited) parallelism run on the host, while those exhibiting a large degree of data parallelism can go onto the GPU. The program is a modified C (or C++, Fortran) program including keyword extensions defining data parallel functions, called {\em kernels} or {\em device programs}. Kernel functions typically translate into a large number of threads, i.e.
a large number of independent operations processing independent data items. Threads are grouped into blocks, which in turn form the execution {\em grid}. When all threads of a kernel complete their execution, the corresponding grid terminates. Since threads run in parallel with host CPU threads, it is possible to overlap in time processing on the host and the accelerator. New programming approaches are now emerging, mainly based on directives, moving the coding abstraction layer to a higher level, above the hardware details. These approaches should make code development easier on heterogeneous computing systems\cite{openacc}, simplifying the porting of existing codes to different architectures. OpenACC is one such programming model, increasingly used by several scientific communities. OpenACC is based on \textit{pragma} directives that help the compiler to identify those parts of the code that can be implemented as {\em parallel functions} and offloaded on the accelerator or divided among CPU cores. The actual construction of the parallel code is left to the compiler, making, at least in principle, the same code portable without modifications across different architectures and possibly offering more opportunities for performance portability. This makes OpenACC more descriptive than CUDA and OpenCL, which are more prescriptive. \begin{figure}[t] \begin{lstlisting}[language=C,label=lst:saxpy,belowcaptionskip=2em, caption=Sample OpenACC code computing a {\em saxpy} function on vectors $x$ and $y$.
The {\em pragma} clauses control data transfers between host and accelerator and identify the code regions to be run on the accelerator.\vspace*{1em}] #pragma acc data copyin(x), copy(y) { #pragma acc kernels present(x) present(y) async(1) #pragma acc loop vector(256) for (int i = 0; i < N; ++i) y[i] = a*x[i] + y[i]; #pragma acc wait(1) } \end{lstlisting} \end{figure} Listing~\ref{lst:saxpy} shows an example of the \textit{saxpy} operation of the {\em Basic Linear Algebra Subprogram} (BLAS) set coded in OpenACC. The \textit{pragma acc kernels} clause identifies the code fragment running on the accelerator, while \textit{pragma acc loop...} specifies that the iterations of the for-loop can execute in parallel. The standard defines several directives, allowing a fine tuning of applications. As an example, the number of threads launched by each device function and their grouping can be tuned by the \textit{vector}, \textit{worker} and \textit{gang} directives, in a similar fashion as setting the number of \textit{work-items} and \textit{work-groups} in OpenCL. Data transfers between host and device memories are automatically generated, and occur on entering and exiting the annotated code regions. Several data directives are available to allow the programmer to optimize data transfers, e.g. overlapping transfers and computation. For example, in Listing~\ref{lst:saxpy} the clause \textit{copyin(ptr)} copies the array pointed to by \textit{ptr} from the host memory into the accelerator memory before entering the following code region; \textit{copy(ptr)} performs the additional operation of also copying it back to the host memory after leaving the code region. An asynchronous directive \textit{async} is also available, instructing the compiler to generate asynchronous data transfers or device function executions; a corresponding clause (i.e. \textit{\#pragma acc wait(queue)}) allows the programmer to wait for completion.
OpenACC is similar in several ways to the OpenMP (Open Multi-Processing) framework widely used to manage parallel codes on multi-core CPUs\cite{Wienke2014812}; both frameworks are directive based, but OpenACC targets accelerators in general, while at this stage OpenMP targets mainly multi-core CPUs; the latest release of the OpenMP4 standard has introduced directives to manage accelerators as well, but currently compiler support is still limited. Regular C/C++ or Fortran code, already developed and tested on traditional CPU architectures, can be annotated with OpenACC pragma directives (e.g. \textit{parallel} or \textit{kernels} clauses) to instruct the compiler to transform loop iterations into distinct threads, belonging to one or more functions to run on an accelerator. Ultimately, OpenACC is particularly well suited for developing scientific HPC codes for several reasons: \begin{itemize} \item it is highly hardware agnostic, targeting several architectures (GPUs as well as CPUs), which allows a single code version to be developed and maintained; \item the programming overhead to offload code regions to accelerators is limited to a few \textit{pragma} lines, in contrast to the verbosity of CUDA and especially OpenCL; \item the code annotated with OpenACC \textit{pragmas} can still be compiled and run as plain C code, ignoring the \textit{pragma} directives. \end{itemize} \section{OpenACC implementation of Lattice QCD}\label{implementation} In this section we describe the OpenACC implementation of our LQCD code. We first describe the data structures used, then we highlight the most important OpenACC-related details of our implementation. In writing the OpenACC version, we started from our previous code implementations\cite{se4hpcs15}: a C++/CUDA code\cite{cudacode} developed for NVIDIA GPUs and aggressively optimized with CUDA-specific features, and a C++ one, developed using OpenMP directives and MPI, targeting large CPU clusters\cite{nissa}.
\subsection{Memory allocation and data structures} \label{memalloc_section} Data structures have a strong impact on performance\cite{se4hpcs15,ppam15} and can hardly be changed in an existing implementation: their design is in fact a critical step in the implementation of a new code. We have analyzed in depth the impact of data-structures for LQCD on different architectures (namely a GPU and two CPUs), confirming that the {\em Structure of Arrays} (SoA) memory data layout is preferred when using GPUs, but also when using modern CPUs\cite{se4hpcs15}. This is due to the fact that the SoA format allows vector units to process many sites of the application domain (the lattice, in our case) in parallel, favoring architectures with long vector units (e.g. with wide SIMD instructions). Indeed, modern CPUs tend to have wider vector units than older ones and we expect this trend to continue in the future. For this reason, all data structures related to lattice sites in our code follow the SoA paradigm. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{vec3_soa} \caption{Memory data layout for structure {\sf vec3\_soa}. Each component $i$ of each array {\sf c0, c1} and {\sf c2} is a C99 complex value. See Section~\ref{memalloc_section} for details.} \label{fig:mem-vec} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{su3_soa} \caption{Memory data layout for structure {\sf su3\_soa}, used in the code for $SU(3)$ matrices; this structure contains 3 vectors. To mitigate memory-bandwidth requirements, one can avoid reading and writing the {\sf r2} member and recompute it on the fly, exploiting the unitarity constraint.}\label{fig:mem-su3} \end{figure} In our implementation, we use the {\sf C99 double complex} as the basic data-type, which allows the use of the built-in complex operators of the C language, making the code easier to write and more readable without loss of performance.
The algorithm is based on even/odd preconditioning, so the pseudo-fermion variables (implemented as {\sf vec3\_soa} data-types) live only on the even sites of the lattice. This comes at the price of requiring that all sides of the lattice must be even\footnote{Actually, for staggered fermions, this is a requirement coming from the discretization itself.}; in the following we call {\sf LNH\_SIZEH} half the number of lattice sites. The pseudofermion field has three complex values for each even lattice site, corresponding to the three QCD ``colors'' that we label {\sf c0}, {\sf c1}, {\sf c2}. A schematic representation of the {\sf vec3\_soa} structure is shown in Fig.~\ref{fig:mem-vec} and a lexicographical ordering was used for the even lattice sites: \begin{equation} \label{lexicographic_eo} \textrm{idxh} = \textrm{ \sf (int)} \frac{ x_0 + \textrm{\sf LNH\_N0} [x_1 + \textrm{\sf LNH\_N1} (x_2 + \textrm{\sf LNH\_N2}\, x_3 )]}{2} \qquad \mathrm{s.t.}\ \sum_{i=0}^3 x_i \% 2 =0\ , \end{equation} where {\sf LNH\_N0}, {\sf LNH\_N1} and {\sf LNH\_N2} are the lattice sizes; we allow for full freedom in the mapping of the physical directions $x$, $y$, $z$ and $t$ onto the logical directions $x_0$, $x_1$, $x_2$ and $x_3$, as this option will be important for future versions of the code able to run on many processors and accelerators. The data structure used for the generic $3\times 3$ complex matrices is the {\sf su3\_soa} data-type,\footnote{Here the name of the data-type is slightly misleading, since this data structure is used to store $GL(3)$ matrices, while actual $SU(3)$ matrices require in principle less memory.} used e.g. for the ``staples'' $\Sigma_{\mu}^{(k)}$ and the $\Gamma_\mu$ matrices needed in the stouting procedure\cite{Morningstar:2003gk}. 
Structure {\sf su3\_soa} is a collection of 3 {\sf vec3\_soa} structures ({\sf r0}, {\sf r1}, {\sf r2}, see Fig.~\ref{fig:mem-su3}), and the data that have to be stored in this structure typically involve one matrix per lattice link, i.e. {\sf 8 LNH\_SIZEH} matrices; this means that an array of 8 {\sf su3\_soa} elements is required. Gauge configurations, i.e. the set of the gauge links $U_{\mu}(n)$ and their stouted counterparts, are stored in memory as an array of 8 {\sf su3\_soa} structures. As previously explained, the algorithm is typically bandwidth limited and for $SU(3)$ matrices it is convenient to read and write just the first two rows, computing the third one on the fly as ${\sf r2}=({\sf r0} \wedge {\sf r1})^*$. Note that the SoA memory layout avoids the prefetching problems discussed in similar cases\cite{JooPhi}. Other data structures are needed to store in memory $3\times 3$ traceless Hermitian matrices or $3\times 3$ traceless anti-Hermitian matrices. In these cases, only 8 real parameters per matrix are needed: 3 complex numbers for the upper triangular part and the first two elements of the diagonal, which are real (imaginary) numbers for (anti-)Hermitian traceless matrices. These data structures have been implemented according to the SoA scheme as follows: {\sf thmat\_soa} and {\sf tamat\_soa} contain 3 vectors of {\sf C99 double complex} numbers and 2 vectors of {\sf double} numbers, in a form that closely resembles that of {\sf vec3\_soa}. Data movements between device and host are negligible, with significant transfers happening only at the beginning and at the end of each Monte Carlo update, and managed mainly with the {\sf update device} and {\sf update host} OpenACC directives. \subsection{Implementation methodology} \label{sec:meth} To initially assess the performance level achievable using OpenACC, we have developed a mini-application benchmark of the Dirac operator\cite{se4hpcs15}.
As previously underlined, this is the fundamental building block of the Krylov solver, commonly accounting for not less than $40\%$ of the running time, and reaching up to $80\%$ in low temperature simulations. This compute-intensive part of an LQCD simulation is where most of the optimization efforts are usually concentrated\cite{dirac-opt}. The Dirac operator code uses three functions: {\sf deo}, {\sf doe} (corresponding respectively to the application of functions $D_{eo}$ and $D_{oe}$ defined in \Eqref{eq:diracblock}) and a \textit{zaxpy}-like function which is negligible in terms of execution time. A direct comparison indicated that the performance of the OpenACC versions of the double precision {\sf deo} and {\sf doe} functions was comparable with that of the CUDA ones\cite{se4hpcs15}. This promising start was a strong indication that, for LQCD as well, the higher portability of the OpenACC implementation is not associated with a serious loss of performance, and motivated us to proceed to an OpenACC implementation of the full RHMC code. As a side benefit, the use of the OpenACC programming model significantly simplified the implementation of algorithmic improvements. The implementation of these new features started with the coding and testing of the improvements on a single thread version. Once the algorithm has been validated, the acceleration is switched on by annotating the code with {\sf \#pragma} directives. In order to have a more readable code, the most complex kernels have been split into several functions. While small functions can be used in kernels if declared as {\sf static inline}, for larger ones we had to use the {\sf routine seq} OpenACC directive as large functions cannot be inlined. Kernels have been parallelized following two different approaches. Those using data belonging to nearest (and/or next-to-nearest) neighbors have been parallelized via the {\sf \#pragma acc loop} directive on four nested loops, one for each dimension.
This allows the use of 3D thread blocks, which should improve data reuse between threads, thus reducing bandwidth requirements, which is our major performance concern. The other kernels, i.e. the ones performing only single-site operations, have been parallelized using a single cycle running on the lattice sites. \begin{table}[ht] \tbl{ Breakup of the execution time of a selection of computationally heavy steps of our OpenACC code on different architectures for low temperature and high temperature simulations. }{ \begin{tabular}{l|S[table-format={2}] S[table-format={2}] S[table-format={2}] S[table-format={2}]} \toprule \multirow{2}{*}{Phase} & \multicolumn{2}{c}{GPU NVIDIA GK210} & \multicolumn{2}{c}{CPU Intel E5-2630v3}\\ & {Low Temp.} & {High Temp.} & {Low Temp.} & {High Temp.}\\ \midrule Dirac Operator & 63 & 16 & 57 & 24 \\ Gauge MD & 8 & 56 & 1 & 24 \\ \bottomrule \end{tabular} \label{tab:functions} } \end{table} After the implementation of a first full working OpenACC simulation, various optimization iterations took place, in particular for the performance-critical steps. These include the Dirac operator in the first place, but also the gauge part of the molecular dynamics steps, since their relative impact on the overall execution time is very large, as shown in \tablename~\ref{tab:functions} for a few representative examples. During the full development phase, every time a new OpenACC feature was introduced, extensive checks were performed to ensure the correctness of the improved code, against possible semantic misunderstandings of OpenACC clauses or compiler bugs. \subsection{Implementation details of selected kernels} \label{sec:kernels} This section describes the overall structure of our code, and focuses on the OpenACC implementation of selected performance-critical parts.
\begin{algorithm}[ht] \caption{Top level scheme of the full simulation code} \label{big_scheme_of_things_algorithm} \begin{algorithmic}[1] \STATE Read gauge configuration $U$ \STATE Create momenta $p$ \STATE Generate pseudofermions by heatbath \label{pseudoferm_generation_algorithm} \STATE Calculation of initial action \STATE Molecular Dynamics [possibly in single precision] \label{moldyn_step_algorithm} \STATE Calculate action variation $\Delta S$ \label{final_action_calculation_algorithm} \STATE Monte Carlo step accepted with probability $\textrm{min}(1,e^{-\Delta S})$ \STATE Take measurements \end{algorithmic} \end{algorithm} Algorithm~\ref{big_scheme_of_things_algorithm} is a top-level description of the full code, showing the main computational tasks. In terms of performance, the most critical step is Molecular Dynamics (step~\ref{moldyn_step_algorithm}), followed by the heatbath generation of the pseudofermions (step~\ref{pseudoferm_generation_algorithm}) and by the calculation of the final action (step~\ref{final_action_calculation_algorithm}). Steps~\ref{pseudoferm_generation_algorithm} and~\ref{final_action_calculation_algorithm} consist basically of calls to the multishift inverter routine, with a high target accuracy. The outer level of the multistep integrator for the Molecular Dynamics evolution (step~\ref{moldyn_step_algorithm}) in Algorithm~\ref{big_scheme_of_things_algorithm} is expanded in Algorithm~\ref{outer_cycle_algorithm}. As explained in Sec.~\ref{grand_challenge_section}, in zero temperature simulations, or for small quark masses, the heaviest computational part is usually the calculation of the fermion force, while in high temperature simulations the load is shifted inside the gauge cycles, as already shown in \tablename~\ref{tab:functions}.
The fermion force calculation step is implemented following~\cite{Morningstar:2003gk}; for this step a large fraction of the execution time is spent in the computation of the {\sf deo} and {\sf doe} functions implementing the Dirac operator. \begin{figure}[ht] \begin{lstlisting}[language=C,label=lst:dirac,belowcaptionskip=2em, caption={OpenACC implementation of the {\sf Deo} function; directive {\sf vector tile} divides the computational domain in sub-lattices (tiles), each processed within a compute unit in order to allow data re-use.\vspace*{1em}}] void acc_Deo( __restrict const su3_soa * const u, __restrict vec3_soa * const out, __restrict const vec3_soa * const in, __restrict const double_soa * const backfield){ int hd0, d1, d2, d3; #pragma acc kernels present(in) present(out) present(u) present(backfield) async(1) #pragma acc loop independent gang(GANG) for(d3=0; d3<nd3;d3++) { #pragma acc loop independent vector tile(TILE0,TILE1,TILE2) for(d2=0; d2<nd2; d2++) { for(d1=0; d1<nd1; d1++) { for(hd0=0; hd0 < nd0h; hd0++) { ... } } } } } \end{lstlisting} \end{figure} The {\sf deo} OpenACC implementation is shown in Listing~\ref{lst:dirac}, with the four nested loops over the lattice dimensions and the corresponding pragma directives. In this listing OpenACC directives are used: i) to identify the data structures already present in the accelerator memory, when targeting accelerators ({\sf present()} clause); ii) to make the compiler aware of the data independence of loop iterations ({\sf independent} clause); iii) to request that iterations be grouped so as to execute them in the same (or nearby) compute units ({\sf tile} clause). In particular, the {\sf tile} OpenACC clause asks the compiler to split or strip-mine each loop in the nest into two loops, an outer tile loop and an inner element loop. Where possible (e.g. in {\sf deo} and {\sf doe}), performing computations of adjacent lattice sites in nearby hardware compute units may increase data reuse (i.e.
matrices shared between sites)\cite{dirac-opt} for all the architectures where data caches are present, which means almost every modern processing architecture. The tile sizes offering the best performance depend, for each kernel, on several features of each specific architecture, e.g. vector units size, register numbers, cache levels and sizes. We keep the door open to limited architecture-specific optimization by allowing the {\sf TILE0, TILE1, TILE2} variables to be specified at compile time, telling the compiler how to group together iterations involving adjacent lattice sites. \begin{algorithm}[t] \caption{MD evolution - 2nd order Minimum Norm integrator (outer cycle)} \label{outer_cycle_algorithm} \begin{algorithmic}[1] \STATE Fermion Force Calculation \STATE Evolve momenta for $\lambda \Delta T/N_{md}$ \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \COMMENT{$\lambda=0.1931833275037836$}\cite{Omelyan03} \FOR{$i=1$ to $N_{md}-1$} \STATE {\bf Gauge cycle } ($\Delta T/2N_{md}$) \STATE Fermion Force Calculation \STATE Evolve momenta for $(1-2\lambda )\Delta T/N_{md}$ \STATE {\bf Gauge cycle }($\Delta T/2N_{md}$) \STATE Fermion Force Calculation \STATE Evolve momenta for $2\lambda \Delta T/N_{md}$ \ENDFOR \STATE {\bf Gauge cycle }($\Delta T/2N_{md}$) \STATE Fermion Force Calculation \STATE Evolve momenta for $(1-2\lambda )\Delta T/N_{md}$ \STATE {\bf Gauge cycle }($\Delta T/2N_{md}$) \STATE Fermion Force Calculation \STATE Evolve momenta for $\lambda \Delta T/2N_{md}$ \end{algorithmic} \end{algorithm} The actual evolution of the gauge configuration happens inside the inner gauge cycles, where the gauge contribution to the momenta evolution is also calculated. Among the tasks performed in the gauge cycles, the computation of staples in the gauge force calculation is the most time-consuming. It consists of calculating 6 products of 3 and 5 $SU(3)$ matrices representing links on C-shaped paths on the lattice.
The implementation of one of these functions is sketched in Listing~\ref{lst:rect_staples}: also in this case the parallelization has been done using the {\sf tile} directive over the 3 innermost nested loops. This again allows the use of 3D thread blocks, which should improve data reuse between threads, reducing bandwidth requirements. We also remark that, since second-nearest-neighbor site addressing is needed in this case, for the sake of simplicity we use indirect addressing\footnote{The code would be significantly more complicated if using direct addressing, also because of some limitations in the coding options necessary to avoid branches that would destroy thread coherence.}. Notice that the function {\sf staple\_type1} (as well as similar ones) has to be declared with {\sf \#pragma acc routine seq} to be used inside a kernel. \begin{figure} \begin{lstlisting}[language=C,label=lst:rect_staples,belowcaptionskip=1em, caption={Implementation of the function performing the evaluation of a staple.\vspace*{1em}}] #pragma acc routine seq void staple_type1(...){...} void calc_staples_type1( __restrict const su3_soa * const u, __restrict su3_soa * const loc_stap ) { int d0, d1, d2, d3, mu, iter; #pragma acc kernels present(u) present(loc_stap) present(nnp_openacc) present(nnm_openacc) #pragma acc loop independent gang(IMPSTAPGANG3) for(d3=0; d3<nd3; d3++){ #pragma acc loop independent vector tile(IMPSTAPTILE0, IMPSTAPTILE1, IMPSTAPTILE2) for(d2=0; d2<nd2; d2++) { for(d1=0; d1<nd1; d1++) { for(d0=0; d0 < nd0; d0++) { #pragma acc loop seq for(mu=0; mu<4; mu++){ ... const int idx_pmu = nnp_openacc[idxh][mu][parity]; ... staple_type1(&u[dir_nu_1R], idx_pmu, ... ) \end{lstlisting} \end{figure} In order to improve performance, we also implemented a single precision version of the code for the molecular dynamics evolution.
Due to the low arithmetic density of the LQCD algorithms, on GPUs at least, all kernels are memory-bound; this means that, when precision is not an issue, it is preferable to have single precision versions of selected functions and structures, as a plain $\times2$ increase in performance is expected with respect to the double precision implementation. \section{Performance analysis}\label{results} To compare the performance of our code on different architectures we consider two different benchmarks covering the most computationally intensive parts of the code. The first benchmark evaluates the performance of the Dirac operator, in both its single and double precision versions, while the second evaluates the performance of the gauge part of the molecular dynamics step. Depending on input configuration parameters either the former or the latter kernels make up most of the execution time of a typical simulation, as shown in \tablename~\ref{tab:functions}. We present the execution time per site of the Dirac operator for different lattice sizes in \tablename~\ref{tab:dirac-norm}. Exactly the same code has been run on all platforms without requiring any change; we have just re-compiled it with different flags instructing the PGI 16.10 compiler to target the corresponding architectures and using the best tile dimensions for each of them.
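To make concrete what the {\sf tile} clause asks of the compiler, and why its dimensions matter, the strip-mining it performs can be spelled out by hand in plain C. The 2-D reduction below, with illustrative tile sizes, is a simplified analogue of what happens to the 3-D tiled loop nests in our kernels; it is a didactic sketch, not an excerpt from the actual code.

```c
#include <assert.h>

/* Illustrative tile sizes; the real code picks TILE0/TILE1/TILE2 per
   architecture at compile time. */
#define T1 4
#define T0 4

/* Hand-written strip-mining equivalent to a 'tile(T1,T0)' clause: each loop
   is split into an outer tile loop and an inner element loop, so that
   iterations touching adjacent sites execute close together and can reuse
   cached data. */
double tiled_sum(const double *a, int n1, int n0) {
  double s = 0.0;
  for (int t1 = 0; t1 < n1; t1 += T1)        /* outer tile loops   */
    for (int t0 = 0; t0 < n0; t0 += T0)
      for (int d1 = t1; d1 < t1 + T1 && d1 < n1; d1++)  /* element loops */
        for (int d0 = t0; d0 < t0 + T0 && d0 < n0; d0++)
          s += a[d1 * n0 + d0];
  return s;
}
```

Every site is still visited exactly once; only the visiting order changes, which is why the transformation is legal whenever the iterations are independent.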
\begin{table}[ht] \tbl{Measured execution time per lattice site [ns] for the Dirac operator, on several processors and for several typical lattice sizes.}{ \resizebox{\textwidth}{!}{ \begin{tabular}{l|SSSSSSSS} \toprule \multirow{3}{*}{Lattice} & \multicolumn{8}{c}{Processor (CPU or GPU)} \\ & \multicolumn{2}{c}{NVIDIA GK201} & \multicolumn{2}{c}{NVIDIA P100} & \multicolumn{2}{c}{Intel E5-2630v3} & \multicolumn{2}{c}{Intel E5-2697v4} \\ & { SP} & { DP} & { SP} & { DP} & { SP} & { DP} & { SP} & { DP} \\ \midrule \midrule $32^2 \times 8 \times 32$ & 4.19 & 8.51 & 1.77 & 3.07 & 72.18 & 99.17 & 38.92 & 54.90 \\ $32^3 \times 8$ & 4.15 & 8.39 & 1.22 & 2.48 & 72.81 & 101.46 & 77.33 & 103.87 \\ $24^4$ & 4.43 & 8.62 & 1.58 & 2.90 & 70.44 & 94.42 & 51.13 & 66.87 \\ $32^4$ & 4.02 & 9.54 & 1.32 & 2.40 & 79.05 & 100.19 & 43.90 & 54.88 \\ $32^3 \times 36$ & 4.03 & 8.48 & 1.46 & 2.54 & 83.12 & 107.47 & 38.82 & 50.29 \\ \bottomrule \end{tabular} } \label{tab:dirac-norm} } \end{table} We tested two different NVIDIA GPUs, the K80 based on the Kepler architecture and the recently released P100 board based on the Pascal architecture. For the K80 the single precision version takes $\approx 4ns$ per lattice site, while the double precision version requires $\approx 8.5ns$. Running on the P100 we measure $\approx 1.5ns$ for single and $\approx 2.5ns$ for double precision, improving approximately by a factor $3 \times$ over the K80. This result scales almost perfectly with the hardware capabilities of the P100, which has $\approx 4.3 \times$ more cores and $\approx 3 \times$ more memory bandwidth, see \tablename~\ref{tab:architecture}. Concerning Intel CPUs, we have compared two different processors, the 8-core E5-2630v3 CPU based on the Haswell architecture, and the 18-core E5-2697v4 CPU based on Broadwell. Since the computing resources of the CPUs are roughly $3 \times$ lower than those of the GPUs, see \tablename~\ref{tab:architecture}, a performance drop is expected.
However, the actual performance drop measured on both CPUs is much larger than this theoretical expectation; indeed, the time per site on the Haswell is approximately $10 \times$ or more larger than on one K80 GPU. The Broadwell performs approximately a factor $2\times$ better than the Haswell, at least for some lattice sizes. We have identified two main reasons for this non-optimal behavior, and both of them point to some still immature features of the PGI compiler when targeting x86 architectures, which, we expect, should soon be resolved: \begin{itemize} \item \textbf{Parallelization} - the compiler is only able to split outer loops across different threads, while inner loops are executed serially or vectorized within each thread. This explains why on the Broadwell CPU running on a $32^2\times8\times32$ lattice we obtain a performance $2\times$ better than for a $32^3\times8$ lattice, which has the same volume but allows the outer loop to be split over only $8$ threads. \item \textbf{Vectorization} - as reported by the compilation logs, the compiler fails to vectorize the {\sf deo} and {\sf doe} functions computing the Dirac operator (see Listing~\ref{lst:dirac}), reporting that it is unable to vectorize due to the use of ``mixed data-types''. To verify whether this is related to how we have coded these functions, we have translated the OpenACC pragmas into the corresponding OpenMP ones -- without changing the C code -- and compiled using the Intel compiler (version 17.0.1). In this case the compiler succeeds in vectorizing the two functions, running a factor $2 \times$ faster compared to the OpenACC version compiled by the PGI compiler. \end{itemize} \begin{table}[ht] \tbl{ Measured execution time per lattice site [ns] for the pure gauge Molecular Dynamics step for several processors and several typical lattice sizes.
}{ \resizebox{\textwidth}{!}{ \begin{tabular}{l|SSSS} \toprule \multirow{2}{*}{Lattice} & \multicolumn{4}{c}{Processor (CPU or GPU)} \\ & \multicolumn{1}{c}{NVIDIA GK201} & \multicolumn{1}{c}{NVIDIA P100} & \multicolumn{1}{c}{Intel E5-2630v3} & \multicolumn{1}{c}{Intel E5-2697v4}\\ \midrule \midrule $32^2 \times 8 \times 32$ & 193.79 & 51.80 & 1613.88 & 926.48 \\ $32^3 \times 8$ & 190.39 & 51.69 & 2075.08 & 1756.78 \\ $24^4$ & 212.04 & 53.74 & 1265.13 & 979.28 \\ $32^4$ & 201.82 & 51.72 & 1719.97 & 944.40 \\ $32^3 \times 36$ & 208.54 & 52.62 & 1801.81 & 837.68 \\ \bottomrule \end{tabular} } \label{tab:md-norm} } \end{table} \tablename~\ref{tab:md-norm} shows the execution time of the gauge part of the molecular dynamics step. As already remarked, this is, together with the application of the Dirac operator, one of the two most time-consuming steps. As we can see, the update time per site is quite stable across all lattice sizes we have tried and on all architectures. Going from the NVIDIA K80 to the P100 the time improves by a factor $\approx 3\times$, while between Haswell and Broadwell we gain roughly a factor $\approx 1.5\times$ / $2.0\times$. We finally mention that we have also been able to compile and run our code on an AMD FirePro W9100 GPU and on the latest version of the Intel Xeon Phi processor, the Knights Landing (KNL). However, in these cases, results are still preliminary. In more detail, the compiler itself crashes when compiling the code for the AMD GPU for some specific lattice sizes; for the KNL, specific compiler support is still missing, but this processor is able to run the code compiled for the Haswell architecture, implying however that 512-bit vectorization is not used. These problems do not allow us to perform a systematic comparison of performance on these architectures. Once again, we believe that this is due to some immaturity of the compiler, and we expect that these issues will be resolved in future versions.
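For reference, the OpenACC-to-OpenMP pragma translation used in the vectorization test discussed earlier can be sketched on a simple 4-D loop nest as follows. The specific directive mapping shown here (gang loop to {\sf parallel for}, tiled vector loops to collapsed {\sf simd} loops) is one plausible choice made for illustration, not necessarily the exact one used in our experiments, and the function itself is a placeholder, not the real {\sf deo} body.

```c
#include <assert.h>

/* Same C loop structure as the OpenACC kernels, with OpenMP directives:
   outer loops split across threads, inner loops vectorized.  Without
   -fopenmp/-qopenmp the pragmas are ignored and this is plain C. */
void omp_scale_4d(double *restrict out, const double *restrict in,
                  double alpha, int n3, int n2, int n1, int n0) {
#pragma omp parallel for collapse(2)
  for (int d3 = 0; d3 < n3; d3++) {
    for (int d2 = 0; d2 < n2; d2++) {
#pragma omp simd collapse(2)
      for (int d1 = 0; d1 < n1; d1++) {
        for (int d0 = 0; d0 < n0; d0++) {
          long i = ((long)(d3 * n2 + d2) * n1 + d1) * n0 + d0;
          out[i] = alpha * in[i];   /* per-site, data-independent update */
        }
      }
    }
  }
}
```

The key point of the test is that the loop bodies are untouched, so any difference in vectorization is attributable to the compiler front-ends rather than to the C code.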
\begin{table}[ht] \tbl{ Execution time [sec] of a full trajectory of a complete Monte Carlo simulation for several typical physical parameters, running on one GPU of a NVIDIA K80 system. We compare the OpenACC code developed in this paper and an earlier GPU-optimized CUDA code. Here we use the standard Wilson action and unimproved staggered fermions, as the CUDA code does not support the more advanced improvements available in the OpenACC version. }{ { \begin{tabular}{l|SSSSS} \toprule {Lattice} & $am$ & $\beta$ & {CUDA} & {OpenAcc} & {Variation} \\ \midrule \midrule $32^3 \times 8$ & 0.0125 & 5.55 & 392.69 & 490.74 & {+25\%} \\ $24^4$ & 0.0125 & 5.55 & 303.80 & 328.07 & {+8\%} \\ $32^4$ & 0.001 & 5.52 & 8973.82 & 8228.36 & {-8\%} \\ \bottomrule \end{tabular} } \label{tab:cuda-openacc} } \end{table} \tablename~\ref{tab:cuda-openacc} addresses the question of the efficiency costs (if any) of our architecture-portable code; the table compares the execution time for a {\em full} Monte Carlo step (in double precision) of the OpenACC code and a previously developed CUDA implementation\cite{cudacode}, optimized for NVIDIA GPUs. Although the two codes are not exactly in one-to-one correspondence, the implementations are similar enough to make such a test quantitatively meaningful. One immediately sees that the performance of the two implementations is comparable and that the use of OpenACC does not imply a dramatic performance loss, the differences between the execution times of the two versions being of the order of $10\div 20\%$. The worst case is that of the $32^3\times 8$ lattice, on which OpenACC is about $25\%$ slower than CUDA. Since we are comparing a high-level version of the code with one specifically developed for NVIDIA GPUs, this would not be a dramatic loss; moreover, in this case the comparison is not completely fair.
Indeed, for this high temperature simulation the gauge part of the Molecular Dynamics step becomes the computationally heaviest task and, in the CUDA implementation, part of it had been explicitly hard-coded in single precision. For the low-temperature test cases the differences between the CUDA and the OpenACC implementation are much smaller and, in fact, in one case the OpenACC version is the faster one. A possible explanation is the following: in the CUDA version unidimensional blocks are adopted to parallelize the Dirac operator, while in the OpenACC implementation three-dimensional block structures are used, which better fit the larger caches of recent GPUs and, especially on larger lattices, improve data reuse. \section{Concluding Remarks}\label{conclusions} In this work we have developed a full state-of-the-art production-grade code for Lattice QCD simulations with staggered fermions, using the OpenACC directive-based programming model. Our implementation includes all steps of a complete simulation, and most of them run on accelerators, minimizing the transfer of lattice data to and from the host. We have used the PGI compiler, which supports the OpenACC standard and is able to target almost all current architectures relevant for HPC computing, even if with widely different levels of maturity and reliability. Exactly the same code runs successfully on NVIDIA many-core GPUs and Intel multi-core CPUs, and for both architectures we have measured roughly comparable levels of efficiency. Also, the performance of the complete code is roughly the same as that of an equivalent code, specifically optimized for NVIDIA GPUs and written in the CUDA language.
Our code also runs on AMD GPUs and on the KNL Intel Phi processor, even if the compilation and run-time environment for these processors is still unable to deliver production-grade codes; in these cases, we have strong indications that these problems come from a residual immaturity of the compilation chain, and we expect that they will soon be resolved. All in all, our final result is a LQCD Monte Carlo code portable across a large subset of HPC-relevant processor architectures with consistent performance. Some further comments are in order: i) using a directive-based programming model, we are able to target the different computing platforms presently used for HPC without rewriting the code when moving from one platform to another; ii) the OpenACC standard provides a good level of hardware abstraction, requiring the programmer only to specify the functions to be parallelized and executed on the accelerator; the compiler is then able to exploit the parallelism according to the target processor, hiding from the programmer most hardware optimizations; iii) the OpenACC code has roughly the same level of performance as code implemented in a native language such as CUDA for NVIDIA GPUs, allowing the computing resources of the target processor to be exploited efficiently. In the near future we plan to carefully assess performance on AMD and KNL systems, in order to enlarge the platform portfolio of our code. We also plan to assess whether OpenMP4 provides the same level of portability as OpenACC, as soon as compilers supporting this programming standard become available. This is important in order to have a unique directive-based programming model which is widely used by several scientific communities and supported by several compilers (GCC, ICC, \ldots). Finally, we are already working on a massively parallel version of our code, able to run concurrently on large clusters of CPUs and accelerators.
\section*{Acknowledgments} We warmly thank Francesco Sanfilippo (the developer of the NISSA code\cite{nissa}) for his advice and support. We thank the INFN Computing Center in Pisa for providing us with the development framework, and Universit\`a degli Studi di Ferrara and INFN-Ferrara for the access to the COKA GPU cluster. This work has been developed in the framework of the SUMA, COKA and COSA projects of INFN. FN acknowledges financial support from the INFN SUMA project.
\section{Introduction} Qualitative numerical problems (QNPs) are classical planning problems extended with non-negative numerical variables $X$ that can be decreased or increased ``qualitatively'', i.e., by random amounts. Since such numerical variables cannot be used for counting, QNP planning, unlike most other general forms of planning with numbers \cite{helmert:numeric}, turns out to be decidable. QNPs were introduced by \citeay{sid:aaai2011} as a useful model for \emph{generalized planning}, namely, the synthesis of plans that solve multiple classical planning instances \cite{levesque:loops,bonet09automatic,srivastava:generalized,hu:generalized,bonet:ijcai2015,BelleL16,anders:review}. Basically, collections $\mathcal{Q}$ of planning instances $P$ that share the same set of actions and state features may often be expressed as a single QNP problem $Q$ whose solutions, which map state features into actions, then solve all problems $P$ in $\mathcal{Q}$ \cite{bonet:ijcai2018}. QNPs can be solved in two steps \cite{sid:aaai2011}. First, the QNP $Q$ is converted into a standard fully observable non-deterministic (FOND) problem $P$ \cite{cimatti:fond}. Then, solutions of $P$ obtained by an off-the-shelf FOND planner are tested for \emph{termination}. This last step is required because the non-determinism in the FOND problem $P$ is not fair but \emph{conditionally fair:} infinite qualitative decrements of a numerical variable $X$ make the expression $X=0$ true eventually, provided that $X$ is increased no more than a finite number of times \cite{bonet:ijcai2017}. The policies that solve $P$ and terminate are then exactly the policies that solve the QNP $Q$ \cite{sid:aaai2011}. This generate-and-test approach to solving QNPs, however, has two computational shortcomings. First, it is not simple to amend FOND planners to generate all the solutions of a FOND problem, because FOND plans are not action sequences but closed-loop policies.
Second, the number of policies that need to be tested for termination may be huge: exponential in the number of FOND states, and hence, doubly exponential in the number of variables. In this work we address these limitations while providing additional insights on QNPs. We introduce two polynomial-time reductions, one from QNPs to FOND problems and the other from FOND problems to QNPs. As with every (formal) reduction, both reductions are sound and complete, and hence do not require termination tests. A result of these reductions is that QNPs and FOND problems are shown to have the same expressive power; in particular, the plan-existence decision problem for both has the same complexity, EXP-Complete \cite{littman:fond,rintanen:po}. The new QNP to FOND translator is implemented and available. In combination with sound and complete FOND planners, the translation yields the only sound and complete QNP planner available. The structure of the paper is as follows. We first review classical planning, FOND planning, QNPs, the direct translation of QNPs into FOND problems, and the termination test. We follow ideas from \citeay{sid:aaai2011} but in a slightly different and more expressive formulation. We then introduce the two new reductions: from FOND problems into QNPs, and from QNPs into FOND problems. This last reduction is very different from the one sketched in \cite{bonet:ijcai2017}, which is buggy (we illustrate this with an example). We then consider variations and extensions of QNPs and FOND problems, look at the relation between the two in terms of the fairness assumptions that each model makes, discuss related work, and report experimental results. \section{Classical and FOND planning} A classical planning problem is a sequential decision problem where a goal is to be reached by performing actions with deterministic effects from a given initial state. These problems are usually expressed in compact form in planning languages such as STRIPS \cite{fikes:strips,russell:book}.
A (grounded) STRIPS planning problem (with negation) is a tuple $P=\tup{F,I,O,G}$ where $F$ denotes a set of propositional variables, $I$ and $G$ are sets of $F$-literals representing the initial and goal situation, and $O$ is a set of actions $a$ with preconditions and effects $Pre(a)$ and $\mathit{Eff}\xspace(a)$ given by sets of $F$-literals. The \emph{state model} $\S(P)$ for the problem $P=\tup{F,I,O,G}$ is a tuple $\S(P)=\tup{S,s_0,Act,A,f,S_G}$ where $S$ is the set of possible truth-valuations over the $F$ literals, called the states, $s_0$ is the initial state, $Act=O$, $A(s)$ represents the actions $a$ in $Act$ whose preconditions are true in $s$, $f(a,s)$ represents the state $s'$ that follows action $a$ in $s$ for $a \in A(s)$, and $S_G$ is the set of goal states. It is assumed that the problem $P$ is consistent in the sense that $s_0$ and $f$ are well-defined and $S_G$ is not empty. A solution to a classical problem $P$ is an action sequence $a_0, \ldots, a_n$ that generates a state sequence $s_0,\ldots, s_{n+1}$ over the model $\S(P)$ that reaches the goal. In this sequence, $a_i \in A(s_i)$ and $s_{i+1}=f(a_i,s_i)$ for $i=0, \ldots, n$, and $s_{n+1} \in S_G$. A \emph{fully-observable non-deterministic (FOND) problem} $P$ is like a classical planning problem except that actions $a$ may have non-deterministic effects expressed as $\mathit{Eff}\xspace_1(a) \, | \, \cdots \, | \mathit{Eff}\xspace_n(a)$ where $\mathit{Eff}\xspace_i(a)$ is a set of $F$-literals as above \cite{cimatti:fond,geffner:book,ghallab:new-book}. The state model $\S(P)$ determined by a FOND problem $P=\tup{F,I,O,G}$ is a tuple $\S(P)=\tup{S,s_0,Act,A,F,S_G}$ as above with the difference that the state transition function $F$ is non-deterministic, and maps an action $a$ and state $s$ into a non-empty set $F(a,s)$ of possible successor states. As usual, the non-deterministic transition function $F$ is given in factored form. 
That is, for an action $a$ made of multiple effects $\mathit{Eff}\xspace_1\,|\,\cdots\,|\,\mathit{Eff}\xspace_n$ (possibly deterministic when $n=1$), each outcome $s'$ in $F(a,s)$ results from the choice of one $\mathit{Eff}\xspace_i$ for each non-deterministic effect of $a$.\footnote{\label{foot:1}As it is standard, any choice of effects is assumed to be consistent (i.e., any pair of choices for two different non-deterministic effects of the \emph{same action} contain no complementary literals). However, with some (polynomially bounded) extra work, our methods, algorithms and results still apply if the model is extended with \emph{constraints} that every outcome $s'$ must satisfy, when such constraints are given in suitable form; e.g.\ DNF formulas over $F$. } The solutions of FOND problems ensure that the goal is reached with certainty under certain fairness assumptions. Policies or plans in the FOND setting are partial functions $\pi$ mapping states $s$ into actions $\pi(s)$. A finite or infinite state trajectory $s_0, s_1, \ldots, s_n$ is induced by $\pi$ over the model $\S(P)$ if the action $a_i=\pi(s_i)$ is defined, it is applicable in the state $s_i$, i.e., $a_i \in A(s_i)$, and $s_{i+1}$ is in $F(a_i,s_i)$, for $i=0, \ldots, n-1$. The trajectory is said to be a \emph{$\pi$-trajectory.} The trajectory is \emph{maximal} if A)~it is infinite, i.e., $n=\infty$, and does not include a goal state, B)~$s_n$ is the first goal state in the sequence, or C)~the action $\pi(s_n)$ is not defined or not applicable in $s_n$. A policy $\pi$ is a solution of the FOND problem $P$ if all the \emph{fair} maximal trajectories induced by $\pi$ over the model $\S(P)$ are goal reaching \cite{strong-cyclic,cimatti:fond}. The so-called \emph{strong solutions} assume that all state trajectories are fair.
\emph{Strong-cyclic solutions}, on the other hand, assume that all trajectories are fair \emph{except} the infinite trajectories where a state $s$ occurs infinitely often but a state transition $(s,s')$, for some $s' \in F(a,s)$ with $a=\pi(s)$, occurs finitely often. The latter trajectories are deemed to be \emph{unfair}. Other \emph{equivalent} characterizations of strong and strong cyclic solutions are common. For example, a strong cyclic solution $\pi$ for a FOND problem $P$ is also a policy $\pi$ such that for each $\pi$-trajectory connecting an initial state to a state $s$, there is a $\pi$-trajectory connecting $s$ to a goal state. Similarly, a strong solution is a strong cyclic solution $\pi$ with no cycles; i.e., one where no $\pi$-trajectory visits the same state twice. Strong solutions can also be thought of as winning strategies against an adversary, and strong cyclic solutions as winning strategies against nature. Indeed, there is a well-known relation between (proper) policies that achieve the goal with probability 1 in goal-based MDPs (Markov Decision Processes) and the strong cyclic policies that solve the FOND problem associated with the MDP, where the transition function is such that $F(a,s)$ collects the states $s'$ that are possible after action $a$ in $s$, i.e., for which $P_a(s'|s) > 0$ \cite{geffner:book}. From now on, by solution of a FOND problem we mean a \emph{strong cyclic solution}, and by a FOND planner, we mean a strong cyclic planner. There are some good FOND planners available, including PRP \cite{prp}, based on classical planners, MyND \cite{mynd}, based on heuristic AND/OR search, and FOND-SAT \cite{geffner:fond-sat}, based on a reduction to SAT.
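The equivalent characterization above suggests a direct explicit-state check: a policy is a strong cyclic solution iff every state reachable from the initial state under the policy can still reach a goal state under the policy. The toy checker below works on an explicit policy graph, where entry $succ[s][t]$ records whether $t \in F(\pi(s),s)$; it is an illustrative sketch of ours, not an excerpt from any planner.

```c
#include <assert.h>

#define MAXS 16   /* maximum number of states in this toy checker */

/* Mark all states reachable from s by following policy transitions. */
static void reach(int n, int succ[][MAXS], int s, int *vis) {
  vis[s] = 1;
  for (int t = 0; t < n; t++)
    if (succ[s][t] && !vis[t]) reach(n, succ, t, vis);
}

/* Strong-cyclic test: every non-goal state reachable from s0 under the
   policy must itself be able to reach some goal state under the policy. */
int is_strong_cyclic(int n, int succ[][MAXS], int s0, const int *is_goal) {
  int fwd[MAXS] = {0};
  reach(n, succ, s0, fwd);                 /* states reachable from s0 */
  for (int s = 0; s < n; s++) {
    if (!fwd[s] || is_goal[s]) continue;
    int vis[MAXS] = {0};
    reach(n, succ, s, vis);                /* can s still reach a goal? */
    int ok = 0;
    for (int t = 0; t < n; t++)
      if (vis[t] && is_goal[t]) ok = 1;
    if (!ok) return 0;
  }
  return 1;
}
```

A self-loop at a non-goal state does not hurt as long as some other outcome of the same action eventually leads to the goal, which is exactly the fairness assumption behind strong cyclic planning.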
\subsection{QNPs: Syntax} The syntax of QNPs is defined as an extension of the STRIPS language with negation. A QNP is a tuple $Q=\tup{F,V,I,O,G}$ where the new component is a set $V$ of \emph{non-negative numerical variables} $X \in V$. These variables introduce the non-propositional atoms $X=0$ and their negations, denoted as $X > 0$. These literals can appear in the initial situation, action preconditions, and goals of $Q$. The effects of actions $a$ on a numerical variable $X$ can be just qualitative increments or qualitative decrements denoted by the expressions $Inc(X)$ and $Dec(X)$. We refer to $X=0$ and $X > 0$ as the $V$-literals for $X \in V$, and to $p$ and $\neg p$ for $p \in F$, as the $F$-literals in $Q$. \begin{definition} \label{def:qnp:syntax} A QNP is a tuple $Q=\tup{F,V,I,O,G}$ where $F$ and $V$ are sets of propositional and numerical variables respectively, $I$ and $G$ denote the initial and goal situations, and $O$ is a set of actions $a$ with preconditions, and propositional and numerical effects that are denoted as $Pre(a)$, $\mathit{Eff}\xspace(a)$, and $N(a)$ respectively. The $F$-literals can appear in $I$, $G$, $Pre(a)$, and $\mathit{Eff}\xspace(a)$, while $V$-literals can appear in $I$, $G$, and $Pre(a)$. The numerical effects $N(a)$ only contains special atoms of the form $Inc(X)$ or $Dec(X)$ for the variables $X$ in $V$. Actions with the $Dec(X)$ effect must feature the precondition $\GT{X}$ for any variable $X$ in $V$. \end{definition} QNPs are assumed to be syntactically consistent by requiring that no pair of complementary literals or qualitative effects appears in the initial situation, action effects, or goals. A pair of complementary literals or qualitative effects has the form $\{p,\neg p\}$ for some $p$ in $F$, or $\{\EQ{X},\GT{X}\}$ or $\{\DEC{X},\INC{X}\}$ for some $X$ in $V$. 
The preconditions, propositional and numerical effects for an action $a$ are denoted by pairs $\abst{Pre(a)}{\mathit{Eff}\xspace(a), N(a)}$ where the $Inc(X)$ and $Dec(X)$ expressions in $N(a)$ are abbreviated by $\INC{X}$ and $\DEC{X}$ respectively. Before defining the semantics of QNPs, let us consider an example. \begin{example} An \emph{abstraction} that is suitable for expressing the generalized problem of achieving the goal $clear(x)$ over an arbitrary Blocksworld instance \cite{bonet:ijcai2018} is given in terms of the QNP $Q_{clear(x)}=\tup{F,V,I,O,G}$ where $F=\{H\}$ contains a boolean variable $H$ that represents if the gripper is holding a block, $V=\{n(x)\}$ contains a numerical variable $n(x)$ that represents the number of blocks above $x$, and $I=\{\neg H, \GT{n(x)}\}$ and $G=\{\EQ{n(x)}\}$ represent the initial and goal situations. The actions $O=\{a,b\}$ are \begin{alignat}{1} \label{eq:ex1a} a\ &=\ \abst{\neg H, \GT{n(x)}}{H, \DEC{n(x)}} \intertext{and} \label{eq:ex1b} b\ &=\ \abst{H}{\neg H} \,. \end{alignat} \indent It is easy to see that the first action $a$ picks up blocks that are above $x$; its first precondition $\neg H$ expresses that the gripper is holding no block, while the second $\GT{n(x)}$ expresses that there is at least one block above $x$. The effects, on the other hand, make $H$ true (expressing that some block is being held) and decrease the number $n(x)$ of blocks above $x$. The other action $b$ puts blocks being held away from block $x$, as expressed by its precondition $H$ and effect $\neg H$. The fact that $b$ puts blocks away from block $x$ is reflected in that it does not affect the variable $n(x)$. The QNP $Q_{clear}$ captures the relevant part of the infinite collection of Blocksworld instances where the goal is to achieve the atom $clear(x)$ for some block $x$. The solution to $Q_{clear}$ provides the general strategy for solving all such instances. For ways of learning such abstractions automatically, see \cite{bonet:aaai2019}.
\end{example} \subsection{QNPs: Semantics} A state $s$ for QNP $Q=\tup{F,V,I,O,G}$ is a valuation that assigns a truth value $s[p]$ to each boolean variable $p \in F$, and a non-negative real value $s[X]$ to each numerical variable $X \in V$. Since the initial situation $I$ can only feature atoms of the form $\EQ{X}$ or $\GT{X}$, there is a \emph{set} $S_0$ of possible initial states $s_0$ that correspond to the valuations that satisfy the literals in $I$. For example, in $Q_{clear(x)}$, $I$ is given by the literals $I=\{\neg H, \GT{n(x)}\}$, meaning that $S_0$ contains all and only the valuations that make $H$ false and $n(x){\,=\,}r$ for some positive real number $r$. The use of variables that can take real values for representing integer counters illustrates that the semantics of QNPs is coarse-grained, and for this reason, decidable. Indeed, QNPs use just one qualitative property of numbers; namely, that a non-negative variable that is decreased infinitely often and increased finitely often must eventually reach the value of zero. This property is true for integers, and it is also true for reals, as long as the magnitude of the decrements is bounded from below by some positive $\epsilon$-parameter. More about this below.
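The qualitative property just invoked can be illustrated numerically: a non-negative variable that is only decreased, each time by at least $\epsilon$ (with clipping to zero when less than $\epsilon$ remains), reaches zero in at most $\lceil x_0/\epsilon \rceil$ steps, regardless of the actual magnitudes of the decrements. The following small sketch is purely illustrative:

```python
# Illustrative check of the epsilon-bounded decrease property: arbitrary
# decrements bounded below by eps drive the variable to zero in at most
# ceil(x0 / eps) steps (names and setup are assumptions, not from the paper).

import math
import random

def steps_to_zero(x0, eps, rng):
    """Apply arbitrary decrements >= eps until the variable reaches zero;
    when less than eps remains, clip to zero (the case 0 = x' < x < eps
    allowed for epsilon-trajectories)."""
    x, steps = x0, 0
    while x > 0:
        if x < eps:
            x = 0.0                               # clipping case
        else:
            x = max(0.0, x - (eps + rng.random()))  # any decrement >= eps
        steps += 1
    return steps

rng = random.Random(0)
for x0, eps in [(10.0, 0.5), (3.7, 0.25), (100.0, 1.0)]:
    assert steps_to_zero(x0, eps, rng) <= math.ceil(x0 / eps)
```

An asymptotic decrease (say, halving the value at each step) would instead run forever without reaching zero, which is exactly what the $\epsilon$-bound rules out.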
The state model $\S(Q)$ represented by a QNP can be characterized as follows: \begin{definition} \label{def:qnp:state-model} A QNP $Q=\tup{F,V,I,O,G}$ determines a non-deterministic state model $\S(Q)=\tup{S,S_0,Act,A,F,S_G}$ where \begin{enumerate}[$\bullet$] \item the states $s$ in $S$ are the valuations that assign a truth value to the boolean variables in $F$ and a non-negative real value to the numerical variables in $V$, \item the initial states $s_0$ in $S_0$ are those that satisfy the literals in $I$ under a closed-world assumption ($s_0$ makes $p$ and $\EQ{X}$ false if the literals $p$ and $\EQ{X}$ are not in $I$), \item the actions in $Act$ are those in $O$; i.e., $Act=O$, \item the actions $A(s)$ applicable in $s$ are those in $Act$ such that $Pre(a)$ is true in $s$, \item the goal states in $S_G$ are those that satisfy $G$, \item the transition function $F$ is such that $s' \in F(a,s)$ for $a \in A(s)$ if \begin{enumerate} \item $s'[p]$ is $true$ (resp. $false$) if $p$ (resp. $\neg p$) is in $\mathit{Eff}\xspace(a)$, \item $s[X] < s'[X]$ if $Inc(X)$ is in $N(a)$, \item $s'[X] < s[X]$ if $Dec(X)$ is in $N(a)$, \item $s'[p] = s[p]$ if neither $p$ nor $\neg p$ is in $\mathit{Eff}\xspace(a)$, \item $s'[X] = s[X]$ if neither $\INC{X}$ nor $\DEC{X}$ is in $N(a)$. \end{enumerate} \end{enumerate} \end{definition} A \emph{trajectory} $s_0, a_0, s_1, a_1, \ldots, s_n$ is compatible with the model $\S(Q)=\tup{S,S_0,Act,A,F,S_G}$ if $s_0 \in S_0$, and $a_i \in A(s_i)$ and $s_{i+1} \in F(a_i,s_i)$ for each $a_i$ in the sequence.
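Since $F(a,s)$ contains uncountably many successors, a natural computational reading of the transition function in Definition~\ref{def:qnp:state-model} is as a membership test: given $s$, $a$, and a candidate $s'$, check conditions (a)--(e). The sketch below uses an illustrative dictionary encoding of states and actions (fields \texttt{pre}, \texttt{eff}, \texttt{inc}, \texttt{dec}), which is an assumption of this example and not a standard planner format:

```python
# Illustrative membership test s' in F(a, s) for the state model S(Q).

def holds(s, lit):
    """Evaluate a precondition literal on state s: ('p', True/False) for
    F-literals, ('X', '=0') or ('X', '>0') for V-literals."""
    sym, tag = lit
    if tag in (True, False):
        return s[sym] == tag
    return s[sym] == 0 if tag == '=0' else s[sym] > 0

def in_F(s, a, s2):
    """True iff s2 is in F(a, s), i.e. conditions (a)-(e) all hold."""
    if not all(holds(s, lit) for lit in a['pre']):
        return False                                    # a not in A(s)
    if any(s2[p] != v for p, v in a['eff']):            # (a) propositional effects
        return False
    if any(not (s[X] < s2[X]) for X in a['inc']):       # (b) Inc(X): strict increase
        return False
    if any(not (0 <= s2[X] < s[X]) for X in a['dec']):  # (c) Dec(X): strict decrease
        return False
    touched = {p for p, _ in a['eff']} | a['inc'] | a['dec']
    return all(s2[y] == s[y] for y in s if y not in touched)  # (d)-(e) inertia
```

For the pick-up action of the $clear(x)$ example, any successor that strictly decreases $n(x)$ while making $H$ true passes the test, and any successor that leaves $n(x)$ unchanged fails it.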
The trajectory is an $\epsilon$-bounded trajectory or \emph{$\epsilon$-trajectory} if the numerical changes are bounded from below by a parameter $\epsilon > 0$, except when this would make the variable negative: \begin{definition} \label{def:qnp:epsilon-trajectory} A trajectory $s_0, a_0, s_1, a_1, \ldots, s_n$ is an \emph{$\epsilon$-trajectory} iff for any variable $X$ and time point $i$, with $i < n$, $s_{i+1}[X] \neq s_{i}[X]$ implies $|s_{i+1}[X] - s_{i}[X]| \geq \epsilon$ or $0=s_{i+1}[X]<s_i[X]<\epsilon$. \end{definition} Trajectories bounded by $\epsilon > 0$ cannot decrease the value of a variable asymptotically without ever reaching the value of zero. This is in agreement with the key assumption in QNPs by which variables that are decreased infinitely often and increased finitely often must eventually reach the value zero. From now on, \textbf{trajectories over QNPs will refer to $\epsilon$-trajectories for some $\epsilon > 0$.} \subsection{QNPs: Solutions} Solutions to QNPs take the form of partial functions or policies $\pi$ that map states into actions. The choice of the action $\pi(s)$ to be done in a state $s$, however, can only depend on the truth values $s[p]$ associated with the boolean variables $p$ in $F$ and the truth values of the expressions $s[X]=0$ associated with the numerical variables $X$ in $V$. If we use the notation $s[X=0]$ to refer to $s[X]=0$, then $\pi(s)$ must depend solely on the \emph{truth-valuation over the $F$-literals $p$ and the $V$-literals $X=0$} that are determined by the state $s$. There is indeed a finite number of such truth valuations but an infinite number of states. We refer to such truth valuations as the \emph{boolean states} of the QNP and denote the boolean state associated with a state $s$ as $\bar s$. \begin{definition}[Policy] \label{def:qnp:policy} A policy $\pi$ for a QNP $Q=\tup{F,V,I,O,G}$ is a partial mapping of states into actions such that $\pi(s)=\pi(s')$ if $\bar s=\bar s'$.
\end{definition} A trajectory $s_0, a_0, s_1, a_1, \ldots,s_n$ compatible with the model $\S(Q)$ is said to be a \emph{$\pi$-trajectory} for $Q$ if $a_i = \pi(s_i)$ for each $i$. The $\pi$-trajectory is also said to be a \emph{trajectory induced by} $\pi$ or compatible with $\pi$. As before, a $\pi$-trajectory is \emph{maximal} if A) the trajectory is infinite and does not include a goal state, B)~$s_n$ is the first goal state in the trajectory, or C)~$\pi(s_n)$ is undefined or denotes an action that is not applicable in $s_n$. The solutions to QNPs are then defined as follows: \begin{definition}[Solution] \label{def:qnp:solution} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. The policy $\pi$ \emph{solves} $Q$ iff for every $\epsilon>0$, all the maximal $\epsilon$-trajectories induced by $\pi$ reach a goal state. \end{definition} \begin{example} Consider the QNP $Q_{clear(x)}=\tup{F,V,I,O,G}$ from above with $F=\{H\}$, $V=\{n(x)\}$, $I=\{\neg H,\GT{n(x)}\}$, $G=\{\EQ{n(x)}\}$, and $O=\{a,b\}$ where $a=\abst{\neg H, \GT{n(x)}}{H, \DEC{n(x)}}$ and $b=\abst{H}{\neg H}$. Let $\pi$ be the policy defined by the rules: \begin{alignat}{1} &\text{if $\neg H$ and $\GT{n(x)}$, then do $a$} \,, \\ &\text{if $H$ and $\GT{n(x)}$, then do $b$} \,. \end{alignat} All the maximal $\epsilon$-bounded trajectories that are induced by the policy $\pi$ on $Q_{clear(x)}$ have the form \begin{alignat}{1} s_0,a,s_1,b,s_2,a,s_3,b,\ldots,s_{2m},a,s_{2m+1} \end{alignat} where $s_{2m+1}$, for a non-negative integer $m$, is the first state where $\EQ{n(x)}$ is true. The actions $a$ and $b$ alternate because the first makes $H$ true and the second makes it false. In each transition $(s_i,s_{i+1})$ for a non-negative even integer $i$, the numerical variable $n(x)$ decreases by $\epsilon$ or more, or $0=s_{i+1}[n(x)]<s_i[n(x)]<\epsilon$. The former case cannot happen more than $s_0[n(x)]/\epsilon$ times, as the numerical variable $n(x)$ is decreased every two steps and is never increased.
Thus, in all cases and for any $\epsilon > 0$, any $\epsilon$-trajectory induced by the policy $\pi$ reaches a goal state in a finite number of steps, regardless of the initial value $s_0[n(x)]$ of $n(x)$, and regardless of the actual magnitude of the changes $|s_{i+1}[n(x)]-s_{i}[n(x)]|$. \end{example} \medskip \begin{example} A more interesting QNP that requires ``nested loops'' is $Q_{nest}=\tup{F,V,I,O,G}$ with $F=\emptyset$, $V=\{X,Y\}$, $I=\{\GT{X},\GT{Y}\}$, $G=\{\EQ{X}\}$, and $O=\{a,b\}$ where \begin{alignat}{1} a\ &=\ \abst{\GT{X}, \EQ{Y}}{\DEC{X}, \INC{Y}} \,, \\ b\ &=\ \abst{\GT{Y}}{\DEC{Y}} \,. \end{alignat} The policy $\pi$ is given by the rules: \begin{alignat}{1} &\text{if $\GT{X}$ and $\EQ{Y}$, then do $a$} \,, \\ &\text{if $\GT{X}$ and $\GT{Y}$, then do $b$} \,. \end{alignat} The policy decrements $Y$ using action $b$ until the action $a$ that decreases $X$ and increases $Y$ can be applied, and the process is repeated until $\EQ{X}$. The $\epsilon$-trajectories induced by $\pi$ have the form \begin{alignat}{1} s_0,\quad a, s^1_1, b, \ldots, b, s^1_{k_1}, \quad a, s^2_1, b, \ldots, b, s^2_{k_2}, \quad \ldots, \quad a, s^m_1, b, \ldots, b, s^m_{k_m}, \quad a,s_G \,. \end{alignat} where there is an outer loop that is executed a number of times $m$ bounded by $s_0[X]/\epsilon$, as $X$ is decreased by $\epsilon$ or more, but is not increased. In the iteration $i$ of such a loop, the action $b$ is executed a number of times $k_i$ bounded by $s^i_1[Y]/\epsilon$ as in this inner loop $Y$ begins with value $s^i_1[Y]$ and is decreased but not increased. The result is that all the $\epsilon$-trajectories induced by $\pi$ reach a goal state in a finite number of steps that cannot be bounded a priori because the increments of $Y$ produced by the action $a$ are finite but not bounded. The policy $\pi$ thus solves $Q_{nest}$.
\end{example} \section{Direct Translation and Termination Test} The problem of deciding the existence of a policy that solves a given QNP is decidable as noted by \citeay{sid:aaai2011}. They hint at a generate-and-test procedure for finding such a policy, where the QNP is first translated into a FOND problem, and then all the possible strong cyclic policies for the FOND problem are enumerated and tested for termination. The translation runs in polynomial (linear) time in the number of boolean states for the QNP while the termination test for a given strong cyclic solution is polynomial in the number of FOND states. However, the number of strong cyclic solutions that need to be tested is exponential in the number of FOND states in the worst case. The generate-and-test approach is not efficient but it is complete and runs in finite time. In contrast, the plan-existence problem for \emph{numerical planning is undecidable} even in the classical setting where there is a single initial state and the action effects are deterministic; e.g., Post's correspondence problem can be reduced to a numerical planning problem \cite{helmert:numeric}. The decidability of plan existence for QNPs is due to the ``qualitative'' behaviour of the numerical variables that cannot keep track of counts; in particular, the variables cannot be incremented or decremented by specific amounts nor queried about specific values. We review the translation and the termination test for QNPs before considering a novel polynomial translation which does not require such tests and which thus is a true reduction of QNPs into FOND problems.
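The generate-and-test scheme hinted at by \citeay{sid:aaai2011} can be sketched as follows, with the translation, the strong-cyclicity test, and the termination test passed in as black boxes. All names here are placeholders (assumptions of this sketch, not the authors' code); the point is that the enumeration of candidate policies over boolean states is what makes the procedure exponential:

```python
# Illustrative skeleton of the generate-and-test decision procedure for QNPs.

from itertools import product

def generate_and_test(Q, translate, boolean_states, actions,
                      is_strong_cyclic, terminates):
    """Enumerate candidate policies of the FOND translation P = T_D(Q) and
    return the first one that is strong cyclic and passes the termination
    test, or None if no candidate qualifies."""
    P = translate(Q)                          # direct translation (polynomial)
    states = sorted(boolean_states(P))
    acts = [None] + sorted(actions(P))        # None = policy undefined on state
    # Exponentially many candidates in the number of FOND states:
    for choice in product(acts, repeat=len(states)):
        pi = {s: a for s, a in zip(states, choice) if a is not None}
        if is_strong_cyclic(P, pi) and terminates(P, pi):  # both poly-time tests
            return pi
    return None
```

The two tests themselves are polynomial in the number of FOND states, so the overall procedure runs in finite (exponential) time, matching the discussion above.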
The translation $T_D$ from a QNP $Q$ to a FOND $P=T_D(Q)$ is simple and direct, and it involves three steps: 1)~the literals $\EQ{X}$ and $\GT{X}$ are made propositional with the numerical variables $X$ eliminated, 2)~$Inc(X)$ effects are converted into deterministic boolean effects $\GT{X}$, and 3)~$Dec(X)$ effects are converted into \emph{non-deterministic} boolean effects $\GT{X}\,|\,\EQ{X}$. \begin{definition}[Direct Translation $T_D$] \label{def:td} For QNP $Q=\tup{F,V,I,O,G}$, the FOND problem $P=T_D(Q)$ is $P=\tup{F',I',O',G'}$ with \begin{enumerate}[1.] \item $F'=F \cup \{\EQ{X} \,:\, X\in V\}$, where $\EQ{X}$ stands for a new propositional symbol $p_{\EQ{X}}$ and $\GT{X}$ stands for $\neg p_{\EQ{X}}$, \item $I'=I$ but with $\EQ{X}$ and $\GT{X}$ denoting $p_{\EQ{X}}$ and $\neg p_{\EQ{X}}$, \item $O'=O$ but with $Inc(X)$ effects replaced by the deterministic propositional effects $\GT{X}$, and $Dec(X)$ effects replaced by non-deterministic propositional effects $\GT{X}\,|\,\EQ{X}$, \item $G'=G$ but with $\EQ{X}$ and $\GT{X}$ denoting $p_{\EQ{X}}$ and $\neg p_{\EQ{X}}$. \end{enumerate} \end{definition} The problem $P=T_D(Q)$ is a special type of FOND problem. For example, from its definition, there is no action in $P$ that can achieve a proposition $\EQ{X}$ deterministically. We refer to actions in the FOND $P$ with effects $\GT{X}$ and $\GT{X}\,|\,\EQ{X}$ as $Inc(X)$ and $Dec(X)$ actions, as such effects in $P$ may only come from $Inc(X)$ and $Dec(X)$ effects in $Q$. Also, observe that the FOND $P$ has a unique initial state even though the QNP $Q$ may have an infinite number of initial states. The states of the FOND problem $P=T_D(Q)$ are related to the \emph{boolean states} over $Q$, i.e., the truth-assignments over the atoms $p$ and $\EQ{X}$, the latter of which stand for (abbreviations of) symbols in $P$.
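Definition~\ref{def:td} can be rendered in code almost verbatim. The sketch below uses an illustrative dictionary encoding of actions (fields \texttt{pre}, \texttt{eff}, \texttt{inc}, \texttt{dec}, an assumption of this example rather than any planner's input language) and maps each QNP action to a FOND action whose non-deterministic outcomes are the combinations of $\GT{X}$ and $\EQ{X}$ over its decremented variables:

```python
# Illustrative sketch of the action part of the direct translation T_D:
# Inc(X) becomes the deterministic effect X>0, and Dec(X) becomes the
# non-deterministic effect X>0 | X=0; the pair ('X', '=0') plays the role
# of the fresh proposition p_{X=0} and ('X', '>0') of its negation.

from itertools import product

def t_direct(O):
    fond = {}
    for name, a in O.items():
        det = set(a['eff']) | {(X, '>0') for X in a['inc']}
        decs = sorted(a['dec'])
        outcomes = [frozenset(det | set(zip(decs, tags)))
                    for tags in product(('>0', '=0'), repeat=len(decs))]
        fond[name] = {'pre': frozenset(a['pre']), 'outcomes': outcomes}
    return fond

# The action a of Q_clear(x) yields two outcomes, {H, n>0} and {H, n=0},
# while the deterministic action b keeps a single outcome.
O = {'a': {'pre': {('H', False), ('n', '>0')},
           'eff': {('H', True)}, 'inc': set(), 'dec': {'n'}},
     'b': {'pre': {('H', True)}, 'eff': {('H', False)},
           'inc': set(), 'dec': set()}}
P_actions = t_direct(O)
assert len(P_actions['a']['outcomes']) == 2
assert len(P_actions['b']['outcomes']) == 1
```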
A policy $\pi$ for the QNP $Q$ thus induces a policy over the FOND problem $P$ and vice versa.\footnote{The policy $\pi$ over states $s$ of $Q$ determines the policy $\pi'$ over the FOND $P$ where $\pi'(t)=\pi(s)$ if $t=\bar s$, and vice versa, a policy $\pi'$ for $P$ determines a policy $\pi$ for $Q$ where $\pi(s)=\pi'(t)$ if $\bar s=t$. For simplicity, we use the same notation $\pi$ to refer to the policy $\pi$ over $Q$ and the policy $\pi'$ that it induces over $P=T_D(Q)$.} Moreover, the FOND problem $P=T_D(Q)$ captures the \emph{possible boolean state transitions} in $Q$ exactly. More precisely, $(s,a,s')$ is a possible transition in $Q$ iff $(\bar s,a,\bar s')$ is a possible transition in $P$. Indeed, if we extend the notion of strong cyclic policies to QNPs: \begin{definition} \label{def:qnp:strong-cyclic} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. $\pi$ is strong cyclic for $Q$ iff for every $\pi$-trajectory connecting an initial state $s_0$ with a state $s$, there is a $\pi$-trajectory connecting $s$ with a goal state. \end{definition} \noindent The following correspondence between boolean states in $Q$ and the states of the boolean FOND problem $T_D(Q)$ results: \begin{theorem} \label{thm:strong-cyclic} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. $\pi$ is a strong cyclic solution for $Q$ iff $\pi$ is a strong cyclic policy for the FOND problem $T_D(Q)$. \end{theorem} \begin{proof} Let $\S(Q)=\tup{S,S_0,Act,A,F,S_G}$ and $\S(P)=\tup{S',s'_0,Act',A',F',S'_{G}}$ be the state models for the QNP $Q$ and the FOND problem $P=T_D(Q)$. From the definition of the $T_D$ translation, the state $s$ is in $S_0$ (resp.\ in $S_G$) iff $\bar s=s'_0$ (resp.\ $\bar s$ is in $S'_{G}$), and the state $s' \in F(a,s)$ for $a \in A(s)$ iff $\bar s' \in F'(a,\bar s)$ for $a \in A'(\bar s)$.
This means that there is a $\pi$-trajectory connecting an initial state $s_0$ in $S_0$ with a state $s$ in $S$ iff there is a corresponding $\pi$-trajectory connecting $s'_0$ with $\bar s$ in $S'$, and similarly, there is a $\pi$-trajectory connecting $s$ with a goal state $s'$ iff there is a corresponding $\pi$-trajectory connecting $\bar s$ with $\bar s'$ in $\S(P)$. \end{proof} The correspondence between the $\pi$-trajectories connecting states $s$ in $Q$ and the $\pi$-trajectories connecting the states $\bar s$ in $P=T_D(Q)$ does not imply, however, that the solutions of $P$ and $Q$ are the same. Indeed, the $Dec(X)$ effects of an action $a$ in $Q$ are mapped into the non-deterministic propositional effects $\GT{X}\,|\,\EQ{X}$ in $P=T_D(Q)$ which implies that $\EQ{X}$ will be true if the action $a$ is repeated infinitely often. On the other hand, a $Dec(X)$ effect in $Q$ ensures that $\EQ{X}$ will be true if $a$ is repeated infinitely often \emph{as long as no $Inc(X)$ action is performed infinitely often as well}. In other words, the correspondence between the state transitions $(s,a,s')$ in $Q$ and the state transitions $(\bar s,a,\bar s')$ in $P=T_D(Q)$ does not extend to \emph{infinite trajectories} \cite{bonet:ijcai2017}. Recall that trajectories in $Q$ refer to $\epsilon$-trajectories for some $\epsilon > 0$ that exclude ``infinitesimal'' changes. As a result: \begin{theorem} \label{thm:td:gap} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. If $\tau = s_0, s_1 \ldots$ is an infinite $\pi$-trajectory in $Q$, then $\bar\tau = \bar s_0, \bar s_1, \ldots$ is an infinite $\pi$-trajectory in $P=T_D(Q)$. Yet if $\bar\tau = \bar s_0, \bar s_1, \ldots$ is an infinite $\pi$-trajectory in $P=T_D(Q)$, there may not be an infinite trajectory $\tau = s_0, s_1 \ldots$ in $Q$.
\end{theorem} \begin{proof} For the first part, if $\tau$ is an infinite $\pi$-trajectory over $Q$, then $s_{i+1} \in F(a_i,s_i)$ for $a_i=\pi(s_i)$; therefore $\bar s_{i+1} \in F(a_i,\bar s_i)$ for $a_i = \pi(\bar s_i)$, and hence $\bar\tau = \bar s_0, \bar s_1, \ldots$ is an infinite $\pi$-trajectory over $P$. For the second part, one example suffices. Let $Q$ be a QNP with a single variable $X$ that is numerical, a single action $a$ with precondition $\GT{X}$ and effect $Dec(X)$, initial condition $\GT{X}$, and goal $\EQ{X}$. In the state model $\S(P)$ associated with the FOND problem $P=T_D(Q)$, there are two states $t$ and $t'$, the first where $\EQ{X}$ is true and the second where $\GT{X}$ is true, and there is an infinite trajectory $\bar\tau = \bar s_0, \bar s_1, \ldots$ where all $\bar s_i=t'$ and $\pi(\bar s_i)=a$, but there is no infinite trajectory $\tau = s_0, s_1 \ldots$ with $\pi(s_i)=a$ and where $\GT{X}$ stays true forever while being decremented. Indeed, for any $\epsilon > 0$ and any initial value of $X$, $s_0[X]>0$, it is the case that $s_n[X]=0$ for $n > s_0[X]/\epsilon$. \end{proof} The notion of \emph{termination} is aimed at capturing the infinite $\pi$-trajectories over the FOND problem $P=T_D(Q)$ that do not map into infinite $\pi$-trajectories over $Q$. Let \begin{alignat}{1} \bar s_0, \bar s_1, \ldots, [\bar s_i, \ldots, \bar s_{m}]^* \end{alignat} denote \emph{any infinite} $\pi$-trajectory on the FOND $P$ where the states $\bar s_i, \ldots, \bar s_{m}$ in brackets form the non-empty set of \emph{recurring states}; namely those that occur infinitely often in the trajectory (not necessarily in that order). We refer to this set of recurring states as the \emph{loop} of the trajectory. Termination imposes the following condition on loops: \begin{definition}[Terminating Trajectories] \label{def:td:terminatig-trajectories} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$.
An infinite $\pi$-trajectory $\bar s_0, \ldots, [\bar s_i, \ldots, \bar s_{m}]^*$ is \emph{terminating} in $P=T_D(Q)$ if there is a variable $X$ in $Q$ that is decremented but not incremented in the loop; i.e., if $\pi(\bar s_k)$ is a $Dec(X)$ action for some $k \in [i,m]$, and $\pi(\bar s_j)$ is not an $Inc(X)$ action for any $j \in [i,m]$. \end{definition} Termination is a notion of fairness, different from the one underlying strong cyclic planning, that says that infinite but terminating trajectories in $P$ are not ``fair'' and hence can be ignored. Indeed, this notion of termination closes the gap in Theorem~\ref{thm:td:gap}: \begin{theorem} \label{thm:termination1} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. $\bar\tau = \bar s_0, \bar s_1, \ldots$ is an infinite \emph{non-terminating} $\pi$-trajectory in $P=T_D(Q)$ iff there is an infinite $\pi$-trajectory $\tau = s_0, s_1, \ldots$ in $Q$. \end{theorem} \begin{proof} Let $\tau = s_0, s_1, \ldots$ be an infinite $\pi$-trajectory in $Q$, and let us assume that the infinite trajectory $\bar\tau = \bar s_0, \bar s_1, \ldots$ is terminating. Then there must be a variable $X$ that is decremented by $\pi(s)$ in some recurring state $s$ in $\tau$ and which is not incremented by $\pi(s')$ in any recurring state $s'$ in $\tau$. Let $s(t)$ denote the state at time point $t$ in $\tau$, let $t$ be the last time point where variable $X$ is increased in $\tau$ ($t=-1$ if $X$ is not increased in $\tau$), and let $X(t+1)$ be the value of variable $X$ at the next time point. The maximum number of times that $X$ can be decreased after $t+1$ is bounded by $X(t+1)/\epsilon$, and after this, $X$ must have zero value. But in $\tau$, $X$ is decreased an infinite number of times, in contradiction with the fact that any action that decrements $X$ features $\GT{X}$ as a precondition.
For the converse, we show that one such trajectory $\tau$ in $Q$ can be constructed for any $\epsilon >0$, given that the trajectory $\bar\tau$ in $P$ is non-terminating. We do so by adjusting the non-deterministic increments and decrements of the actions, all of which have to be greater than or equal to $\epsilon$, except when this would result in negative values, in which case the variable is set to zero. We construct $\tau = s_0, s_1, \ldots$ from $\bar\tau = \bar s_0, \bar s_1, \ldots$ as follows. The value of the boolean variables is the same in $s_i$ as in $\bar s_i$, and in addition, $s_i[X]=0$ iff $\EQ{X}$ is true in $\bar s_i$ for all $i$. We just have to find exact values for the numerical variables $X$ in each of the states $s_i$ in $\tau$, and this is a function of their initial values $s_0[X]$ when $s_0[X]>0$, and the positive decrements or increments $\Delta(X,s_i)$ when $\pi(s_i)$ is a $Dec(X)$ or $Inc(X)$ action, and $\Delta(X,s_i) \ge \epsilon$. For simplicity and without loss of generality, let us assume that $\epsilon < 1$. All the positive initial values of numerical variables, increments, and decrements are set to \emph{positive integers} by considering the sequence of actions $\pi(s_i)$, $i=1,\ldots$. The initial values $s_0[X]$ are set to $max(k(X,0),1)$ where $k(X,i)$ stands for the number of $Dec(X)$ actions that occur between the state $s_i$ and the first state $s_j$ after $s_i$ where an $Inc(X)$ action occurs (if no $Inc(X)$ action occurs after state $s_i$, $k(X,i)$ is the number of $Dec(X)$ actions after $s_i$). That is, $k(X,i)$ is the cardinality of the set \begin{alignat}{1} \{ j : \text{$i \leq j < ind(X,i)$ and $\pi(s_j)$ is a $Dec(X)$ action} \} \end{alignat} where $ind(X,i)$ is the minimum index $j > i$ such that $\pi(s_j)$ is an $Inc(X)$ action, or $\infty$ if there is no such action after $s_i$. Observe that $k(X,i)$ is bounded.
The only way it could be infinite is when no $Inc(X)$ action occurs after $s_i$ while at the same time an infinite number of $Dec(X)$ actions occur; yet, this is impossible since then $X$ eventually becomes zero after which no $Dec(X)$ action may occur as such actions feature the precondition $\GT{X}$. Likewise, the increments $\Delta(X,s_i)$ are set to $max(k(X,i),1)$, and the decrements $\Delta(X,s_i)$ are set to $s_i[X]$ if $\EQ{X}$ is true in $\bar s_{i+1}$ and to $1$ if $\GT{X}$ is true in $\bar s_{i+1}$. It is not difficult to verify that these choices define a trajectory $\tau$ in $Q$ that corresponds to the assumed trajectory $\bar\tau$ in $P$. \end{proof} The full correspondence between infinite $\pi$-trajectories $\tau$ in $Q$ and infinite non-terminating $\pi$-trajectories $\bar\tau$ in $P=T_D(Q)$ suggests the following definition of \emph{termination} in QNPs $Q$ and FOND problems $T_D(Q)$: \begin{definition}[Termination in $Q$] \label{def:qnp:termination} A policy $\pi$ for the QNP $Q$ is terminating iff all the $\pi$-trajectories on $Q$ are of finite length. In such a case, we say that $\pi$ is $Q$-terminating. \end{definition} \begin{definition}[Termination in $P$] \label{def:td:termination} Let $Q$ be a QNP. A policy $\pi$ for the FOND problem $P=T_D(Q)$ is terminating iff all the infinite $\pi$-trajectories on $P$ are terminating. In such a case, we say that $\pi$ is $P$-terminating. \end{definition} \noindent The correspondence between policies can then be expressed as: \begin{theorem} \label{thm:termination2} Let $Q$ be a QNP, let $P=T_D(Q)$ be its direct translation, and let $\pi$ be a policy for $Q$ (and thus also for $P$). Then, $\pi$ is $Q$-terminating iff $\pi$ is $P$-terminating. \end{theorem} \begin{proof} Direct from Theorem~\ref{thm:termination1}. For one direction, assume that $\pi$ is $Q$-terminating and let $\bar\tau$ be a $\pi$-trajectory in $P$.
If $\bar\tau$ is not terminating, by Theorem~\ref{thm:termination1}, there is an infinite $\pi$-trajectory $\tau$ in $Q$ and thus $\pi$ would not be $Q$-terminating. Therefore, every $\pi$-trajectory $\bar\tau$ in $P$ is terminating and thus $\pi$ is $P$-terminating. The other direction is established similarly. \end{proof} \noindent The \emph{soundness and completeness} of the direct translation extended with termination can be expressed as follows: \begin{theorem}[Soundness and Completeness $T_D$] \label{thm:td:main} Let $Q$ be a QNP, let $P=T_D(Q)$ be its direct translation, and let $\pi$ be a policy for $Q$ (and thus also for $P$). The following are equivalent: \begin{enumerate}[1.] \item $\pi$ solves $Q$, \item $\pi$ is a strong cyclic solution of $Q$ and $\pi$ is $Q$-terminating, \item $\pi$ is a strong cyclic solution of $P$ and $\pi$ is $P$-terminating. \end{enumerate} \end{theorem} \begin{proof} \textbf{($1 \Leftrightarrow 2$)} Assume that $\pi$ solves $Q$. If there is a $\pi$-trajectory connecting an initial state with a state $s$, there must be a $\pi$-trajectory connecting $s$ with a goal state. Otherwise, $\pi$ would not be a solution for $Q$. Likewise, if $\tau$ is an infinite $\pi$-trajectory in $Q$, then $\tau$ does not reach a goal state and thus $\pi$ would not solve $Q$. For the converse direction, assume that $\pi$ is a strong cyclic solution for $Q$ and that $\pi$ is $Q$-terminating, and suppose that $\pi$ does not solve $Q$. Then, there is a maximal $\pi$-trajectory $\tau$ in $Q$ that does not reach a goal state. It cannot be the case that $\tau$ ends in a state $s$ where $\pi(s)$ is undefined or non-applicable as $\pi$ then would not be a strong cyclic solution for $Q$. Hence, $\tau$ must be infinite but this contradicts the assumption that $\pi$ is $Q$-terminating. \textbf{($2 \Leftrightarrow 3$)} By Theorem~\ref{thm:strong-cyclic}, $\pi$ is a strong cyclic solution for $Q$ iff it is a strong cyclic solution for $P$.
By Theorem~\ref{thm:termination2}, $\pi$ is $Q$-terminating iff it is $P$-terminating. \end{proof} \section{Checking Termination with {\sc Sieve}\xspace} {\sc Sieve}\xspace is the procedure introduced by \citeay{sid:aaai2011} to test whether a policy terminates. It runs in time that is polynomial in the number of states of the FOND problem reached by the policy.\footnote{In \cite{sid:aaai2011}, the notion of termination is not articulated independently of the algorithm, so our account departs from the one in that paper. A second difference is that our version of the algorithm is developed and applied to QNPs that involve both numerical and boolean variables.} For this, the algorithm takes as input a policy graph $\mathcal{G}(P,\pi) = \tup{V,E}$ constructed from the FOND problem $P=T_D(Q)$ and a strong cyclic policy $\pi$ for $P$. The nodes in the policy graph are the states $\bar s$ in the state model $\S(P)$ that are reachable from the initial state and the policy $\pi$, and the directed edges in $E$ are the pairs $(\bar s,\bar s')$ for $\bar s' \in F(a,\bar s)$ and $\pi(\bar s)=a$. These edges are labeled with the action $a$. The algorithm iteratively removes edges from the graph $\mathcal{G}(P,\pi)$ until the graph becomes acyclic or no additional edge can be removed. For incrementally removing edges from the graph, {\sc Sieve}\xspace first identifies its \emph{strongly connected components} by a single depth-first search traversal, following Tarjan's algorithm \cite{tarjan:sccs}. The strongly connected components (SCCs) define a partition of the nodes of the graph such that if a node $\bar s$ belongs to a component, any node $\bar s'$ that can be reached from $\bar s$ and that can reach $\bar s$ back in the graph is placed in the same component as $\bar s$. The algorithm then picks a variable $X$ and an SCC such that the variable $X$ is decremented but not incremented in the SCC.
That is, there must be a state $\bar s$ in the SCC such that $\pi(\bar s)$ is a $Dec(X)$ action and no $\pi(\bar s')$ is an $Inc(X)$ action for any $\bar s'$ in the SCC. The algorithm then removes all the edges $(\bar s,\bar s')$ in the SCC such that $\pi(\bar s)$ is a $Dec(X)$ action. We abbreviate this by saying that \emph{variable $X$ is removed from the SCC}, which means that the edges associated with $Dec(X)$ actions are removed. Following the edge removals, the SCCs must be recomputed and the process is repeated until the graph becomes acyclic or no more edges can be removed in this manner. The result is that: \begin{theorem} \label{thm:sieve} Let $Q$ be a QNP. A policy $\pi$ for the FOND problem $P=T_D(Q)$ is $P$-terminating iff {\sc Sieve}\xspace reduces the policy graph $\mathcal{G}(P,\pi)$ to an acyclic graph. \end{theorem} \begin{proof} For the first direction, let us assume that $\pi$ is $P$-terminating and suppose that {\sc Sieve}\xspace terminates with a cyclic graph $\mathcal{G}$. Then, there must be a $\pi$-trajectory $\bar\tau$ in $P$ of the form $\bar s_0, \ldots, [\bar s_i, \ldots, \bar s_m]^*$ where the recurring states $\bar s_i, \ldots, \bar s_m$ in $\bar\tau$ are \emph{exactly} all the states in an SCC $C$ of $\mathcal{G}$. By the assumption, there must be a variable $X$ such that $\pi(\bar s)$ is a $Dec(X)$ action for some state $\bar s$ in the loop, and $\pi(\bar s')$ is not an $Inc(X)$ action for any of the states $\bar s'$ in the loop. But then {\sc Sieve}\xspace should have removed the variable $X$ from $C$, or any other similar variable, breaking the SCC $C$ into smaller components. For the converse, let us assume that $\pi$ is \emph{not} $P$-terminating. Then, there must be an infinite $\pi$-trajectory $\bar\tau$ in $P$ of the form $\bar s_0, \ldots, [\bar s_i, \ldots, \bar s_m]^*$ such that every variable $X$ that is decremented by some action $\pi(\bar s_j)$, $j\in[i,m]$, is also incremented by some action $\pi(\bar s_k)$, $k\in[i,m]$.
We want to show that {\sc Sieve}\xspace terminates with a graph that has one SCC that includes all the states in the loop. Indeed, initially, all the states in the loop must be in one component as they are all reachable from each other. Then, notice that {\sc Sieve}\xspace is incapable of removing any variable $X$ from the loop, as any variable that is decremented in the loop is also incremented in it. Therefore, the states in the loop stay together within an SCC along the whole execution of {\sc Sieve}\xspace. \end{proof} \noindent The {\sc Sieve}\xspace procedure, slightly reformulated from \citeay{sid:aaai2011}, is depicted in Figure~\ref{alg:sieve2}. \begin{algorithm}[t] \SetKw{Break}{break} \SetKw{Continue}{continue} \DontPrintSemicolon {\sc Sieve}\xspace(Graph $\mathcal{G}=\mathcal{G}(P,\pi)$): \\ \Repeat{$\mathcal{G}$ is acyclic (terminating) or there is no SCC $C$ and variable $X$ to choose (non-terminating)}{ Compute the strongly connected components (SCC) of $\mathcal{G}$.\; \BlankLine Choose an SCC $C$ and a variable $X$ that is decreased in $C$ but is not increased in $C$;\; i.e., for some $\bar s$ in $C$, $\pi(\bar s)$ is a $Dec(X)$ action, and for no $\bar s$ in $C$, $\pi(\bar s)$ is an $Inc(X)$ action. \BlankLine Remove the edges $(\bar s,\bar s')$ such that $\bar s$ and $\bar s'$ are in $C$, and $\pi(\bar s)$ is a $Dec(X)$ action.\; } \caption{{\sc Sieve}\xspace procedure for testing whether policy $\pi$ for FOND problem $P=T_D(Q)$ terminates \cite{sid:aaai2011}.} \label{alg:sieve2} \end{algorithm} \begin{example} The policy $\pi$ for $Q_{nest}$ above is given by the rules: \begin{alignat}{1} &\text{if $\GT{X}$ and $\EQ{Y}$, then do $a$} \,, \\ &\text{if $\GT{X}$ and $\GT{Y}$, then do $b$} \end{alignat} where recall that $Q_{nest}=\tup{F,V,I,O,G}$ with $F=\emptyset$, $V=\{X,Y\}$, $I=\{\GT{X},\GT{Y}\}$, $G=\{\EQ{X}\}$, and $O=\{a,b\}$ where $a=\abst{\GT{X}, \EQ{Y}}{\DEC{X}, \INC{Y}}$ and $b=\abst{\GT{Y}}{\DEC{Y}}$.
The policy decrements $Y$ using the action $b$ until the action $a$ that decreases $X$ and increases $Y$ can be applied. The process is repeated until $\EQ{X}$. The nested loops in the policy graph $\mathcal{G}(P,\pi)$ are shown in Figure~\ref{fig:qnest}. The policy graph contains three states: the leftmost one is the initial state, and the rightmost one is a goal state. The two states on the left are reachable from each other, and hence, define a strongly connected component (SCC). In this SCC, the $Dec(X)$ edges are removed by {\sc Sieve}\xspace because $X$ is not increased anywhere. Once this is done, the $Dec(Y)$ edges are removed by {\sc Sieve}\xspace because the edges associated with $Inc(Y)$ effects are gone. The resulting graph is acyclic, thus establishing that the policy $\pi$ terminates in $P=T_D(Q_{nest})$. \end{example} \medskip Using {\sc Sieve}\xspace it is easy to see that the problem of checking the existence of plans for QNPs can be decided in exponential space: \begin{theorem}[\citeay{sid:aaai2011}] \label{thm:plan-existence:expspace} Deciding plan existence for QNPs is in \textup{EXPSPACE}. \end{theorem} \begin{proof} Let $Q=\tup{F,V,I,O,G}$ be a QNP. The number of boolean states for $Q$ is exponential in the number of fluents and variables; i.e., $|F|+|V|$. A policy $\pi$ for $Q$ can be described in exponential space as a mapping from boolean states into actions. A brute-force algorithm enumerates all policies one by one using exponential space. Each one is tested for strong cyclicity and termination. The former is a straightforward test in a graph while the latter is done with {\sc Sieve}\xspace. If the policy is strong cyclic and terminating, $Q$ is accepted. Otherwise, if no policy is found to be strong cyclic and terminating, $Q$ is rejected.
Since testing strong cyclicity and running {\sc Sieve}\xspace both require polynomial time in the size of the input policy, the whole algorithm can be implemented in space that is exponential in the size of $Q$. \end{proof} Below we improve this bound and show exponential time (EXP) solvability through a more complex translation of QNPs into FOND problems that is also polynomial. The novelty of the new translation is that the strong-cyclic policies of the resulting FOND problems do not need to be checked for termination. QNPs are thus \emph{fully reduced} to FOND problems. Since FOND problems can be reduced to QNPs as well, \emph{we will show indeed that FOND problems and QNPs have the same expressive power}; and since the complexity of plan existence for FOND problems is known to be EXP-Complete \cite{littman:fond,rintanen:po}, these reductions show the EXP-Completeness of the plan existence decision problem for QNPs. In addition to establishing novel theoretical results, these reductions are also of practical importance as they permit the computation of solutions; once a QNP is reduced to FOND, a solution for the QNP can be recovered in linear time from a solution to the FOND problem, and likewise for the reduction from FOND problems into QNPs. The distinction between the classes EXP and EXPSPACE is important. EXPSPACE contains the (decision) problems that can be solved with Turing machines (TMs) that operate within exponential space (as a function of the input size), yet such TMs may in fact run in doubly exponential time as the running time is bounded, in the worst case, by an exponential function of the bound in space \cite{sipser:book}. On the other hand, EXP comprises the problems that can be solved with TMs that run within exponential time. The difference is analogous to the difference between the classes P (polynomial time) and PSPACE (polynomial space).
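To make the procedure concrete, the following Python sketch implements {\sc Sieve}\xspace over an explicit policy graph. The sketch is illustrative only: the dictionary-based representation (\texttt{succ} giving the successors of each state under the policy, \texttt{delta} giving the increments and decrements of the action chosen at each state) and all names in it are assumptions of the sketch, not part of the formulation above.

```python
# Illustrative sketch of the Sieve procedure (hypothetical representation):
# succ  : state -> list of successor states under the policy
# delta : state -> {variable: -1 for Dec, +1 for Inc} for the chosen action

def sccs(nodes, succ):
    """Tarjan's algorithm: returns the strongly connected components."""
    index, low, on_stack, stack, comps, c = {}, {}, set(), [], [], [0]

    def visit(v):
        index[v] = low[v] = c[0]; c[0] += 1
        stack.append(v); on_stack.add(v)
        for w in succ.get(v, []):
            if w not in index:
                visit(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in nodes:
        if v not in index:
            visit(v)
    return comps

def sieve(nodes, succ, delta):
    """True iff the graph becomes acyclic, i.e., the policy terminates."""
    succ = {v: list(succ.get(v, [])) for v in nodes}
    while True:
        # cyclic SCCs: more than one node, or a single node with a self-loop
        cyclic = [C for C in sccs(nodes, succ)
                  if len(C) > 1 or any(w in C for v in C for w in succ[v])]
        if not cyclic:
            return True                 # acyclic: the policy terminates
        removed = False
        for C in cyclic:
            dec = {X for v in C for X, s in delta[v].items() if s < 0}
            inc = {X for v in C for X, s in delta[v].items() if s > 0}
            for X in sorted(dec - inc):
                for v in C:             # drop Dec(X) edges staying inside C
                    if delta[v].get(X, 0) < 0:
                        succ[v] = [w for w in succ[v] if w not in C]
                removed = True
                break
            if removed:
                break
        if not removed:
            return False                # no SCC and variable to pick
```

On the policy graph of Figure~\ref{fig:qnest}, the sketch first drops the $Dec(X)$ edge that stays within the SCC, then the $Dec(Y)$ self-loop, and reports termination; on a loop that both increments and decrements its only variable, it reports non-termination.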
\begin{figure}[t] \centering \begin{tikzpicture}[thick,>={Stealth[inset=2pt,length=8pt,angle'=33,round]},font={\normalsize},qs/.style={draw=black,fill=gray!20!white},init/.style={qs,fill=yellow!50!white},goal/.style={qs,fill=green!50!white}] \node[init] (A) at (0,0) { $\GT{X}, \GT{Y}$ }; \node[qs] (B) at (5,0) { $\GT{X}, \EQ{Y}$ }; \node[goal] (C) at (10,0) { $\EQ{X}, \GT{Y}$ }; \path[->] (A) edge[out=140,in=40,looseness=4] node[above,xshift=1] { $b: \DEC{Y}$ } (A); \path[->] (A) edge[transform canvas={}] node[above,yshift=-1] { $b: \DEC{Y}$ } (B); \path[->] (B) edge[out=220,in=320,looseness=0.8] node[below,yshift=0] { $a: \DEC{X}, \INC{Y}$ } (A); \path[->] (B) edge[transform canvas={}] node[above,yshift=-2] { $a: \DEC{X}, \INC{Y}$ } (C); \end{tikzpicture} \caption{% Testing termination with {\sc Sieve}\xspace. Policy graph $\mathcal{G}(P,\pi)$ for the FOND problem $P=T_D(Q_{nest})$ and policy $\pi$, from the example in text, containing three states: the leftmost is the initial state, and the rightmost is a goal state. The two states on the left are reachable from each other, and hence, define a strongly connected component (SCC). In this SCC, the $Dec(X)$ edges are removed by {\sc Sieve}\xspace because $X$ is not increased anywhere. Once this is done, the $Dec(Y)$ edges are removed because the edges associated with $Inc(Y)$ effects have been eliminated. The resulting graph is acyclic and hence $\pi$ terminates in $P$. } \label{fig:qnest} \end{figure} \section{First Reduction: FOND problems into QNPs} The first reduction that we introduce is from FOND problems into QNPs. It is a non-trivial reduction yet simpler than the inverse reduction from QNPs into FOND problems. The main obstacle to overcome is that the non-deterministic effects in FOND problems are over boolean variables, while those in QNPs are only on numerical variables through the decrement effects. 
Another important obstacle is that strong cyclic solutions in QNPs are not QNP solutions unless they are terminating. Let $P=\tup{F,I,O,G}$ be a FOND problem and let us denote the non-deterministic effects of action $a$ as $E^a_1\,|\,E^a_2\,|\,\cdots\,|\,E^a_{k_a}$ where each $E^a_i$ is a set (conjunction) of $F$-literals, and $k_a$ denotes the number of non-deterministic effects of the action $a$. If $k_a=1$, the action $a$ is deterministic, else it is non-deterministic. For simplicity, we assume that the set of effects $\{E^a_i\}_i$ for action $a$ aggregates all the multiple non-deterministic effects in the description of $a$ in $P$, and the reduction below is presented under this assumption. Afterwards, we discuss how to remove this requirement in order to handle, in polynomial time, FOND problems whose transitions are factorized. We map $P$ into a QNP $Q=\tup{F',V',I',O',G'}$ that extends $P$ with numerical variables $V'=\{X\}\cup\{Y_{a,i} : a\in O, 1\leq i\leq k_a\}$, extra boolean variables, and extra actions; i.e., $F \subseteq F'$, $I\subseteq I'$, $O \subseteq O'$, and $G'=G$. The heart of the reduction lies in the way in which the non-deterministic effects of each action are captured in $Q$. For this, the collection of non-deterministic effects of the action $a$ is replaced by an $Inc(X)$ action, for the unique numerical variable $X$, followed by a \emph{fixed loop} where the variable $X$ is decremented until becoming zero. The alternative effects $E_i^a$ are then triggered when $\EQ{X}$ becomes true in the corresponding part of the loop. The increments and decrements of the extra variables $Y_{a,i}$ ensure that \emph{only} strong cyclic policies $\pi$ for $P$ induce policies $\pi'$ for $Q$ that are terminating. The fixed loop sequence associated with action $a$ performs the following steps, which are implemented with the help of new auxiliary actions and propositional symbols: \begin{enumerate}[1.]
\item $Inc(X)$ (implemented by modified action $a$), \item $Dec(X)$ (implemented by new action $Start$), \item If $X=0$, apply the effects in $E^a_1$, increment $Y_{a,1}$, decrement $Y_{a,j}$, $j\neq 1$, and break loop (implemented by new action $Exit(a,i)$ for $i=1$), \item $Dec(X)$ (implemented by new action $Cont(a,i)$ for $i=1$) \item If $X=0$, apply the effects in $E^a_2$, increment $Y_{a,2}$, decrement $Y_{a,j}$, $j\neq 2$, and break loop (implemented by new action $Exit(a,i)$ for $i=2$), \item $Dec(X)$ (implemented by new action $Cont(a,i)$ for $i=2$) \item[\vdots] \item If $X=0$, apply the effects in $E^a_{k_a}$, increment $Y_{a,{k_a}}$, decrement $Y_{a,j}$, $j\neq k_a$, and break loop (implemented by new action $Exit(a,i)$ for $i=k_a$), \item Go back to 3 (implemented by new action $Loop(a)$) \end{enumerate} To show that the resulting mapping is indeed a reduction (i.e., the mapping is sound and complete), two things need to be established: that a policy $\pi$ that solves $Q$ induces a policy $\pi'$ that solves $P$ (soundness), and vice versa, that a policy $\pi'$ that solves the FOND problem $P$ induces a policy $\pi$ that solves $Q$ (completeness). The fixed sequence of effects for each action $a$ in $Q$ is implemented using the following additional boolean variables: \begin{enumerate}[$\bullet$] \item A boolean $normal$ that is false when the sequence is entered for some action $a$ and made true when the loop is exited. \item A boolean $ex(a)$ to express that the sequence for the action $a$ is being executed. The reduction is such that the atoms in $\{normal\}\cup\{ex(a):a\in O\}$ are pairwise mutex. \item A counter from $0$ to $K$ encoded using mutex atoms $cnt(\ell)$, $\ell=0,1,\ldots,K+1$, that is set to $1$ when the loop is entered and re-entered (step 8 above) and where $K$ is the maximum number of non-deterministic outcomes of any action in $P$.
\end{enumerate} The actions that implement the fixed loop sequence are the following: \begin{enumerate}[$\bullet$] \item $Start=\abst{\neg normal, cnt(0), \GT{X}}{\neg cnt(0), cnt(1), \DEC{X}}$. \item $Exit(a,i)=\abst{ex(a), cnt(i), \EQ{X}, \GT{Y_{a,i}}}{\neg ex(a), \neg cnt(i), normal, cnt(0), E^a_i, \INC{Y_{a,i}}, \DEC{Y_{a,j}}}$ where the decrement is for every variable $Y_{a,j}$ with $j\neq i$. \item $ExitG(a,i)=\abst{ex(a), cnt(i), \EQ{X}, \EQ{Y_{a,i}}}{\neg ex(a), \neg cnt(i), normal, cnt(0), G}$ that may be used to achieve the goal $G$ when $\EQ{X}$ and $\EQ{Y_{a,i}}$. \item $Cont(a,i)=\abst{ex(a), cnt(i), \GT{X}}{\neg cnt(i), cnt(1+i), \DEC{X}}$ to advance along the fixed loop sequence by decrementing variable $X$. \item $Loop(a)=\abst{ex(a), cnt(k_a), \GT{X}}{\neg cnt(k_a), cnt(1)}$ to start a new iteration of the loop. \end{enumerate} The initial state of the QNP includes the atoms $normal$ and $cnt(0)$. Deterministic actions $a$ in $P$ ``pass directly'' into the QNP $Q$ with these two atoms added as extra preconditions. Non-deterministic actions, however, are handled differently by replacing their effects $E^a_1\,|\,E^a_2\,|\,\cdots\,|\,E^a_{k_a}$ by deterministic effects $\{\neg normal, ex(a),\INC{X}\}$ after which the only applicable action would be the $Start$ action from above. The idea of the construction is illustrated in Figure~\ref{fig:compilation-fond}.
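The fixed-loop gadget can be generated mechanically from an action and its list of outcomes. The following Python sketch is only illustrative: the string-based encoding of preconditions and effects, and names such as \texttt{gadget\_actions}, are assumptions of the sketch rather than part of the reduction as defined above.

```python
# Illustrative generator for the fixed-loop gadget of the reduction R.
# Preconditions/effects are plain strings (e.g. "X>0", "Dec(X)"); this
# string-based encoding is an assumption of the sketch, not the paper's.

def gadget_actions(a, outcomes):
    """outcomes = [E_1, ..., E_k]: lists of literals for action `a`."""
    k = len(outcomes)
    Y = lambda i: f"Y_{a}_{i}"          # stands for the variable Y_{a,i}
    acts = {
        # modified action a: enter the loop and increment X
        a: {"pre": ["normal", "cnt(0)"],
            "eff": ["-normal", f"ex({a})", "Inc(X)"]},
        "Start": {"pre": ["-normal", "cnt(0)", "X>0"],
                  "eff": ["-cnt(0)", "cnt(1)", "Dec(X)"]},
        f"Loop({a})": {"pre": [f"ex({a})", f"cnt({k})", "X>0"],
                       "eff": [f"-cnt({k})", "cnt(1)"]},
    }
    for i in range(1, k + 1):
        acts[f"Exit({a},{i})"] = {
            "pre": [f"ex({a})", f"cnt({i})", "X=0", f"{Y(i)}>0"],
            "eff": [f"-ex({a})", f"-cnt({i})", "normal", "cnt(0)",
                    *outcomes[i - 1], f"Inc({Y(i)})",
                    *[f"Dec({Y(j)})" for j in range(1, k + 1) if j != i]]}
        acts[f"ExitG({a},{i})"] = {
            "pre": [f"ex({a})", f"cnt({i})", "X=0", f"{Y(i)}=0"],
            "eff": [f"-ex({a})", f"-cnt({i})", "normal", "cnt(0)", "GOAL"]}
        acts[f"Cont({a},{i})"] = {
            "pre": [f"ex({a})", f"cnt({i})", "X>0"],
            "eff": [f"-cnt({i})", f"cnt({i + 1})", "Dec(X)"]}
    return acts
```

For a coin-flip action with outcomes $\{p\}$ and $\{\neg p\}$, for example, the generator yields the modified action plus $Start$, $Loop$, and three auxiliary actions per outcome.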
\begin{figure}[t] \centering \begin{tikzpicture}[thick,>={Stealth[inset=2pt,length=8pt,angle'=33,round]},font={\footnotesize},qs/.style={draw=black,fill=gray!20!white},init/.style={qs,fill=yellow!50!white},goal/.style={qs,fill=green!50!white}] \node[qs] (entry) at (0,0) { $cnt(0),\GT{X},\GT{Y_i}$ }; \node[qs] (count1) at (0,-2) { $cnt(1),\GT{X},\GT{Y_i}$ }; \node[qs] (exit1) at (6,-2) { $cnt(1),\EQ{X},\GT{Y_i}$ }; \node[qs] (count2) at (0,-4) { $cnt(2),\GT{X},\GT{Y_i}$ }; \node[qs] (exit2) at (6,-4) { $cnt(2),\EQ{X},\GT{Y_i}$ }; \node[qs] (countk) at (0,-7) { $cnt(k),\GT{X},\GT{Y_i}$ }; \node[qs] (exitk) at (6,-7) { $cnt(k),\EQ{X},\GT{Y_i}$ }; \path[->] (entry) edge[transform canvas={xshift=-20}] node[right,yshift=0] { $Start: \DEC{X}$ } (count1); \path[->] (entry) edge[transform canvas={xshift=0}] node[sloped,yshift=6] { $Start: \DEC{X}$ } (exit1); \path[->] (count1) edge[transform canvas={xshift=-20}] node[right,yshift=-6] { $Cont(a,1): \DEC{X}$ } (count2); \path[->] (count1) edge[transform canvas={xshift=0}] node[sloped,yshift=6] { $Cont(a,1): \DEC{X}$ } (exit2); \path[->] (exit1) edge[dashed] node[above,yshift=-2] { $Exit(a,1): \INC{Y_1}, \{\DEC{Y_j}\}_{j\neq 1}$ } (12,-2); \path[->] (exit2) edge[dashed] node[above,yshift=-2] { $Exit(a,2): \INC{Y_2}, \{\DEC{Y_j}\}_{j\neq 2}$ } (12,-4); \path[->] (exitk) edge[dashed] node[above,yshift=-2] { $Exit(a,k): \INC{Y_k}, \{\DEC{Y_j}\}_{j\neq k}$ } (12,-7); \path[-] (count2) edge[dotted,transform canvas={xshift=-20}] (0,-4.80); \path[->] (0,-5.6) edge[transform canvas={xshift=-20}] node[right,yshift=-2] { $Cont(a,k{-}1): \DEC{X}$ } (countk); \path[->] (1.6,-5.6) edge[transform canvas={xshift=-5}] node[sloped,yshift=6] { $Cont(a,k{-}1): \DEC{X}$ } (exitk); \path[-] (-2.0,-7) edge[out=0,in=180] (countk); \path[-] (-2.0,-7) edge[] node [sloped,yshift=6] { $Loop(a)$ } (-2.0,-2); \path[->] (-2.0,-2) edge[out=0,in=180] (count1); \end{tikzpicture} \caption{% Encoding the non-deterministic boolean effects
$E_1\,|\,\cdots\,|\,E_k$ of an action $a$ in the FOND problem $P$ as a fixed sequence of effects that loops while decrementing the variable $X$ in the QNP $Q=R(P)$. The variable $X$ implements the loop while the counter $cnt(i)$ determines which effect $E_i$ obtains when $X$ becomes zero. The variables $Y_i = Y_{a,i}$ are used to map \emph{unfair} trajectories in $P$ into goal-reaching trajectories in $Q$, while obliging solutions of $Q$ to induce solutions for $P$. Indeed, if one unfair trajectory contains the action $a$ infinitely often but neglects (starves) the effect $E_i$, the variable $Y_i$ eventually becomes zero, since the only action that increments it is $Exit(a,i)$, and then the trajectory forcibly applies the action $ExitG(a,i)$ that terminates the execution by reaching the goal (see text for details). } \label{fig:compilation-fond} \end{figure} \begin{definition} \label{def:reduction:fond->qnp} Let $P=\tup{F,I,O,G}$ be a FOND problem. The reduction $R$ maps $P$ into the QNP $Q=\tup{F',V',I',O',G'}$ given by \begin{enumerate}[1.] \item $F'=F \cup \{normal\} \cup \{ ex(a) : a \in O \} \cup \{ cnt(\ell) : \ell\in[0,K] \}$ where $K=\max_{a\in O} k_a$, \item $V'=\{X\} \cup \{Y_{a,i} : a \in O, i\in[1,k_a] \}$, \item $I' = I \cup \{normal,cnt(0)\} \cup \{\EQ{X}\} \cup \{ \GT{Y_{a,i}} : a \in O, i\in[1,k_a] \}$, \item $O' = O \cup \{Start\} \cup \{ \mathcal{A}(a,i) : \mathcal{A}\in\{Exit,ExitG,Cont\}, a \in O, i\in[1,k_a] \} \cup \{ Loop(a) : a \in O \}$, \item $G' = G$. \end{enumerate} \end{definition} For stating the formal properties of the reduction, we need to specify how a policy $\pi$ for $Q$ induces a policy $\pi'$ for $P$, and vice versa, as $Q$ involves extra variables and actions. Let us split the \emph{non-goal states} in $Q$ into two sets: the \textbf{normal states}, where the booleans $normal$ and $cnt(0)$ are true, and the rest of non-goal states where $normal$ or $cnt(0)$ is false. 
(Observe that there cannot be a normal state where some variable $Y_{a,i}$ is zero since, when that happens, the loop must exit with the $ExitG(a,i)$ action that reaches a goal state.) In the normal states, the (qualitative) value of all the extra variables is the same: all the extra boolean variables are false except for $normal$ and $cnt(0)$ that are true, $\EQ{X}$, and $\GT{Y_{a,i}}$ for $a\in O$ and $i=1,2,\ldots,k_a$. Moreover, for every state $s$ over the FOND $P$, there is a unique normal state $\tilde s$ over the QNP $Q$ that extends $s$ with this fixed value of the extra variables in $Q$, and vice versa, for any normal state $\tilde s$ over $Q$ there is a unique state $s$ over $P$. Taking advantage of this relation and leaving the part of the normal states over $Q$ that is fixed implicit, we obtain that a policy $\pi$ for $P$ represents a unique policy $\pi$ over the normal states in $Q=R(P)$, and vice versa, a policy $\pi$ over the normal states represents a unique policy for $P$. The policy over the non-normal states $t$ in $Q$ is determined since for each such state there is only one (new) action applicable; i.e., non-normal states $t$ are mapped to one and only one of the actions in $O'\setminus O$ according to the action precondition that is true in $t$; observe that the preconditions of these actions are mutually exclusive. The main properties of the reduction can then be expressed as follows: \begin{theorem}[Reduction FOND to QNP] \label{thm:reduction:fond->qnp} The mapping $R$ is a polynomial-time reduction from FOND problems into QNPs. That is, a FOND problem $P$ has a solution iff the QNP $Q=R(P)$ has a solution. Moreover, a solution for $P$ (resp.\ $Q$) can be recovered in polynomial time from a solution for $Q$ (resp.\ $P$). \end{theorem} \begin{proof} It is straightforward to see that $R$ is computable in time that is polynomial in the size of the input $P$ and the maximum value $k_a$ for any action $a$.
(See below for a discussion on how to deal with FOND problems that may have multiple non-deterministic effects per action.) In the following we show that $R$ is indeed a reduction. We establish a correspondence between the FOND problem $P$ and the FOND problem $P'= T_D(Q)$ associated with $Q$, by focusing on the normal states over $Q$ and $P'$, leaving the values of the extra variables out, so that such reduced states over $P'$ are states over $P$ and vice versa. The problem $P'$ has indeed an associated state model $\S(P')$ with a transition function where $s' \in F'(a,s)$ if $s'$ is a first normal state that may follow the normal state $s$ after an action $a$ common to $P$ and $P'$ (i.e., $a$ is not an extra action from the translation). There must be such first normal states since, for non-deterministic actions $a$, the loop through non-normal states, as shown in Fig.~\ref{fig:compilation-fond}, must eventually terminate either in a normal state or in a non-normal \emph{goal state} of $P'$. If $a$ is a deterministic action in $P$, $F'(a,s)$ is given directly, but for non-deterministic actions $a$, $F'(a,s)$ follows from the definition of $Q=R(P)$. It is not difficult to show that 1)~$a \in A(s)$ iff $a \in A'(s)$ where $A$ and $A'$ denote the applicable actions in the models associated with $P$ and $P'$ in normal states $s$ that are common to both $P$ and $P'$, and 2)~$s' \in F(a,s)$ iff $s' \in F'(a,s)$, where $F$ and $F'$ are the transition functions associated with $P$ and $P'$ respectively, and $a$ is an action common to $P$ and $P'$. Since the initial states of $P$ and $P'$ coincide, this means that infinite trajectories over $P$ induce infinite trajectories over $P'$ (and hence $Q$), and that infinite trajectories over $P'$ (and $Q$) induce infinite trajectories over $P$.
For proving the two implications of the theorem, we need to show that 1)~infinite fair trajectories over $P$ yield non-terminating trajectories over $P'$ (and hence $Q$), and that 2)~non-terminating trajectories over $P'$ yield infinite fair trajectories over $P$. Then, if $\pi$ solves $P$ but the policy $\pi'$ induced by $\pi$ over $P'$ does not solve $P'$, it would mean that there must be a non-terminating $\pi'$-trajectory over $P'$, and hence, due to 2), that there must be an infinite fair $\pi$-trajectory over $P$, in contradiction with $\pi$ solving $P$. Similarly, if $\pi'$ solves $P'$ but the induced policy $\pi$ does not solve $P$, then by 1), there must be an infinite $\pi$-trajectory over $P$ that is fair, and hence a non-terminating trajectory over $P'$, in contradiction with $\pi'$ solving $P'$. We are thus left to show 1) and 2). For proving 1), if $\tau$ is an infinite fair $\pi$-trajectory over $P$, then the policy graph determined by $\pi$ over the states of $P$ has a strongly connected component $C$ such that for each non-deterministic action $a$ applied in a state $s \in C$, all its successor states $s' \in F(a,s)$ are also in $C$. This implies that the policy graph determined by the policy $\pi'$ over the (normal) states of $P'$ has a corresponding strongly connected component $C'$ that includes $s$ and all the (normal) successor states $s' \in F(a,s)$. The component $C'$ is non-terminating as the variables $X$ and $Y_{a,i}$ that are used to capture the non-determinism of $a$ are all decremented and incremented in $C'$. Then, there is a non-terminating trajectory over $P'$, one that enters the component $C'$ infinitely often. The converse, required for proving 2), is also direct.
If $C'$ is a non-terminating loop that includes a state $s$ where $a=\pi'(s)$ is a non-deterministic action, then $C'$ must include all the (normal) successor states $s' \in F(a,s)$, as otherwise, variable $Y_{a,i}$ would be decremented in $C'$ but not incremented, and hence the loop would be terminating. But then the loop $C$ corresponding to $C'$ in $P$ is also closed in this sense, and represents an infinite $\pi$-trajectory in $P$ that is thus fair. \end{proof} Theorem~\ref{thm:reduction:fond->qnp} states that the strong cyclic policies for a FOND problem $P$ correspond exactly with the policies of the QNP $Q=R(P)$ that can be obtained from $P$ in polynomial time. There is an analogous translation for computing the \emph{strong policies} of $P$; namely, the strong cyclic policies that are actually acyclic (no state visited twice along any trajectory). For this, let $R'$ be the translation that is like $R$ above but with the numerical variables $Y_{a,i}$ removed; i.e., initial and goal conditions on $Y_{a,i}$ are removed from $R'(P)$ as well as the conditions and effects on $Y_{a,i}$ in the actions $Exit(a,i)$, and also the actions $ExitG(a,i)$ are removed. \begin{theorem}[Reduction Strong FOND to QNP] \label{thm:reduction:fond->qnp:strong} The mapping $R'$ is a polynomial time reduction from FOND problems with strong solutions into QNPs. That is, a FOND problem $P$ has a strong solution iff the QNP $Q=R'(P)$ has solution. Moreover, a strong solution for $P$ (resp.\ solution for $Q$) can be recovered in polynomial time from a solution for $Q$ (resp.\ strong solution for $P$). \end{theorem} \begin{proof} In the absence of the variables $Y_{a,i}$, the only solutions to $Q=R'(P)$ must be acyclic, as any cycle would involve increments and decrements of the single numerical variable $X$, and thus, would not be terminating. 
The rest of the proof follows from the correspondence laid out in the previous proof between the trajectories over the normal states in $Q$ and the trajectories in $P$. \end{proof} Let us now consider the case when the FOND problem $P$ contains actions with multiple non-deterministic effects. Let $a$ be one such action, and let $n$ be the number of non-deterministic effects in $a$, each one denoted by $E^j_1\,|\,\cdots\,|\,E^j_{k_j}$ where $j\in[1,n]$ and $k_j$ is the number of outcomes for the $j$-th non-deterministic effect of $a$. By assumption, every choice of outcomes for the effects yields a \emph{consistent} (aggregated) outcome for $a$ (cf.\ footnote \ref{foot:1}). The easiest way to accommodate such actions is to ``preprocess'' $P$ by replacing the action $a$ by the \emph{sequence} $\tup{a(1),a(2),\ldots,a(n)}$ of $n$ non-deterministic actions, each one featuring exactly one effect; i.e., the effect of $a(j)$ is $E^j_1\,|\,\cdots\,|\,E^j_{k_j}$. For this, the precondition of $a(1)$ is set to the precondition of $a$, that of $a(j)$ to $\{seq(a),next(j)\}$, $j\in[2,n]$, and the precondition of every other action is extended with $\neg seq(a)$. Likewise, the effect of $a(1)$ is extended with $\{seq(a),next(2)\}$, that of $a(j)$ with $\{\neg next(j),next(j+1)\}$, $j\in[2,n-1]$, and that of $a(n)$ with $\{\neg next(n),\neg seq(a)\}$. The new atoms $seq(a)$ and $next(j)$ denote that the sequence for $a$ is being applied and the next action to apply in the sequence is $a(j)$ respectively. In this way, the FOND problem $P$ is converted in linear time into an \emph{equivalent} FOND problem where each action has exactly one non-deterministic effect. In other words, we may assume without loss of generality that the FOND problem $P$ is such that each action has at most one non-deterministic effect since if this is not the case, $P$ can be converted into an equivalent FOND problem $P'$ in linear time.
By equivalent, we mean that any solution $\pi$ for $P$ can be converted in polynomial time into a solution $\pi'$ for $P'$, and vice versa. \medskip We have shown how strong cyclic and strong planning over a FOND problem $P$ translates into QNP planning over a QNP $Q$ obtained from $P$: in one case, with all the variables $X$ and $\{Y_{a,i}\}_{a,i}$ in place, in the second, with no such variables. The translation with these variables offers the possibility of capturing more subtle forms of fairness. For example, if we just remove from the translation a variable $Y_{a,i}$ along with the effects on it, the resulting QNP would assume that all the outcomes $E_j$, $j\neq i$, of action $a$ are fair (i.e., they cannot be skipped forever in a fair trajectory) but that the outcome $E_i$ is not. In other words, while in strong cyclic planning, all the non-deterministic actions are assumed to be fair, and in strong planning, all of them to be unfair, in QNP planning, it is possible to handle a combination of fair and unfair actions (as in dual FOND planning \cite{geffner:fond-sat}), as well as a combination of fair and unfair outcomes of the same action. \section{Second Reduction: QNPs into FOND problems} We have shown that FOND problems can be reduced in polynomial time to QNPs. We show now the other direction: QNPs can be reduced in polynomial time to FOND problems. The two results imply a new complexity result; namely, that QNPs have the same expressive power as FOND problems and that the plan existence decision problem for both models has the same complexity. This second translation $T$ is more subtle than the first and, unlike the direct translation $T_D$ above, it is a full reduction that does not require termination tests. The first attempt at such a translation was sketched in \cite{bonet:ijcai2017} but the reduction is buggy as it is not always sound. The intuition, however, is useful and we build on it.
Basically, that reduction introduces boolean variables $q_X$ that when set to true preclude increments of variable $X$, hence making the decrements of $X$ ``fair''. The variable $q_X$ can be reset to false when the loop ``finishes'', i.e., when $\EQ{X}$ is true. This idea, however, does not fully avoid non-terminating loops and hence, by itself, does not produce a sound reduction.\footnote{Consider a QNP $Q=\tup{F,V,I,O,G}$ with a single numerical variable $X$ and four actions $a$, $b$, $c$, and $d$ that result in a loop where $a=\abst{p_1, \GT{X}}{\neg p_1, p_2, \DEC{X}}$, $b=\abst{p_2}{p_3, \neg p_2}$, $c=\abst{p_3, \GT{X}}{\DEC{X}}$, $d=\abst{p_3, \EQ{X}}{\neg p_3, p_1, \INC{X}}$. Let us assume that $I=\{p_1,\GT{X}\}$ and $G=\{\EQ{X},p_2\}$. There is a single policy $\pi$ for $Q$, as in all the (non-goal) states that can be reached from $I$, there is a single applicable action. This policy $\pi$ is strongly cyclic but is not terminating. The reason is that one of the trajectories determined by the policy is a non-terminating loop $s_0,a,s_1,b,s_2,c,s_3,d,s_0,\ldots$ where the single variable that is decremented ($X$) is also incremented, and where $\bar s_0=\{p_1,\GT{X}\}$, $\bar s_1=\{p_2,\GT{X}\}$, $\bar s_2=\{p_3,\GT{X}\}$, and $\bar s_3=\{p_3,\EQ{X}\}$. The FOND problem that results from the translation sketched in \cite{bonet:ijcai2017} accepts this policy $\pi$ as a solution, which is incorrect. This happens because the variable $q_X$ can be set and reset an infinite number of times; indeed, right before and right after the action $c$ in the loop, respectively. The new translation excludes such non-terminating loops via a stack and counters. } The \emph{new translation} replaces the $q_X$ variables by a \emph{bounded stack} that keeps the variables $X$ that are being decremented \emph{in order}, and suitable \emph{counters}. 
The new variables and actions enforce that solutions of the FOND problem $P=T(Q)$, unlike the solutions of the direct translation $T_D(Q)$, are all terminating. For this, the new translation introduces conditions that mirror those captured by the {\sc Sieve}\xspace procedure. In particular, for capturing policies that terminate, the variables are to be placed on the stack following the order by which {\sc Sieve}\xspace removes them. \subsection{Extra Variables and Actions} The reduction $T(Q)$ introduces a bounded stack $\alpha$ where numerical variables from $V$ can be pushed and popped, and bounded counters $c(d)$, for $d=0, \ldots, |V|$ that are associated with the possible levels (depths) $d=|\alpha|$ of the stack. There is also a top counter $c_T$ that may only increase. The stack starts empty and may grow to contain all the variables in $V$, but no variable can appear in the stack more than once. The stack is represented as growing from left to right; e.g., $\alpha X$ is the stack that results from pushing the variable $X$ onto the stack $\alpha$. The $c$ counters start at $0$ and may grow up to a number $Max$ that, for completeness, must be set to $1+2^n$ where $n$ is the total number of boolean and numerical variables in $Q$. In practice, $Max$ can be set to a much smaller number.\footnote{For structured policies that result in loops that can be entered and left through single entry and exit points, $Max$ is the bound on the number of consecutive loops (blocks), possibly with other loops nested, that the policy can generate at the same level. } In any case, the counters and the stack are captured in terms of a polynomial number of boolean variables and the whole reduction $T(Q)$ is computed in polynomial time. The state of the stack $\alpha$ is captured by the atoms $in(X)$, $depth(d)$, and $index(X,d)$ that represent whether $X$ is in the stack, the depth of the stack, and the depth at which $X$ is in the stack, respectively.
$X$ is the top element in the stack when $index(X,d)$ and $depth(d)$ are true (i.e., the stack is $\alpha X$ and $|\alpha|=d-1$), and it is the bottom element when $index(X,1)$ is true (i.e., the stack is $X$). The stack is empty when $depth(0)$ holds. The \textbf{extra actions} in $P=T(Q)$ are those for pushing and popping variables to and from the stack, and for advancing the top counter $c_T$. \begin{enumerate}[1.] \item \textbf{Actions $Push(X,d)$} for variable $X$ and depth $d\in[0,|V|-1]$ have preconditions $\neg in(X)$, $depth(d)$ and $c(d) < Max$, and effects: \begin{enumerate}[$a)$] \item $in(X)$, $index(X,d+1)$, $depth(d+1)$ and $\neg depth(d)$ to push $X$ and increase stack depth, \item $c(d) := c(d) + 1$ to increment counter for old level, \item $c(d+1):= 0$ to initialize counter for new level. \end{enumerate} \item \textbf{Actions $Pop(X,d)$} for variable $X$ and depth $d\in[1,|V|]$ have preconditions $in(X)$, $index(X,d)$ and $depth(d)$, and effects: \begin{enumerate}[$a)$] \item $\neg in(X)$, $\neg index(X,d)$, $\neg depth(d)$, $depth(d-1)$ to pop $X$ and decrease stack depth. \end{enumerate} \item \textbf{Action $Move$} advances the top counter $c_T$ by 1 when the stack is empty; i.e., it has preconditions $depth(0)$ and $c_T < Max$, and effect $c_T := c_T+1$. \end{enumerate} For simplicity, we assume that the language of our FOND problems $P$ makes room for the integer counters $c(d)$, $d=0,\ldots,|V|$, that may be increased by 1, from $0$ up to a fixed number $Max$ and that may be reset back to $0$. In the implementation, these counters are represented in terms of a linear number of boolean variables.\footnote{In the actual encoding, the counters $c(d)$ are represented with $1+n$ atoms (bits), $b_i(d)$, $i=0,\ldots,n$, where $n$ is the total number of variables in $Q$, i.e., $n=|F|+|V|$.
A precondition such as $c(d)<Max$ then translates into the precondition $\neg b_n(d)$; the least and most significant bits for $c(d)$ are $b_0(d)$ and $b_n(d)$ respectively. Increments of $c(d)$ by $1$ may be translated in two different ways, either by using conditional effects, or by increasing the number of actions. For the former, conditional effects of the form $b_0(d), \ldots, b_{i-1}(d), \neg b_i(d) \rightarrow \neg b_0(d), \ldots, \neg b_{i-1}(d), b_i(d)$, for $i\in[0,n]$, are used. For the latter, each action $act$ that increases $c(d)$ is replaced by $n$ actions $act(i)$, $i\in[0,n]$, that are like $act$ but have the extra precondition $b_0(d), \ldots, b_{i-1}(d), \neg b_i(d)$ and the extra effects $\neg b_0(d), \ldots, \neg b_{i-1}(d), b_i(d)$. The first translation, however, when compiled into STRIPS introduces additional actions as well \cite{nebel:expressiveness}. Finally, a reset effect $c(d) := 0$ is obtained by setting all atoms $b_i(d)$, $i\in[0,n]$, to false. } The actions $a$ that belong to $Q$ are split into two classes. Actions $a$ \emph{that do not decrement any variable} keep their names in $P=T(Q)$ but replace their $Inc(X)$ effects by propositional effects $\GT{X}$, and add the precondition $\neg in(X)$ that disables $a$ when $X$ is in the stack: \begin{enumerate}[1.] \item[4.] \textbf{Actions $a$ in $Q$ that decrement no variable}, keep the same names in $T(Q)$, the same preconditions and same effects, except that the effects $Inc(X)$ are replaced by propositional effects $\GT{X}$, if any, and in such a case, the precondition $\neg in(X)$ is added. \end{enumerate} Actions $a$ from $Q$ \emph{that decrement variables} (and hence introduce non-determinism) map into actions in $P$ of type $a(X,d)$ where $X$ is a variable decremented by $a$ that is in the stack at depth $d$; one or more of the variables that are decremented must be in the stack when the action is applied: \Omit{ \begin{enumerate}[1.] \item[5.]
\textbf{Actions $a(d)$ for actions $a$ in $Q$ that decrement variables $X_1,\ldots,X_k$, $k\ge 1$, none of which is in the stack} inherit propositional preconditions and effects of $a$, and for each variable $Y$ that is increased by $a$, they include the precondition $\neg in(Y)$ and effect $\GT{Y}$. The parameter $d\in[0,|V|]$ stands for the current stack depth. The action $a(d)$ also has: \begin{enumerate}[$a)$] \item extra preconditions $depth(d)$ and $\neg in(X_i)$, $i\in[1,k]$, and $c(d) < Max$, \item extra non-deterministic effects $\GT{X_i}\,|\,\EQ{X_i}$, $i\in[1,k]$, and \item extra effect $c(d):= c(d)+1$ to increase the counter for level $d$. \end{enumerate} \end{enumerate} Similarly, the actions $a$ from $Q$ that decrement a variable $X$ that is in the stack are captured through $a(X,d)$ actions: } \begin{enumerate}[1.] \item[5.] \textbf{Actions $a(X,d)$ for $a$ in $Q$ that decrement a variable $X$ in the stack at level $d$ (and possibly others)} inherit propositional preconditions and effects of $a$, and for each variable $Y$ that is increased by $a$, they include the precondition $\neg in(Y)$ and the effect $\GT{Y}$. The parameter $d\in[1,|V|]$ stands for the current stack depth. The action $a(X,d)$ also has: \begin{enumerate}[$a)$] \item extra precondition $index(X,d)$ (i.e., $d$ is the level at which $X$ appears in the stack), \item extra non-deterministic effects $\GT{X_i}\,|\,\EQ{X_i}$ for each $Dec(X_i)$ effect, and \item extra effects $c(d'):=0$ for each $d'$ such that $d \leq d' \leq |V|$ to reset the counters for the levels above or equal to $d$. \end{enumerate} \end{enumerate} In words, actions $a$ from $Q$ that do not decrement any variable map into a single action of the form $a$ in $P=T(Q)$, while actions $a$ from $Q$ that decrement variables map into actions $a(X,d)$ applicable only when a variable $X$ decremented by $a$ is in the stack at level $d$.
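The push, pop, and move machinery just described can likewise be sketched programmatically. As before, the plain-string encoding of conditions and effects in the following Python sketch is an assumption made for illustration, not the encoding used by the reduction $T$ itself, which in particular represents the counters in binary.

```python
# Illustrative generator for the extra stack/counter actions of T(Q).
# Conditions and effects are plain strings; "c(d)+=1" and "c(d):=0"
# abbreviate the binary-counter updates described in the text.

def stack_actions(V):
    n = len(V)
    acts = {"Move": {"pre": ["depth(0)", "c_T<Max"], "eff": ["c_T+=1"]}}
    for X in V:
        for d in range(n):            # Push(X,d) for d in [0, |V|-1]
            acts[f"Push({X},{d})"] = {
                "pre": [f"-in({X})", f"depth({d})", f"c({d})<Max"],
                "eff": [f"in({X})", f"index({X},{d + 1})",
                        f"depth({d + 1})", f"-depth({d})",
                        f"c({d})+=1", f"c({d + 1}):=0"]}
        for d in range(1, n + 1):     # Pop(X,d) for d in [1, |V|]
            acts[f"Pop({X},{d})"] = {
                "pre": [f"in({X})", f"index({X},{d})", f"depth({d})"],
                "eff": [f"-in({X})", f"-index({X},{d})",
                        f"-depth({d})", f"depth({d - 1})"]}
    return acts
```

For two numerical variables, the sketch yields one $Move$ action, four push actions, and four pop actions, matching the $O(|V|^2)$ counts used in the size analysis below.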
The actions of the form $a$ in $P=T(Q)$ are deterministic, and only the actions $a(X,d)$ can generate cycles in a strong cyclic policy for $P$. The reduction $P=T(Q)$ can be summarized as follows: \begin{definition}[Reduction QNP to FOND] \label{def:reduction:qnp->fond} Let $Q=\tup{F,V,I,O,G}$ be a QNP. The FOND problem $P=T(Q)$ is $P=\tup{F',I',O',G'}$ with: \begin{enumerate}[1.] \item $F' = F \cup \{c_T\} \cup \{ depth(d), c(d) \} \cup \{ in(X) \} \cup \{ index(X,d') \}$, \item $I' = I \cup \{ depth(0), \EQ{c_T}, \EQ{c(0)} \}$, \item $G' = G$, \item $O'= \{a : a \in O^+\} \cup \{ a(Y,d') : a \in O^- \} \cup \{ Push(X,d'-1), Pop(X,d') \} \cup \{ Move \}$ \end{enumerate} where $X$ ranges over $V$, $d$ and $d'$ range over $[0,|V|]$ and $[1,|V|]$ respectively, $O^-$ and $O^+$ stand for the sets of actions in $O$ that decrement and do not decrement a variable respectively, and the variable $Y$ in $a(Y,d')$ ranges over the variables decremented by the action $a$ in $Q$. Preconditions and effects of the actions in $O'$ are described above in the text. \end{definition} \section{Properties} \noindent Clearly, the reduction $P=T(Q)$ can be computed in polynomial time: \begin{theorem} \label{thm:translation:poly} Let $Q=\tup{F,V,I,O,G}$ be a QNP. The reduction $P=T(Q)$ can be computed in time that is polynomial in the size of $Q$. \end{theorem} \begin{proof} Let $n=|F|+|V|$ be the number of variables, propositional or numerical, in $Q$. $P$ has $2+|V|$ counters of capacity $1+2^n$ (the counters $c(d)$, $d\in[0,|V|]$, and $c_T$), each one requiring $1+n$ bits: the counter $c(d)$ is encoded in binary with bits $c(d,i)$, $i\in[0,n]$. $P$ also has $1+|V|=O(n)$ atoms of the form $depth(d)$, $|V|=O(n)$ atoms of the form $in(X)$, and $|V|^2=O(n^2)$ atoms of the form $index(X,d)$. Therefore, $P$ has $O(|F|+n^2)=O(n^2)$ propositional variables. $P$ has $|V|^2=O(n^2)$ push actions.
Since $Push(X,d)$ has precondition $c(d)<Max$ and effect $c(d):=c(d)+1$, it gets compiled into $n$ actions of the form $Push(X,d,i)$, $i\in[0,n-1]$, where the precondition $c(d)<Max$ is expressed as $\{\neg c(d,i)\}\cup\{c(d,j):j\in[0,i-1]\}$, and the effect $c(d):=c(d)+1$ is expressed as $\{c(d,i)\}\cup\{\neg c(d,j):j\in[0,i-1]\}$. The pop actions do not modify counters, so there are $O(n^2)$ of them. The $Move$ action increments the counter $c_T$ and hence, like $Push(X,d)$, it gets compiled into $n$ different actions. Actions $a$ in $Q$ that do not decrement variables are translated into actions $a$ in $P$. Actions $a$ that decrement a variable get translated into actions $a(X,d)$; there are $O(n^2)$ such actions $a(X,d)$ in $P$ for each such action $a$ in $Q$. In total, $P$ has $O(n^2)$ propositional variables and $O(|O|n^2 + n^3)$ actions, where the cubic term accounts for the $Push(X,d,i)$ actions. These numbers (polynomially) bound the size of $P$. It is clear that producing each action in $P$ is straightforward and can be done in polynomial time. \end{proof} The second direct property of the translation is that, due to the use of the counters, all strong cyclic policies $\pi$ for $P$ must terminate: \begin{theorem} \label{thm:terminateT} Let $Q$ be a QNP. Any strong cyclic policy $\pi$ for $P=T(Q)$ is $P$-terminating. \end{theorem} \begin{proof} Let $\pi$ be a strong cyclic policy for $P$ and let $\tau = \bar s_0, \ldots, [\bar s_i, \ldots, \bar s_m]^*$ be an infinite $\pi$-trajectory. We need to show that there is some variable $X$ that is decreased in one of these states and increased in none. Clearly, $\pi(\bar s)$ for some $\bar s$ in the recurrent set must be a non-deterministic action, and this means it is an action of the form $a(X,d)$. The actions $a(X,d)$ require $X$ to be in the stack and then reset all counters $c(d')$, $d' \ge d$, back to $0$.
Let us pick an action $a(X,d)$ in the recurrent states of $\tau$ to be one with the smallest stack depth $d$, and let $\bar s$ be one such state where $\pi(\bar s)=a(X,d)$. In the state $\bar s$, the variable $X$ is in the stack at level $d$; i.e., $index(X,d)$ is true. We show next that this same atom must be true in all the other recurrent states in $\tau$. Indeed, if there is a recurrent state where $index(X,d)$ is false, it means that there are recurrent states where $X$ is popped from the stack, and others where it is pushed back at level $d$, as $\bar s$ is a recurrent state where $index(X,d)$ holds. Yet, each occurrence of the action $Push(X,d-1)$ needed to make $index(X,d)$ true increases the counter $c(d-1)$ that no action $a(Y,d')$ can reset with $d'<d$, due to our choice of the action $a(X,d)$ as one with minimum $d$. As a result, it has to be the case that $X$ is in the stack at level $d$ in all the recurrent states of $\tau$, and hence no action that increases $X$ is applied while in a recurrent state (since increments of $X$ are disabled when $X$ is in the stack). Then, since there is a recurrent state where $X$ is decremented, the infinite $\pi$-trajectory $\tau$ is terminating. Therefore, the policy $\pi$ is $P$-terminating. \end{proof} In order to prove soundness and completeness, we establish a correspondence between the strong cyclic policies of $Q$ and the strong cyclic policies of $P=T(Q)$. The policies cannot be the same, however, as the reduction $T$, unlike the direct translation $T_D$, adds extra variables and actions. Indeed, $T$ preserves the atoms $p$ and $\EQ{X}$ from $Q$, the latter being propositional, but adds boolean variables and actions that ensure that the policies over $T(Q)$, unlike the policies over $T_D(Q)$, terminate.
Let $Q_M$ be the QNP obtained from the FOND problem $P=T(Q)$ by 1)~adding the numerical variables $X$ from $Q$, 2)~replacing the effects $\GT{X}$ by $Inc(X)$, and the non-deterministic effects $\GT{X}\,|\,\EQ{X}$ by $Dec(X)$, and 3)~interpreting the preconditions and goal of the form $\EQ{X}$ and $\GT{X}$ in terms of such variables (i.e., non-propositionally). \begin{theorem} If $\pi$ solves the FOND problem $P=T(Q)$, $\pi$ solves the QNP $Q_M$. \end{theorem} \begin{proof} $P$ is the direct translation of $Q_M$, i.e.\ $P=T_D(Q_M)$. $\pi$ is $P$-terminating by Theorem~\ref{thm:terminateT} and strong cyclic for $P$ as it solves it. Therefore, by Theorem~\ref{thm:td:main}, $\pi$ solves $Q_M$. \end{proof} The QNP $Q_M$ can be thought of as the composition of the original QNP $Q$ with a deterministic model that encodes the state of the stack and counters. From this perspective, the policy $\pi$ that solves $Q_M$ stands for a \emph{controller} $\pi_M$ for $Q$ that has an internal memory $M$ comprised of the atoms that encode the stack and counters: actions like $Push(X,d)$, $Pop(X,d)$ and $Move$ only affect the internal memory $M$ of the controller $\pi_M$, actions $a$ that do not decrement any variable in $Q$ only affect the state of $Q$, while actions $a(X,d)$ affect both the state of $Q$ and the internal memory $M$. Due to the correspondence between the application of the policy $\pi$ to the QNP $Q_M$ and the application of the controller with memory $\pi_M$ to the QNP $Q$, it is then direct that: \begin{theorem}[Soundness] \label{thm:policy-memory} If the policy $\pi$ solves the FOND problem $P=T(Q)$, the controller $\pi_M$ solves $Q$. \end{theorem} \begin{proof} Each execution of $\pi$ in $Q_M$ generates a trajectory over $M$ and one over $Q$. Since $\pi$ solves $Q_M$, these trajectories must be terminating and goal reaching, and then the trajectories over $Q$ must be terminating and goal reaching in $Q$, which shares the same goal as $Q_M$ and the same $Dec(X)$ and $Inc(X)$ actions.
\end{proof} The inverse direction of this theorem is also true, but it does not give us a completeness result. For that, we need to show that a policy $\pi$ that solves $Q$ determines a policy $\pi^*$ that solves $P=T(Q)$. \subsection{Completeness} We now assume that there is a policy $\pi$ for $Q$ that solves it, and want to show that there is then a policy $\pi^*$ for $P=T(Q)$ that solves it. Since $\pi$ solves $Q$, $\pi$ is $Q$-terminating and also $P'$-terminating where $P'=T_D(Q)$ is the direct translation of $Q$ (cf.\ Theorem~\ref{thm:td:main}). Let $\mathcal{G}$ be the policy graph associated with $\pi$ in $P'$. By Theorem~\ref{thm:sieve}, {\sc Sieve}\xspace reduces $\mathcal{G}$ to an acyclic graph. For the rest of this section, we assume that \emph{{\sc Sieve}\xspace is run until all edges that are associated with actions that decrement variables are eliminated} rather than stopping as soon as the graph becomes acyclic.\footnote{Clearly, the modification of the stopping condition for {\sc Sieve}\xspace does not affect its correctness since an acyclic graph remains acyclic when one or more edges are removed, and, on the other hand, if the original {\sc Sieve}\xspace cannot reduce a component, the modified algorithm is not able to reduce it either.} Each edge removed by {\sc Sieve}\xspace can be identified with a variable, and edges are removed in batches by {\sc Sieve}\xspace, each such $batch(C)$ associated with a component $C$ and a variable $X$ chosen by {\sc Sieve}\xspace; i.e., in a given iteration, {\sc Sieve}\xspace chooses a component $C$ and a variable $X$, and removes all edges $(\bar s,\bar s')$ from $C$ such that $\pi(\bar s)$ is a $Dec(X)$ action (cf.\ Figure~\ref{alg:sieve2}).
Let us index the top SCCs processed by {\sc Sieve}\xspace in topological order (i.e., if $C_i$ reaches $C_j$ for $j\neq i$, then $i<j$), and let $scc(\bar s)$ be the index of the (top) component that includes $\bar s$ (i.e., the index of the component that includes $\bar s$ in the graph $\mathcal{G}$). {\sc Sieve}\xspace decomposes each component $C$ into a collection of \emph{nested} SCCs that result from recursively removing edges from $C$. For each state $\bar s$, let $\mathcal{C}_{\bar s}=\{C_{\bar s}^j\}_{j\geq1}$ be the collection of nested SCCs that contain $\bar s$; i.e., \begin{enumerate}[$\bullet$] \item $C_{\bar s}^1=C_{k_1}$ where $k_1=scc(\bar s)$ is the index of the (top) component that contains $\bar s$, and \item for $j\geq1$, $C_{\bar s}^{j+1}=C_{k_{j+1}}$ where $\bar s$ is in the component $C_{k_{j+1}}$ of the graph that results when all edges in $\cup\{ batch(C_{k_i}) : i\in[1,j]\}$ have been removed by {\sc Sieve}\xspace. \end{enumerate} For each state $\bar s$, let $stack(\bar s)$ be the sequence of variables chosen by {\sc Sieve}\xspace for each component in $\mathcal{C}_{\bar s}$. Observe that such a sequence contains no repetitions since once {\sc Sieve}\xspace chooses a variable $X$ for a component $C$, the same $X$ cannot be chosen later for another component $C'$ contained in $C$. Also, if the action $\pi(\bar s)$ is a $Dec(X)$ action for a variable $X$, then $stack(\bar s)$ contains $X$ by the assumption that {\sc Sieve}\xspace is run until all edges associated with decrements of variables are eliminated. We compare the stack $\alpha$ and $stack(\bar s)$, and say that $\alpha=X_1 \cdots X_n$ is a \emph{prefix} of $stack(\bar s)=Z_1 \cdots Z_m$ if the latter can be obtained from the former by pushing variables only; i.e., if $n\leq m$ and $X_i = Z_i$ for $i\in[1,n]$.
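The function $stack(\bar s)$ can be computed by replaying this recursive decomposition on the policy graph. The following Python sketch is illustrative only: the graph encoding, the use of Tarjan's algorithm for the SCCs, and the (arbitrary) choice among the eligible variables are ours.

```python
def tarjan_sccs(nodes, succ):
    """Tarjan's SCC algorithm; `succ[v]` must only contain nodes in `nodes`."""
    idx, low, on_stack, stack, comps, count = {}, {}, set(), [], [], [0]
    def visit(v):
        idx[v] = low[v] = count[0]; count[0] += 1
        stack.append(v); on_stack.add(v)
        for w in succ[v]:
            if w not in idx:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], idx[w])
        if low[v] == idx[v]:
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w); comp.append(w)
                if w == v:
                    break
            comps.append(comp)
    for v in nodes:
        if v not in idx:
            visit(v)
    return comps

def sieve_stacks(nodes, succ, dec, inc):
    """For each state s, compute stack(s): the sequence of variables chosen
    for the nested SCCs that contain s.  `dec[s]` / `inc[s]` are the sets of
    variables decremented / incremented by the action pi(s)."""
    stacks = {s: [] for s in nodes}
    def reduce(ns, es):
        for comp in tarjan_sccs(ns, es):
            cset = set(comp)
            if len(comp) == 1 and comp[0] not in es[comp[0]]:
                continue                       # trivial (acyclic) component
            eligible = set().union(*(dec[s] for s in comp)) \
                       - set().union(*(inc[s] for s in comp))
            if not eligible:
                continue                       # Sieve cannot reduce: non-terminating
            x = min(eligible)                  # any eligible variable will do
            for s in comp:
                stacks[s].append(x)
            # remove the edges out of the states whose action decrements x
            sub = {s: (cset & es[s] if x not in dec[s] else set()) for s in comp}
            reduce(comp, sub)
    reduce(nodes, {s: set(succ[s]) for s in nodes})
    return stacks

# policy graph for clearing a block: s0 = (not H, n>0), s1 = (H, n>0), g = goal
succ = {'s0': {'s1', 'g'}, 's1': {'s0'}, 'g': set()}
dec  = {'s0': {'n'}, 's1': set(), 'g': set()}
inc  = {'s0': set(), 's1': set(), 'g': set()}
assert sieve_stacks(['s0', 's1', 'g'], succ, dec, inc) \
       == {'s0': ['n'], 's1': ['n'], 'g': []}
```

On the small example, the single top SCC $\{s_0,s_1\}$ is reduced by choosing the only eligible variable $n$, so $stack(\bar s)=n$ for both states, as expected.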
A property of the {\sc Sieve}\xspace algorithm that we exploit in the completeness proof is the following: \begin{theorem} \label{thm:scc:stack} Let $Q$ be a QNP and let $P'=T_D(Q)$ be its direct translation. If $\pi$ solves $Q$ and $\bar\tau=\bar s_0,\ldots,[\bar s_i,\ldots,\bar s_m]^*$ is an infinite $\pi$-trajectory in $P'$, there is a variable $X$ and a recurrent state $\bar s$ such that $\pi(\bar s)$ is a $Dec(X)$ action, and $X$ is in $stack(\bar s')$ for every recurrent state $\bar s'$ in $\bar\tau$. \end{theorem} \begin{proof} If $\pi$ solves $Q$, $\pi$ must be $P'$-terminating. Thus, there must be a variable that is decreased by $\pi$ in some recurrent state, and increased by $\pi$ in no recurrent state. At the same time, {\sc Sieve}\xspace is complete and must remove variables (edges) in the policy graph until it becomes acyclic (cf.\ Theorem~\ref{thm:sieve}). Initially, all the recurrent states in $\bar\tau$ are in the same component, but at some point {\sc Sieve}\xspace removes a variable $X$ and splits the set of recurrent states into smaller components. From its definition, this means that $X$ appears in $stack(\bar s)$ for each recurrent state $\bar s$ in $\bar\tau$. Moreover, since the removal of $X$ leaves these states in two or more components, $\pi(\bar s)$ for one such state must be a $Dec(X)$ action. \end{proof} We now define the policy $\pi^*$ for $P=T(Q)$ that is determined by the policy $\pi$ that solves $Q$. In the definition, we use the two functions $scc(\bar s)$ and $stack(\bar s)$ defined above in terms of the execution of {\sc Sieve}\xspace on the policy graph for $\pi$ on $T_D(Q)$. The states over $P$ are denoted by triplets $\tup{\bar s,c,\alpha}$ where $\bar s$ is the state in $T_D(Q)$, $c$ stands for the state of the counters $c(d)$, $d\in[0,|V|]$, and $c_T$, and $\alpha$ stands for the state of the stack (given by the atoms $depth(d)$, $in(X)$, $index(X,d)$).
The policy $\pi^*$ for $P$ is defined at triplet $\tup{\bar s,c,\alpha}$ by \begin{equation} \label{def:pi} \begin{cases} Pop(X,d) & \text{if $c_T < scc(\bar s)$, $X$ is top variable in $\alpha$, and $d=|\alpha|$, else} \\ Move & \text{if $c_T < scc(\bar s)$ and empty stack, else } \\ Pop(X,d) & \text{if $X$ is top variable in $\alpha$, $d=|\alpha|$, and $\alpha$ is not a prefix of $stack(\bar s)$, else} \\ Push(X,d) & \text{if $\alpha X$ is a prefix of $stack(\bar s)$ and $d=|\alpha|$, else} \\ a & \text{if $\pi(\bar s)=a$ decrements no variable, else} \\ a(X,d) & \text{if $\pi(\bar s)=a$ decrements $X$ at depth $d$ in $\alpha$ but no other var at depth $d' < d$.} \\ \end{cases} \end{equation} A first observation is that the policy $\pi^*$ is defined on every triplet $\tup{\bar s,c,\alpha}$ such that $\pi(\bar s)$ is defined. The policy $\pi^*$ for the FOND problem $P=T(Q)$ on a triplet $\tup{\bar s,c,\alpha}$ advances the $c_T$ counter until it becomes equal to the index $scc(\bar s)$ of the SCC in $\mathcal{G}$ that contains the node $\bar s$. It then performs pops and pushes until $\alpha$ becomes equal to $stack(\bar s)$, and finally applies the action $a$ selected by the policy $\pi$ on $Q$ using the action names $a$ or $a(X,d)$ according to whether $a$ decrements no variable or decrements a variable, which must be in $stack(\bar s)$ as discussed above. In the latter case, the variable $X$ for the action $a(X,d)$ is the variable $X$ decremented by $a$ that is \emph{deepest in the stack}, at depth $d$. The completeness result can then be expressed as follows: \begin{theorem}[Completeness] \label{thm:qnp->fond:completeness} Let $Q$ be a QNP and let $\pi$ be a policy for $Q$. If $\pi$ solves $Q$, then the policy $\pi^*$ defined by \eqref{def:pi} solves $P=T(Q)$. \end{theorem} \begin{proof} From Theorem~\ref{thm:td:main}, $\pi$ solves $Q$ iff $\pi$ solves and terminates in $P'=T_D(Q)$. 
We show that if $\pi$ solves and terminates in $P'$, then $\pi^*$ must solve $P=T(Q)$; since $\pi$ solves $Q$, $\pi$ solves and terminates in $P'$, from which it follows that $\pi^*$ solves $P$. We need to show that the policy $\pi^*$ is executable in $P$, and more precisely that 1)~$\pi^*$ cannot generate non-goal states $\tup{\bar s,c,\alpha}$ where $\pi^*$ is not defined, or defined but not applicable, and 2)~$\pi^*$ cannot get trapped in a loop that only involves the extra actions $Move$, $Pop(X,d)$ and $Push(X,d)$. These two properties ensure that in any $\pi^*$-trajectory over $P$, if a non-goal state $\tup{\bar s,c,\alpha}$ is reached, an action $a$ or $a(X,d)$ will be the one changing the component $\bar s$ of the state when $\pi(\bar s)=a$, and that this will happen after a bounded number of applications of the extra actions $Move$, $Pop(X,d)$ and $Push(X,d)$ that do not change $\bar s$. Since the effect of the actions $a$ or $a(X,d)$ on $\bar s$ in $P$ is the same as the effect of $a$ on $\bar s$ in $P'$, it follows that $\pi^*$ will be strong cyclic for $P$ if $\pi$ is strong cyclic for $P'$. Alternatively, 1) and 2) ensure that if $\pi^*$ is executable in $P$, it generates trajectories over the $\bar s$ components that are the same as those obtained by the policy $\pi$ over $P'$, except for a bounded number of steps where the $\bar s$ component in the states $\tup{\bar s,c,\alpha}$ does not change. \medskip Point 2) is direct. $Move$ increases the counter $c_T$ that no other action decreases. Pushes and pops are applied in order, either to flush out the stack when $c_T < scc(\bar s)$, or to make $\alpha=stack(\bar s)$. In the latter case, $\alpha$ is popped until it becomes a prefix of $stack(\bar s)$ (flushed out in the extreme case), and then pushes take place to make $\alpha$ equal to $stack(\bar s)$. Hence, no loops that only involve $Move$, $Pop(X,d)$ and $Push(X,d)$ actions are possible.
\medskip Point 1) is more subtle. The policy $\pi^*$ is defined on all triplets $\tup{\bar s,c,\alpha}$ for which $\bar s$ is reachable by $\pi$. We first argue that, except for the preconditions on the counters $c(d)$ and $c_T$, the preconditions of the actions selected by $\pi^*$ are true. Observe that every triplet $\tup{\bar s,c,\alpha}$ reached by $\pi^*$ is such that $\bar s$ is reachable by $\pi$, and thus $\pi(\bar s)$ is defined and applicable in $\bar s$. Second, for an action selected by $\pi^*$, it is easy to see that its preconditions hold, except possibly for $\neg in(Y)$ when the action is $a$ or $a(X,d)$ and it increments $Y$. To see that $\neg in(Y)$ also holds, observe that if the action increments $Y$, then $stack(\bar s)$ cannot contain $Y$: if it did, the collection $\mathcal{C}_{\bar s}$ of nested components for $\bar s$ would have a component $C$ that contains a state where $Y$ is decremented while being incremented in $\bar s$, making $Y$ ineligible by {\sc Sieve}\xspace. The actions that have preconditions on counters are of type $Push(\Box,d)$ with precondition $c(d)<Max$, and $Move$ with precondition $c_T<Max$. Here, $\Box$ is a placeholder that denotes any variable $X$ in $V$. For the top counter, $c_T < Max$ always holds since $c_T$ starts at 0, it is only increased to make it equal to $scc(\bar s)$, and the number of components in $\mathcal{G}$ is less than or equal to the number of states, which is less than $Max$. We are thus left to show that $c(d) < Max$ by considering the only type of actions that increase $c(d)$: $Push(\Box,d)$.
For this, we show that $\pi^*$ cannot generate a trajectory $\tilde\tau$ in $P$ that contains a fragment $\tilde\tau'$ with $1 + 2^n$ (i.e.\ $Max$) actions of the form $Push(\Box,d)$ and no action of the form $a(\Box,d')$, $d' \leq d$, or $Push(\Box,d-1)$, as this would be the only way in which $c(d)$ may grow up to $1 + 2^n$: actions of the form $Push(\Box,d)$ increase $c(d)$ by 1, and the only actions that decrease $c(d)$, back to $0$, have the form $a(\Box,d')$ for $d'\leq d$, or $Push(\Box,d-1)$. Indeed, let $\tilde\tau'=\tup{\bar s_1,c_1,\alpha_1}, \tup{\bar s_2,c_2,\alpha_2}, \ldots$ be such a fragment, and let $1=i_1<i_2<\cdots<i_m$, for $m=1+2^n$, be the indices for the triplets in $\tilde\tau'$ on which the policy $\pi^*$ selects an action of type $Push(\Box,d)$. Observe that between each pair of such indices, there must be one triplet where an action of type $a$ or $a(\Box,d')$ is applied: two pushes at the same stack depth must be mediated by at least one such action. Let $i^*_1<i^*_2<\cdots$ be the indices such that $i^*_k$ is the first index after $i_k$ where the action selected by $\pi^*$ is of type $a$ or $a(\Box,d')$, $k\in[1,m]$. Since the total number of states is less than $m$, there is some $\bar s$ that repeats. Without loss of generality, let us assume that $\bar s_{i^*_1}=\bar s_{i^*_m}$. The policy $\pi$ loops in $P'$ on a set $\mathcal{R}$ of recurrent states that includes $\{\bar s_{i^*_k} : k\in[1,m] \}$. By Theorem~\ref{thm:scc:stack}, there is a variable $X$ that is decremented by $\pi$ while looping in $\mathcal{R}$ such that $X$ belongs to each $stack(\bar s_{i^*_k})$, $k\in[1,m]$. We choose such an $X$ appearing deepest in the stacks. Therefore, there is an index $k\geq 1$ in $\tilde\tau'$ such that $\pi^*(\tup{\bar s_k,c_k,\alpha_k})=a(X,d')$ where $d'$ is the depth of $X$ in $\alpha_k$.
Since $X$ also belongs to $\alpha_{i_1}$, it must be the case that $d'\leq |\alpha_{i_1}|=d$, where the equality holds since $\pi^*(\tup{\bar s_{i_1},c_{i_1},\alpha_{i_1}})$ is of type $Push(\Box,d)$. This is a contradiction with the assumption that $\tilde\tau'$ contains no action of type $a(\Box,d')$ for $d'\leq d$. \end{proof} The second reduction from QNPs into FOND problems may be used to compute policies for a given QNP from the policies of the resulting FOND problem. The reduction is a sound and complete mapping. As mentioned above, the resulting QNP policies correspond to controllers that map states $\bar s$ and controller states, i.e., pairs $\tup{c,\alpha}$ that encode the state of the (bounded) counters and stack, into actions. Hence, there is still the question of whether a QNP solvable by such a controller is solvable by a flat policy (as given in Definition~\ref{def:qnp:solution}). \emph{This question remains open and is beyond the scope of this paper.} We have spent time trying to answer it without success. Indeed, there are arguments that support both an affirmative and a negative answer. For the former, QNPs contain no hidden information, and thus it seems that they satisfy the Markov property: the current state decouples the past (history) from the future. For the latter, it seems that in order to select an appropriate action, an agent must keep in memory which variables it is currently trying to drive to zero, so as to avoid incrementing them (such information is in the stack).
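The case analysis in \eqref{def:pi} gives the controller a direct procedural reading: advance $c_T$ to $scc(\bar s)$ while flushing the stack, then pop and push until $\alpha$ equals $stack(\bar s)$, and only then apply the action selected by $\pi$. A Python sketch of this dispatch; the tuple encoding of the actions is illustrative.

```python
def pi_star(c_T, alpha, scc_s, stack_s, action, decremented):
    """Select the next action of pi* at a triplet <s, c, alpha>.
    `alpha` and `stack_s` are lists of variables, bottom of the stack first;
    `scc_s` = scc(s), `action` = pi(s), and `decremented` is the set of
    variables decremented by pi(s)."""
    if c_T < scc_s:                              # flush the stack, then Move
        return ('Pop', alpha[-1], len(alpha)) if alpha else ('Move',)
    if alpha != stack_s[:len(alpha)]:            # alpha not a prefix of stack(s)
        return ('Pop', alpha[-1], len(alpha))
    if len(alpha) < len(stack_s):                # grow alpha towards stack(s)
        return ('Push', stack_s[len(alpha)], len(alpha))
    if not decremented:                          # plain action a
        return (action,)
    for d, x in enumerate(alpha, start=1):       # deepest decremented variable
        if x in decremented:
            return (action, x, d)

# the controller pops, moves, pushes, and finally applies a(X, d)
assert pi_star(0, ['n'], 1, ['n'], 'a', {'n'}) == ('Pop', 'n', 1)
assert pi_star(0, [], 1, ['n'], 'a', {'n'}) == ('Move',)
assert pi_star(1, [], 1, ['m', 'n'], 'a', {'n'}) == ('Push', 'm', 0)
assert pi_star(1, ['m', 'n'], 1, ['m', 'n'], 'a', {'n'}) == ('a', 'n', 2)
```

The final case relies on the property established above: when an action that decrements variables is applied, $\alpha=stack(\bar s)$, so the deepest decremented variable is always found in the stack.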
On the other hand, the other general approach for solving QNPs, based on LTL synthesis and discussed below, is also incapable of resolving this issue, as is the inefficient brute-force algorithm that enumerates all strong cyclic policies and tests them for termination. \section{Extensions and Variations of QNPs} For simplicity, QNPs have been defined with certain syntactic restrictions that do not limit their expressive power. In particular, there are no actions with non-deterministic effects on boolean variables, and there are no effects that can map a literal $\EQ{X}$ non-deterministically into the literals $\GT{X}$ and $\EQ{X}$, as decrements require $\GT{X}$ as a precondition, and increments yield the outcome $\GT{X}$. Yet, these two restrictions are not essential and can be bypassed. As we have seen before, non-deterministic effects on boolean variables, as found in strong and strong cyclic FOND planning, can be obtained in QNPs by using additional numerical variables (Section~6). Likewise, a sequence of two consecutive effects $Inc(X)$ and $Dec(X)$ can be used to emulate the effect of an action that leaves the value of $X$ completely uncertain; i.e., one that may increase $X$, decrease $X$, or leave the value of $X$ unchanged. There are also syntactic conditions that, when satisfied, make QNPs simpler. For example, if the numerical variables $X_i$ in a QNP $Q$ can be linearly ordered so that the actions that increment a variable $X_i$ also decrease a variable $X_j$ that appears later in the ordering, then every policy $\pi$ that solves the direct translation $P=T_D(Q)$ of $Q$ will necessarily solve $Q$ as well, as any such policy will terminate in $Q$. Indeed, if $X_i$ is the latest variable in the ordering that is decreased in a cycle induced by $\pi$, the cycle cannot include a different state where the variable $X_i$ is increased, as otherwise the condition implies that another variable $X_j$ appearing later in the ordering must be decreased in the cycle.
For such \textbf{well-ordered} QNPs, the simpler direct translation $T_D$ is thus both sound and complete. \section{Implementation} The reduction from QNPs to FOND problems has been implemented and can be found in the GitHub repository \url{https://github.com/bonetblai/qnp2fond}. The reduction produces a FOND problem without conditional effects. The reduction may be parametrized in terms of the maximum capacity for counters and the maximum stack depth, whose default values are the ones used in the proofs. In some cases, the reduction can be made simpler. Variables $X$ that are decremented but not incremented by any action can be translated as in the direct translation $T_D$, bypassing the more complex translation $T$; i.e., the decrements of $X$ are translated as non-deterministic boolean outcomes $\EQ{X}\,|\,\GT{X}$. Likewise, if there is a subset of numerical variables that can be ordered as $X_1,\ldots,X_n$, such that the actions that increment a variable $X_i$ decrement a variable $X_j$ in the subset with $i < j$, then the effects on all the variables $X_i$ in this subset can be translated as in the direct translation $T_D$ as well. This is because loops that involve changes in these variables only must necessarily be terminating: the variable $X_i$ with the largest index that is decremented in the loop cannot be incremented as well, as otherwise there would be another variable $X_j$ in the subset, with $j > i$, that is decremented in the loop. Our implementation supports the optimization described above where variables $X$ that are incremented by no action are translated as in the direct translation. There is also the option to disable this optimization and an option to force the direct translation independently of the increments in the actions.
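The ordered-subset condition just described can be tested greedily: repeatedly pick a variable that no remaining action increments, place it last in the ordering, and drop the actions that decrement it. A Python sketch, with actions represented as illustrative $(Inc,Dec)$ set pairs:

```python
def well_ordering(variables, actions):
    """Return an ordering X_1, ..., X_n such that every action that
    increments X_i also decrements some X_j with i < j, or None if no
    such ordering exists.  `actions` is a list of (inc, dec) set pairs."""
    remaining, pending, suffix = set(variables), list(actions), []
    while remaining:
        # the last variable of the order cannot be incremented by any action
        candidates = [v for v in remaining
                      if all(v not in inc for inc, _ in pending)]
        if not candidates:
            return None
        v = min(candidates)           # any candidate works
        suffix.append(v)
        remaining.discard(v)
        # actions decrementing v now decrement a later variable: satisfied
        pending = [(inc, dec) for inc, dec in pending if v not in dec]
    return list(reversed(suffix))

# x can precede y: the only action incrementing x also decrements y
assert well_ordering({'x', 'y'}, [({'x'}, {'y'})]) == ['x', 'y']
# no ordering: the only action increments x and decrements nothing later
assert well_ordering({'x'}, [({'x'}, set())]) is None
```

If the greedy elimination succeeds, the returned order witnesses the condition; if it fails, the remaining variables are all incremented by some pending action, so no ordering can place any of them last.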
The default mode, used in the examples, is to use the optimization whenever possible, to use the direct translation when no variable is incremented by any action, and to use the maximum capacity for counters and the maximum stack depth. In the examples, the resulting FOND problems are solved with FOND-SAT \cite{geffner:fond-sat}, a general SAT-based solver that is available at \url{https://github.com/tomsons22/FOND-SAT}. This planner calls a SAT solver multiple times. The SAT solver used by FOND-SAT is Minisat \cite{minisat}. \section{Examples} We consider some examples of QNPs and their translations. There is no useful baseline for evaluating the use of the translator in combination with FOND planners. The only other complete QNP planner would result from translating QNP problems into LTL synthesis tasks, but the comparison would be unfair because, as mentioned earlier, LTL synthesis is computationally harder than QNP planning. There is also no complete generate-and-test QNP planner reported, which would have to generate the strong cyclic policies of the direct FOND translation, one by one, while checking them for termination. The QNPs below represent abstractions of particular generalized planning problems; i.e., families of concrete planning problems, and they have been learned automatically (except for the action names) from PDDL representations of the planning domains and sampled plans \cite{bonet:aaai2019}. These QNPs involve a small number of boolean and numerical variables, something which is common in QNPs that represent abstractions, where the number of variables is bounded and does not grow with the size of the concrete instances \cite{bonet:ijcai2015}. The QNP is a \emph{sound abstraction} of the concrete problems when the boolean and numerical variables $p$ and $n$ in the QNP accurately represent and track the value changes of certain boolean and numerical state features $\phi_p$ and $\phi_n$.
In such a case, the sequences of boolean valuations over the literals $p$ and $n=0$ in the abstraction can be emulated by sequences of states in the concrete problems with the corresponding values over the features. All QNPs below are sound abstractions in this sense. Soundness implies that the abstract policy obtained by solving the QNP can be applied successfully on the concrete problems by making use of the state feature functions $\phi_p$ and $\phi_n$ \cite{bonet:ijcai2018}. Let us recall that a policy $\pi$ that solves the FOND problem $P$ obtained from the translation $P=T(Q)$ of a QNP $Q$ defines a policy that can be understood as a memory-extended controller that solves $Q$ using extra boolean variables and actions. Often, however, the policy $\pi(\bar s,m)$ obtained from $P$, where $\bar s$ is the state over $Q$ and $m$ is the memory state, can be projected onto a memoryless policy $\pi'$ for $Q$ which does not use either the extra variables or actions. This happens when there is no state $\bar s$ where the actions $\pi(\bar s,m)$ and $\pi(\bar s,m')$ selected by the policy over two different memory states are associated with different $Q$ actions. In such a case, all states $\bar s$ are associated with a single action $a$ from $Q$ (there must be one such action, as otherwise $\pi$ could not solve $P$), and the memoryless policy $\pi'$ for $Q$ can be defined so that $\pi'(\bar s)=a$. \subsection{Clearing a Block} A general plan for clearing a given block $x$ in any Blocksworld instance featuring the block $x$ can be obtained by solving the following abstraction, expressed as a QNP problem $Q=\tup{F,V,I,O,G}$ where $F=\{H\}$ contains a boolean variable $H$ that represents whether a block is being held, $V=\{n\}$ contains a numerical variable that counts the number of blocks above $x$, the initial situation $I=\{\neg H, \GT{n}\}$ assumes that block $x$ is not clear and that the gripper is empty, and the goal situation $G=\{\EQ{n}\}$ expresses that there are no blocks on top of $x$.
The two actions in $O$ are \begin{enumerate}[--] \item $\textit{Putaway} = \abst{H}{\neg H}$ to put the block being held on the table or on a block not above $x$, and \item $\textit{Pick} = \abst{\neg H, \GT{n}}{H, \DEC{n}}$ to pick the topmost block above $x$. \end{enumerate} A policy $\pi$ that solves $Q$ is $\pi(\bar s)=\textit{Putaway}$ when $H$ is true in $\bar s$, and $\pi(\bar s)=\textit{Pick}$ when $H$ and $\EQ{n}$ are both false in $\bar s$. The QNP $Q$ translates into a FOND problem $P$ with 2 atoms and 2 actions, since the direct translation is triggered as no action in $Q$ increments any variable; the translator runs in less than 0.01 seconds. FOND-SAT solves $P$ and outputs the solution shown in Figure~\ref{fig:clear} in 0.06 seconds; the planner makes 2 calls to the SAT solver requiring less than 0.01 seconds in total. \begin{figure}[t] \centering \begin{tikzpicture}[thick,>={Stealth[inset=2pt,length=8pt,angle'=33,round]},font={\normalsize},qs/.style={draw=black,fill=gray!20!white},init/.style={qs,fill=yellow!50!white},goal/.style={qs,fill=green!50!white}] \node[qs] (A) at (0,0) { $H, \GT{n}$ }; \node[init] (B) at (5,0) { $\overline{H}, \GT{n}$ }; \node[goal] (C) at (10,0) { $H, \EQ{n}$ }; \path[->] (A) edge[transform canvas={}] node[above,yshift=-2] { $\textit{Putaway}$ } (B); \path[->] (B) edge[out=140,in=40,looseness=0.8] node[above,yshift=-2] { $\textit{Pick}: \DEC{n}$ } (A); \path[->] (B) edge[transform canvas={}] node[above,yshift=-2] { $\textit{Pick}: \DEC{n}$ } (C); \end{tikzpicture} \caption{% Resulting policy and policy graph for the problem of clearing a block. Nodes represent states in the QNP abstraction and edges represent actions with the effects on numerical variables as shown. The initial state is the leftmost state and the goal state is the rightmost state (only one goal state shown; goal is $n=0$). The policy is strong cyclic and terminating.
} \label{fig:clear} \end{figure} \subsection{Placing a Block on Top of Another} A general plan for placing block $x$ on top of block $y$ (both fixed) may be obtained by solving a suitable QNP. For simplicity, we only consider the case when the blocks $x$ and $y$ are initially in \emph{different} towers. In this case, the QNP $Q=\tup{F,V,I,O,G}$ has boolean and numerical variables $F=\{E,X,D\}$ and $V=\{n,m\}$ respectively. The general case when $x$ and $y$ may be in the same tower requires a QNP that involves more features and more actions. The boolean variables in $Q$ represent whether the gripper is empty ($E$), whether it is holding block $x$ ($X$), and whether the goal has been achieved ($D$), while the numerical variables count the number $n$ and $m$ of blocks above $x$ and $y$ respectively. The initial situation $I=\{E,\neg X,\neg D,\GT{n}, \GT{m}\}$ describes a configuration where no block is being held, there are blocks above $x$ and above $y$, but $x$ and $y$ are in different towers; the goal is simply $G = \{D\}$. The QNP has 7 different actions: \begin{enumerate}[--] \item $\textit{Pick-above-$x$} = \abst{E, \neg X, \neg D, \GT{n}, \GT{m}}{\neg E, \DEC{n}}$ to pick the topmost block that is above $x$, \item $\textit{Pick-above-$y$} = \abst{E, \neg X, \neg D, \EQ{n}, \GT{m}}{\neg E, \DEC{m}}$ to pick the topmost block that is above $y$, \item $\textit{Putaside-1} = \abst{\neg E, \neg X, \neg D, \EQ{n}}{E}$ to put aside (not above $x$ or $y$) the block being held, \item $\textit{Putaside-2} = \abst{\neg E, \neg X, \neg D, \GT{n}, \GT{m}}{E}$ to put aside (not above $x$ or $y$) the block being held, \item $\textit{Pick-$x$} = \abst{E, \neg X, \neg D, \EQ{n}, \EQ{m}}{\neg E, X}$ to pick block $x$, \item $\textit{Put-$x$-aside} = \abst{\neg E, X, \neg D, \EQ{n}, \GT{m}}{E, \neg X}$ to put block $x$ aside (not above $y$), and \item $\textit{Put-$x$-on-$y$} = \abst{\neg E, X, \neg D, \EQ{n}, \EQ{m}}{E, \neg X, D, \INC{m}}$ to put $x$ on $y$.
\end{enumerate} Observe that some actions have preconditions that are not strictly necessary, and that there are two different actions to put a block aside. The reason is that this abstraction is automatically learned from a sample of plans and thus there is no guarantee that it will be the simplest possible one. Since the QNP features increments, the translator switches to the complete translation, but it notices that there are no increments for $n$ (the number of blocks above $x$), and thus the optimization for $n$ triggers. The translator runs in less than 0.01 seconds and generates a FOND problem $P$ with 74 atoms and 60 actions (69 atoms for encoding the counters and the stack, and 48 actions to manipulate the stack and move the top counter). FOND-SAT finds the solution shown in Figure~\ref{fig:on} in 5.14 seconds; it makes 8 calls to the SAT solver that require 0.49 seconds in total. \begin{figure}[t] \centering \begin{tikzpicture}[thick,>={Stealth[inset=2pt,length=8pt,angle'=33,round]},font={\footnotesize},qs/.style={draw=black,fill=gray!20!white},init/.style={qs,fill=yellow!50!white},goal/.style={qs,fill=green!50!white}] \node[init] (n0) at (0,0) { $E, \overline{X}, \overline D, \GT{n}, \GT{m}$ }; \node[qs] (n1) at (6,0) { $\overline E, \overline X, \overline D, \GT{n}, \GT{m}$ }; \node[qs] (n2) at (0,-1.5) { $\overline E, \overline X, \overline D, \EQ{n}, \GT{m}$ }; \node[qs] (n3) at (6,-1.5) { $E, \overline X, \overline D, \EQ{n}, \GT{m}$ }; \node[qs] (n6) at (12,-1.5) { $\overline E, \overline X, \overline D, \EQ{n}, \EQ{m}$ }; \node[qs] (n7) at (12,-4) { $E, \overline X, \overline D, \EQ{n}, \EQ{m}$ }; \node[qs] (n8) at (6,-4) { $\overline E, X, \overline D, \EQ{n}, \EQ{m}$ }; \node[goal] (n9) at (0,-4) { $E, \overline X, D, \EQ{n}, \GT{m}$ }; \path[->] (n0) edge[transform canvas={}] node[above,yshift=-1] { $\textit{Pick-above-$x$}: \DEC{n}$ } (n1); \path[->] (n1) edge[out=140,in=40,looseness=0.8] node[above,yshift=0] { $\textit{Putaside-2}$ } (n0);
\path[->] (n0) edge[transform canvas={xshift=-30}] node[right,yshift=0] { $\textit{Pick-above-$x$}: \DEC{n}$ } (n2); \path[->] (n2) edge[transform canvas={}] node[above,yshift=0] { $\textit{Putaside-1}$ } (n3); \path[->] (n3) edge[out=220,in=320,looseness=0.8] node[below,yshift=2] { $\textit{Pick-above-$y$}: \DEC{m}$ } (n2); \path[->] (n3) edge[transform canvas={}] node[above,yshift=-2] { $\textit{Pick-above-$y$}: \DEC{m}$ } (n6); \path[->] (n6) edge[transform canvas={xshift=30}] node[left,yshift=0] { $\textit{Putaside-1}$ } (n7); \path[->] (n7) edge[transform canvas={}] node[above,yshift=0] { $\textit{Pick-$x$}$ } (n8); \path[->] (n8) edge[transform canvas={}] node[above,yshift=-2] { $\textit{Put-$x$-on-$y$}: \INC{m}$ } (n9); \end{tikzpicture} \caption{% Resulting policy and policy graph for the problem of placing a block on top of another. Nodes represent states in the QNP abstraction and edges represent actions with the effects on numerical variables as shown. The initial state is the top left state and the goal state is the bottom left state (only one goal state shown; goal is $D$). The policy is strong cyclic and terminating, as in each of its two non-trivial (non-singleton) components a numerical variable is decremented but not incremented. } \label{fig:on} \end{figure} \subsection{Gripper} The task involves a robot with grippers whose goal is to move a number of balls from one room into a target room. Each gripper may carry one ball at a time. The STRIPS predicates are $at\text{-}robby(l)$, $at\text{-}ball(b,l)$, $free(g)$, and $carry(b,g)$, which denote, respectively, whether the robot is at location $l$, whether ball $b$ is at location $l$, whether gripper $g$ is free, and whether gripper $g$ carries ball $b$, plus unary type predicates $room$, $ball$, and $gripper$.
From two plans for this task, one for an instance with 4 balls and 2 grippers and the other with 5 balls and 2 grippers, the approach of \citeay{bonet:aaai2019} learns the QNP $Q=\tup{F,V,I,O,G}$ that involves one boolean feature $T$ that indicates whether the robot is in the target room, and 3 numerical features that count the number of balls still to be moved ($b$), the number of balls being carried ($c$), and the number of empty grippers ($g$). The set of (abstract) actions in $Q$ is: \begin{enumerate}[--] \item $\textit{Drop} = \abst{T, \GT{c}}{\DEC{c}, \INC{g}}$ to drop balls in the target room, \item $\textit{Pick} = \abst{\neg T, \GT{b}, \GT{g}}{\DEC{b}, \INC{c}, \DEC{g}}$ to pick balls in the other room, \item $\textit{Move-half-loaded} = \abst{\neg T, \EQ{b}, \GT{c}, \GT{g}}{T}$ to move to the target room when there is still capacity to carry balls, \item $\textit{Move-fully-loaded} = \abst{\neg T, \GT{c}, \EQ{g}}{T}$ to move to the target room when there is no more capacity to carry balls, and \item $\textit{Leave-target} = \abst{T, \EQ{c}, \GT{g}}{\neg T}$ to move to the other room. \end{enumerate} It is easy to see that this abstraction correctly captures any instance involving any number of balls and, remarkably, also works for instances where the robot has any positive number of grippers. The translator switches to the complete translation as there are actions that increment variables but, as in the previous case, the variable $b$ is optimized because no action increments it. The complete translation runs in less than 0.01 seconds and produces a FOND problem $P$ that has 63 atoms and 40 actions (59 atoms for encoding the counters and the stack, and 35 actions to manipulate the stack and move the top counter). FOND-SAT finds the solution shown in Figure~\ref{fig:gripper} in 2.53 seconds; it makes 7 calls to the SAT solver that require 0.30 seconds in total.
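To illustrate how such an abstract policy generalizes, the policy of Figure~\ref{fig:gripper} can be simulated on concrete ball and gripper counts. The sketch below is written from the action descriptions above; the function name and the branch ordering are ours, not output of the translator or of FOND-SAT.

```python
# Minimal simulation of the abstract Gripper policy on concrete counts.
# b: balls not yet moved, c: balls carried, g: free grippers,
# T: robot in the target room.  The invariant c + g = grippers holds
# throughout, so the branch guards match the abstract action preconditions.

def run_gripper_policy(balls, grippers):
    """Apply the abstract policy until b = 0 and c = 0; return the
    number of abstract actions applied (the policy always terminates
    for any number of balls and any positive number of grippers)."""
    assert balls >= 0 and grippers > 0
    b, c, g, T = balls, 0, grippers, False
    steps = 0
    while not (b == 0 and c == 0):
        if not T and b > 0 and g > 0:      # Pick: dec(b), inc(c), dec(g)
            b, c, g = b - 1, c + 1, g - 1
        elif not T:                        # Move-half/fully-loaded (c > 0 here)
            T = True
        elif c > 0:                        # Drop: dec(c), inc(g)
            c, g = c - 1, g + 1
        else:                              # Leave-target (T, c = 0, g > 0)
            T = False
        steps += 1
    return steps
```

For instance, with 4 balls and 2 grippers the policy finishes after 11 abstract actions (4 picks, 2 moves into the target room, 4 drops, and 1 move back), mirroring the cycles through the main component of the policy graph.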
\begin{figure}[t] \centering \resizebox{.975\textwidth}{!}{ \begin{tikzpicture}[>={Stealth[inset=2pt,length=8pt,angle'=33,round]},font={\small},qs/.style={draw=black,fill=gray!20!white},init/.style={qs,fill=yellow!50!white},goal/.style={qs,fill=green!50!white}] \node[init] (n0) at (0.0, 0.0) { $\bar s_0: \overline T,\GT{b},\EQ{c},\GT{g}$ }; \node[qs] (n1) at (12.0, 0.0) { $\bar s_1: \overline T,\GT{b},\GT{c},\GT{g}$ }; \node[qs] (n2) at (0.0,-2.0) { $\bar s_2: \overline T,\GT{b},\GT{c},\EQ{g}$ }; \node[qs] (n3) at (6.0,-2.0) { $\bar s_3: \overline T,\EQ{b},\GT{c},\GT{g}$ }; \node[qs] (n4) at (12.0,-2.0) { $\bar s_4: \overline T,\EQ{b},\GT{c},\EQ{g}$ }; \node[qs] (n5) at (0.0,-5.0) { $\bar s_5: T,\GT{b},\GT{c},\EQ{g}$ }; \node[qs] (n6) at (6.0,-4.0) { $\bar s_6: T,\EQ{b},\GT{c},\GT{g}$ }; \node[qs] (n7) at (12.0,-4.0) { $\bar s_7: T,\EQ{b},\GT{c},\EQ{g}$ }; \node[qs] (n8) at (0.0,-7.0) { $\bar s_8: T,\GT{b},\EQ{c},\GT{g}$ }; \node[qs] (n9) at (6.0,-7.0) { $\bar s_9: T,\GT{b},\GT{c},\GT{g}$ }; \node[goal] (n10) at (12.0,-7.0) { $\bar s_{10}: T,\EQ{b},\EQ{c},\GT{g}$ }; \path[->] (n0) edge[transform canvas={}] node[above,yshift=-2] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n1); \path[->] (n0) edge[transform canvas={xshift=-30}] node[right,yshift=0] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n2); \path[->] (n0) edge[transform canvas={xshift=-20}] node[sloped,yshift=6,xshift=4] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n3); \path[->] (n0) edge[transform canvas={xshift=0}] node[sloped,yshift=6,xshift=-60] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n4); \path[->] (n1) edge[transform canvas={xshift=0}] node[sloped,yshift=6,xshift=60] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n2); \path[->] (n1) edge[transform canvas={xshift=20}] node[sloped,yshift=6,xshift=-4] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n3); \path[->] (n1) edge[transform canvas={xshift=30}] node[left,yshift=0] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n4); \path[->] 
(n1) edge[out=140,in=40,looseness=4] node[above,yshift=-2] { $\textit{Pick}: \DEC{b}, \INC{c}, \DEC{g}$ } (n1); \path[->] (n2) edge[transform canvas={xshift=-30}] node[right,yshift=14] { $\textit{Move-fully-loaded}$ } (n5); \path[->] (n3) edge[transform canvas={xshift=-30}] node[right,yshift=0] { $\textit{Move-half-loaded}$ } (n6); \path[->] (n4) edge[transform canvas={xshift=30}] node[left,yshift=0] { $\textit{Move-fully-loaded}$ } (n7); \path[->] (n5) edge[transform canvas={xshift=-30}] node[right,yshift=0] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n8); \path[->] (n5) edge[transform canvas={xshift=20}] node[sloped,yshift=6,xshift=4] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n9); \path[->] (n6) edge[transform canvas={xshift=10}] node[sloped,yshift=6,xshift=0] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n10); \path[->] (n7) edge[transform canvas={xshift=30}] node[left,yshift=8,xshift=0] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n10); \path[->] (n7) edge[transform canvas={}] node[above,xshift=2,yshift=-2] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n6); \path[->] (n9) edge[transform canvas={}] node[above,xshift=2,yshift=-2] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n8); \path[->] (n9) edge[out=220,in=320,looseness=4] node[below,yshift=0] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n9); \path[->] (n6) edge[transform canvas={xshift=-43},out=140,in=220,looseness=4] node[left,yshift=0] { $\textit{Drop}: \DEC{c}, \INC{g}$ } (n6); \coordinate (off) at (-2.4,0.0) {}; \path[-] (n8) edge[] ($(n8) + (off)$); \path[-] ($(n8) + (off)$) edge[] node[sloped,yshift=8] { $\textit{Leave-target}$} ($(n0) + (off)$); \path[->] ($(n0) + (off)$) edge[] (n0); \end{tikzpicture} } \caption{% Resulting policy and policy graph for Gripper. Nodes represent states in the QNP abstraction and edges represent actions with the effects on numerical variables as shown. The initial state is the top left state and the goal state is the bottom right state (only one goal state shown; goal is $G=\{b=0,c=0\}$). 
The policy is strong cyclic and terminating. The main component is $\{\bar s_0, \bar s_1, \bar s_2, \bar s_5, \bar s_8, \bar s_9\}$ where the variable $b$ is decreased but not increased. {\sc Sieve}\xspace removes the edges associated with such decrements, which then allows the elimination of the edges associated with the decrements of $g$, and then the edges for the decrements of $c$, leaving the graph acyclic. The policy has been derived by solving the FOND translation $T$ of the QNP that ensures termination and strong cyclicity. } \label{fig:gripper} \end{figure} \section{Related Work} QNPs have been introduced as a decidable planning model able to account for plans with loops \cite{sid:aaai2011,sid:aaai2015}. In addition, by defining the boolean and numerical variables of QNPs as suitable general boolean and numerical features over a given domain, it has been shown that QNPs can be used to express abstract models for generalized planning, in particular when the ground actions change from instance to instance \cite{bonet:ijcai2018}. More recently, it has been shown that these QNP abstractions can be learned automatically from a given planning domain and sampled plans \cite{bonet:aaai2019}. QNPs thus provide a convenient language for a \textbf{model-based approach} for the computation of general plans where such plans are derived from a (QNP) planning model. If the model is sound, the general plans are guaranteed to be correct \cite{bonet:ijcai2018,bonet:aaai2019}. This is in contrast with the more common \textbf{inductive} or \textbf{learning-based approaches} where plans computed to solve a few sampled instances are assumed to generalize to other instances by virtue of the compact form of the plans \cite{khardon:action,martin:applied,fern:generalized}. These learning approaches do not construct or solve a suitable abstraction of the problems as expressed by QNPs.
Inductive approaches have been used recently to learn general plans in the form of finite-state controllers \cite{bonet:icaps2009,hu:synthesis}, finite programs \cite{javier:procedures}, and deep neural nets learned in a supervised manner \cite{trevizan:dl,sanner:dl,fern:dl,mausam:dl}. A key difference between learning-based and model-based approaches is that the correctness of the latter follows from the soundness of the model. Deep reinforcement learning methods have also been used recently for computing generalized plans with no supervision \cite{sid:sokoban,mazebase}, yet since they do not use first-order symbolic representations, they have difficulties in dealing with relational domains that involve objects and relations \cite{shanahan:review}. Forms of generalized planning have also been formulated using first-order logic \cite{srivastava:generalized,sheila:generalized2019}, and general plans over finite horizons have been derived using first-order regression as well \cite{boutilier2001symbolic,wang2008first,van2012solving,sanner:practicalMDPs}. The use of QNPs for expressing (or learning) abstractions for generalized planning problems, combined with the compilation of QNPs into FOND problems, allows us to benefit from the performance of propositional off-the-shelf FOND planners like PRP \cite{prp}, MyND \cite{mynd}, or FOND-SAT \cite{geffner:fond-sat} in order to find policies for generalized planning. QNP planning problems can be easily translated into LTL planning problems with FOND domains, reachability goals, and a particular type of trajectory constraints that can be expressed as a compact LTL formula \cite{bonet:ijcai2017}. The trajectory constraints use a fragment of LTL \cite{ltl} to express the QNP fairness constraints; namely, that for each numerical variable $X$ in a QNP, it is always the case that infinite decrements of $X$ combined with finite increments of $X$ must eventually drive the variable $X$ to $0$.
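In temporal-logic notation, this fairness constraint can be rendered schematically as follows (our rendering of the informal statement above, not the exact formula of \citeay{bonet:ijcai2017}), where $\textit{dec}(X)$ and $\textit{inc}(X)$ hold in a state when the action applied there decrements or increments $X$:

```latex
% Schematic LTL fairness constraint for a numerical variable X
% (a rendering of the informal statement in the text).
\begin{equation*}
  \Box\Diamond\, \textit{dec}(X) \;\wedge\; \Diamond\Box\, \neg\textit{inc}(X)
  \;\rightarrow\; \Diamond\, (X = 0)
\end{equation*}
```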
As a result, QNP planning can be translated quite efficiently (linear time) into LTL synthesis. The translation, however, is not particularly useful computationally, as QNP planning, like FOND planning, is EXP-Complete, while LTL synthesis is 2EXP-Complete (doubly exponential in time) \cite{ltl:2exp}. In LTL planning, i.e., FOND planning with LTL goals and trajectory constraints, the double exponential growth is in the number of variables that appear in such formulas \cite{camacho:ltlplanning,sasha:ltlplanning}. For the specific type of LTL trajectory constraints that QNPs convey, the general method of \citeay{bonet:ijcai2017} results in an EXPSPACE algorithm for the synthesis of a tree automaton that solves the given QNP (or determines that no such automaton exists). Indeed, the method first computes a deterministic parity word (DPW) automaton that accepts the models of an LTL formula that captures the QNP; this automaton may be of doubly exponential size and with an exponential number of priorities for general types of LTL trajectory constraints, but it is ``only'' of exponential size and with a bounded number of priorities for QNPs. Then, a deterministic parity tree automaton $A_t$, which accepts the policies for the QNP and is built from the DPW automaton, must be tested for non-emptiness. The tree automaton $A_t$ has size polynomial in the size of the DPW automaton and the same number of priorities. The non-emptiness test requires time that is polynomial in the size of $A_t$ but exponential in the number of priorities. For QNPs, the number of priorities is bounded, and thus this method can be implemented in exponential space since the DPW automaton must be \emph{explicitly} built. Like the reduction from QNPs into FOND problems, this method does not settle the question posed above about the solvability of QNPs by memoryless policies, since the automaton $A_t$ captures all history-based policies for the input QNP, not only memoryless policies.
\section{Conclusions} QNPs are convenient abstract models for generalized planning. In this work we have studied QNPs and placed them on firmer ground by revising their theoretical foundations, and have shown that FOND problems can be reduced into QNPs and, vice versa, that QNPs can be reduced into FOND problems. Both translations are new and polynomial-time computable, hence establishing that the two models have the same expressive power and the same complexity. The previous, direct translation $T_D(Q)$ for QNPs $Q$ also yields a FOND problem, but with fairness assumptions that do not match those underlying strong cyclic FOND planning, which is why solutions to this translation need to be checked for termination. QNPs can be reduced to LTL synthesis and planning, but these are harder computational tasks. In the future, it would be interesting to study more general types of fairness assumptions and the fragments of LTL that can be handled efficiently with methods similar to the ones developed for QNPs that are based on polynomial-time translations and off-the-shelf FOND planners. \bibliographystyle{theapa}
\section{Introduction}\label{sec:intro} Although many astrophysical observations indicate the existence of dark matter~\cite{Young2016}, it has yet to be observed in the laboratory. While it is possible that dark matter has only gravitational interactions, many compelling models of new physics contain a dark matter candidate that interacts with quarks. One class of models includes new, electrically-neutral fermions called ``dark quarks'', \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace, which are not charged under the forces of the standard model (SM) but are charged under a new force in the dark sector (``dark QCD'') that has confining properties similar to quantum chromodynamics (SM QCD)~\cite{petraki, Zurek201491}. Unlike models based on the popular weakly interacting neutral particle paradigm~\cite{Bertone2005279}, such models naturally explain the observed mass densities of baryonic matter and dark matter~\cite{planck}. We consider, in particular, the dark QCD model of Bai, Schwaller, Stolarski, and Weiler (BSSW) that predicts ``emerging jets'' (EMJ)~\cite{BYS,Schwaller2015}. Emerging jets contain electrically charged SM particles that are consistent with having been created in the decays of new long-lived neutral particles (dark hadrons), produced in a parton-shower process by dark QCD. In this model, dark QCD has an $SU(\ensuremath{N_{C_{\mathrm{DK}}}}\xspace)$ symmetry, where \ensuremath{N_{C_{\mathrm{DK}}}}\xspace is the number of dark colors. The particle content of the model consists of the dark fermions, the dark gluons associated with the force, and a mediator particle that is charged under both the new dark force and under SM QCD, thus allowing interactions with quarks. The dark fermions are bound by the new force into dark hadrons. These hadrons decay via the mediator to SM hadrons. The mediator \ensuremath{\mathrm{X}_\mathrm{DK}}\xspace is a complex scalar. 
Under SM QCD, it is an $SU(3)$ color triplet, and thus can be pair produced via gluon fusion (Fig.~\ref{fig:DMprod}, left) or quark-antiquark annihilation (Fig.~\ref{fig:DMprod}, right) at the CERN LHC. The mediator has an electric charge of either ${1}/{3}$ or ${2}/{3}$ of the electron charge, and it can decay to a right-handed quark with the same charge and a \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace via Yukawa couplings. There are restrictions on the values of the Yukawa couplings from searches for flavor-changing neutral currents, neutral meson mixing, and rare decays~\cite{fc1,fc2,fc3,Renner2018}. We abide by these restrictions by assuming that all the Yukawa couplings are negligible except for the coupling to the down quark~\cite{fc1,fc2,fc3,Renner2018}. \begin{figure}[hbtp]\centering {\includegraphics[width=0.4\textwidth]{Figure_001-a.pdf}} {\includegraphics[width=0.4\textwidth]{Figure_001-b.pdf}} \caption{Feynman diagrams in the BSSW model for the pair production of mediator particles, with each mediator decaying to a quark and a dark quark \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace, via gluon-gluon fusion (left) and quark-antiquark annihilation (right).} \label{fig:DMprod} \end{figure} The decay length of the lightest dark meson (dark pion)~\cite{Schwaller2015} is given by Eq.~\eqref{eqn:bssw}: \begin{linenomath} \begin{equation} c\tau \approx 80\mm \left( \frac{1}{\kappa^4}\right) \left( \frac{2\GeV}{\ensuremath{f_{\pi_\mathrm{DK}}}\xspace} \right)^2 \left( \frac{100\MeV}{\ensuremath{m_{\text{down}}}\xspace} \right)^2 \left( \frac{2\GeV}{\ensuremath{m_{\pi_\mathrm{DK}}}\xspace} \right) \left( \frac{\ensuremath{m_{\mathrm{X_{DK}}}}\xspace}{1\TeV} \right)^4, \label{eqn:bssw} \end{equation} \end{linenomath} where $\kappa$ is the appropriate element of the $\ensuremath{N_{C_{\mathrm{DK}}}}\xspace{\times}3$ matrix of Yukawa couplings between the mediator particle, the quarks, and the dark quarks; \ensuremath{f_{\pi_\mathrm{DK}}}\xspace is the dark pion
decay constant; and \ensuremath{m_{\text{down}}}\xspace, \ensuremath{m_{\pi_\mathrm{DK}}}\xspace, and \ensuremath{m_{\mathrm{X_{DK}}}}\xspace are the masses of the down quark, the dark pion, and the mediator particle, respectively. The signature for this search thus consists of four high transverse momentum (\pt) jets, two from down quarks and two from dark quarks. The dark quark jets contain many displaced vertices arising from the decays of the dark pions produced in the dark parton shower and fragmentation. For models with dark hadron decay lengths comparable to the size of the detector, there can also be significant missing transverse momentum (\ptmiss). The main background for this signature is SM four-jet production, where jet(s) are tagged as emerging either because they contain long-lived \PB\ mesons or because of track misreconstruction, and large artificial \ptmiss is created because of jet energy mismeasurement. We use a photon+jets data sample to measure the probability for an SM jet to pass selection criteria designed for emerging jets, and use this probability in estimating the background, as described in Section~\ref{sec:bkgd}. \section{The CMS detector and event reconstruction}\label{sec:cms} The CMS detector is a multipurpose apparatus designed to study physics processes in proton-proton ({\Pp\Pp}) and heavy ion collisions. A superconducting solenoid occupies its central region, providing a magnetic field of 3.8\unit{T} parallel to the beam direction. The silicon tracker system consists of 1\,440 silicon pixel and 15\,148 silicon strip detector modules. The trajectories of charged particles within the pseudorapidity range $\abs{\eta}<2.5 $ are reconstructed from the hits in the silicon tracking system using an iterative procedure with a Kalman filter~\cite{TRK-11-001}. The tracking efficiency for prompt hadrons is typically over 98\% for tracks with \pt above 1\GeV. 
For nonisolated particles with $1<\pt<10\GeV$ and $\abs{\eta}<1.4$, the track resolutions are typically 1.5\% in \pt and 25--90 (45--150)\mum in the transverse (longitudinal) impact parameter~\cite{TRK-11-001}. The reconstruction efficiency is low for tracks with an impact parameter larger than 25\cm~\cite{TRK-11-001}. A lead tungstate crystal electromagnetic calorimeter (ECAL) and a brass/scintillator hadron calorimeter (HCAL) surround the tracking volume and cover $\abs{\eta}<3$. A steel and quartz-fiber Cherenkov hadron forward calorimeter extends the coverage to $\abs{\eta}<5$. The muon system consists of gas-ionization detectors embedded in the steel flux return yoke outside the solenoid, and covers $\abs{\eta}<2.4$. The first level of the CMS trigger system~\cite{Khachatryan:2016bia} is designed to select events in less than 4\mus, using information from the calorimeters and muon detectors. The high-level trigger (HLT) processor farm then reduces the event rate to around 1\unit{kHz} before data storage. A more detailed description of the CMS detector, together with a definition of the coordinate system and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. The $\Pp\Pp$ interaction vertices are reconstructed by clustering tracks on the basis of their $z$ coordinates along the beamline at their points of closest approach to the center of the luminous region using a deterministic annealing algorithm~\cite{Rose98deterministicannealing}. The position of each vertex is estimated with an adaptive vertex fit~\cite{Fruhwirth:2007hz}. The resolution in the position is around 10--12\mum in each of the three spatial directions~\cite{TRK-11-001}. The reconstructed vertex with the largest value of summed physics-object $\pt^2$ is taken to be the primary $\Pp\Pp$ interaction vertex (PV). 
The physics objects are the jets, clustered using the jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with the tracks assigned to the vertex as inputs, and the associated \ptmiss, taken as the negative vector sum of the \pt of those jets. Other vertices in the same event due to additional {\Pp\Pp} collisions in the same beam crossing are referred to as pileup. The particle-flow (PF) algorithm~\cite{Sirunyan:2017ulk} is used to reconstruct and identify each individual particle, with an optimized combination of information from the various elements of the CMS detector. The energy of each photon is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of each electron is determined from a combination of the track momentum at the PV, the corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung photons attached to the track. The energy of each muon is obtained from the corresponding track momentum. The energy of each charged hadron is determined from a combination of the track momentum and the corresponding ECAL and HCAL energies, corrected for zero-suppression effects and for the response functions of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies. The analysis involves two types of jets: SM QCD jets and emerging jets. For each event, the reconstruction of both types of jets starts with the clustering of reconstructed particles with the infrared and collinear safe anti-\kt algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma}, with a distance parameter $R$ of 0.4. The jet momentum is determined as the vectorial sum of the momenta of associated particles. Additional identification criteria for the emerging jets are given in Section~\ref{sec:selection}. 
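For intuition about this clustering step, the anti-\kt distance measure can be sketched in a few lines. This is a simplified stand-in for illustration only (it uses a crude \pt-weighted recombination rather than the four-momentum E-scheme); the analysis itself relies on the implementation of Refs.~\cite{Cacciari:2008gp,Cacciari:2011ma}.

```python
import math

# Simplified sketch of anti-kt clustering with R = 0.4 (illustration only).
# A particle is a (pt, eta, phi) triple; recombination below is a crude
# pt-weighted average rather than the four-momentum E-scheme.

def delta_r2(a, b):
    dphi = abs(a[2] - b[2])
    dphi = min(dphi, 2.0 * math.pi - dphi)     # wrap phi around 2*pi
    return (a[1] - b[1]) ** 2 + dphi ** 2

def antikt(particles, R=0.4):
    objs = [list(p) for p in particles]
    jets = []
    while objs:
        n = len(objs)
        # beam distance d_iB = 1/pt_i^2; pair distance
        # d_ij = min(1/pt_i^2, 1/pt_j^2) * dR_ij^2 / R^2
        dib = min((1.0 / objs[i][0] ** 2, i) for i in range(n))
        dij = min(((min(1.0 / objs[i][0] ** 2, 1.0 / objs[j][0] ** 2)
                    * delta_r2(objs[i], objs[j]) / R ** 2, i, j)
                   for i in range(n) for j in range(i + 1, n)),
                  default=(math.inf, -1, -1))
        if dij[0] < dib[0]:                    # merge the closest pair
            _, i, j = dij
            a, b = objs[i], objs.pop(j)        # i < j, so pop j first
            w = a[0] + b[0]
            a[1] = (a[0] * a[1] + b[0] * b[1]) / w
            a[2] = (a[0] * a[2] + b[0] * b[2]) / w
            a[0] = w
        else:                                  # promote the object to a jet
            jets.append(tuple(objs.pop(dib[1])))
    return jets
```

Because the $1/\pt^2$ weighting makes hard particles cluster their soft and collinear neighbors first, the resulting hard jets are stable against such emissions, which is the sense in which the algorithm is infrared and collinear safe.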
For the SM jets, the momentum is found in the simulation to be within 5 to 10\% of the true momentum for jets created from the fragmentation of SM quarks and gluons, over the entire \pt spectrum and detector acceptance. Additional proton-proton interactions within the same or nearby bunch crossings can contribute additional tracks and calorimetric energy depositions to the jet momentum. To mitigate this effect, charged hadrons not associated with the PV are removed from the list of reconstructed particles using the pileup charged-hadron subtraction algorithm~\cite{Sirunyan:2017ulk}, while an offset correction is applied to correct for remaining contributions~\cite{JetEnCor2011V2,CACCIARI2008119,CMS-PAS-JME-14-001}. Jet energy corrections are derived from simulation and are confirmed with in situ measurements with the energy balance of Drell--Yan+jet, dijet, multijet, and photon+jet events~\cite{Khachatryan:2016kdb}. Jets consistent with the fragmentation of \cPqb\ quarks are identified using the Combined Secondary Vertex version 2 (CSVv2) discriminator~\cite{BTV-16-002}. The loose working point corresponds to correctly identifying a \cPqb\ quark jet with a probability of 81\% and misidentifying a light-flavor jet as a \cPqb\ quark jet with a probability of 8.9\%. The \ptvecmiss is the negative vector sum of the \ptvec of all PF candidates in an event. Its magnitude is referred to as \ptmiss. \section{Simulated samples}\label{sec:mc} Simulated Monte Carlo (MC) samples are used for the estimation of the signal acceptance $A$, defined as the fraction of MC events passing the selection criteria, which thus includes, e.g., tracking and other efficiencies. These samples are also used for the construction of the templates for background estimation and the validation of background estimation techniques.
The simulation of SM processes, unless otherwise stated, is performed at leading order in the strong coupling constant using \MGvATNLO 2.2.2~\cite{Alwall:2014hca} or \PYTHIA~8.2~\cite{Sjostrand:2014zea} with the \textsc{NNPDF3.0}~\cite{Ball:2014uwa} parton distribution functions (PDFs). The strong coupling constant at the {\cPZ} mass scale is set to 0.130 in the generator. Parton shower development and hadronization are simulated with {\PYTHIA} using the underlying-event tune CUETP8M1~\cite{Khachatryan:2015pea}. Signal samples are generated with the ``hidden valley'' model framework in \PYTHIA~8.212, using modifications discussed in Ref.~\cite{Schwaller2015}. The model has several parameters: the mass of the mediator particle, the width of the mediator particle, the number of dark colors, the number of dark flavors, the matrix of Yukawa couplings between the \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace and the quarks with the same electric charge as the mediator, the dark force confinement scale, the masses of the \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace (one for each dark flavor), the mass of the dark pion, the dark pion proper decay length, and the mass of the dark rho meson. Following Ref.~\cite{Schwaller2015}, we assume that there are three dark colors and seven dark flavors as suggested in Ref.~\cite{BYS}. We assume that all \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace (and therefore dark pions) are mass degenerate and that the \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace mass equals the dark force confinement scale. The mass of the dark pion is assumed to be one half the mass of the \ensuremath{\mathrm{Q}_{\mathrm{DK}}}\xspace. The mass of the dark rho meson is taken to be four times larger than the mass of the dark pion. The width of the mediator particle is assumed to be small as compared with the detector mass resolution. 
These assumptions leave the mediator mass \ensuremath{m_{\mathrm{X_{DK}}}}\xspace, the dark pion mass \ensuremath{m_{\pi_\mathrm{DK}}}\xspace, and the dark pion proper decay length \ensuremath{c\tau_{\pi_\mathrm{DK}}}\xspace as free parameters. Samples are generated for all permutations of the values of these parameters listed in Table~\ref{tab:sigpar}. Each set of values defines a single model. \begin{table}[htb]\centering \topcaption{Parameters used in generating the 336 simulated signal event samples. A sample corresponding to a single model was created for each possible set of parameter values.} \label{tab:sigpar} \begin{tabular}{rl} \hline {Signal model parameters} & \multicolumn{1}{c}{List of values} \\ \hline Dark mediator mass \ensuremath{m_{\mathrm{X_{DK}}}}\xspace [{\GeVns}] & \multicolumn{1}{c}{400, 600, 800, 1000, 1250, 1500, 2000} \\ Dark pion mass \ensuremath{m_{\pi_\mathrm{DK}}}\xspace [{\GeVns}] & \multicolumn{1}{c}{1, 2, 5, 10} \\ Dark pion decay length \ensuremath{c\tau_{\pi_\mathrm{DK}}}\xspace [{\ensuremath{\text{mm}}\xspace}] & \multicolumn{1}{c}{1, 2, 5, 25, 45, 60, 100, 150, 225, 300, 500, 1000} \\ \hline \end{tabular} \end{table} The range in the mediator particle mass over which the search is sensitive depends on the mediator particle pair production cross section. The mediator particle has the same SM quantum numbers as the supersymmetric partner of an SM quark (squark)~\cite{Schwaller2015}. Because we assume three dark colors, the signal production cross section is assumed to be three times larger than that for the pair production of a single flavor of squark of the same mass. We use a calculation of the squark pair production cross section that is based on simplified topologies~\cite{ArkaniHamed:2007fw,Alwall:2008ag,Alwall:2008va,Alves:2011sw,Alves:2011wf}, with other squarks and gluinos decoupled. 
The cross section is calculated at next-to-leading order in SM QCD with next-to-leading logarithm soft-gluon resummation~\cite{Borschensky:2014cia}. For all samples, multiple minimum-bias events simulated with \PYTHIA, with the multiplicity distribution matching that observed in data, are superimposed with the primary interaction event to model the pileup contribution. Generated particles are processed through the full \GEANTfour-based simulation of the CMS detector~\cite{GEANT, GEANTdev}. \section{Event selection}\label{sec:selection} The analysis is based on data from {\Pp\Pp} collisions at $\sqrt{s}=13\TeV$, corresponding to an integrated luminosity of 16.1\fbinv collected by the CMS detector in 2016. The data were obtained using a trigger based on the \pt of the jets in an event. At the HLT, events were selected if they passed a 900\GeV threshold on the scalar \pt sum of all hadronic jets. This analysis used only a portion of the data collected during 2016 because, for part of that running period, saturation-induced dead time was present in the readout of the silicon strip tracker. Such data were not analyzed because of hard-to-model instantaneous luminosity-dependent inefficiencies for the reconstruction of tracks, in particular those tracks with impact parameters larger than 10\mm that are key to the selection of the emerging jet signature. An emerging jet contains multiple displaced vertices and thus multiple tracks with large impact parameters. Since impact parameter-based variables give good discrimination between SM and emerging jets, we do not attempt to reconstruct the individual decay vertices of the dark pions. Emerging jet candidates are required to have $\abs{\eta}<2.0$, corresponding to the region of the tracker where the impact parameter resolution is best. 
Tracks are associated with the candidate if they have $\PT>1\GeV$, pass the ``high-purity'' quality selection described in Ref.~\cite{TRK-11-001}, and are within a cone of $R=\sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}=0.4$ (where $\phi$ is the azimuthal angle in radians) around the direction of the jet momentum. Emerging jet candidates are required to have at least one associated track so that the impact parameter can be estimated. The jet candidates are also required to have less than 90\% of their energy from electrons and photons, to reduce backgrounds from electrons. Four variables, similar to the ones defined in Ref.~\cite{EXO-16-003}, are used to select the emerging jets. The median of the unsigned transverse impact parameters of associated tracks (\ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace) is correlated with the dark meson proper decay length, and should be small for SM jets and large for emerging jets. The distance between the $z$ position of the track at its distance of closest approach to the PV and the $z$ position of the PV (\ensuremath{PU_{\mathrm{dz}}}\xspace) is used to reject tracks from pileup vertices. A variable called \ensuremath{D_{\mathrm{N}}}\xspace, defined as \begin{linenomath} \begin{equation} \ensuremath{D_{\mathrm{N}}}\xspace = \sqrt{ \Big[\frac{\ensuremath{z_{\mathrm{PV}}}\xspace-\ensuremath{z_{\text{trk}}}\xspace}{0.01\cm}\Big]^2 + [\ensuremath{IP_{\text{sig}}}\xspace]^2 }, \end{equation} \end{linenomath} where \ensuremath{z_{\mathrm{PV}}}\xspace is the $z$ position of the primary vertex, \ensuremath{z_{\text{trk}}}\xspace is the $z$ position of the track at its closest approach to the PV, and \ensuremath{IP_{\text{sig}}}\xspace is the transverse impact parameter significance of the track at its closest approach to the PV, is used to identify tracks that have an impact parameter that is inconsistent with zero within uncertainties. The variable \ensuremath{D_{\mathrm{N}}}\xspace is smaller for tracks from prompt particles.
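The track-level quantities just introduced can be sketched in a few lines. This is an illustrative reimplementation only, not the analysis code; the track record layout (a list of dicts with an \texttt{ip2d} key, in cm) is invented for the example.

```python
import math

def track_dn(z_pv, z_trk, ip_sig):
    """D_N for a single track: quadrature sum of the longitudinal
    separation from the PV, in units of 0.01 cm, and the transverse
    impact parameter significance, following the equation above."""
    return math.hypot((z_pv - z_trk) / 0.01, ip_sig)

def median_ip2d(tracks):
    """<IP_2D>: median of the unsigned transverse impact parameters
    (in cm) of the tracks associated with a jet."""
    ips = sorted(abs(t["ip2d"]) for t in tracks)
    n, mid = len(ips), len(ips) // 2
    return ips[mid] if n % 2 else 0.5 * (ips[mid - 1] + ips[mid])
```

A prompt track, with small $z$ separation from the PV and small impact parameter significance, gives a small \ensuremath{D_{\mathrm{N}}}\xspace, while displaced tracks from dark pion decays push both \ensuremath{D_{\mathrm{N}}}\xspace and \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace to large values.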
A variable called \ensuremath{\alpha_{\mathrm{3D}}}\xspace, which is the scalar \pt sum of the associated tracks whose values of \ensuremath{D_{\mathrm{N}}}\xspace are smaller than a threshold, divided by the scalar \pt sum of all associated tracks, is used to quantify the fraction of the \pt of the jet that is associated with prompt tracks. This variable should be large for SM jets and small for emerging jets. Figure~\ref{fig:meanIP} shows the distributions of \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace for background and for signals with a mediator mass of 1\TeV and a dark pion of various masses and with a proper decay length of 25\mm. Figure~\ref{fig:a3D} shows the distributions of \ensuremath{\alpha_{\mathrm{3D}}}\xspace for background and for signals with a mediator mass of 1\TeV and a dark pion mass of 5\GeV. \begin{figure}[hbtp]\centering \includegraphics[width=0.55\textwidth]{Figure_002.pdf} \caption{ Distributions of \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace for background (black) and for signals with a mediator mass of 1\TeV and a dark pion proper decay length of 25\mm, for various dark pion masses.} \label{fig:meanIP} \end{figure} \begin{figure}[hbtp]\centering \includegraphics[width=0.55\textwidth]{Figure_003.pdf} \caption{ Distributions of \ensuremath{\alpha_{\mathrm{3D}}}\xspace for background (black) and for signals with a mediator mass of 1\TeV and a dark pion mass of 5\GeV for dark pion proper decay lengths ranging from 1 to 300\mm.} \label{fig:a3D} \end{figure} Since the efficacy of the variables used to select emerging jets depends on the correct identification and reconstruction of the PV, additional selections are used to remove rare cases observed in simulated background events where the PV was either not reconstructed or a pileup vertex was chosen as the PV. We require that the chosen PV be the vertex with the largest scalar \pt sum of its associated tracks. 
We also require that the scalar \pt sum of tracks whose extrapolated separation in $z$ from the PV, at the point of closest approach, is less than 0.01\cm, be larger than 10\% of the sum over all tracks. Selected candidate events are required to have four jets with $\abs{\eta}<2.0$ and to pass a threshold on the scalar \pt sum of these jets (\HT). They must have either two jets tagged as emerging, or one jet tagged as emerging and large \ptmiss. The jet \PT thresholds and the emerging jet selection criteria were optimized for each signal model listed in Table~\ref{tab:sigpar} as follows. For each variable listed in Tables~\ref{tab:emjcut} and \ref{tab:cutsets}, a set of potential selection thresholds was chosen based on the distribution of the variable for signal and background. For each permutation of all the selection thresholds, we calculated the predicted pseudo-significance for each signal model, defined as $S/\sqrt{S+B+(0.1 B)^2}$, where $S$ and $B$ are the numbers of signal and background events and the factor $0.1$ represents an estimate of the relative systematic uncertainty. In order to limit the final number of background calculations, the pseudo-significances were used to find the minimum number of selection criteria for which the difference in pseudo-significance between the best selection thresholds and a chosen selection threshold is no more than 10\%, resulting in a total of seven selection sets. In Table~\ref{tab:emjcut}, the selection criteria used to select emerging jets are listed. These jet-level selection criteria, along with event-level kinematic selection criteria, comprise the final selection criteria, given in Table~\ref{tab:cutsets}. There are six groups of criteria used to select emerging jets.
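The threshold scan described above can be sketched schematically. The grid of candidate cuts and the yield function below are hypothetical stand-ins, not the values used in the analysis; only the figure of merit is the pseudo-significance quoted in the text.

```python
import itertools, math

def pseudo_significance(s, b):
    """S / sqrt(S + B + (0.1*B)^2), with 0.1 standing in for a 10%
    relative systematic uncertainty on the background."""
    return s / math.sqrt(s + b + (0.1 * b) ** 2)

def scan_thresholds(grids, yields):
    """Scan every permutation of candidate thresholds and return the
    combination maximizing the pseudo-significance. `grids` maps a
    variable name to its list of candidate cuts; `yields(cuts)`
    returns (S, B) for a threshold combination (hypothetical hooks)."""
    best_cuts, best_z = None, -1.0
    for combo in itertools.product(*grids.values()):
        cuts = dict(zip(grids.keys(), combo))
        s, b = yields(cuts)
        z = pseudo_significance(s, b)
        if z > best_z:
            best_cuts, best_z = cuts, z
    return best_cuts, best_z
```

In the analysis the scan is then pruned to the seven selection sets whose pseudo-significance is within 10\% of the per-model optimum.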
The seven selection sets used to define signal regions are given in Table~\ref{tab:cutsets} (sets 1 to 7), which gives the selections on kinematic variables, along with the corresponding emerging jet criteria from Table~\ref{tab:emjcut}. Two basic categories of selections emerge. Other than set 3, the signal region selection sets require two jets pass emerging jet criteria, and have no requirement on \ptmiss. Selection set 3 requires that one jet satisfies the emerging jet criteria, and includes a requirement on \ptmiss. Note that in addition to the \ptmiss requirement, the EMJ-3 group imposes the loosest criteria on \ensuremath{PU_{\mathrm{dz}}}\xspace and \ensuremath{D_{\mathrm{N}}}\xspace, and the tightest requirement on \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace, favoring more displaced tracks. Selection set 3 is used for signal models with dark pions with large proper decay lengths. The selection on \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace is large enough that it removes most events containing \cPqb\ quark jets with tracks with large impact parameters due to the \cPqb\ lifetime; most SM jets thus selected have tracks with large impact parameters due to misreconstruction. The substantive requirement on the \ptmiss for this selection set is essential to attain background rejection equivalent to that obtained when requiring two emerging jet candidates. Since the initial optimization only used a rough estimate of the systematic uncertainty, the final selection set for each model is chosen from among the seven as the one that gives the most stringent expected limit, taking into account more realistic systematic uncertainties. We also define two additional groups of jet-level criteria that are used to test the effectiveness of the background estimation methods, described in Section~\ref{sec:bkgd}. 
The EMJ-7 group has the same \ensuremath{PU_{\mathrm{dz}}}\xspace, \ensuremath{D_{\mathrm{N}}}\xspace, and \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace criteria as the EMJ-1 group, but loosens the requirement to $\ensuremath{\alpha_{\mathrm{3D}}}\xspace<0.4$, while the EMJ-8 group has the same \ensuremath{PU_{\mathrm{dz}}}\xspace and \ensuremath{D_{\mathrm{N}}}\xspace criteria as the EMJ-3 group, but loosens the requirements to $\ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace>0.10$ and $\ensuremath{\alpha_{\mathrm{3D}}}\xspace<0.5$. These two groups of jet-level criteria are more efficient for quark or gluon jets than those used for the final selections in the analysis, improving the statistical power of the tests. The acceptance of the selection criteria for signal events ranges from a few percent for models with a mediator mass of 400\GeV to 48\% for more massive mediators with a dark pion decay length of 25\mm. Figure~\ref{fig:sigeff} shows an example of the signal acceptance of models with a dark pion mass of 5\GeV as a function of the mediator mass and the dark pion proper decay length, with text indicating the corresponding selection set number. \begin{table}[htb]\centering \topcaption{Groups of requirements (associated operator indicated in parentheses) on the variables used in the identification of emerging jets. The groups EMJ-1 to -6 are used for the selection sets that define the signal regions, while the groups EMJ-7 and -8 are used to define SM QCD-enhanced samples for the tests of the background estimation methods.
\label{tab:emjcut}} \begin{tabular}{lcccc} \hline Criteria group & \ensuremath{PU_{\mathrm{dz}}}\xspace ($<$) [{\ensuremath{\text{cm}}\xspace}]& \ensuremath{D_{\mathrm{N}}}\xspace ($<$) & \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace ($>$) [{\ensuremath{\text{cm}}\xspace}] & \ensuremath{\alpha_{\mathrm{3D}}}\xspace ($<$)\\ \hline EMJ-1 & 2.5& 4& 0.05 & 0.25\\ EMJ-2 & 4.0& 4& 0.10 & 0.25\\ EMJ-3 & 4.0& 20& 0.25 & 0.25\\ EMJ-4 & 2.5& 4& 0.10 & 0.25\\ EMJ-5 & 2.5& 20& 0.05 & 0.25\\ EMJ-6 & 2.5& 10& 0.05 & 0.25\\ [-2.ex] \\ EMJ-7 & 2.5& 4& 0.05 & 0.40\\ EMJ-8 & 4.0& 20& 0.10 & 0.50\\ \hline \end{tabular} \end{table} \begin{table}[htb]\centering \topcaption{The seven optimized selection sets used for this search, and the two SM QCD-enhanced selections (sets 8 and 9) used in tests of the background estimation methods. The headers of the columns are: the scalar \pt sum of the four leading jets (\HT) [{\GeVns}], the requirements on the \pt of the jets ($p_\mathrm{T,i}$) [{\GeVns}], the requirement on \ptmiss [{\GeVns}], the minimum number of the four leading jets that pass the emerging jet selection (\ensuremath{n_\mathrm{EMJ}}\xspace), and the EMJ criteria group described in Table~\ref{tab:emjcut}. The last column is the total number of models defined in Table~\ref{tab:sigpar} for which the associated selection set gives the best expected sensitivity. \label{tab:cutsets}} \cmsTable{ \begin{tabular}{cccccccccc} \hline Set number & \HT & $p_\mathrm{T,1}$ & $p_\mathrm{T,2}$ & $p_\mathrm{T,3}$ & $p_\mathrm{T,4}$ & $\ptmiss$ & $\ensuremath{n_\mathrm{EMJ}}\xspace (\geq)$ & EMJ group & no. 
models\\ \hline 1 & 900 & 225 & 100 & 100 & 100 & 0 & 2 & 1 & 12\\ 2 & 900 & 225 & 100 & 100 & 100 & 0 & 2 & 2 & 2\\ 3 & 900 & 225 & 100 & 100 & 100 & 200 & 1 & 3 & 96\\ 4 & 1100 & 275 & 250 & 150 & 150 & 0 & 2 & 1 & 49\\ 5 & 1000 & 250 & 150 & 100 & 100 & 0 & 2 & 4 & 41\\ 6 & 1000 & 250 & 150 & 100 & 100 & 0 & 2 & 5 & 33\\ 7 & 1200 & 300 & 250 & 200 & 150 & 0 & 2 & 6 & 103\\ [-2.ex] \\ 8 & 900 & 225 & 100 & 100 & 100 & 0 & 2 & 7 & \multirow{2}{*}{SM QCD-enhanced}\\ 9 & 900 & 225 & 100 & 100 & 100 & 200 & 1 & 8 & \\ \hline \end{tabular} } \end{table} \begin{figure}[hbtp]\centering \includegraphics[width=0.8\textwidth] {Figure_004.pdf} \caption{The signal acceptance A, defined as the fraction of simulated signal events passing the selection criteria, for models with a dark pion mass \ensuremath{m_{\pi_\mathrm{DK}}}\xspace of 5\GeV as a function of the mediator mass \ensuremath{m_{\mathrm{X_{DK}}}}\xspace and the dark pion proper decay length \ensuremath{c\tau_{\pi_\mathrm{DK}}}\xspace. The corresponding selection set number for each model is indicated as text on the plot.} \label{fig:sigeff} \end{figure} \section{Background estimation}\label{sec:bkgd} The production of events containing four SM jets can mimic the signal when two of the jets pass the emerging jet criteria, or when one passes and jet mismeasurement results in artificial \ptmiss. The background contributions for each of the selection sets are calculated in two different ways, using the probability for an SM QCD jet to pass the emerging jet requirements. 
In the first method, for selection sets 3 and 9 that require at least one emerging jet candidate and \ptmiss, the background is calculated using Eq.~\eqref{eqn:fakeit2}, \begin{linenomath} \begin{equation} \ensuremath{N_{\text{bkg,EMJ}}}\xspace=\sum_{\mathrm{events}}\ensuremath{P_\mathrm{EMJ}}\xspace, \label{eqn:fakeit2} \end{equation} \end{linenomath} where \ensuremath{N_{\text{bkg,EMJ}}}\xspace is the predicted background and \ensuremath{P_\mathrm{EMJ}}\xspace is the probability for at least one of the four leading \pt jets to pass the emerging jet criteria. The sum is over all events in a ``control sample'' defined using all the selection requirements for this set except for the requirement of at least one emerging jet candidate. Instead, events are vetoed if one of the four leading \pt jets passes the emerging jet selection. The misidentification probability of each jet is calculated using Eq.~\eqref{eqn:avgfake}. \begin{linenomath} \begin{equation} \ensuremath{\epsilon_{\mathrm{f}}}\xspace = \ensuremath{\epsilon_{\text{fb}}}\xspace \ensuremath{f_{\mathrm{b}}}\xspace +\ensuremath{\epsilon_{\mathrm{fl}}}\xspace \left( 1-\ensuremath{f_{\mathrm{b}}}\xspace \right) \label{eqn:avgfake} \end{equation} \end{linenomath} Here \ensuremath{\epsilon_{\text{fb}}}\xspace is the misidentification probability for \cPqb\ jets, \ensuremath{\epsilon_{\mathrm{fl}}}\xspace is the misidentification probability for light-flavor jets, and \ensuremath{f_{\mathrm{b}}}\xspace is the probability that the jet is a \cPqb\ jet. The methodology used to estimate \ensuremath{\epsilon_{\text{fb}}}\xspace, \ensuremath{\epsilon_{\mathrm{fl}}}\xspace, and \ensuremath{f_{\mathrm{b}}}\xspace is described below. The probability \ensuremath{P_\mathrm{EMJ}}\xspace is calculated as shown in Eq.~\eqref{eqn:fakeit}. 
\begin{linenomath} \begin{equation} \begin{split} \ensuremath{P_\mathrm{EMJ}}\xspace&=\sum_{i\in \mathrm{jets}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \prod_{j\neq i}\left( 1- \ensuremath{\epsilon_{\mathrm{f}}}\xspace \right) \\ &+\frac{1}{2}\sum_{i, j\in \mathrm{jets}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \prod_{k\neq i, j}\left( 1- \ensuremath{\epsilon_{\mathrm{f}}}\xspace \right)\\ &+\frac{1}{3}\sum_{i, j, k\in \mathrm{jets}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \prod_{m\neq i, j, k}\left( 1- \ensuremath{\epsilon_{\mathrm{f}}}\xspace \right) + \frac{1}{4}\sum_{i,j,k,m \in \mathrm{jets}}\ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \end{split} \label{eqn:fakeit} \end{equation} \end{linenomath} The other selection sets (1 to 8, excluding set 3) require at least two of the four \pt leading jets to pass emerging jet selection requirements. The background is estimated using Eq.~\eqref{eqn:fakeit2} as well, except that the control sample requires exactly one jet to pass the corresponding emerging jet criteria as well as all other selection requirements for the selection set. In this case, \ensuremath{P_\mathrm{EMJ}}\xspace is the probability for one additional jet to pass the emerging jet requirements, and is calculated using Eq.~\eqref{eqn:fakeit3}. 
\begin{linenomath} \begin{equation} \begin{split} P_{\mathrm{EMJ}}&=\frac{1}{2} \sum_{i\in \mathrm{jets\,not\,candidate}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \prod_{j\neq i}\left( 1- \ensuremath{\epsilon_{\mathrm{f}}}\xspace \right)\\ &+\frac{1}{3} \sum_{i, j\in \mathrm{jets\,not\,candidate}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \prod_{k\neq i, j}\left( 1- \ensuremath{\epsilon_{\mathrm{f}}}\xspace \right)\\ &+\frac{1}{4} \sum_{i, j, k\in \mathrm{jets\,not\,candidate}} \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \ensuremath{\epsilon_{\mathrm{f}}}\xspace \end{split} \label{eqn:fakeit3} \end{equation} \end{linenomath} In Eq.~\eqref{eqn:fakeit3} the sums are over jets that do not pass the emerging jet selection criteria. The probability for an SM jet to pass the emerging jet selection criteria (misidentification) depends on the flavor of the jet and on the number of tracks associated with the jet. The probability for a jet initiated by a \cPqb\ quark (\cPqb\ jet) to pass the selection can be a factor of ten larger than that for a jet initiated by any other type of parton (light-flavor jet). For EMJ-3, because of the requirement that \ensuremath{\langle IP_{\mathrm{2D}}\rangle}\xspace be large, the misidentification probability for \cPqb\ jets and light-flavor jets is similar. The misidentification probability has a strong dependence on track multiplicity, ranging from a few percent at low track multiplicities, to values several orders of magnitude smaller at the highest multiplicities. The misidentification probability is measured as a function of track multiplicity using a sample of events collected with a trigger that requires the presence of an isolated photon with $\PT>165\GeV$. We do not expect any signal contamination in this sample. Two subsamples are created: one with an enhanced and one with a suppressed \cPqb\ quark fraction.
The sample with an enhanced fraction of \cPqb\ jets is selected by requiring the event to contain at least one additional jet with $\pt>50\GeV$, beyond the one used in the misidentification probability calculation, that has a value for the discriminator of the CSVv2 algorithm greater than 0.8. The sample with suppressed probability of containing a \cPqb\ jet requires an additional jet with $\pt>50\GeV$ with a CSVv2 discriminator value below 0.2. The \cPqb\ quark fraction of each subsample \ensuremath{f_{\mathrm{b}}}\xspace is determined by fitting the observed distribution of the CSVv2 discriminator to the sum of two templates, one created using simulated \cPqb\ jets and the other simulated light-flavor jets. The misidentification probability as a function of the initiating parton type can then be calculated as follows: \begin{linenomath} \begin{equation} \begin{pmatrix} \ensuremath{\epsilon_{\text{fb}}}\xspace \\ \ensuremath{\epsilon_{\mathrm{fl}}}\xspace \end{pmatrix} = \begin{pmatrix} \frac{1-\ensuremath{f_{\mathrm{b2}}}\xspace}{\ensuremath{f_{\mathrm{b1}}}\xspace-\ensuremath{f_{\mathrm{b2}}}\xspace} & \frac{-(1-\ensuremath{f_{\mathrm{b1}}}\xspace)}{\ensuremath{f_{\mathrm{b1}}}\xspace-\ensuremath{f_{\mathrm{b2}}}\xspace} \\ \frac{-\ensuremath{f_{\mathrm{b2}}}\xspace}{\ensuremath{f_{\mathrm{b1}}}\xspace-\ensuremath{f_{\mathrm{b2}}}\xspace} & \frac{\ensuremath{f_{\mathrm{b1}}}\xspace}{\ensuremath{f_{\mathrm{b1}}}\xspace-\ensuremath{f_{\mathrm{b2}}}\xspace}\end{pmatrix} \begin{pmatrix} {\ensuremath{\epsilon_{\mathrm{f1}}}\xspace} \\ {\ensuremath{\epsilon_{\mathrm{f2}}}\xspace} \end{pmatrix}, \label{eqn:fakerateformula} \end{equation} \end{linenomath} where \ensuremath{\epsilon_{\mathrm{f1}}}\xspace, \ensuremath{f_{\mathrm{b1}}}\xspace, \ensuremath{\epsilon_{\mathrm{f2}}}\xspace, and \ensuremath{f_{\mathrm{b2}}}\xspace represent the respective misidentification probability and \cPqb\ jet fraction in the two samples. 
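The chain from the two-sample unmixing of Eq.~\eqref{eqn:fakerateformula} through the flavor average of Eq.~\eqref{eqn:avgfake} to a per-event probability can be sketched as follows. This is an illustrative reimplementation with invented numbers, not the analysis code; the last function is the complementary-probability shortcut for the chance that at least one jet is misidentified.

```python
def solve_flavor_misid(eps_f1, f_b1, eps_f2, f_b2):
    """Invert the 2x2 system: the misidentification probability
    measured in the b-enhanced (1) and b-suppressed (2) samples is a
    flavor mixture, eps_fi = eps_fb*f_bi + eps_fl*(1 - f_bi)."""
    det = f_b1 - f_b2
    eps_fb = ((1 - f_b2) * eps_f1 - (1 - f_b1) * eps_f2) / det
    eps_fl = (-f_b2 * eps_f1 + f_b1 * eps_f2) / det
    return eps_fb, eps_fl

def jet_misid(eps_fb, eps_fl, f_b):
    """Per-jet misidentification probability averaged over the
    b-jet fraction f_b of the sample."""
    return eps_fb * f_b + eps_fl * (1 - f_b)

def p_at_least_one(eps_list):
    """Probability that at least one jet in the list is
    misidentified, via the complement of 'none pass'."""
    p_none = 1.0
    for e in eps_list:
        p_none *= 1.0 - e
    return 1.0 - p_none
```

A quick consistency check is that remixing the solved \ensuremath{\epsilon_{\text{fb}}}\xspace and \ensuremath{\epsilon_{\mathrm{fl}}}\xspace with either sample's \cPqb\ fraction reproduces the probability measured in that sample.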
Figure~\ref{fig:misID_cut1a} shows the measured misidentification probability for the EMJ-1 criteria group. To convolve the misidentification probabilities with the kinematic characteristics and parton composition of the kinematic samples using Eqs.~\eqref{eqn:fakeit} and \eqref{eqn:fakeit3}, the parton composition of each kinematic sample is determined by fitting the CSVv2 distribution to \cPqb\ jet and light-flavor jet templates obtained from MC simulation. Figure~\ref{fig:misID_cut1b} shows the resulting fit for the kinematic sample of selection set 1. The \cPqb\ quark content, \ensuremath{f_{\mathrm{b}}}\xspace, is determined separately for all events and for events with at least one jet passing the emerging jet criteria. The first is used for predicting the background for selection set 3, which is the only selection set to require just one emerging jet; the second is used for the other selection sets. \begin{figure}[hbtp]\centering \includegraphics[width=0.45\textwidth]{Figure_005.pdf} \caption{Measured misidentification probability distribution as a function of track multiplicity for the EMJ-1 criteria group defined in Table~\ref{tab:emjcut}. The red up-pointing triangles are for \cPqb\ jets while the blue down-pointing triangles are for light-flavor jets. The horizontal lines on the data points indicate the variable bin width.
The uncertainty bars represent the statistical uncertainties of \ensuremath{\epsilon_{\mathrm{f1}}}\xspace, \ensuremath{\epsilon_{\mathrm{f2}}}\xspace, \ensuremath{f_{\mathrm{b1}}}\xspace, and \ensuremath{f_{\mathrm{b2}}}\xspace in Eq.~\eqref{eqn:fakerateformula}, where the uncertainties in \ensuremath{\epsilon_{\mathrm{f1}}}\xspace and \ensuremath{\epsilon_{\mathrm{f2}}}\xspace correspond to Clopper-Pearson intervals~\cite{ClopperPearson}.} \label{fig:misID_cut1a} \end{figure} \begin{figure}[hbtp]\centering \includegraphics[width=0.45\textwidth]{Figure_006.pdf} \caption{ Determination of the \cPqb\ jet fraction by fitting the CSVv2 discriminator distribution. The red and blue distributions are the CSVv2 discriminator templates of \cPqb\ jets and light-flavor jets, respectively. The black points with uncertainty bars show the data distribution. The uncertainties in the upper panel include statistical uncertainties of the \cPqb\ jet and light-flavor jet templates, and the fit uncertainties, summed in quadrature. The goodness of fit is given by the $\chi^2$ divided by the number of degrees of freedom (ndof). The bottom panel shows the difference between data and the fit result, divided by the combination of the statistical uncertainty of data and the uncertainty from the upper panel. The distributions are derived from kinematic samples resulting from selection set 1 in Table~\ref{tab:cutsets}.} \label{fig:misID_cut1b} \end{figure} The method for estimating the background was tested by using the same procedure on simulated samples, verifying that the predicted number of selected events was in good agreement with the results obtained when applying the selection criteria to the samples. 
For example, the average expected numbers of events obtained by applying the background estimation method to simulated samples (average expected numbers of events passing the selection in simulated samples) are $207 \pm 30~(231 \pm 18)$ and $52.8 \pm 9.2~(52.1 \pm 6.2)$ for selection sets 8 and 9, respectively. The background estimation method was also verified using data in the SM QCD-enhanced regions, and the predicted (observed) numbers of events are $317 \pm 35~(279)$ and $115 \pm 28~(98)$, as shown in Figs.~\ref{fig:data2tag} and~\ref{fig:data1tag} for selection sets 8 and 9, respectively. The uncertainty in each predicted number combines the uncertainty due to the number of events in the control sample and the statistical uncertainties in the misidentification probabilities. \begin{figure}[hbtp]\centering \includegraphics[width=0.42\textwidth]{Figure_007-a.pdf} \includegraphics[width=0.42\textwidth]{Figure_007-b.pdf} \caption{The \HT (left) and number of associated tracks (right) distributions for the observed data events (black points) and the predicted background estimation (blue) for selection set 8 (SM QCD-enhanced), requiring at least two jets tagged by loose emerging jet criteria. The bottom panel shows the difference between observed data and predicted background, divided by the sum in quadrature of the statistical uncertainty in data and the predicted uncertainties from misidentification probability estimation.} \label{fig:data2tag} \end{figure} \begin{figure}[hbtp]\centering \includegraphics[width=0.42\textwidth]{Figure_008-a.pdf} \includegraphics[width=0.42\textwidth]{Figure_008-b.pdf} \caption{The \HT (left) and number of associated tracks (right) distributions of the observed data events (black points) and the predicted background estimation (blue) for selection set 9 (SM QCD-enhanced), requiring at least one jet tagged by loose emerging jet criteria and large \ptmiss.
The bottom panel shows the difference between observed data and predicted background, divided by the sum in quadrature of the statistical uncertainty in data and the predicted uncertainties from misidentification probability estimation.} \label{fig:data1tag} \end{figure} The background estimation was also tested using a second method for estimating the fraction of \cPqb\ jets in the control samples. The distribution of the measured number of \cPqb\ jets (\ensuremath{n_{\text{btag}}}\xspace) per event in a sample is related to the distribution of the true number of \cPqb\ jets per event, the distribution of the true number of non-\cPqb\ jets, the identification probability for \cPqb\ jets, and the misidentification probability for non-\cPqb\ jets. This relationship can be written in the form of a matrix: \begin{linenomath} \begin{equation} \begin{pmatrix} N_{\mathrm{m,0}} \\ N_{\mathrm{m,1}} \\ N_{\mathrm{m,2}} \\ N_{\mathrm{m,3}} \\ N_{\mathrm{m,4}} \end{pmatrix} = \begin{pmatrix} A_{\mathrm{0,0}} & A_{\mathrm{0,1}} &A_{\mathrm{0,2}} & A_{\mathrm{0,3}}& A_{\mathrm{0,4}} \\ A_{\mathrm{1,0}} & A_{\mathrm{1,1}} &A_{\mathrm{1,2}} & A_{\mathrm{1,3}}& A_{\mathrm{1,4}} \\ A_{\mathrm{2,0}} & A_{\mathrm{2,1}} &A_{\mathrm{2,2}} & A_{\mathrm{2,3}}& A_{\mathrm{2,4}} \\ A_{\mathrm{3,0}} & A_{\mathrm{3,1}} &A_{\mathrm{3,2}} & A_{\mathrm{3,3}}& A_{\mathrm{3,4}} \\ A_{\mathrm{4,0}} & A_{\mathrm{4,1}} &A_{\mathrm{4,2}} & A_{\mathrm{4,3}}& A_{\mathrm{4,4}} \end{pmatrix} \begin{pmatrix} N_{\mathrm{t,0}} \\ N_{\mathrm{t,1}} \\ N_{\mathrm{t,2}} \\ N_{\mathrm{t,3}} \\ N_{\mathrm{t,4}} \end{pmatrix}, \label{eqn:method2} \end{equation} \end{linenomath} where $N_{\mathrm{t,i}}$ is the number of events with $\mathrm{i}$ \cPqb\ jets and $\mathrm{4-i}$ non-\cPqb\ jets, $N_{\mathrm{m,i}}$ is the number of events with $\mathrm{i}$ jets passing the CSVv2 loose identification requirements and $\mathrm{4-i}$ failing them, and $A_{\mathrm{i,j}}$ is the appropriate combination of the CSVv2 
efficiencies for a \cPqb\ jet to pass the identification requirement and for a non-\cPqb\ jet to pass the identification requirement, including combinatorics. As these probabilities depend on the jet kinematics, the value used is a weighted sum over the jets in the events. This matrix can be inverted to get the number of events as a function of true \cPqb\ jet multiplicity from the number of events as a function of the number of identified \cPqb\ jets. Once the true \cPqb\ jet and non-\cPqb\ jet multiplicities are known, the misidentification probabilities measured from the photon+jets data can be applied. To build the matrix, first a sample of events passing all the selection requirements of a selection set, except the requirement on the number of emerging jet candidates, is selected. This sample is dominated by SM four-jet production. The number of events with zero, one, two, three, or all of the four leading jets satisfying the CSVv2 loose working point is counted, and the array described in Eq.~\eqref{eqn:method2} is constructed. The array is inverted to obtain the probability $w(\{\nu\},\ensuremath{n_{\text{btag}}}\xspace)$ for each of the $\{\nu\}$ possibilities for the true number of \cPqb\ quarks (0--4). The background is then calculated using Eq.~\eqref{eq:qcdbkg_bu1}, where each probability is weighted with the appropriate combination of misidentification probabilities, efficiencies, and their combinatorics. 
\begin{linenomath} \begin{equation} \ensuremath{N_{\text{bkg,EMJ}}}\xspace(\ensuremath{n_\mathrm{EMJ}}\xspace)=\sum_{\mathrm{events}}\sum^{4}_{\nu=0} \ensuremath{P_\mathrm{EMJ}}\xspace(\ensuremath{n_\mathrm{EMJ}}\xspace|\{\nu|\ensuremath{n_{\text{btag}}}\xspace\}) \label{eq:qcdbkg_bu1} \end{equation} \end{linenomath} The probability \ensuremath{P_\mathrm{EMJ}}\xspace represents the probability of having at least \ensuremath{n_\mathrm{EMJ}}\xspace jets pass the emerging jet selections given $\nu$ true \cPqb\ jets, and is calculated using Eq.~\eqref{eq:qcdbkg_bu2}. \begin{linenomath} \begin{equation} \begin{split} &\ensuremath{P_\mathrm{EMJ}}\xspace(\ensuremath{n_\mathrm{EMJ}}\xspace|\{\nu|\ensuremath{n_{\text{btag}}}\xspace\})=\sum_{\{\ensuremath{n_\mathrm{EMJ}}\xspace|\{\nu\}\}} \frac{w(\{\nu\},\ensuremath{n_{\text{btag}}}\xspace)}{\ensuremath{n_{\text{comb}}}\xspace(\nu)} \prod_{i \in \{\ensuremath{n_\mathrm{EMJ}}\xspace\}}p_{i}\prod_{j \neq i}(1-p_{j})\\ &p_{k}=p_{k}(\ensuremath{\varphi(\{\nu\})}\xspace)= \begin{cases} \ensuremath{\epsilon_{\text{fb}}}\xspace\\ \ensuremath{\epsilon_{\mathrm{fl}}}\xspace \end{cases}\\ &\ensuremath{n_{\text{comb}}}\xspace(\nu)=\binom{4}{\nu}=\frac{4!}{\nu!(4-\nu)!} \end{split} \label{eq:qcdbkg_bu2} \end{equation} \end{linenomath} Here $p_{k}$ is the flavor-dependent misidentification probability of jet $k$, and \ensuremath{\varphi(\{\nu\})}\xspace represents all possible flavor assignments of the four jets. The combinatoric factor (\ensuremath{n_{\text{comb}}}\xspace) is the binomial coefficient, to account for combinatorics in each permutation in $\{\nu\}$. The respective numbers of predicted background events for selection sets 8 and 9 are $209.2 \pm 1.3 $ and $53.1 \pm 1.2$ in simulated samples, and are $312.2 \pm 2.0$ and $112.0 \pm 1.6$ for data in SM QCD-enhanced regions. The predicted numbers include only the uncertainty due to the control sample event statistics. 
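As an illustration of the matrix method, the toy below assumes a single \cPqb-tagging efficiency and a single mistag probability per jet (a simplification; the analysis uses kinematics-weighted values). Under that assumption the element $A_{i,j}$ of Eq.~\eqref{eqn:method2} is a product of two binomial factors, and the system is solved by direct elimination.

```python
from math import comb

N_JETS = 4

def tag_matrix(eff_b, eff_l):
    """A[i][j]: probability that an event with j true b jets (and
    N_JETS-j non-b jets) is observed with i b-tagged jets, for single
    per-jet tag probabilities eff_b (b jets) and eff_l (mistags)."""
    def binom(n, k, p):
        return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0
    A = [[0.0] * (N_JETS + 1) for _ in range(N_JETS + 1)]
    for j in range(N_JETS + 1):          # true b multiplicity
        for i in range(N_JETS + 1):      # measured tag multiplicity
            A[i][j] = sum(binom(j, k, eff_b) * binom(N_JETS - j, i - k, eff_l)
                          for k in range(0, min(i, j) + 1))
    return A

def solve(A, m):
    """Solve A x = m by Gauss-Jordan elimination with partial
    pivoting, recovering the true b-jet multiplicity spectrum x
    from the measured tag-count spectrum m."""
    n = len(m)
    M = [row[:] + [m[r]] for r, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]
```

With per-event weighted efficiencies the matrix differs event by event, but the structure of the inversion is the same.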
The predictions are in good agreement with the primary background estimation method. \section{Systematic uncertainties}\label{sec:syst} The main sources of systematic uncertainty in the background estimate are due to the limited number of events in the photon+jets data and in the simulated samples used for the misidentification probability estimation. Two other sources are the uncertainties in the determination of \ensuremath{f_{\mathrm{b}}}\xspace for each of the samples used in the misidentification probability determination and the uncertainties due to differences in the composition of the non-\cPqb\ jets in the sample used in determining the misidentification probability compared to that in the kinematic samples. We estimate the first uncertainty by using the value of \ensuremath{f_{\mathrm{b}}}\xspace predicted by simulation instead of that obtained by the template fit. We estimate the second uncertainty by using the method on MC simulation. The uncertainty is estimated as the difference in the prediction when using a misidentification probability determined using an MC sample of events containing a high-\PT photon and when using a misidentification probability determined using an MC sample of SM QCD multijet production. The estimated resulting uncertainty for each selection set is given in Table~\ref{tab:bckuncertainties}. \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}} \newcolumntype{d}{D{.}{.}{-1} } \begin{table}[htb] \topcaption{ Systematic uncertainties affecting the background estimate from control samples in data. For the definition of the selection sets, see Table~\ref{tab:cutsets}. 
} \label{tab:bckuncertainties} \centering \begin{tabular}{cC{4.5cm}d} \hline \multirow{2}{*}{Set number} & \multicolumn{2}{c}{Source of uncertainty (\%)} \\ & \cPqb\ quark fraction & \multicolumn{1}{c}{non-\cPqb\ quark composition} \\ \hline 1 & 2.8 & 1.4 \\ 2 & 0.6 & 4.4 \\ 3 & 2.9 & 28.3 \\ 4 & 5.0 & 4.4 \\ 5 & 0.9 & 4.0 \\ 6 & 1.6 & 2.1 \\ 7 & 1.0 & 6.3 \\ \hline \end{tabular} \end{table} The main source of uncertainty in the estimation of the signal acceptance is the modeling of displaced tracks in the simulation. Other sources include uncertainties in PDFs, MC modeling of the trigger efficiency, integrated luminosity determination, jet energy scale (JES), pileup reweighting, and statistical uncertainties due to the limited size of the MC samples. Systematic uncertainties are largest for the models with the shortest decay lengths. The uncertainty due to the track modeling in simulation is evaluated by smearing the tracks in signal events using the resolution functions that respectively transform the simulated distributions of $\ensuremath{z_{\mathrm{PV}}}\xspace-\ensuremath{z_{\text{trk}}}\xspace$ and 2D impact parameter in photon+jet MC samples so that they agree with those in data. The change in signal acceptance when using this transformation is taken as the uncertainty. The acceptance is evaluated using both the MC trigger selection and using a trigger efficiency determined using SM QCD multijet events. The difference is taken as an uncertainty in the acceptance. The uncertainty in the integrated luminosity determination is 2.5\%~\cite{cms_lumi}. The uncertainty due to pileup modeling is measured by varying the total inelastic cross section by 4.6\%~\cite{Sirunyan2018} and reweighting the simulation accordingly. The effect of the JES uncertainty is evaluated by shifting the \pt of jets by the JES uncertainty, and measuring its effect on signal acceptance~\cite{Khachatryan:2016kdb}. The shift in signal acceptance is taken as the uncertainty. 
We account for variations of the acceptance due to the PDF uncertainties following the PDF4LHC prescription~\cite{Butterworth2016}. The resulting ranges of the systematic uncertainties are given in Table~\ref{tab:signcertainties}. \begin{table}[htb] \topcaption{ Ranges of systematic uncertainties over all models given in Table~\ref{tab:sigpar} for which a 95\% \CL exclusion is expected, for the uncertainties from different sources. } \label{tab:signcertainties} \centering \begin{tabular}{l r@{\hspace{1.7em}}r@{ -- }ll} \hline Source & \multicolumn{4}{c}{Uncertainty (\%)} \\ \hline Track modeling & & {\textless}1 & 3 \\ MC event count & & 2 & 17 \\ Integrated luminosity & & \multicolumn{2}{c}{2.5} \\ Pileup & & {\textless}1 & 5 \\ Trigger & & 6 & 12 \\ JES & & {\textless}1 & 9 \\ PDF & & {\textless}1 & 4 \\ \hline \end{tabular} \end{table} \section{Results}\label{sec:results} The number of events passing each selection set, along with the background expectation, is given in Table~\ref{tab:results}. Figure~\ref{fig:eventdisplay} shows a graphical representation of one of the events passing the selection requirements. This event passes both selection set 1 and selection set 5. The display on the left shows the four jets. The display on the right shows the reconstructed tracks in the $\rho$--$\phi$ view. The filled circles represent reconstructed secondary vertices, while the grey lines represent the innermost layer of the silicon pixel tracker. \newcolumntype{e}{D{.}{.}{+2} } \newcolumntype{f}{D{.}{.}{+3} } \newcolumntype{g}{D{.}{.}{+4} } \begin{table}[htb] \centering \topcaption{Expected ($\mathrm{mean} \pm \mathrm{syst}_1 \pm \mathrm{syst}_2$) and observed event yields for each selection set. 
Uncertainties due to the limited number of events in the control sample and statistical uncertainties in the misidentification probabilities are denoted by ``syst$_1$'', while ``syst$_2$'' combines the systematic uncertainty sources discussed in Table~\ref{tab:bckuncertainties}. The ``Signal'' column shows the expected event yield for the heaviest mediator mass that can be excluded for each set, with the systematic uncertainties from sources discussed in Table~\ref{tab:signcertainties} summed in quadrature. The associated model parameters are specified in the last three columns. } \label{tab:results} \cmsTable{ \begin{tabular}{ c r@{ }c@{ }r@{ }c@{ }r e r@{ }r@{ }c@{ }r@{ }r egf} \hline \multirow{2}{*}{Set number} & \multicolumn{5}{c}{\multirow{2}{*}{Expected}} & \multicolumn{1}{c}{\multirow{2}{*}{Observed}} & & \multicolumn{3}{c}{\multirow{2}{*}{Signal}} & & \multicolumn{3}{c}{Model parameters}\\ & & & & & & & & & & & & \multicolumn{1}{c}{\ensuremath{m_{\mathrm{X_{DK}}}}\xspace [{\GeVns}]} & \multicolumn{1}{c}{\ensuremath{m_{\pi_\mathrm{DK}}}\xspace [{\GeVns}]} & \multicolumn{1}{c}{\ensuremath{c\tau_{\pi_\mathrm{DK}}}\xspace [{\ensuremath{\text{mm}}\xspace}]} \\ \hline 1 & 168 & $\pm$ & 15 & $\pm$ & 5 & 131 & & 36.7 & $\pm$ & 4.0 & & 600 & 5 & 1 \\ 2 & 31.8 & $\pm$ & 5.0 & $\pm$ & 1.4 & 47 & & (\,14.6 & $\pm$ & 2.6 & )$\times 10^2$ & 400 & 1 & 60 \\ 3 & 19.4 & $\pm$ & 7.0 & $\pm$ & 5.5 & 20 & & 15.6 & $\pm$ & 1.6 & & 1250 & 1 & 150 \\ 4 & 22.5 & $\pm$ & 2.5 & $\pm$ & 1.5 & 16 & & 15.1 & $\pm$ & 2.0 & & 1000 & 1 & 2 \\ 5 & 13.9 & $\pm$ & 1.9 & $\pm$ & 0.6 & 14 & & 35.3 & $\pm$ & 4.0 & & 1000 & 2 & 150 \\ 6 & 9.4 & $\pm$ & 2.0 & $\pm$ & 0.3 & 11 & & 20.7 & $\pm$ & 2.5 & & 1000 & 10 & 300 \\ 7 & 4.40 & $\pm$ & 0.84 & $\pm$ & 0.28 & 2 & & 5.61 & $\pm$ & 0.64 & & 1250 & 5 & 225 \\ \hline \end{tabular} } \end{table} \begin{figure}[hbtp]\vspace{1em}\centering {\includegraphics[width=0.49\textwidth]{Figure_009-a.png}} 
{\includegraphics[width=0.49\textwidth]{Figure_009-b.png}} \vspace{1em} \caption{ Event display of an event passing both selection set 1 and selection set 5. The event contains four jets (jets 1 and 4 pass the emerging jet criteria), consistent with the decay of two massive mediator particles, each decaying to an SM quark and a dark QCD quark. In such a scenario, the dark mesons produced in the fragmentation of the dark quark would decay back to SM particles via the mediator, resulting in displaced vertices with decay distances on the mm scale. (Left) 3D display: the green lines represent reconstructed tracks, the red (blue) truncated pyramids represent energy in the ECAL (HCAL) detectors, respectively. (Right) Reconstructed tracks in $\rho$--$\phi$ view. The filled blue circles represent reconstructed secondary vertices, while the filled red circle is the PV. The solid grey lines represent the innermost layer of the silicon pixel detector. } \label{fig:eventdisplay} \end{figure} No significant excess with respect to the SM prediction is observed. A 95\% confidence level (\CL) cross section upper bound is calculated following the modified frequentist \CLs prescription~\cite{Junk:1999kv,Read:2002hq,CMS-NOTE-2011-005}, using an asymptotic approximation~\cite{Cowan:2010js} for the profile likelihood ratio based test statistic, where the systematic uncertainties are taken as nuisance parameters. The 95\% \CL limits on the signal cross section, expected, and observed exclusion contours on signal parameters are shown in Fig.~\ref{fig:limitcurve} for $\ensuremath{m_{\pi_\mathrm{DK}}}\xspace=5\GeV$. The dependence of the limit on \ensuremath{m_{\pi_\mathrm{DK}}}\xspace is weak for \ensuremath{m_{\pi_\mathrm{DK}}}\xspace between 1 and 10\GeV. Dark pion decay lengths between 5 and 225\mm are excluded at 95\% \CL for dark mediator masses between 400 and 1250\GeV. Decay lengths smaller than 5 and greater than 225\mm are also excluded in the lower part of this mass range. 
\begin{figure}[hbtp]\centering \includegraphics[width=0.8\textwidth] {Figure_010.pdf} \caption{Upper limits at 95\% \CL on the signal cross section and signal exclusion contours derived from theoretical cross sections for models with dark pion mass \ensuremath{m_{\pi_\mathrm{DK}}}\xspace of 5\GeV in the $\ensuremath{m_{\mathrm{X_{DK}}}}\xspace-\ensuremath{c\tau_{\pi_\mathrm{DK}}}\xspace$ plane. The solid red contour is the expected upper limit, with its one standard-deviation region enclosed in red dashed lines. The solid black contour is the observed upper limit. The region to the left of the observed contour is excluded. } \label{fig:limitcurve} \end{figure} \section{Summary}\label{sec:summary} A search is presented for events consistent with the pair production of a heavy mediator particle that decays to a light quark and a new fermion called a dark quark, using data from proton-proton collisions at $\sqrt{s}=13\TeV$ corresponding to an integrated luminosity of 16.1\fbinv. The dark quark is assumed to be charged only under a new quantum-chromodynamics-like dark force, and to form an emerging jet via a parton shower, containing long-lived dark hadrons that give rise to displaced vertices when decaying to standard model hadrons. The data are consistent with the expected contributions from standard model processes. Limits are set at 95\% confidence level excluding dark pion decay lengths between 5 and 225\mm for dark mediators with masses between 400 and 1250\GeV. Decay lengths smaller than 5 and greater than 225\mm are also excluded in the lower part of this mass range. The dependence of the limit on the dark pion mass is weak for masses between 1 and 10\GeV. This analysis is the first dedicated search for the pair production of a new particle that decays to a jet and an emerging jet. 
\begin{acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA). \hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P. 
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science - EOS" - be.h project n. 30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
\section{Introduction} In our previous papers [1-6] published recently, a novel time-independent perturbation theory has been developed in the bound state domain, which is non-perturbative, self-consistent and systematically improvable, and has been used successfully to treat significant problems in different fields of physics. Gaining confidence from these applications, we aim in the present work to show that similar techniques can also be used in the continuum. In the next section we summarize the main ideas of our approach. The extension of the model to scattering states and the relationship to some other perturbation approaches are discussed in Section 3. The paper ends with a brief summary and concluding remarks. \section{The Model} Let us start with a brief review of the formalism to recall the compact form of the method, which provides easy access to the treatment in the continuum. For spherically symmetric potentials, the corresponding Schr\"{o}dinger equation in the bound state domain for the radial wave function has the form $(\hbar=2m=1)$ \begin{equation} \frac{\Psi''_{n}(r)}{\Psi_{n}(r)}=[V(r)-E_{n}], ~~~V(r)=\left[V_{0}(r)+\frac{\ell(\ell+1)}{r^{2}}\right]+\Delta{V(r)},~~~~n=0,1,2,..., \end{equation} where $V_{0}$ is an exactly solvable unperturbed potential together with the angular momentum barrier, while $\Delta V$ is a perturbing potential. We express the wave function $\Psi_{n}$ as a product \begin{equation} \Psi_{n}(r)=\chi_{n}(r)\phi_{n}(r), \end{equation} in which $\chi_{n}$ is the known normalized eigenfunction of the unperturbed Schr\"{o}dinger equation, whereas $\phi_{n}(r)$ is a moderating function corresponding to the perturbing potential. Substituting (2) into (1) yields \begin{equation} \left(\frac{\chi''_{n}}{\chi_{n}}+\frac{\phi''_{n}}{\phi_{n}}+2\frac{\chi'_{n}}{\chi_{n}}\frac{\phi'_{n}}{\phi_{n}}\right)=V-E_{n}.
\end{equation} Instead of setting the functions $\chi_{n}$ and $\phi_{n}(r)$ themselves, we set their logarithmic derivatives, \begin{equation} W_{n}=-\frac{\chi'_{n}}{\chi_{n}}, ~~~\Delta{W_{n}}=-\frac{\phi'_{n}}{\phi_{n}}, \end{equation} which leads to \begin{equation} \frac{\chi''_{n}}{\chi_{n}}=W_{n}^{2}-W'_{n}=\left[V_{0}(r)+\frac{\ell(\ell+1)}{r^{2}}\right]-\varepsilon_{n}, \end{equation} where $\varepsilon_{n}$ is the eigenvalue of the exactly solvable unperturbed potential, and \begin{equation} \left(\frac{\phi''_{n}}{\phi_{n}}+2\frac{\chi'_{n}}{\chi_{n}}\frac{\phi'_{n}}{\phi_{n}}\right)= \Delta {W^{2}_{n}}-\Delta {W'_{n}}+2W_{n}\Delta {W_{n}}=\Delta {V(r)}-\Delta\varepsilon_{n}, \end{equation} in which $\Delta\varepsilon_{n}$ is the energy value for the perturbed potential, leading to $E_{n}=\varepsilon_{n}+\Delta\varepsilon_{n}$. If the whole potential, including the perturbing piece $\Delta{V}$, can be solved analytically, then Eq.(1), through (5) and (6), reduces to \begin{equation} (W_{n}+\Delta {W_{n}})^{2}-(W_{n}+\Delta {W_{n}})'=V-E_{n}, \end{equation} which is known in the literature as the usual supersymmetric quantum mechanical treatment \cite{cooper}. However, if the whole potential has no analytical solution, as is the case considered in this Letter, then Eq.(6) cannot be solved exactly for $\Delta{W}$, and one can instead expand the functions in terms of the perturbation parameter $\lambda$, \begin{equation} \Delta{V(r;\lambda)}=\sum^{\infty}_{N=1}\lambda^{N}\Delta{V_{N}(r)}, ~~\Delta{W_{n}(r;\lambda)}=\sum^{\infty}_{N=1}\lambda^{N}\Delta{W_{nN}(r)},\nonumber ~~\Delta\varepsilon_{n}(\lambda)=\sum^{\infty}_{N=1}\lambda^{N}\Delta\varepsilon_{nN}, \end{equation} where $N$ denotes the perturbation order.
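The order-by-order bookkeeping implied by this expansion amounts to collecting powers of $\lambda$ in the quadratic term $(W_{n}+\Delta W_{n})^{2}$; the derivative terms are linear in the $\Delta W_{nN}$ and separate trivially. A minimal numerical sketch of this bookkeeping, with random numbers standing in for the values of the functions at a fixed point $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for W_n and the first three corrections DW_n1, DW_n2, DW_n3 at fixed r
W, w1, w2, w3 = rng.standard_normal(4)

# coefficients of (W + w1*lam + w2*lam^2 + w3*lam^3)^2 as a polynomial in lam
sq = np.polynomial.polynomial.polypow([W, w1, w2, w3], 2)

assert np.isclose(sq[1], 2 * W * w1)              # O(lambda)   quadratic piece
assert np.isclose(sq[2], w1**2 + 2 * W * w2)      # O(lambda^2) quadratic piece
assert np.isclose(sq[3], 2 * (W * w3 + w1 * w2))  # O(lambda^3) quadratic piece
```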
Substituting the above expansions into Eq.(6) and equating terms with the same power of $\lambda$ on both sides yields, up to for instance $O(\lambda^{3})$, \begin{equation} 2W_{n}\Delta {W_{n1}}-\Delta {W'_{n1}}=\Delta {V_{1}}-\Delta\varepsilon_{n1}, \end{equation} \begin{equation} \Delta{W^{2}_{n1}}+2W_{n}\Delta{W_{n2}}-\Delta{W'_{n2}}=\Delta{V_{2}}-\Delta\varepsilon_{n2}, \end{equation} \begin{equation} 2(W_{n}\Delta{W_{n3}}+\Delta{W_{n1}}\Delta{W_{n2}})-\Delta{W'_{n3}}=\Delta{V_{3}}-\Delta{\varepsilon_{n3}}. \end{equation} Eq.(6) and its expansion through Eqs.(9)-(11) give flexibility for easy calculation of the perturbative corrections to the energy and wave function of the $\textit{nth}$ state of interest through an appropriately chosen perturbed superpotential. It has been shown [1-6] that this feature of the present model leads to a simple framework for obtaining the corrections to all states without using complicated mathematical procedures. \section{Application to the scattering domain} It is well known that there are many scattering problems in which the interaction between the projectile and the target decomposes naturally into two parts $(V=V_{0}+\Delta{V})$. This division is especially useful if the scattering wave function under the action of one part $(V_{0})$ can be obtained exactly, while the effect of the other $(\Delta{V})$ can be treated in some approximation, as in the present formalism. For simplicity, we here confine ourselves to $s-$wave scattering from a potential which is assumed to vanish beyond a finite radius $R$. The associated total wavefunction behaves at large distances as \begin{equation} \Psi(r)=\frac{1}{k}\sin(kr+\delta) ,~~~r \geq R , \end{equation} where $\delta$ is the $s-$wave phase shift. Our present treatment of scattering concerns itself primarily with determining how the solutions of the free Schr\"{o}dinger equation are affected by the presence of the interaction.
Within the framework of the present formalism we suppose that the solutions of Eq.(5) are known, or are easily found, giving the corresponding phase shift $\delta_{0}$. Considering the expansion $\delta=\delta_{0}+\lambda\delta_{1}+\lambda^{2}\delta_{2}+...$, as in Eq.(8), we aim here to derive explicitly solvable and easily accessible expressions for the phase shift contributions at successive perturbation orders. \subsection{First-order phase shift correction} Keeping in mind Eq.(12) and considering the discussion in Section 2, at the first perturbation order one has \begin{equation} (W+\lambda\Delta{W_{1}})=-k\cot(kr+\delta_{0}+\lambda\delta_{1}),~~~W=-\frac{\chi'}{\chi}=-k\cot(kr+\delta_{0}), \end{equation} from which the superpotential associated with the perturbing interaction, \begin{equation} \Delta{W_{1}(r)}=\frac{k\delta_{1}}{\sin^{2}(kr+\delta_{0})}~, \end{equation} is obtained, assuming that $\sin\lambda\delta_{1}\cong\lambda\delta_{1}$ and $\cos\lambda\delta_{1}\cong 1$. In the second step, one needs to employ Eq.(9) to arrive at another expression for $\Delta{W_{1}}$. Rearranging the terms, $\Delta{W'_{1}}-2W\Delta{W_{1}}=(\Delta\varepsilon_{1}-\Delta{V_{1}})$, and multiplying both sides by the integrating factor $\exp(-2\int^{r}_{0}W(z)dz)$, which through Eq.(4) is the square of the unperturbed wave function $\chi^{2}(r)$, one obtains \begin{equation} \frac{d}{dr}\left[\chi^{2}(r)\Delta{W_{1}(r)}\right]=\chi^{2}(r)(\Delta\varepsilon_{1}-\Delta{V_{1}}). \end{equation} Integration, together with the removal of the $\Delta{\varepsilon_{1}}$ term since an elastic scattering process is considered here, yields \begin{equation} \Delta{W_{1}(r)}=-\frac{1}{\chi^{2}(r)}\int^{r}_{0}\chi^{2}(z) \Delta{V_{1}(z)}dz. \end{equation} As $\chi=\frac{1}{k}\sin(kr+\delta_{0})$ in the asymptotic region, comparison of Eqs.(14) and (16) reproduces the first-order change in the phase shift, \begin{equation} \delta_{1}=-k\int^{\infty}_{0}\chi^{2}(r) \Delta{V_{1}(r)}dr.
\end{equation} If necessary, the corresponding change in the wavefunction can easily be obtained by substituting Eq.(16) into (4), $\phi_{1}=\exp(-\int\Delta{W_{1}})$. To check the reliability of the expression obtained, Eq.(17), one may compare it with results reproduced by other methods. For example, in the limiting case where the unperturbed potential vanishes, the unperturbed $s-$wave function reduces to a plane wave, $\chi(r)=\sin(kr)/k$, and the first-order change in the phase shift becomes \begin{equation} \delta_{1}=-\frac{1}{k}\int^{\infty}_{0}\sin^{2}(kr) \Delta{V_{1}(r)}dr, \end{equation} which is just the first Born approximation for the phase shift \cite{thaler}. In addition, the well known expression for the $s-$wave scattering amplitude given by the two-potential formula in scattering theory \cite{thaler}, \begin{equation} f_{1}=-e^{2i\delta_{0}}\int^{\infty}_{0}\chi^{2}(r) \Delta{V_{1}(r)}dr, \end{equation} where the phase factor in front of the integral arises because of the standing wave boundary conditions, justifies our result once more, since $f_{1}=-e^{2i\delta_{0}}\delta_{1}/k$ and equating this to the above equation leads immediately to Eq.(17). The present result has widespread applicability, and may also be used in the treatment of scattering length problems. In the low-energy limit, the phase shift is related to the scattering length, $\delta_{k\rightarrow{0}}\rightarrow{-ka}$, where ${a}={a_{0}}+\lambda{a_{1}}+\lambda^{2}{a_{2}}+...$ may be expanded in a perturbation series similar to that of the phase shift. Outside the range of the potential, the unperturbed wave function behaves as $\chi\rightarrow(r-a_{0})$. Thus, the first correction to the scattering length is \begin{equation} a_{1}=\lim_{r\rightarrow\infty}\left[\int^{r}_{0}(z-a_{0})^{2}\Delta{V_{1}(z)}dz\right], \end{equation} which can be calculated for a given $\Delta{V_{1}}$. The scattering length has an important physical significance.
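Equations (17), (18) and (20) are easy to verify numerically. As a sketch (with an arbitrary illustrative well, in the units $\hbar=2m=1$ used here), take a weak attractive square well, $\Delta V_{1}(r)=-V_{0}$ for $r<R$, whose exact $s-$wave phase shift is $\delta=\arctan[(k/k')\tan(k'R)]-kR$ with $k'^{2}=k^{2}+V_{0}$:

```python
import math

def born_delta1(V0, R, k):
    """Eq.(18) for the square well: -(1/k) * int_0^R sin^2(kr) * (-V0) dr."""
    return (V0 / k) * (R / 2 - math.sin(2 * k * R) / (4 * k))

def exact_delta(V0, R, k):
    """Exact s-wave phase shift of the attractive square well (hbar = 2m = 1)."""
    kp = math.sqrt(k * k + V0)
    return math.atan((k / kp) * math.tan(kp * R)) - k * R

R, V0 = 1.0, 1e-3                      # weak well: first order should dominate
assert abs(born_delta1(V0, R, 1.0) - exact_delta(V0, R, 1.0)) < 1e-6

# Low-energy limit: delta_1 -> -k*a_1 with a_1 = int_0^R z^2 * (-V0) dz = -V0*R^3/3,
# i.e. Eq.(20) with a_0 = 0 for the free unperturbed problem.
k_small = 1e-3
a1 = -V0 * R**3 / 3
assert abs(born_delta1(V0, R, k_small) / k_small + a1) < 1e-6
```

The residual differences are of second order in $V_{0}$, consistent with the second-order correction derived below.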
In the low-energy limit only the $s-$wave makes a nonzero contribution to the cross section, so that the angular distribution of the scattering is spherically symmetric and the total cross section is $4\pi(a_{0}+\lambda{a_{1}}+...)^{2}$. This is also exactly the result obtained in most textbooks for the low-energy scattering of a hard sphere of radius $a$. Thus the scattering length is the effective radius of the target at zero energy. As a last example, consider the case of the angular momentum barrier as the unperturbed potential, $V_{0}=\ell(\ell+1)/r^{2}$, which produces $\left[rj_{\ell}(kr)\right]$ with a phase shift $\delta_{0}=-\ell\pi/2$. For a trivial perturbation let us choose $\Delta{V_{1}}=\lambda/r^{2}$, due to which the angular momentum is slightly perturbed, $\overline{\ell}\approx\ell+\lambda/(2\ell+1)+O(\lambda^{2})$. Therefore the first-order phase shift correction is $\delta_{1}=-\pi/[2(2\ell+1)]$. Again, this exact result confirms the reliability of Eq.(17). \subsection{Second-order phase shift correction} To solve Eq.(10) for $\Delta{W_{2}}$ we mimic the preceding calculation. The integrating factor is the same. In fact, examining Eqs.(9) and (10), the only difference is that the quantity $\Delta{V_{1}}-\Delta\varepsilon_{1}$ is replaced by $\Delta{V_{2}}-\Delta{W^{2}_{1}}-\Delta\varepsilon_{2}$. As the $\Delta\varepsilon_{2}$ term is zero for the process of interest, $\Delta{W_{2}}$ is thus \begin{equation} \Delta{W_{2}(r)}=-\frac{1}{\chi^{2}(r)}\int^{r}_{0}\chi^{2}(z) \left[\Delta{W^{2}_{1}(z)}-\Delta{V_{2}(z)}\right]dz.
\end{equation} Bearing in mind that $\chi=\frac{1}{k}\sin(kr+\delta_{0})$ for the region $r\geq{R}$, the second-order expansion of the superpotential, analogous to Eq.(13), provides another expression for $\Delta{W_{2}}$, which is \begin{equation} \Delta{W_{2}(r)}=k\delta_{1}^{2}\frac{\cot(kr+\delta_{0})}{\sin^{2}(kr+\delta_{0})}+\frac{k\delta_{2}}{\sin^{2}(kr+\delta_{0})}. \end{equation} Comparison of Eqs.(21) and (22), together with the substitution of (14) in (21), leads to an auxiliary function for the second-order phase shift correction, \begin{equation} \delta_{2}(r)=-\frac{1}{k}\int^{r}_{0}\Delta{V_{2}(z)}\sin^{2}(kz+\delta_{0}) dz +k\delta_{1}^{2}\int^{r}_{0}\frac{dz}{\sin^{2}(kz+\delta_{0})}-\delta_{1}^{2}\cot(kr+\delta_{0}), \end{equation} where a singularity appears in the second integral at $z=0$. This problem can be circumvented by replacing the lower limit of the integral with $R$. Assuming $\Delta{V}=\Delta{V_{1}}$, as in realistic problems of nuclear physics, which means that $\Delta{V_{2}}=0$, the $r-$dependent second-order phase shift correction takes the form \begin{equation} \delta_{2}(r)=\delta_{1}^{2}\cot(kR+\delta_{0})-2\delta_{1}^{2}\cot(kr+\delta_{0}). \end{equation} As an alternative treatment, which leads to a concrete comparison, one can go back to Eq.(21) and split the $\chi^{2}\Delta{W_{1}^{2}}$ term into two parts as $(\chi^{2}\Delta{W_{1}})(\Delta{W_{1}})$, allowing us to invoke Eq.(16). In this case the comparison of the result with the expansion in (22) gives \begin{equation} \delta_{2}=-k\int^{\infty}_{0}\chi^{2}(r)\Delta{V_{1}(r)}dr \int^{r}_{R}\frac{dz}{\chi^{2}(z)} \left[\int^{R}_{z}\chi^{2}(y)\Delta{V_{1}(y)}dy-\frac{\delta_{1}}{k}\right]+\delta_{1}^{2}\cot(kR+\delta_{0}), \end{equation} which is in agreement with the work in \cite{milward}. In addition, the use of (17) in (24) transforms it into Eq.(25).
Furthermore, the reader is reminded that the second Born approximation for the phase shift can be most easily derived using the variable phase equation approach \cite{calegero}, \begin{equation} \delta_{2}=2k^{2}\int^{\infty}_{0}\chi^{2}(r)\Delta{V_{1}(r)}\cot(kr)dr\int^{r}_{0}\chi^{2}(y)\Delta{V_{1}(y)}dy, \end{equation} which, in the light of Eq.(15), is the same result as we find from Eq.(25) by putting $\delta_{0}=0$. Higher order terms can also be evaluated in the same manner. \section{Concluding Remarks} The recently introduced time-independent perturbation theory has been successfully extended from the bound state region to the scattering domain. For clarity, the work has been carried out for $s-$wave scattering only. However, generalization of the formalism to higher partial waves in the scattering domain does not cause any problem. The inclusion of the centrifugal barrier contribution in the effective potential, for instance, leads to the replacement of the $s-$wave phase shift with $\delta_{\ell}-\ell\pi/2$, due to the related wave function $\chi(r)=\sin(kr+\delta_{\ell}-\ell\pi/2)/k$ in the asymptotic region, supposing both the unperturbed and perturbed potentials vanish at large $r>R_{1}$, which means that in the region $R_{1}<r\leq{R}$ there is then only the centrifugal barrier contribution. This inclusion simply requires repeating the present calculations with this replacement in the phase shift. It should be stressed that anything that can be achieved with the present formalism must also be obtainable from the works [9,10] in the literature. For instance, considering the bound state region, Bender's formalism \cite{bender} can be simplified by introducing the auxiliary function $F_{N}(r)$ such that the whole wave function is $\Psi_{N}(r)=\chi(r)F_{N}(r)$, where $N$ denotes the perturbation order.
The first-order correction can then be written as $\frac{d}{dr}\left[\chi^{2}\frac{dF}{dr}\right]=(\Delta{V_{1}}-\Delta\varepsilon_{1})\chi^{2}$, which corresponds exactly to the present treatment through Eq.(15) when we identify $\Delta{W_{1}}=-dF/dr$. The higher order calculations can be linked to ours in a similar manner. Likewise, the works of Milward and Wilkin \cite{milward} may be related to the present formalism in both domains, the bound state and scattering regions, by relating their probability density distributions/derivatives to our $\Delta{W}$ functions, such as $\Delta{W_{0}}=-P_{0}'/2P_{0}$ at the zeroth order, $\Delta{W_{1}}=(-P_{1}/2P_{0})'$ at the first order, $\Delta{W_{2}}=(-P_{2}/2P_{0})'$ at the second order, etc. Nevertheless, the present technique provides a clean and explicit route for the calculations without tedious and cumbersome integrals. The energy variation of the scattering wave function and phase shift can also be studied by perturbing in the energy. We wish to stress that all these effects depend purely upon the perturbation and the unperturbed wave function; explicit knowledge of the unperturbed potential is not necessary. This exposition will be deferred to a later publication.
\section{Generalization to multi-parameter and joint expansion in $\mathbf{\emph{T}}$ and $\pmb{\mu_B}$} Our starting point for the multi-parameter expansion is \autoref{eq:rw}. There, the expectation value is taken over gauge fields associated with $\{\beta_0, m_0, \mu_B=0\}$, where $\beta$ is the QCD gauge coupling, $S_G$ is the pure gauge action and the $\Gn{ij}$ are given by \autoref{eq:G}. For brevity, in this section, we will use the notations $\hat{\mu}_B\equiv\mu_B/T$, $\hat{m}\equiv m/T$ and $\Delta\hat{m}\equiv\hat{m}-\hat{m}_0$, and provide an explicit demonstration of the generalized resummation by keeping only leading order terms, \textit{i.e.} $\order{\Delta\beta}$, $\order{\Delta\hat{m}}$ and $\order{\hat{\mu}_B^2}$, in all expansions. Extensions to higher orders are straightforward. We consider the case where the temporal extent ($N_\tau$) of the lattice is kept fixed. In this case, $T$ is changed by varying the bare gauge coupling $\beta$, and the bare quark mass $m$ must be tuned with $\beta$ to keep vacuum hadron masses constant. Thus, $T=T(\beta, m)$ and $T_0=T_0(\beta_0, m_0)$. Applying the chain rule for derivatives and expanding $T(\beta, m)$ around $(\beta_0, m_0)$, one gets \begin{equation} \left. \Delta T\pdv{}{T} \right\vert_{T_0} = \left. \Delta\beta\pdv{}{\beta} \right\vert_{\beta_0} + \left. \Delta\hat{m}\pdv{}{\hat{m}} \right\vert_{\hat{m}_0} + \dots \,. \label{eq:pdvT} \end{equation} With \begin{align} % \langle \Dn{i}^j \rangle = \frac{1}{Z(\beta,m)} \int \mathcal{D}U\; \Dn{i}^j\; e^{-\beta S_G[U]+\ln\det M(m)} \,, % \end{align} to leading order \begin{align} % \Delta T\,\frac{d\langle\Dn{i}^j\rangle}{dT} = & -\left[\langle S_G \Dn{i}^j \rangle - \langle S_G\rangle \langle\Dn{i}^j\rangle\right]\Delta\beta \notag \\ & + \left[\langle\Gn{01}\Dn{i}^j \rangle - \langle \Gn{01} \rangle\langle \Dn{i}^j \rangle + j\langle \Dn{i}^{j-1}\Gn{i1}\rangle \right] \Delta\hat{m} \notag \\ & + \dots \,.
\label{eq:dDdT} \end{align} The goal is to obtain $Z^R_N(T,\mu_B)$ by expanding around $Z(T_0,0)$, while resumming contributions of up to $N$-point baryon-current correlations to all orders in $\mu_B$ and $\Delta T$. Following the multi-parameter reweighting technique, \begin{equation} \frac{Z(T,\mu_B)}{Z(T_0,0)} = \left\langle e^{-\Delta\beta S_G}\frac{\det M(m,\mu_B)}{\det M(m_0,0)}\right\rangle \,, \label{eq:rwZ} \end{equation} where the expectation value is with respect to a gauge field ensemble generated for $\{\beta_0,m_0,\mu_B=0\}$. Simultaneously expanding in $\hat{\mu}_B$ and $\Delta\hat{m}$, the determinant ratio in \autoref{eq:rwZ} can be written as \begin{equation} \frac{\det M(m,\mu_B)}{\det M(m_0,0)} = \exp\left[ \sum_{i+j=1}^\infty\Gn{ij} \hat{\mu}_B^i \Delta\hat{m}^j \right] \,, \label{eq:detratio} \end{equation} where $\Gn{ij}$ are defined through \autoref{eq:G}. By plugging \autoref{eq:detratio} back into \autoref{eq:rwZ} and truncating the sum at $i+j=N$, we obtain \autoref{eq:rw}. Next, by Taylor expanding \autoref{eq:rw} in powers of $\Delta\beta$, $\Delta\hat{m}$ and $\hat{\mu}_B$ we get \begin{align} \frac{Z(T,\mu_B)}{Z(T_0,0)} & = 1 - \langle S_G \rangle \Delta\beta + \langle \Gn{01} \rangle \Delta\hat{m} + \langle\Dn{1}\rangle\hat{\mu}_B \notag \\ & - \left[\langle S_G\Dn{1}\rangle\Delta\beta - \langle\Gn{01}\Dn{1}+\Gn{11}\rangle\Delta\hat{m}\right]\hat{\mu}_B \notag \\ & + \langle \Dn{2} + \Dn{1}^2/2 \rangle \hat{\mu}_B^2 \notag \\ & - \left[ \langle S_G (\Dn{2}+\Dn{1}^2/2)\rangle \Delta\beta - \langle \Gn{01}(\Dn{2}+\Dn{1}^2/2)\rangle\Delta\hat{m} \right. \notag \\ & \left. -\langle\Gn{21}+\Gn{11}\Dn{1}\rangle\Delta\hat{m} \right] \hat{\mu}_B^2 + \;\dots \,. \label{eq:Zexp} \end{align} The pressure difference is given by \begin{align} % \Delta\left[\frac{P}{T^4}\right] \equiv \frac{P(T,\mu_B)}{T^4} - \frac{P(T_0,0)}{T_0^4} = \frac{N_\tau^3}{N_s^3} \ln\left[\frac{Z(T,\mu_B)}{Z(T_0,0)}\right] \,, \notag % \end{align} where $N_s$ is the spatial extent of the lattice.
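The coefficient bookkeeping behind \autoref{eq:Zexp} can be checked mechanically in the single-configuration limit, where all expectation values are trivial. The following standalone Python sketch (an illustration of ours, with numeric stand-ins for the operators) multiplies $1-\Delta\beta\,S_G$ by the truncated exponential of $\sum\Gn{ij}\hat{\mu}_B^i\Delta\hat{m}^j$ and reads off the resulting coefficients:

```python
# Single-configuration sanity check of the joint expansion bookkeeping.
# SG, G01, G11, G21, D1, D2 are numeric stand-ins (primes, so distinct
# coefficient combinations are unlikely to collide); exponent tuples are
# ordered (db, dm, u) for (Delta beta, Delta m-hat, mu_B-hat).
from fractions import Fraction
from itertools import product

SG, G01, G11, G21, D1, D2 = 2, 3, 5, 7, 11, 13

def pmul(a, b, maxdeg=(1, 1, 2)):
    """Multiply polynomials in (db, dm, u), truncating beyond maxdeg."""
    out = {}
    for (e1, c1), (e2, c2) in product(a.items(), b.items()):
        e = tuple(x + y for x, y in zip(e1, e2))
        if all(x <= m for x, m in zip(e, maxdeg)):
            out[e] = out.get(e, Fraction(0)) + c1 * c2
    return out

one = {(0, 0, 0): Fraction(1)}
# X = log of the determinant ratio, keeping the leading G^(ij) terms
X = {(0, 1, 0): Fraction(G01), (0, 0, 1): Fraction(D1),
     (0, 0, 2): Fraction(D2), (0, 1, 1): Fraction(G11),
     (0, 1, 2): Fraction(G21)}
gauge = {(0, 0, 0): Fraction(1), (1, 0, 0): Fraction(-SG)}  # 1 - db*SG

X2, X3 = pmul(X, X), pmul(pmul(X, X), X)
expX = {e: one.get(e, Fraction(0)) + X.get(e, Fraction(0))
           + X2.get(e, Fraction(0)) / 2 + X3.get(e, Fraction(0)) / 6
        for e in set(one) | set(X) | set(X2) | set(X3)}
Z = pmul(gauge, expX)

# coefficients predicted by the joint expansion:
assert Z[(1, 0, 1)] == -SG * D1                                   # db * u
assert Z[(0, 1, 1)] == G01 * D1 + G11                             # dm * u
assert Z[(0, 0, 2)] == D2 + Fraction(D1**2, 2)                    # u^2
assert Z[(1, 0, 2)] == -SG * (D2 + Fraction(D1**2, 2))            # db * u^2
assert Z[(0, 1, 2)] == G01 * (D2 + Fraction(D1**2, 2)) + G21 + G11 * D1  # dm * u^2
```

Exact rational arithmetic avoids any floating-point ambiguity in the half-integer coefficients.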
Using \autoref{eq:Zexp}, expanding the logarithm in powers of $\Delta\beta$, $\Delta\hat{m}$, $\hat{\mu}_B$ and keeping only the real part one obtains \begin{align} % \frac{N_s^3}{N_\tau^3} & \Delta\left[\frac{P}{T^4}\right] = \langle \Gn{01} \rangle \Delta\hat{m} -\langle S_G \rangle \Delta\beta + \langle\Dn{2}+\Dn{1}^2/2\rangle \hat{\mu}_B^2 \notag \\ & - \left[ \langle S_G(\Dn{2}+\Dn{1}^2/2)\rangle - \langle S_G \rangle \langle (\Dn{2}+\Dn{1}^2/2) \rangle \right] \hat{\mu}_B^2\Delta\beta \notag \\ & + \left[ \langle \Gn{01}(\Dn{2}+\Dn{1}^2/2)\rangle - \langle \Gn{01} \rangle\langle(\Dn{2}+\Dn{1}^2/2)\rangle \right. \notag \\ & \left. + \langle\Gn{21}+\Gn{11}\Dn{1} \rangle \right] \hat{\mu}_B^2\Delta\hat{m} + \dots \,. % \label{eq:Pexp1} \end{align} Noting that \begin{align} % \left. \frac{d[P(T,0)/T^4]}{dT} \right\vert_{T_0} \Delta T = \langle \Gn{01} \rangle \Delta\hat{m} -\langle S_G \rangle \Delta\beta \,, % \end{align} and using \autoref{eq:pdvT}, it is easy to identify that \autoref{eq:Pexp1} is nothing but a joint Taylor expansion of $P(T,\mu_B)$ in $T$ and $\mu_B$ around $(T_0,0)$, \begin{align} % \Delta\left[\frac{P}{T^4}\right] & = \left. \frac{d[P(T,0)/T^4]}{dT} \right\vert_{T_0} \Delta T + \frac{1}{2!}\chi^B_2(T_0)\hat{\mu}_B^2 \notag \\ & + \frac{1}{2!} \left. \frac{d\chi^B_2(T)}{dT} \right\vert_{T_0} \hat{\mu}_B^2\Delta T + \order{\hat{\mu}_B^4, (\Delta T)^2}\,. % \label{eq:Pexp2} \end{align} Thus, the generalized version given by \autoref{eq:rw} genuinely resums contributions of up to $N$-point baryon current in the Taylor expansion of EoS to all orders in $T,\mu_B$. Following Ref.~\cite{Borsanyi:2021sxv}, the generalized resummation of \autoref{eq:rw} can be made even more powerful by choosing the expansion point $T_0$ along some physically motivated line in the $T$-$\mu_B$-plane, \textit{i.e.} by choosing some physically motivated $\beta_0(\mu_B)$ and $m_0(\mu_B)$.
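The payoff of a $\mu_B$-dependent expansion point can be seen in a toy model (illustrative numbers of ours, not lattice data): take $\chi^B_2$ linear in $T$ and $\chi^B_4$ constant, so that the joint expansion of $\chi^B_1$ around $(T_0,0)$ is exact, and choose $T_0$ so that the $\Delta T$ term cancels the $\chi^B_4$ term:

```python
# Toy check: with chi2 linear in T and chi4 constant, shifting the
# expansion point so that chi2' * mu * DeltaT cancels chi4 * mu^3 / 6
# collapses chi1(T, mu) to mu * chi2(T0).  All values are illustrative.
def chi2(T):                  # toy second-order cumulant, slope chi2p
    return 1.0 + 0.5 * T

chi2p = 0.5                   # d chi2 / dT
chi4 = 0.3                    # toy fourth-order cumulant, T-independent

def chi1(T, mu):              # exact chi1 for this toy model
    return mu * chi2(T) + mu**3 * chi4 / 6.0

T, mu = 1.5, 0.4
dT = -chi4 * mu**2 / (6.0 * chi2p)   # Delta T = T - T0 needed for cancellation
T0 = T - dT
# with this choice, the full chi1 is reproduced by the leading term alone
assert abs(chi1(T, mu) - mu * chi2(T0)) < 1e-12
```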
The one-to-one correspondence between the Taylor expansion of \autoref{eq:rw} and the alternative expansion scheme presented in Ref.~\cite{Borsanyi:2021sxv} can be readily observed. By including the $\mathcal{O}(\hat{\mu}_B^4)$ term in \autoref{eq:Pexp2} and taking a $\hat{\mu}_B$-derivative we get \begin{align} % \chi^B_1(T,\mu_B) & = \hat{\mu}_B\chi^B_2(T_0,0) + \left. \frac{d\chi^B_2(T,0)}{dT} \right\vert_{T_0} \hat{\mu}_B\Delta T \notag \\ & + \frac{1}{6}\chi^B_4(T_0,0)\hat{\mu}_B^3 + \dots \,. % \label{eq:chi1} \end{align} If one chooses \begin{align} % T_0(\hat{\mu}_B) = T + \frac{1}{6}\frac{\chi^B_4(T_0,0)}{(d\chi^B_2(T,0)/dT)_{T_0}}\hat{\mu}_B^2 \,, % \end{align} so that the last two terms of \autoref{eq:chi1} cancel, then one arrives at the starting point of Ref.~\cite{Borsanyi:2021sxv}, namely $\chi^B_1(T,\hat{\mu}_B) = \hat{\mu}_B \chi^B_2(T_0,0)$. Hence, the method used in Ref.~\cite{Borsanyi:2021sxv} is a special Taylor-expanded case of the generalized resummation \autoref{eq:rw}. \end{appendix} \end{document}
\section{Introduction \label{intro_sec}} We use standard graph theoretic and discrete geometric notation and terminology, which may be found in~\cite{Bol98, Die05} and~\cite{Mat02, Sch98}, respectively. All graphs in this paper are finite and have no loops, although they may be directed or have multiple edges (multi-graphs). We refer the reader to Section~\ref{notedef_sec} for some basic notation and definitions. Let $R$ be a positive $(n+1)$-dimensional vector and ${\Lambda}_R=\{D \in {\mathbb Z}^{n+1}: D \cdot R=0\}$. Fix ${\Lambda}$, a full-dimensional sub-lattice of ${\Lambda}_R$. As noted in Section~\ref{notedef_sec}, we refer to an element $D \in {\mathbb Z}^{n+1}$ as a divisor. We say divisors $D, D' \in {\mathbb Z}^{n+1}$ are {\it equivalent}, denoted by $D \sim D'$, if and only if $D-D'\in {\Lambda}$. We say a divisor $E \in \Bbb Z^{n+1}$ is {\it effective} if $E \geq {\vec{\bf 0}}$. For any divisor $D \in {\mathbb Z}^{n+1}$, the {\it linear system} associated to $D$ is the set $|D|$ of all effective divisors which are equivalent to $D$, i.e., $|D|=\{E \in {\mathbb Z}^{n+1}: E \geq {\vec{\bf 0}}, E \sim D\}$, and the {\it degree} of $D$, written ${\rm deg}_R(D)$, is given by $D\cdot R$. \begin{definition} \label{rank_function_def} For any divisor $D \in {\mathbb Z}^{n+1}$, define the rank of $D$, denoted by $r(D)$, as follows: $$r(D)=\min\{{\rm deg}_R(E): |D-E|=\emptyset, E \geq {\vec{\bf 0}}\}-1.$$ \end{definition} Baker and Norine~\cite{BN07} developed a graph theoretic analogue of the Riemann-Roch formula, which they obtained by studying a certain unrestricted chip-firing game on graphs.
Geometrically, their result states that for the lattice ${\Lambda}_G$ spanned by the rows of the Laplacian of a finite undirected graph $G$, there exists a {\it canonical} divisor $K \in {\mathbb Z}^{n+1}$, whose $i$-th entry is ${\rm deg}(v_i)-2$, of degree $2g-2$ (where $g=|E(G)|-|V(G)|+1$) such that for any divisor $D \in {\mathbb Z}^{n+1}$, \begin{equation} \label{RR_Baker_Norine} r(D)-r(K-D)={\rm deg}_{\vec{\bf 1}}(D)+1-g. \end{equation} Many of their results have since been generalized to a variety of objects including tropical curves, metric graphs and edge weighted graphs~\cite{GK08, HKN07, Luo08, MZ07}. Recently Amini and Manjunath~\cite{AM09} showed that by viewing the chip-firing game of Baker and Norine geometrically, as a walk through the lattice spanned by the graph's Laplacian, a pair of necessary and sufficient Riemann-Roch conditions, equivalent to those of Baker and Norine, could be generalized to all sub-lattices of the lattice ${\Lambda}_{\vec{\bf 1}}$. They refer to these conditions as {\it uniformity} and {\it reflection invariance}. In Section~\ref{lat_sec}, Theorem~\ref{RR_formula_equiv_U_RI_thm} shows that the criteria of Amini and Manjunath~\cite{AM09} naturally extend to any full-dimensional sublattice of ${\Lambda}_R$. Lorenzini~\cite{Lor09} gives an alternate Riemann-Roch criterion for such lattices. Our approach differs from his in that we first give a specific rank function (Definition~\ref{rank_function_def}) and use this to define a pair of necessary and sufficient conditions for a lattice ${\Lambda}$ to have the Riemann-Roch property. Lorenzini~\cite{Lor09} instead says that such a lattice has the Riemann-Roch property if there exists a {\it suitable} rank function (\S 2.1 in~\cite{Lor09}), i.e., one which would allow for a Riemann-Roch formula (\ref{RR_Baker_Norine}) satisfying certain desirable properties.
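For intuition, both the canonical divisor and formula (\ref{RR_Baker_Norine}) can be verified by brute force on a tiny example. The following Python sketch (our own toy check, not from the original work; the search bounds are ad hoc but ample for the divisors tested) does this for the 3-cycle $C_3$, where $g=1$ and $K=(0,0,0)$:

```python
# Brute-force illustration of the rank function and of the Riemann-Roch
# formula on the 3-cycle C_3 (g = 1, K = (deg(v_i) - 2)_i = (0, 0, 0)).
# Lambda_G is spanned by two rows of the Laplacian of C_3.
from itertools import product

L0, L1 = (2, -1, -1), (-1, 2, -1)      # two rows of the Laplacian of C_3
K, g = (0, 0, 0), 1
assert sum(K) == 2 * g - 2             # deg(K) = 2g - 2

def linear_system_nonempty(D):
    """Is |D| nonempty, i.e. is D equivalent to an effective divisor?"""
    if sum(D) < 0:                     # equivalence preserves degree
        return False
    return any(all(D[i] - a * L0[i] - b * L1[i] >= 0 for i in range(3))
               for a, b in product(range(-6, 7), repeat=2))

def rank(D):
    """r(D) = min{deg(E) : E >= 0, |D - E| empty} - 1."""
    d = 0
    while True:
        for E in product(range(d + 1), repeat=3):
            if sum(E) == d and not linear_system_nonempty([D[i] - E[i] for i in range(3)]):
                return d - 1
        d += 1

for D in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 1)]:
    KD = tuple(K[i] - D[i] for i in range(3))
    assert rank(D) - rank(KD) == sum(D) + 1 - g   # Riemann-Roch on C_3
```

For larger graphs or divisors the search bounds would of course have to be justified; here the translations involved are tiny.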
We conclude Section 2 with Theorem~\ref{thm:RR_R_one}, showing that a full-dimensional lattice ${\Lambda} \subseteq {\Lambda}_R$ has the Riemann-Roch property if and only if $\mathcal{R}{\Lambda} \subseteq {\Lambda}_{\vec{\bf 1}}$ does, where $\mathcal{R}=diag(r_0, \dots, r_{n})$. This result is later employed when studying the column chip-firing game and when discussing the relationship of chip-firing on arithmetical graphs to the row chip-firing game on associated directed graphs. Various chip-firing games on graphs have been studied in~\cite{Big99, BL92, BLS91, HLMPPW08, GR01, Mer05, Mer97, Spe93, Tar88, Heu88}. Baker and Norine~\cite{BN07} introduced an unrestricted chip-firing game on undirected graphs to prove their Riemann-Roch formula. Their game is as follows: begin with a graph and an integer number of ``chips'' at each vertex. A vertex either {\it borrows} a chip along each of its edges from its neighbors or it {\it fires}, sending a chip along each of its edges to its neighbors. The objective of the game is to bring all of the vertices out of debt. In Section~\ref{chip_sec}, we investigate two separate generalizations of the unrestricted chip-firing game on undirected graphs to directed graphs. To understand these two generalizations, we should study how the game relates to the graph Laplacian. The question of whether a configuration $D$, also called a divisor, can be brought out of debt by some sequence of firings and borrowings is equivalent to the question of whether $|D|\neq \emptyset$, i.e., $r(D)\geq 0$ for the lattice ${\Lambda}_G$. This is because a sequence of chip-firings corresponds to translation by a lattice point in ${\Lambda}_G$. Let $\vec{G}$ be a directed graph whose adjacency matrix $\vec{A}$ has $(i,j)$th entry $\vec{A}_{i,j}$ equal to the number of edges directed from $i$ to $j$.
Let $\vec{\mathcal{D}}=diag({\rm deg}^+(v_0), \dots, {\rm deg}^+(v_n))$, where ${\rm deg}^+(v)$ denotes the number of edges leaving vertex $v \in V(\vec{G})$. We call the matrix $\vec{Q}=\vec{\mathcal{D}}-\vec{A}$ the {\it Laplacian matrix} of the directed graph $\vec{G}$. Note that this directed Laplacian is symmetric if and only if it is the Laplacian of a graph with bidirected edges, i.e., an undirected graph. We investigate $r(D)$ and the Riemann-Roch formula for the lattice spanned by the rows of $\vec{Q}$ and the lattice spanned by the columns of $\vec{Q}$. For both of these lattices, it is equivalent to study certain chip-firing games on $\vec{G}$. We note that throughout the paper the directed graphs being studied are constrained to be strongly connected, i.e., for any two vertices $i, j \in V(\vec{G})$, there exists a directed path from $i$ to $j$. Studying the lattice spanned by the rows of the directed Laplacian is equivalent to studying the row chip-firing game in which, if a vertex fires, it will send a chip along each of its outgoing edges. In~\cite{BN07}, an important object, called a $v_0$-reduced divisor, is introduced. Essentially this is a configuration where every vertex is out of debt, with the possible exception of $v_0$, and there is no way of ``pushing'' any chips towards $v_0$. We generalize this notion of a $v_0$-reduced divisor to the row chip-firing game on a strongly connected directed graph in Section~\ref{reduceddiv_sec}. In Section~\ref{subsec:Dhar_alg}, we generalize Dhar's Algorithm, which Baker and Norine used implicitly in~\cite{BN07}. Dhar's algorithm allows one to check whether a divisor whose entries are nonnegative for all vertices other than $v_0$ is $v_0$-reduced and gives, when the divisor is reduced, all of the equivalent $v_0$-reduced divisors (for the case of directed graphs, a $v_0$-reduced divisor is no longer in general unique).
When the divisor is found to not be $v_0$-reduced, a firing is obtained which will bring it ``closer'' to some $v_0$-reduced divisor. In Section 4 we present examples which show that the lattice spanned by the rows of $\vec{Q}$ may or may not satisfy the Riemann-Roch formula. We say a directed graph has the strong Riemann-Roch property for directed graphs if it has the Riemann-Roch property and it has a canonical vector $K$ whose $i$th entry $K(v_i)$ is ${\rm deg}^+(v_i)-2$. We then mention a connection between the sandpile model and the Riemann-Roch property for the row chip-firing game in Section~\ref{subsec:sandpile}. The directed sandpile model is a constrained version of the row chip-firing game in which we restrict our attention to effective divisors: we fire vertices only when they have at least as many chips as their outdegree (so that the divisor remains effective). Many authors require a global sink and ignore the number of chips at this vertex; because we are studying strongly connected digraphs, it is sufficient for our discussion to simply require that a specified vertex $v_0$ not fire. A divisor $D$ is stable if no vertex may fire, and a stable divisor $D$ is recurrent if for every other divisor there exists a way of adding chips to vertices after which the divisor will stabilize to $D$. We show that for a directed graph $\vec{G}$, the lattice ${\Lambda}_{\vec{G}}$ has the strong Riemann-Roch property for directed graphs if and only if for every $v_0$-recurrent sandpile configuration $D$, which is minimal with respect to dominance away from $v_0$, there exists $D'=D-ke_0$, $k \in {\mathbb Z}_{\geq0}$, which is a continuous extreme divisor. The notion of a continuous extreme divisor is introduced in Section 2 and is equivalent to saying that there exist $E_i \in {\mathbb Z}_{\geq 0}^{n+1}$ for $0 \leq i \leq n$ such that $E_i(v_i)=0$ and $E_i(v_j)>0$ for $i \neq j$ and $D' \sim E_i$.
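The row chip-firing moves and the constrained sandpile dynamics just described can be sketched on a toy example (the 3-cycle digraph below is our own illustration, not one from the paper):

```python
# Illustrative strongly connected digraph: the 3-cycle 0 -> 1 -> 2 -> 0.
# Build Q = D - A, check that a row-game firing is a translation by a row
# of Q, then run the sandpile dynamics in which only non-sink vertices
# fire, and only when they hold at least outdeg many chips.
A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # A[i][j] = number of edges i -> j
n = 3
outdeg = [sum(row) for row in A]
Q = [[(outdeg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
assert all(sum(row) == 0 for row in Q)  # rows of Q have degree zero: chips conserved

def fire(divisor, v):
    """Vertex v sends one chip along each of its outgoing edges."""
    return [divisor[j] - Q[v][j] for j in range(n)]

assert fire([3, 0, 0], 0) == [2, 1, 0]

def stabilize(divisor, sink=0):
    """Fire non-sink vertices holding at least outdeg chips until stable."""
    D = list(divisor)
    while True:
        unstable = [v for v in range(n) if v != sink and D[v] >= outdeg[v]]
        if not unstable:
            return D
        D = fire(D, unstable[0])

assert stabilize([0, 2, 0]) == [2, 0, 0]   # chips cascade into the sink
```

Termination of `stabilize` relies on the digraph being strongly connected with a non-firing sink, as in the setting above.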
We note that $v_0$-reduced divisors, their connection to $v_0$-recurrent sandpile configurations and the generalized Dhar's algorithm were independently discovered by Speer~\cite{Spe93}, although he was not aware of the connection with Riemann-Roch theory. Studying the lattice spanned by the columns is equivalent to studying the column chip-firing game in which if a vertex borrows, it sends a chip along each of its incoming edges and loses a number of chips equal to its outdegree. The number of chips is not conserved, but if we restrict our attention to strongly connected digraphs then we find that there exists a canonical set of currencies, which are integer multiples of some universal currency, with exchange rates so that the game is conservative. In Section~\ref{subsec:G-parking}, we explain that the $v_0$-reduced divisors for this game are precisely the directed $G$-parking functions studied in~\cite{CP05}. We show that when studying the column chip-firing game on a strongly connected graph, it is equivalent to study the row chip-firing game on an associated Eulerian directed graph, that is, a directed graph for which each vertex has the same number of outgoing and incoming edges. We also mention how Dhar's algorithm can be run on a divisor in the column chip-firing game without any serious revision. We then consider the case of {\it arithmetical graphs} in Section~\ref{AGrpahs}. An arithmetical graph is an undirected multigraph along with a vector $R \in {\mathbb N}^{n+1}$, with $R=(r_0, \dots, r_n)$, where $r_i$ is the weight of vertex $v_i$, subject to the constraint that the sum of the weights of the vertices adjacent to $v_i$ (counted with multiplicity equal to the number of edges shared with $v_i$) is $\delta_ir_i$ for some $\delta_i \in {\mathbb N}$. We define the Laplacian of an arithmetical graph to be the same as for a standard multigraph, but with the $i$th entry along the diagonal equal to $\delta_i$ instead of the degree of $v_i$.
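The conservation mechanism via vertex currencies mentioned above for the column game can be checked on a toy example. In the sketch below (our own illustration; the digraph and the vector $r$ are not taken from the paper), a positive integer left null vector $r$ of $\vec{Q}$ supplies the exchange rates:

```python
# Toy check of the "currencies" for the column chip-firing game: a positive
# left null vector r of Q (r^T Q = 0, found by hand for this digraph) gives
# exchange rates under which column moves conserve the r-weighted number of
# chips, even though the plain chip count changes.
Q = [[2, -2, 0],   # directed Laplacian of an illustrative 3-vertex digraph
     [0, 1, -1],
     [-1, 0, 1]]
r = [1, 2, 2]
n = 3
assert all(sum(r[i] * Q[i][j] for i in range(n)) == 0 for j in range(n))

def column_move(divisor, v):
    """A move of the column game: translate the divisor by column v of Q."""
    return [divisor[i] - Q[i][v] for i in range(n)]

def weighted(E):
    """Chip count measured in the universal currency via the rates r."""
    return sum(r[i] * E[i] for i in range(n))

D = [4, 1, 0]
D2 = column_move(D, 1)
assert sum(D2) != sum(D)            # plain chip count is not conserved
assert weighted(D2) == weighted(D)  # the r-weighted count is
```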
Lorenzini~\cite{Lor89} introduced arithmetical graphs as a way of studying the intersection matrices of degenerating curves, which encode some of the discrete data associated with the degeneration. In this paper our interest in arithmetical graphs is derived from the fact that they form a class of vertex weighted graphs whose Laplacian spans an $n$-dimensional sub-lattice of ${\Lambda}_R$. Indeed, Chung and Langlands~\cite{CL96} introduced a Laplacian matrix for a graph with weights on its vertices, and it was noted in~\cite{Lor09} that if for all $0 \leq i \leq n$ the weight of the vertex $v_i$ is the square of the positive integer $r_i$, then the Laplacian matrix introduced in~\cite{CL96} is the same as the one defined above. The chip-firing game of Baker and Norine extends to arithmetical graphs by assigning to each vertex its own currency, interpreting each vertex's multiplicity as the integer exchange rate between this vertex's currency and the universal chip currency. This is very similar to the notion of currencies employed when studying the column chip-firing game. In doing so we are able to give a combinatorial interpretation of the geometric definitions and statements of Section~\ref{lat_sec} for arithmetical graphs. We may obtain from an arithmetical graph $(G,R)$ with Laplacian $Q$, the Laplacian $\vec{Q}=Q\mathcal{R}$ (where $\mathcal{R}=diag(r_0, \dots, r_{n})$) of a closely related directed graph. In this way we may view arithmetical graphs as a special type of directed graph, particularly since this coordinatewise scaling reduces the chip-firing game for arithmetical graphs to the row chip-firing game for directed graphs and preserves the Riemann-Roch property by Theorem~\ref{thm:RR_R_one}. In Theorem~\ref{asdf} we show that all of the associated directed graphs have the Riemann-Roch property for the column chip-firing game. Given an arithmetical graph $(G,R)$ we define $g_0$ by the formula $2g_0-2 = \sum_{i=0}^n r_i(\delta_i-2)$.
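Both the defining condition of an arithmetical graph and the passage to the associated directed graph can be checked mechanically on a small example. The sketch below uses an illustrative choice of ours, the path $v_0 - v_1 - v_2$ with $R=(1,2,1)$:

```python
# Illustrative arithmetical graph: the path v0 - v1 - v2 with R = (1, 2, 1).
# Check the defining condition, compute g0 from 2*g0 - 2 = sum r_i*(delta_i - 2),
# and verify that Q_vec = Q * diag(R) is again a directed Laplacian
# (zero row sums, nonpositive off-diagonal entries).
edges = [(0, 1), (1, 2)]
R = [1, 2, 1]
n = 3

nbr_weight = [0] * n            # weights of adjacent vertices, with multiplicity
A = [[0] * n for _ in range(n)]
for u, v in edges:
    nbr_weight[u] += R[v]
    nbr_weight[v] += R[u]
    A[u][v] += 1
    A[v][u] += 1

assert all(nbr_weight[i] % R[i] == 0 for i in range(n))
delta = [nbr_weight[i] // R[i] for i in range(n)]

two_g0_minus_2 = sum(R[i] * (delta[i] - 2) for i in range(n))
assert two_g0_minus_2 % 2 == 0  # g0 is integral
g0 = two_g0_minus_2 // 2 + 1

# arithmetical Laplacian: delta_i on the diagonal, minus edge multiplicities off it
Q = [[(delta[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
Qvec = [[Q[i][j] * R[j] for j in range(n)] for i in range(n)]
assert all(sum(row) == 0 for row in Qvec)
assert all(Qvec[i][j] <= 0 for i in range(n) for j in range(n) if i != j)
```

For $R={\vec{\bf 1}}$ the same computation reduces $g_0$ to the usual genus $g=|E|-|V|+1$; here it gives $g_0=0$.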
See~\cite{Lor89} for a simple proof that $g_0$ is integral, and note that $g_0$ equals $g$ for a graph $(G,{\vec{\bf 1}})$. As an application of the tools developed in Section 3, we give a combinatorial proof of Proposition 4.2 from~\cite{Lor09}, which states that $g_{max} \leq g_0$ and if $g_{min}=g_{max}= g_0$ then $(G,R)$ has the Riemann-Roch property (and in particular the associated directed graph has the Riemann-Roch property). The first half of this statement, in the language of chip-firing, says that if there are $g_0$ chips present in an arithmetical graph then there exists a winning strategy, thus generalizing the result of Baker and Norine to arithmetical graphs. The original proof of this result, due to Lorenzini, was algebro-geometric in nature, employing the Riemann-Roch formula for curves. We conclude with a discussion of some examples of arithmetical graphs, which demonstrate that either, both, or neither of the two Riemann-Roch conditions may be satisfied for an arithmetical graph. \subsection{Basic Notations and Definitions \label{notedef_sec}} For any two vectors $x,y \in \Bbb R^{n+1}$, let $x \cdot y$ denote the inner product of $x$ and $y$. For any $x=(x_0, \dots, x_n)^T \in \Bbb R^{n+1}$, define $x^+=(x^+_0, \dots, x^+_n)^T \in {\mathbb R}_+^{n+1}$ and $x^-=(x^-_0, \dots, x^-_n)^T \in {\mathbb R}_-^{n+1}$ to be the {\it positive part} and {\it negative part} of $x$, where $x=x^++x^-$ and $x_i^+x_i^-=0$, for all $0 \leq i \leq n$. Define ${\rm deg}_R(x)=R \cdot x$ and call it the {\it degree} of $x$. We denote ${\rm deg}_R(x^+)$ by ${\rm deg}^+_R(x)$ and we call it the {\it degree plus} of $x$.\\ Assume ${\vec{\bf 0}}$ and ${\vec{\bf 1}}$ are the vectors in ${\mathbb R}^{n+1}$ all of whose coordinates are $0$ or $1$, respectively. For any $x=(x_0, \dots, x_n)^T \in {\mathbb R}^{n+1}$, we say $x \geq {\vec{\bf 0}}$ ($x > {\vec{\bf 0}}$) if and only if for all $0 \leq i \leq n$, $x_i \geq 0$ ($x_i > 0$).
We define a {\it partial order} in ${\mathbb R}^{n+1}$ as follows: for any $x,y \in {\mathbb R}^{n+1}$, we say $x \geq y$ ($x > y$) if and only if $x-y \geq {\vec{\bf 0}}$ ($x-y > {\vec{\bf 0}}$). For any vector $x \in \Bbb R^{n+1}$, define $C^+(x)=\{y \in \Bbb R^{n+1}: y \geq x\}$ and $C^-(x)=\{y \in \Bbb R^{n+1}: x \geq y\}$. We denote the standard basis for ${\mathbb R}^{n+1}$ by $\{e_0, \dots, e_n\}$. Suppose that $R \in {\mathbb N}^{n+1}$ is a vector, and define $H_R=\{x \in \Bbb R^{n+1}: R \cdot x=0\}$. Let ${\Lambda}_R=H_R \cap {\mathbb Z}^{n+1}$ be the integer lattice in the hyperplane $H_R$, where $R \in {\mathbb N}^{n+1}$. Let $\|\cdot\|$ denote the $\ell^2$-norm, i.e., $\|x\|= \sqrt{x \cdot x}$, for all $x \in {\mathbb R}^{n+1}$. Let $G$ be a graph and let $\{v_0, \dots, v_n\}$ be an ordering of the vertices of $G$. Let $Div(G)$ be the free Abelian group on the set of vertices of $G$. By analogy with the Riemann surface case, as noted also in~\cite{BN07}, we refer to elements of $Div(G)$ as {\it divisors} on $G$. In the case that the graph $G$ is implied by context, we simply refer to elements of $Div(G)$ as divisors. Because there is a fixed ordering on the vertices of $G$, we think of an element $\alpha \in Div(G)$, which is a formal integer linear combination of vertices of $G$, as a vector $D=(d_0, \dots, d_n) \in {\mathbb Z}^{n+1}$ where $d_i$ is the coefficient of $v_i$ in $\alpha$ for all $0 \leq i \leq n$. We denote the $i$th coordinate of $D$ by $D(v_i)$, for all $0 \leq i \leq n$. We refer to both vectors in ${\mathbb Z}^{n+1}$ and elements of $Div(G)$ as divisors. \section{Riemann-Roch Theory for Sub-lattices of ${\Lambda}_R$ \label{lat_sec}} \subsection{Preliminaries} We remark that many of the proofs and statements presented in this section are similar to the ones which appeared in the work of Amini and Manjunath~\cite{AM09}.
Essentially, what is being demonstrated is that if one replaces each statement about lattices orthogonal to the all-ones vector with the same statement for lattices orthogonal to some fixed positive vector, the proofs will go through without much extra effort. This in itself is not a very strong observation, but it is necessary for proving Theorem~\ref{RR_formula_equiv_U_RI_thm} and Theorem~\ref{thm:RR_R_one}, which are used several times in the subsequent sections, so, for the sake of completeness, we have decided to provide all of the necessary lemmas with proofs. Throughout this section, $R$ will denote a vector in ${\mathbb N}^{n+1}$. \begin{definition} \label{sigma_region_def} Let ${\Lambda} \subseteq {\Lambda}_R$ be a sub-lattice of rank $n$. Define $$\Sigma({\Lambda})=\{D \in \Bbb Z^{n+1}: D \not \geq p {\hbox{ for all }} p \in {\Lambda}\},$$ $$\Sigma_{\Bbb R}({\Lambda})=\{x \in \Bbb R^{n+1}: x \not \geq p {\hbox{ for all }} p \in {\Lambda}\}.$$ \end{definition} Note that the set $\Sigma({\Lambda})$ defined in Definition~\ref{sigma_region_def} is the negative of the {\it Sigma region} set defined by Amini and Manjunath~\cite{AM09}. We denote by $\overline{\Sigma}_{\Bbb R}({\Lambda})$ the topological closure of the set $\Sigma_{\Bbb R}$ in $\Bbb R^{n+1}$. Let $B(x,r)=\{y \in {\mathbb R}^{n+1}: \|y-x\|\leq r\}$ denote the ball of radius $r$ with center at $x$. For any set $S \subset \Bbb R^{n+1}$, let $int(S)$ denote the {\it relative interior} of $S$. \begin{lemma} \label{sigma_closure} If ${\Lambda} \subseteq {\Lambda}_R$ is a sub-lattice of rank $n$, then $$\overline{\Sigma}_{\Bbb R}({\Lambda})=\{x \in \Bbb R^{n+1}: x \not > p, {\hbox{ for all }} p \in {\Lambda}\}.$$ \end{lemma} \begin{proof} { Suppose $x \in \Bbb R^{n+1}$ such that $x>p$ for some $p \in {\Lambda}$. Thus there exists $\delta>0$ such that for all $y \in B(x,\delta)$, $y>p$. Thus $x \not \in \overline{\Sigma}_{\Bbb R}({\Lambda})$.
Now, suppose $x \not \in \overline{\Sigma}_{\Bbb R}({\Lambda})$. Then there exists $\delta>0$ and $p \in {\Lambda}$ such that $x-{\delta \over 2}{\vec{\bf 1}} \geq p$. Hence $x>p$, and this completes the proof of the lemma. } \end{proof} \begin{lemma} \label{sigma_closure_sigma_lem} If $D \in \Bbb Z^{n+1}$ then $D \in \Sigma({\Lambda})$ if and only if $D+{\vec{\bf 1}} \in \overline{\Sigma}_{\Bbb R}({\Lambda})$. \end{lemma} \begin{proof} { If $D \not \in \Sigma({\Lambda})$, then there exists $p \in {\Lambda}$ such that $D \geq p$. Hence $D+{\vec{\bf 1}} > p$ and by Lemma~\ref{sigma_closure} $D+{\vec{\bf 1}} \not \in \overline{\Sigma}_{\Bbb R}({\Lambda})$. If $D+{\vec{\bf 1}} \not \in \overline{\Sigma}_{\Bbb R}({\Lambda})$ then Lemma~\ref{sigma_closure} implies that $D+{\vec{\bf 1}} > p$ for some $p \in {\Lambda}$. Since $D,p \in \Bbb Z^{n+1}$, it follows that $D \geq p$ and this implies that $D \not \in \Sigma({\Lambda})$. } \end{proof} Suppose $R=(r_0, \dots,r_{n}) \in {\mathbb R}^{n+1}_+$ and $x=(x_0, \dots,x_n) \in {\mathbb R}^{n+1}$. Define $\|x\|_R=\sum_{i=0}^n r_i|x_i|$. It is easy to see that $\| \cdot \|_R$ is a norm on $\Bbb R^{n+1}$. For any two points $x,y \in \Bbb R^{n+1}$, we define $dist_R(x,y)=\|x-y\|_R$. One can consider $\| \cdot \|_{R}$ as a {\it weighted taxi-cab} norm. For any set $S \subseteq {\mathbb R}^{n+1}$ and $p \in {\mathbb R}^{n+1}$, we define $dist_R(p,S)=\inf\{dist_R(p,x): x \in S\}$. Observe that $r(D)=-1$ if $D$ is not equivalent to any effective divisor and $-1 \leq r(D) \leq {\rm deg}_R(D)$. \begin{lemma} \label{distance_rank_lem} If $D \in \Bbb Z^{n+1}$ is a divisor then \begin{itemize} \item[(i)] $r(D)=-1$ if and only if $D \in \Sigma({\Lambda})$. \item[(ii)] $r(D)=dist_R(D,\Sigma({\Lambda}))-1=\min\{dist_R(D,p): p \in \Sigma({\Lambda})\}-1$.
\end{itemize} \end{lemma} \begin{proof} { \begin{itemize} \item [(i)]For $D \in \Bbb Z^{n+1}$, $r(D)=-1$ if and only if for all $p \in {\Lambda}$, $D-p \not \geq {\vec{\bf 0}}$ if and only if $D \in \Sigma({\Lambda})$. \item [(ii)]Since $\Sigma({\Lambda})$ is a closed set, $\inf\{dist_R(D,p): p \in \Sigma({\Lambda})\}=\min\{dist_R(D,p): p \in \Sigma({\Lambda})\}$.\\ \begin{eqnarray*} r(D) &=& \min\{{\rm deg}_R(E): |D-E|= \emptyset, E \geq {\vec{\bf 0}}\}-1 \\ &=& \min\{{\rm deg}_R(E): r(D-E)=-1, E \geq {\vec{\bf 0}}\}-1 \\ &=& \min\{{\rm deg}_R(E): D-E \in \Sigma({\Lambda}), E \geq {\vec{\bf 0}}\}-1 \\ &=& \min\{{\rm deg}_R(D-p): D-p \geq {\vec{\bf 0}}, p \in \Sigma({\Lambda})\}-1 \\ &=& dist_R(D,\Sigma({\Lambda}))-1. \end{eqnarray*} Note that the last equality follows from the fact that if $p \in \Sigma({\Lambda})$ and $(D-p)_i < 0$ for some $0 \leq i \leq n$ then $dist_R(D, p-e_i) \leq dist_R(D, p)$ and $p-e_i \in \Sigma({\Lambda})$. \end{itemize} } \end{proof} \subsection{Extreme Points of $\Sigma({\Lambda})$ and $\overline{\Sigma}_{\Bbb R}({\Lambda})$} Define $H_R^+=\{x \in \Bbb R^{n+1}: x \cdot R \geq 0\}$. For any vector $p \in H_R^+$, define $\Delta_R(p)=H_R \cap C^-(p)$ to be the $n$-dimensional simplex in the hyperplane $H_R$. For the definitions of simplex and facet and their properties, we refer the reader to~\cite{Mat02, Sch98}. For simplicity we denote $\Delta_R(R)$ by $\Delta_R$. It is easy to see that for any $p \in H^+_R$ there exists a unique $\lambda \geq 0$ and $p' \in H_R$ such that $p=p'+\lambda R$. Define the {\it projection} function $\pi:H^+_R \rightarrow H_R$ as follows: for any $p \in H_R^+$, define $\pi(p)=p'$. It is also easy to see that $\pi(p)=p-\lambda R$ where $\lambda={(p \cdot R) / \|R\|^2}$. We refer to $\pi(p)$ as the {\it projection} of the point $p$ into the hyperplane $H_R$ along the vector $R$. The following lemma is an immediate consequence of the above definition.
\begin{lemma} { \label{delta_simplex_easy_lem} If $p=(p_0, \dots, p_n) \in H^+_R$ and $p=\pi(p)+\lambda R$, then \begin{itemize} \item [(i)] $\Delta_R(p)=\pi(p)+\lambda\Delta_R$. \item [(ii)] $F_i=\Delta_R(p) \cap \{x \in \Bbb R^{n+1}: x_i=p_i\}$ for all $0 \leq i \leq n$, defines all the {\it facets} of the simplex $\Delta_R(p)$. \end{itemize} } \end{lemma} It is easy to see that $\Delta_R$ is the simplex in $H_R$ with vertices $b^0, \dots, b^n \in H_R$ whose coordinates are: $$b^i_j=\left\{{\begin{matrix} -\sum_{k \neq i}{r^2_k \over r_i} & \hbox{ if } i = j \cr r_j & \hbox{ otherwise } \end{matrix}}\right.$$ for all $0 \leq j \leq n$. \begin{definition} \label{delta_distance_def} For any two points $p,q \in H_R$, define the ${\Delta_R}$-distance function between $p$ and $q$ as follows: $$d_{\Delta_R}(p,q)=\inf\{\lambda \geq 0: q \in p+\lambda \Delta_R\}.$$ \end{definition} The $\Delta_R$-distance function defined above is a {\it gauge function} (which is often used in the study of convex bodies). For more on gauge functions and their properties, see~\cite{Sie89}. \\For any point $p \in H_R$ define $d_{\Delta_R}(p,{\Lambda})=\min\{\lambda \geq 0: \hbox{ there exists } \, q \in {\Lambda} \hbox{ such that }q \in p+\lambda \Delta_R\}$. The following remark can be considered as a generalization of Lemma 4.7 in~\cite{AM09}, and its proof easily follows from Definition~\ref{delta_distance_def}.
\begin{remark} { Given any two vectors $p,q \in H_R$, $$d_{\Delta_R}(p,q)=\max_{0 \leq i \leq n}\{{q_i-p_i \over r_i}\}.$$ } \end{remark} \begin{proof} { By Definition~\ref{delta_distance_def}, $$d_{\Delta_R}(p,q)=\inf\{\lambda \geq 0: q \in p+\lambda \Delta_R\}=\inf\{\lambda \geq 0 : q \in p+C^-({\lambda R})\}$$$$=\inf\{\lambda \geq 0 : q \leq p+\lambda R\}=\max_{0 \leq i \leq n}\{{q_i-p_i \over r_i}\}.$$ } \end{proof} \begin{definition} \label{extreme_critical_def} Define \begin{eqnarray*} Ext(\Sigma(\Lambda)) &=&\{\nu \in \Sigma(\Lambda): {\rm deg}_R(\nu) \geq {\rm deg}_R(p), {\hbox{ for all }} p \in N(\nu) \cap \Sigma(\Lambda)\}, \\ Ext(\overline{\Sigma}_{\Bbb R}(\Lambda)) &=& \{\nu \in \overline{\Sigma}_{\Bbb R}(\Lambda): \exists \, \delta>0, \hbox{ such that } {\rm deg}_R(\nu) \geq {\rm deg}_R(p), {\hbox{ for all }} p \in B(\nu,\delta) \cap \overline{\Sigma}_{\Bbb R}(\Lambda)\}, \\ Crit({\Lambda}) &=& \{\nu \in H_R: \exists \, \delta>0 \hbox{ such that } d_{\Delta_R}(\nu,{\Lambda}) \geq d_{\Delta_R}(p,{\Lambda}), \hbox{ for all } p \in B(\nu, \delta) \cap H_R\}. \end{eqnarray*} where $N(\nu)$ consists of all points $D \in \Bbb Z^{n+1}$ such that $\|D-\nu\|_{\vec{\bf 1}} \leq 1$. We call $Ext(\Sigma(\Lambda))$, $Ext(\overline{\Sigma}_{\Bbb R}(\Lambda))$ and $Crit({\Lambda})$, the set of {\it extreme points} or {\it extreme divisors} of $\Sigma(\Lambda)$, $\overline{\Sigma}_{\Bbb R}(\Lambda)$ and the set of critical points of ${\Lambda}$, respectively. \end{definition} \begin{lemma} { \label{cone_comparison_lem} If $p,q \in H_R^{+}$, then $p\leq q$ if and only if $\Delta_R(p) \subseteq \Delta_R(q)$. In particular, $p< q$ if and only if $\Delta_R(p) \subsetneq int(\Delta_R(q))$. } \end{lemma} \begin{proof} { It is easy to see that $p \leq q$ if and only if $C^{-}(p) \subseteq C^{-}(q)$. Now the second part of Lemma~\ref{delta_simplex_easy_lem} implies that $C^{-}(p) \subseteq C^{-}(q)$ if and only if $(C^{-}(p) \cap H_R) \subseteq (C^{-}(q) \cap H_R)$. 
} \end{proof} An easy application of Lemma~\ref{sigma_closure} is that if $p \in Ext(\overline{\Sigma}_{\Bbb R}(\Lambda))$, then $p \not \in \Lambda$. The following theorem characterizes the set of extreme points of $\overline{\Sigma}_{\Bbb R}(\Lambda)$. \begin{theorem} { \label{extreme_sigma_closure_thm} If $p \in \overline{\Sigma}_{\Bbb R}(\Lambda) \setminus \Lambda$ then $p \in Ext(\overline{\Sigma}_{\Bbb R}(\Lambda))$ if and only if each facet of the simplex $\Delta_R(p)$ contains a point of $\Lambda$ in its interior. } \end{theorem} \begin{proof} { Assume that $p=(p_0, \dots, p_n) \in \overline{\Sigma}_{\Bbb R}(\Lambda) \setminus \Lambda$. Let $F_i$, $0 \leq i \leq n$ be the facets of $\Delta_R(p)$. Let $0 \leq i \leq n$ be such that $int(F_i)$ contains no point of $\Lambda$. By Lemma~\ref{delta_simplex_easy_lem} (ii), there exists an $\epsilon >0$ such that $\Delta_R(p+\epsilon e_i)$ does not contain any points of $\Lambda$ in its interior. Hence Lemma~\ref{cone_comparison_lem} and Lemma~\ref{sigma_closure} imply that $p+\epsilon e_i \in \overline{\Sigma}_{\Bbb R}(\Lambda)$. Since ${\rm deg}_R(p) < {\rm deg}_R(p+\epsilon e_i)$, the point $p$ is not an extreme point. Conversely, assume that $p \in \overline{\Sigma}_{\Bbb R}(\Lambda) \setminus \Lambda$ is such that the interior of each facet $F$ of $\Delta_R(p)$ contains a point of $\Lambda$. We claim that for any $v=(v_0, \dots, v_n) \in \Bbb R^{n+1}$, either ${\rm deg}_R(p+\epsilon v) \leq {\rm deg}_R(p)$ for all $\epsilon \geq 0$, or there exists $\lambda>0$ such that for all $0< \epsilon \leq \lambda$, $p+\epsilon v \not \in \overline{\Sigma}_{\Bbb R}(\Lambda)$. If $v \leq {\vec{\bf 0}}$, then for all $\epsilon \geq 0$, ${\rm deg}_R(p+\epsilon v) \leq {\rm deg}_R(p)$. Now, without loss of generality assume that $v_0 >0$. Suppose $x \in int(F)$ where $F=\Delta_R(p) \cap \{y \in \Bbb R^{n+1}: (y-p) \cdot e_0=0\}$.
Since $x \in int(F)$, we can pick $\lambda>0$ small enough such that for all $0 < \epsilon \leq \lambda $, $x \in int(\Delta_R(p+\epsilon v))$. Thus Lemma~\ref{cone_comparison_lem} and Lemma~\ref{sigma_closure} imply that $p+\epsilon v \not \in \overline{\Sigma}_{\Bbb R}(\Lambda)$ for all $0 < \epsilon \leq \lambda$. This completes the proof of the claim. It is easy to see that the proof of the theorem follows from the claim. } \end{proof} \begin{corollary} { \label{extreme_sigma_closure_integer_cor} $ Ext(\overline{\Sigma}_{\Bbb R}(\Lambda)) \subset \Bbb Z^{n+1}$. } \end{corollary} \begin{proof} { Let $p\in Ext(\overline{\Sigma}_{\Bbb R}(\Lambda))$. Theorem~\ref{extreme_sigma_closure_thm} shows that the interior of every facet $F$ of $\Delta_R(p)$ contains a point of $\Lambda$. Since $\Lambda \subseteq \Bbb Z^{n+1}$, the second part of Lemma~\ref{delta_simplex_easy_lem} implies that $p \in \Bbb Z^{n+1}$. } \end{proof} \begin{theorem} { \label{extreme_sigma_closure_sigma_thm} A divisor $\nu \in Ext(\Sigma(\Lambda))$ if and only if $\nu+{\vec{\bf 1}} \in Ext(\overline{\Sigma}_{\Bbb R}(\Lambda))$. } \end{theorem} \begin{proof} { Corollary~\ref{extreme_sigma_closure_integer_cor} implies that $Ext(\overline{\Sigma}_{\Bbb R}(\Lambda)) \subseteq \Bbb Z^{n+1}$. The theorem immediately follows from Lemma~\ref{sigma_closure_sigma_lem}. } \end{proof} The set of critical points of ${\Lambda}$ ($Crit({\Lambda})$ in Definition~\ref{extreme_critical_def}) is the set of local maxima of the function $d_{\Delta_R}(\cdot,{\Lambda})$. The following theorem characterizes critical points of ${\Lambda}$ in terms of extreme points of $\overline{\Sigma}_{{\mathbb R}}({\Lambda})$. \begin{theorem} { \label{extreme_sigma_closure_critical_thm} For $p \in H_R$, let $\lambda=d_{\Delta_R}(p,{\Lambda})$ and $p'=p+\lambda R$. Then $p' \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$ if and only if $p \in Crit({\Lambda})$.
} \end{theorem} \begin{proof} { If $p' \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$ then by Theorem~\ref{extreme_sigma_closure_thm} each facet of the simplex $\Delta_R(p+\lambda R)=p+\lambda \Delta_R$ contains a point of ${\Lambda}$ in its interior. This shows that $p \in Crit({\Lambda})$. Conversely, assume that $p \in Crit({\Lambda})$ and $p' \not \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$. As the proof of Theorem~\ref{extreme_sigma_closure_thm} shows, there exist $0 \leq i \leq n$ and $\delta>0$ such that for all $0<\epsilon \leq \delta$, $p'_{\epsilon}=p'+\epsilon e_i \in \overline{\Sigma}_{{\mathbb R}}({\Lambda})$. For each $0<\epsilon \leq \delta$, let $p_{\epsilon}=\pi(p'_{\epsilon})$ be the projection of $p'_{\epsilon}$ along $R$ into $H_R$. Lemma~\ref{distance_HR_sigma_closure_lem} implies that $d_{\Delta_R}(p_{\epsilon},{\Lambda}) \geq {p'_{\epsilon} \cdot R \over \|R\|^2}$, while $d_{\Delta_R}(p,{\Lambda})={p' \cdot R \over \|R\|^2}$. Since $p_{\epsilon}' \cdot R > p' \cdot R$, we conclude that $d_{\Delta_R}(p_{\epsilon},{\Lambda}) > d_{\Delta_R}(p,{\Lambda})$, a contradiction. } \end{proof} \begin{corollary} { \label{extreme_L_ciritical_cor} Let $\varphi: Ext(\Sigma({\Lambda})) \rightarrow Crit({\Lambda})$ be as follows: For any $\nu \in Ext(\Sigma({\Lambda}))$, $\varphi(\nu)=\pi(\nu+{\vec{\bf 1}})$. Then $\varphi$ is a bijection. } \end{corollary} \begin{proof} { This follows from Theorems~\ref{extreme_sigma_closure_critical_thm} and ~\ref{extreme_sigma_closure_sigma_thm}. } \end{proof} \begin{lemma} { \label{distance_HR_sigma_closure_lem} Let $p \in H_R$, $\lambda=d_{\Delta_R}(p,{\Lambda})$ and $\lambda'=\max\{t \geq 0: p+tR \in \overline{\Sigma}_{\Bbb R}({\Lambda})\}$. Then $\lambda=\lambda'$. } \end{lemma} \begin{proof} { First note that since $p \in \overline{\Sigma}_{\Bbb R}({\Lambda})$ and $\overline{\Sigma}_{\Bbb R}({\Lambda})$ is a closed set, $\max\{t \geq 0: p+tR \in \overline{\Sigma}_{\Bbb R}({\Lambda})\}$ is well-defined.
The first part of Lemma~\ref{delta_simplex_easy_lem} implies that $p+t\Delta_R=\Delta_R(p+tR)$. Now, for all $0 \leq t \leq \lambda$, by applying Lemma~\ref{sigma_closure} and Lemma~\ref{cone_comparison_lem}, we conclude that $p+tR \in \overline{\Sigma}_{\Bbb R}({\Lambda})$. So $\lambda' \geq \lambda$. Conversely, suppose $t \geq 0$ is such that ${\Lambda} \cap (p+t\Delta_R) \neq \emptyset$. Lemma~\ref{sigma_closure} and Lemma~\ref{cone_comparison_lem} imply that $p+tR \in \overline{\Sigma}_{\Bbb R}({\Lambda})$ if and only if ${\Lambda} \cap int(p+t\Delta_R) = \emptyset$. This shows that $\lambda' \leq \lambda$, completing the proof of the lemma. } \end{proof} \begin{lemma} { \label{dominate_by_extreme_lem} There exists a constant $C$ depending only on the lattice ${\Lambda}$ and the vector $R$ such that for any point $p \in \Sigma(\Lambda)$, we have: \begin{enumerate} \item[(i)] ${\rm deg}_R(p) \leq C$, \item[(ii)] there exists some $\nu \in Ext(\Lambda)$ such that $p \leq \nu$. \end{enumerate} } \end{lemma} \begin{proof} { $(i)$: First, we claim that there exists $c$ such that for all $p\in H_R, d_{\Delta_R}(p, \Lambda)\leq c.$ We start by noting that there exists a constant $K$ depending only on $R$ such that $d_{\Delta_R}(p,q) \leq K\cdot \|p-q\|$. This follows immediately by letting the constant $K$ be the reciprocal of the largest radius of a sphere in $H_R$ with center at the origin contained in $\Delta_R$. Let $\{l_0, \dots, l_{n-1}\}$ be a set of generators of $\Lambda$, and let $P$ be the parallelotope generated by $l_0, \dots, l_{n-1}$. Because the $\Delta_R$-distance function is invariant under translation by lattice points, it is sufficient to prove the claim for all $p \in P$. By letting $c$ be $K$ times the maximum $\ell^2$-distance from a point in $P$ to the vertices of $P$ (the diameter of $P$ in the $\ell^2$-norm), the claim is proved. To prove the first part, it is enough to show that for all $p \in H^+_R \cap \Sigma({\Lambda})$, ${\rm deg}_R(p) \leq C$.
Let $p'=\pi(p)$ and let $\lambda \geq 0$ be such that $p=p' +\lambda R$. Lemma~\ref{cone_comparison_lem} implies that $p \in \Sigma (\Lambda)$ if and only if $\Delta_R (p)$ contains no points of $\Lambda$. Lemma~\ref{distance_HR_sigma_closure_lem} and Theorem~\ref{extreme_sigma_closure_sigma_thm} imply that $\lambda \leq d_{\Delta_R}(p',{\Lambda})$, so $\lambda \leq c$. Therefore, ${\rm deg}_R(p) = \lambda \|R\|^2 \leq c \|R\|^2$. Hence we may take $C = c \|R\|^2$, which completes the proof of the first part. $(ii)$: Let $p\in \Sigma(\Lambda)$. The first part shows that the degrees of points in $\Sigma({\Lambda})$ are bounded above by $C$. Therefore $C^+(p) \cap \Sigma({\Lambda})$ is a finite set. This immediately shows that there exists $\nu \in Ext({\Lambda})$ such that $p \leq \nu$. To be more precise, one can find an extreme point $\nu \in Ext({\Lambda})$ greedily by starting at point $p$ and walking in positive directions as much as possible. } \end{proof} \begin{lemma} { \label{rank_degree_plus_lem} For any divisor $D\in {\mathbb Z}^{n+1}$, $r(D)=\min \{{\rm deg}_R^+(D-\nu) : \nu \in Ext(\Lambda)\}-1$. } \end{lemma} \begin{proof} { First we show that $\min \{{\rm deg}_R^+(D-\nu) : \nu \in Ext(\Lambda)\}\leq r(D)+1$. Let $E \geq {\vec{\bf 0}}$ with ${\rm deg}_R(E)=r(D)+1$ be such that $D-E \in \Sigma (\Lambda)$, where the existence of $E$ is guaranteed by Lemma~\ref{distance_rank_lem}. By Lemma~\ref{dominate_by_extreme_lem}, there exists $\nu \in Ext({\Lambda})$ such that $\nu \geq D-E$. Let $E' = \nu - (D-E)$. We claim that $E' \cdot E = 0$. Suppose not and assume there exists $0 \leq i \leq n$ such that $E_i, E'_i \geq 1$. Note that $D- (E- e_i) \in \Sigma (\Lambda)$ as $\nu \geq D- (E- e_i)$, but ${\rm deg}_R(E- e_i)<{\rm deg}_R(E)=r(D)+1$, a contradiction. This gives that ${\rm deg}_R^+(D-\nu)= {\rm deg}_R^+(E-E')={\rm deg}_R(E)=r(D)+1$. For proving the reverse inequality, let $\nu \in Ext(\Lambda)$ be such that ${\rm deg}_R^+(D-\nu)$ is minimum.
Because $\nu \geq \nu + (D-\nu)^-= D-(D-\nu)^+$, it follows that $D-(D-\nu)^+ \in \Sigma(\Lambda)$. Hence Lemma~\ref{distance_rank_lem} implies that $r(D)\leq \min\{{\rm deg}_R^+(D-\nu) : \nu \in Ext(\Lambda)\}-1$, which completes the proof. } \end{proof} \subsection{Riemann-Roch Theorem for Uniform and Reflection Invariant Sub-lattices of ${\Lambda}_R$} \begin{definition} { Let ${\Lambda}$ be a sub-lattice of ${\Lambda}_R$ of rank $n$, and $Ext(\Sigma({\Lambda}))$ be the set of extreme points of $\Sigma({\Lambda})$. Define \begin{eqnarray*} g_{\min} &=&\min\{{\rm deg}_R(\nu) : \nu \in Ext(\Sigma(\Lambda))\}+1, \\ g_{\max} &=&\max\{{\rm deg}_R(\nu) : \nu \in Ext(\Sigma(\Lambda))\}+1. \end{eqnarray*} We say the lattice ${\Lambda}$ is uniform if $g_{\min}=g_{\max}$. } \end{definition} \begin{definition} { Let ${\Lambda}$ be a sub-lattice of ${\Lambda}_R$ of rank $n$. We say ${\Lambda}$ is reflection invariant if $-Crit({\Lambda})$ is a translate of $Crit({\Lambda})$, i.e., if there exists $v \in {\mathbb R}^{n+1}$ such that $-Crit({\Lambda})=Crit({\Lambda})+v$. } \end{definition} \begin{definition} \label{canonical_def} { Let ${\Lambda}$ be a sub-lattice of dimension $n$ of ${\Lambda}_R$. We say a divisor $K \in {\mathbb Z}^{n+1}$ is a canonical divisor of ${\Lambda}$, or equivalently ${\Lambda}$ has a canonical divisor $K$, if for all divisors $D \in {\mathbb Z}^{n+1}$, $${\rm deg}_R(D)-3g_{\max}+2g_{\min}+1 \leq r(D)-r(K-D) \leq {\rm deg}_R(D)- g_{\min}+1.$$ } \end{definition} \begin{lemma} \label{bijection_min_lem} Suppose $\phi:\mathcal{A} \rightarrow \mathcal{A}'$ is a bijection between sets, and $f:\mathcal{A} \rightarrow {\mathbb Z}$ and $f':\mathcal{A}' \rightarrow {\mathbb Z}$ are functions whose values are bounded from below. 
If there exist constants $c_1,c_2 \in {\mathbb Z}$ such that for all $a \in \mathcal{A}$, $$c_1 \leq f(a)-f'(\phi(a)) \leq c_2,$$ then $$ c_1 \leq \min_{a \in \mathcal{A}} f(a) - \min_{a' \in \mathcal{A'}} f'(a') \leq c_2.$$ \end{lemma} \begin{proof} Since $f$ and $f'$ are integer-valued functions whose values are bounded from below, there exist $x \in \mathcal{A}$ and $y \in \mathcal{A}'$ such that $f(x)=\min_{a \in \mathcal{A}} f(a)$ and $f'(y)=\min_{a' \in \mathcal{A'}} f'(a')$. The choice of $x$ and $y$ implies that $f(x)-f'(y) \leq f(\phi^{-1}(y))-f'(y) \leq c_2$, and $f(x)-f'(y) \geq f(x)-f'(\phi(x)) \geq c_1$. Hence $c_1 \leq f(x)-f'(y) \leq c_2$, as desired. \end{proof} \begin{theorem} \label{reflection_invariant_inequality} Let ${\Lambda}$ be a reflection invariant sub-lattice of ${\Lambda}_R$ of rank $n$. Then ${\Lambda}$ has a canonical divisor, i.e. there exists a divisor $K$ such that for all $D \in \Bbb Z^{n+1}$, $$ {\rm deg}_R(D)-3g_{\max}+2g_{\min}+1 \leq r(D)-r(K-D) \leq {\rm deg}_R(D)- g_{\min}+1.$$ \end{theorem} \begin{proof} First we construct the canonical divisor $K$ and then we show it has the desired property. Since ${\Lambda}$ is reflection invariant, there exists a vector $v \in {\mathbb R}^{n+1}$ such that $-Crit({\Lambda})=Crit({\Lambda})+v$. Therefore there exists a bijection $\eta$ from $Crit({\Lambda})$ to itself such that $\eta(c)+c=v$. Let $\varphi:Ext(\Sigma({\Lambda})) \rightarrow Crit({\Lambda})$ be the bijection described in Corollary~\ref{extreme_L_ciritical_cor}. Define the bijection $\phi$ from $Ext(\Sigma({\Lambda}))$ to itself so that for all $\nu \in Ext(\Sigma({\Lambda}))$, $\phi(\nu)=\varphi^{-1}\eta\varphi(\nu)$. Since for all $\nu \in Ext(\Sigma({\Lambda}))$, ${\rm deg}_R(\nu+\phi(\nu)) \leq 2g_{\max}-2$, there exists $\nu_0 \in Ext(\Sigma({\Lambda}))$ such that ${\rm deg}_R(\nu_0+\phi(\nu_0))$ is as large as possible. Let the canonical divisor $K$ be $\nu_0+\phi(\nu_0)$.
For any $\nu \in Ext(\Sigma({\Lambda}))$, let $c=\varphi(\nu)$; then we have: $$\phi(\nu)+\nu=\phi(\varphi^{-1}(c))+\varphi^{-1}(c)=\varphi^{-1}\eta(c)+\varphi^{-1}(c)=\lambda R+v-2 \times \vec{\bf 1},$$ where $\lambda \in {\mathbb R}$ is a constant depending on $\nu$ (or equivalently $c$). Hence, the choice of $K$ implies that for any $\nu \in Ext(\Sigma({\Lambda}))$, there exists $E_{\nu} \in {\mathbb R}^{n+1}_+$ such that $\phi(\nu)+\nu+E_{\nu}=K$. Therefore, for all divisors $D \in {\mathbb Z}^{n+1}$ and $\nu \in Ext(\Sigma({\Lambda}))$ we have: \begin{eqnarray*} {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(K-D-\phi(\nu)) &=&{\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\phi(\nu)+\nu+E_{\nu}-D-\phi(\nu)) \\ &=& {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\nu+E_{\nu}-D)\\ & \leq & {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\nu-D) \\ & = & {\rm deg}_R(D)-{\rm deg}_R(\nu)\\ & \leq & {\rm deg}_R(D)-g_{\min}+1. \end{eqnarray*} Note that for all $\nu \in Ext(\Sigma({\Lambda}))$, ${\rm deg}_R(E_\nu)={\rm deg}_R(K)-{\rm deg}_R(\nu+\phi(\nu)) \leq 2g_{\max}-2g_{\min}$. Hence, \begin{eqnarray*} {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(K-D-\phi(\nu)) &=&{\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\phi(\nu)+\nu+E_{\nu}-D-\phi(\nu)) \\ &=& {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\nu+E_{\nu}-D)\\ & \geq & {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(\nu-D) - 2(g_{\max}-g_{\min}) \\ & = & {\rm deg}_R(D)-{\rm deg}_R(\nu)-2g_{\max}+2g_{\min}\\ & \geq & {\rm deg}_R(D)-3g_{\max}+2g_{\min}+1.
\end{eqnarray*} Therefore for all $D \in {\mathbb Z}^{n+1}$ and all $\nu \in Ext(\Sigma({\Lambda}))$, $${\rm deg}_R(D)-3g_{\max}+2g_{\min}+1 \leq {\rm deg}^+_R(D-\nu) - {\rm deg}^+_R(K-D-\phi(\nu)) \leq {\rm deg}_R(D)-g_{\min}+1.$$ For a fixed $D \in {\mathbb Z}^{n+1}$, ${\rm deg}_R(D)-3g_{\max}+2g_{\min}+1$ and $ {\rm deg}_R(D)-g_{\min}+1$ are constant integers, ${\rm deg}^+_R(D-\nu)$ and ${\rm deg}^+_R(K-D-\phi(\nu))$ are integer-valued functions bounded from below by zero, and $\phi$ is a bijection from $Ext(\Sigma({\Lambda}))$ to itself, hence Lemma~\ref{bijection_min_lem} implies that $${\rm deg}_R(D)-3g_{\max}+2g_{\min}+1 \leq \min_{\nu \in Ext(\Sigma({\Lambda}))}{\rm deg}^+_R(D-\nu) - \min_{\nu \in Ext(\Sigma({\Lambda}))}{\rm deg}^+_R(K-D-\nu) \leq {\rm deg}_R(D)-g_{\min}+1.$$ The assertion of the theorem now follows from Lemma~\ref{rank_degree_plus_lem}. \end{proof} \begin{definition} { Let ${\Lambda}$ be a uniform sub-lattice of dimension $n$ of ${\Lambda}_R$. We say ${\Lambda}$ has the Riemann-Roch property if there exists a divisor $K$ with degree $2g-2$, where $g=g_{\min}=g_{\max}$, such that for all divisors $D \in {\mathbb Z}^{n+1}$: $$r(D)-r(K-D)={\rm deg}_R(D)-g+1.$$ } \end{definition} \begin{theorem} \label{RI_equiv_RR_prop_thm} Let ${\Lambda}$ be a uniform sub-lattice of dimension $n$ of ${\Lambda}_R$. Then ${\Lambda}$ is reflection invariant if and only if ${\Lambda}$ has the Riemann-Roch property. \end{theorem} \begin{proof} Assume ${\Lambda}$ is reflection invariant and let $K$ be the canonical divisor obtained in the proof of Theorem~\ref{reflection_invariant_inequality}. By applying Theorem~\ref{reflection_invariant_inequality}, it is enough to show that ${\rm deg}_R(K)=2g-2$. The construction of $K$ shows that $K=\nu + \phi(\nu)$ for some $\nu \in Ext(\Sigma({\Lambda}))$, where $\phi$ is the bijection obtained in the proof of Theorem~\ref{reflection_invariant_inequality}. Since ${\Lambda}$ is uniform, $g_{\min}=g_{\max}=g$.
Hence ${\rm deg}_R(\nu)={\rm deg}_R(\phi(\nu))=g-1$ and this implies that ${\rm deg}_R(K)=2g-2$. Now, assume that ${\Lambda}$ has the Riemann-Roch property. Assume $\nu$ is an extreme divisor of $\Sigma({\Lambda})$, so the first part of Lemma~\ref{distance_rank_lem} implies that $r(\nu)=-1$. Since ${\Lambda}$ is uniform ${\rm deg}_R(\nu)=g-1$ and this shows that $r(K-\nu)=r(\nu)=-1$. By Lemma~\ref{distance_rank_lem}, $K-\nu \in \Sigma({\Lambda})$, and, since its degree $g-1$ is as large as possible in $\Sigma({\Lambda})$, it is hence an extreme divisor of $\Sigma({\Lambda})$. Hence the function $\psi$ defined as $\psi(\nu)=K-\nu$ for all $\nu \in Ext({\Lambda})$ is a bijection from $Ext({\Lambda})$ to itself. If $\varphi$ is the function defined in Corollary~\ref{extreme_L_ciritical_cor}, the function $\varphi \circ \psi \circ \varphi^{-1}$ is a bijection from $Crit({\Lambda})$ to itself. It is easy to see that for any $p \in Crit({\Lambda})$, $\varphi(\psi(\varphi^{-1}(p)))=-p+\pi(K)+2\pi({\vec{\bf 1}})$, and by picking $v=-\pi(K)-2\pi({\vec{\bf 1}})$, we have $-Crit({\Lambda})=Crit({\Lambda})+v$. \end{proof} \begin{definition} { We say a sub-lattice ${\Lambda}$ of ${\Lambda}_R$ has the Riemann-Roch formula if there exists an integer $m \in {\mathbb Z}$ and a divisor $K$ of degree $2m-2$ such that for all $D \in {\mathbb Z}^{n+1}$: $$r(D)-r(K-D)={\rm deg}_R(D)-m+1.$$ } \end{definition} \begin{theorem} \label{RR_formula_equiv_U_RI_thm} Let ${\Lambda}$ be a sub-lattice of dimension $n$ of ${\Lambda}_R$. Then ${\Lambda}$ has a Riemann-Roch formula if and only if ${\Lambda}$ is uniform and reflection invariant, in particular ${\Lambda}$ has the Riemann-Roch property. \end{theorem} \begin{proof} If ${\Lambda}$ is uniform and reflection invariant, then Theorem~\ref{RI_equiv_RR_prop_thm} implies that ${\Lambda}$ has the Riemann-Roch property and therefore ${\Lambda}$ has the Riemann-Roch formula with $m=g_{\max}$. To prove the other direction, by Theorem~\ref{RI_equiv_RR_prop_thm} it is enough to show that ${\Lambda}$ is uniform and $m=g_{\max}$.
First, we show that $m=g_{\max}$. Let $D$ be a divisor with ${\rm deg}_R(D) \geq m$. The Riemann-Roch formula implies that $r(D)-r(K-D) \geq 1$ and since $r(K-D) \geq -1$, we have $r(D) \geq 0$. It follows that $g_{\max} \leq m$. We know that for any divisor $D \in {\mathbb Z}^{n+1}$, if the degree of $D$ is more than $g_{\max}-1$ then the divisor is equivalent to an effective divisor, so ${\rm deg}_R(D)-r(D) \leq g_{\max}$. On the other hand, if ${\rm deg}_R(D) >2m-2$, then ${\rm deg}_R(K-D) < 0$, therefore $r(K-D)=-1$. The Riemann-Roch formula implies that ${\rm deg}_R(D)-r(D) = m$. Therefore, $m \leq g_{\max}$. This shows that $m=g_{\max}$. To prove uniformity, suppose for the sake of contradiction that there exists $\nu \in Ext(\Sigma({\Lambda}))$ with ${\rm deg}_R(\nu) < g_{\max}-1$. Since ${\rm deg}_R(K)=2g_{\max}-2$, ${\rm deg}_R(K-\nu) \geq g_{\max}$, so $K-\nu \not \in \Sigma({\Lambda})$ and, by Lemma~\ref{distance_rank_lem}, $K-\nu$ is equivalent to an effective divisor. The Riemann-Roch formula implies that $r(K-\nu)=g_{\max}-{\rm deg}_R(\nu)-2$, so there exists an effective divisor $E$ of degree $g_{\max}-{\rm deg}_R(\nu)-1>0$ such that $|K-\nu-E|=\emptyset$. We claim that $\nu+E$ is not equivalent to an effective divisor. The Riemann-Roch formula implies that $r(\nu+E)-r(K-\nu-E)={\rm deg}_R(\nu+E)-g_{\max}+1=0$ and therefore $r(\nu+E)=-1$. By Lemma~\ref{distance_rank_lem}, $\nu+E \in \Sigma({\Lambda})$, contradicting the fact that $\nu \in Ext(\Sigma({\Lambda}))$. \end{proof} \subsection{Riemann-Roch Theorem for Sub-lattices of ${\Lambda}_R$ and ${\Lambda}_{\vec{\bf 1}}$} Let $R=(r_0, \dots, r_n) \in {\mathbb N}^{n+1}$ and $\mathcal{R}={\rm diag}(r_0, \dots, r_n)$ be a matrix mapping ${\Lambda}_R$ to ${\Lambda}_{\vec{\bf 1}}$. To be more precise, for any $p \in {\Lambda}_R$ the image of $p$ is $\mathcal{R}p$. For any set $S \subseteq {\mathbb R}^{n+1}$, let $\mathcal{R}S$ denote the set $\{\mathcal{R}p: p \in S\}$.
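As a quick sanity check, the action of the diagonal matrix $\mathcal{R}$ and the degree identity ${\rm deg}_R(D)={\rm deg}_{\vec{\bf 1}}(\mathcal{R}D)$ can be sketched in a few lines of Python. The weight vector $R$ and the divisor $D$ below are arbitrary illustrative choices, not data from the text:

```python
# Sketch of the scaling map p -> Rp, with R = diag(r_0, ..., r_n).
# The weight vector R and divisor D are arbitrary illustrative choices.

R = [2, 3, 5]            # r_0, ..., r_n (positive integers)
D = [4, -1, 2]           # a divisor in Z^{n+1}

def deg(weights, divisor):
    """deg_W(D) = sum_i W_i * D_i; deg_R for W = R, deg_1 for W = (1, ..., 1)."""
    return sum(w * d for w, d in zip(weights, divisor))

def scale(R, p):
    """Apply the diagonal matrix diag(r_0, ..., r_n) to a point p."""
    return [r * x for r, x in zip(R, p)]

RD = scale(R, D)
# deg_R(D) and deg_1(RD) agree, since both equal sum_i r_i * D_i.
assert deg(R, D) == deg([1] * len(R), RD)
```

The identity holds term by term, which is why $\mathcal{R}$ transports degree statements about sub-lattices of ${\Lambda}_R$ to sub-lattices of ${\Lambda}_{\vec{\bf 1}}$.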
It is easy to see that if ${\Lambda} \subseteq {\Lambda}_R$ is a sub-lattice of dimension $n$ then $\mathcal{R}{\Lambda}$ is a sub-lattice of ${\Lambda}_{\vec{\bf 1}}$ of dimension $n$. \begin{lemma} \label{lem:sigma_R_one} Let ${\Lambda}$ be a sub-lattice of dimension $n$ of ${\Lambda}_R$. Then $\mathcal{R}\Sigma({\Lambda})=\Sigma(\mathcal{R}{\Lambda})$. \end{lemma} The proof of the above lemma follows easily from Definition~\ref{sigma_region_def} and the fact that $\mathcal{R}$ is an invertible matrix with positive diagonal entries. \begin{lemma} \label{lem:extremesigmaclosure_R_one} Let ${\Lambda}$ be a sub-lattice of dimension $n$ of ${\Lambda}_R$. Then $\mathcal{R}Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))=Ext(\overline{\Sigma}_{{\mathbb R}}(\mathcal{R}{\Lambda}))$. \end{lemma} \begin{proof} Let $\nu \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$, so there exists some $\delta >0$ such that for all $p \in B(\nu, \delta) \cap \overline{\Sigma}_{{\mathbb R}}({\Lambda})$, ${\rm deg}_R(\nu) \geq {\rm deg}_R(p)$. Let $\delta' = \delta$. It is easy to see that if $q \in B(\mathcal{R}\nu, \delta')$, then $\mathcal{R}^{-1}q \in B(\nu, \delta)$, since the diagonal entries of $\mathcal{R}^{-1}$ are at most $1$. Hence ${\rm deg}_R(\mathcal{R}^{-1}q) \leq {\rm deg}_R(\nu)$ and therefore ${\rm deg}_{\vec{\bf 1}}(q) \leq {\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu)$. Here we have used the fact that for any $D \in {\mathbb Z}^{n+1}$, ${\rm deg}_R(D)={\rm deg}_{\vec{\bf 1}}(\mathcal{R}D)$ and Lemma~\ref{lem:sigma_R_one}. This proves that $\mathcal{R}Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda})) \subseteq Ext(\overline{\Sigma}_{{\mathbb R}}(\mathcal{R}{\Lambda}))$. The other direction is proved similarly. \end{proof} The following corollary immediately follows from Lemma~\ref{lem:extremesigmaclosure_R_one} and Theorem~\ref{extreme_sigma_closure_sigma_thm}. \begin{corollary} \label{cor:uniformity_R_one} Let ${\Lambda}$ be a sub-lattice of dimension $n$ of ${\Lambda}_R$.
Then ${\Lambda}$ is uniform if and only if $\mathcal{R}{\Lambda} \subseteq {\Lambda}_{\vec{\bf 1}}$ is uniform. \end{corollary} \begin{lemma} \label{lem:reflection_R_one} Let ${\Lambda}$ be a uniform sub-lattice of dimension $n$ of ${\Lambda}_R$. Then ${\Lambda}$ is reflection invariant if and only if $\mathcal{R}{\Lambda} \subseteq {\Lambda}_{\vec{\bf 1}}$ is reflection invariant. \end{lemma} \begin{proof} First suppose ${\Lambda}$ is reflection invariant. Then there exists a vector $v \in {\mathbb R}^{n+1}$ such that $-Crit({\Lambda})=Crit({\Lambda})+v$. By applying Lemma~\ref{lem:extremesigmaclosure_R_one} and Theorem~\ref{extreme_sigma_closure_critical_thm}, let $\mathcal{R}\nu-\vec{\bf 1}-{\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu - \vec{\bf 1})\vec{\bf 1}$ be an arbitrary point of $Crit(\mathcal{R}{\Lambda})$ where $\nu$ is an arbitrary point of $Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$. Now, by applying Theorem~\ref{extreme_sigma_closure_critical_thm}, $$\nu-\vec{\bf 1}-{\rm deg}_R(\nu-\vec{\bf 1})R \in Crit({\Lambda}).$$ Since ${\Lambda}$ is reflection invariant, there exists $\nu' \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$ such that $$-\nu+\vec{\bf 1}+{\rm deg}_R(\nu-\vec{\bf 1})R=\nu'-\vec{\bf 1}-{\rm deg}_R(\nu'-\vec{\bf 1})R+v,$$ therefore $$-\mathcal{R}\nu+\mathcal{R}\vec{\bf 1}+{\rm deg}_R(\nu-\vec{\bf 1})\mathcal{R}R=\mathcal{R}\nu'-\mathcal{R}\vec{\bf 1}-{\rm deg}_R(\nu'-\vec{\bf 1})\mathcal{R}R+\mathcal{R}v.$$ Since ${\Lambda}$ is uniform, ${\rm deg}_R(\nu-\vec{\bf 1})$ is a constant independent of the choice of $\nu \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$. Hence, $\mathcal{R}\nu+\mathcal{R}\nu'=u$ where $u$ is a constant vector in ${\mathbb R}^{n+1}$ that does not depend on $\nu$ or $\nu'$. Since $\mathcal{R}{\Lambda}$ is uniform, ${\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu-\vec{\bf 1})$ is a constant independent of the choice of $\nu \in Ext(\overline{\Sigma}_{{\mathbb R}}({\Lambda}))$.
This shows that $$-\left(\mathcal{R}\nu-\vec{\bf 1}-{\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu-\vec{\bf 1})\vec{\bf 1}\right)=\left(\mathcal{R}\nu'-\vec{\bf 1}-{\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu'-\vec{\bf 1})\vec{\bf 1}\right)-u+2{\rm deg}_{\vec{\bf 1}}(\mathcal{R}\nu-\vec{\bf 1})\vec{\bf 1}+2\times \vec{\bf 1}.$$ Hence $\mathcal{R}{\Lambda}$ is reflection invariant. The other direction is proved similarly. \end{proof} Recall the definition of the canonical divisor (Definition~\ref{canonical_def}) and the argument in the proof of Theorem~\ref{reflection_invariant_inequality} in constructing a canonical divisor for a reflection invariant sub-lattice of ${\Lambda}_R$. So we can consider the following corollary as a consequence of Theorem~\ref{extreme_sigma_closure_sigma_thm}, Lemma~\ref{lem:extremesigmaclosure_R_one}, and Lemma~\ref{lem:reflection_R_one}. \begin{corollary} \label{cor:canonical_1_R} Let ${\Lambda}$ be a reflection invariant sub-lattice of dimension $n$ of ${\Lambda}_{R}$. If $K$ is a canonical divisor of $\mathcal{R}{\Lambda}$ then $\mathcal{R}^{-1}(K+2\times \vec{\bf 1})-2\times \vec{\bf 1}$ is a canonical divisor of ${\Lambda}$. \end{corollary} The following theorem immediately follows from Theorem~\ref{RR_formula_equiv_U_RI_thm}, Corollary~\ref{cor:uniformity_R_one} and Lemma~\ref{lem:reflection_R_one}. \begin{theorem} \label{thm:RR_R_one} Let ${\Lambda}$ be a uniform sub-lattice of dimension $n$ of ${\Lambda}_R$. Then ${\Lambda}$ has the Riemann-Roch property if and only if $\mathcal{R}{\Lambda} \subseteq {\Lambda}_{\vec{\bf 1}}$ has the Riemann-Roch property. \end{theorem} \section{Chip-Firing Game on Directed Graphs \label{chip_sec}} \subsection{Row Chip-Firing Game, The Sandpile Model and Riemann-Roch Theory} Let $\vec{G}$ be a directed graph with vertex set $\{v_0, \dots, v_n \}$ and adjacency matrix $\vec{A}$ whose entry $\vec{A}_{i,j}$ for $0\leq i,j \leq n$ is the number of edges directed from $v_i$ to $v_j$. Let $\vec{\mathcal{D}}={\rm diag}({\rm deg}^+(v_0), \dots, {\rm deg}^+(v_n))$ where ${\rm deg}^+(v)$ denotes the number of edges leaving vertex $v \in V(\vec{G})$.
We call the matrix $\vec{Q}=\vec{\mathcal{D}}-\vec{A}$ the {\it Laplacian matrix} of the directed graph $\vec{G}$. We define ${\Lambda}_{\vec{G}}$ to be the lattice spanned by the rows of $\vec{Q}$. In this section we study the following row chip-firing game on vertices of $\vec{G}$. Begin with $D \in {\mathbb Z}^{n+1}$, which we call a configuration or a divisor, whose $i$th entry $D(v_i)$ is the number of chips at vertex $v_i$. In each {\it move} of the game either a vertex {\it borrows} or {\it fires}. We say a vertex {\it fires} if it sends a chip along each of its outgoing edges to its neighbors and {\it borrows} if it receives a chip along each of its incoming edges from its neighbors. We say that a vertex is in {\it debt} if the number of chips at that vertex is negative. The objective of the game is to bring every vertex out of debt by some sequence of moves. Note that the game is ``commutative'' in the sense that the order of firings and borrowings does not affect the final configuration. For $f \in {\mathbb Z}^{n+1}$, we may interpret the divisor $D'=D-\vec{Q}^Tf$ as the divisor obtained from $D$ by a sequence of moves in which the vertex $v_i$ fires $f(v_i)$ times if $f(v_i) \geq 0$ and borrows $-f(v_i)$ times if $f(v_i) \leq 0$. We refer to $f$ as a {\it firing strategy}. Note that both firing strategies and divisors are vectors in ${\mathbb Z}^{n+1}$. We say a configuration is a {\it winning configuration} if all of the vertices are out of debt. We call a sequence of moves which achieves a winning configuration a {\it winning strategy}. The question of whether a winning strategy exists is equivalent to the question of whether there exists a firing strategy $f\in {\mathbb Z}^{n+1}$ and an effective divisor $E \in {\mathbb Z}_{\geq 0}^{n+1}$ such that $E=D-\vec{Q}^T f$, i.e., $D-E \in {\Lambda}_{\vec{G}}$; equivalently, $|D| \neq \emptyset$, i.e., $r(D)\geq 0$. In what follows we will restrict our attention to strongly connected directed graphs.
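The moves of the row chip-firing game can be sketched directly from these definitions. The following is a minimal Python sketch; the 3-vertex digraph, the starting divisor, and the helper name `apply_strategy` are illustrative choices, not data from the text:

```python
# Row chip-firing on a small digraph (illustrative choice):
# edges v0->v1, v0->v2, v1->v2, v2->v0.
n = 3
A = [[0, 1, 1],
     [0, 0, 1],
     [1, 0, 0]]                      # A[i][j] = number of edges from v_i to v_j
outdeg = [sum(row) for row in A]
Q = [[(outdeg[i] if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]              # Laplacian Q = D - A

def apply_strategy(D, f):
    """Return D - Q^T f: vertex v_i fires f[i] times (borrows -f[i] times if f[i] < 0)."""
    return [D[j] - sum(Q[i][j] * f[i] for i in range(n)) for j in range(n)]

D = [0, -1, 2]                       # v_1 starts in debt
D1 = apply_strategy(D, [1, 0, 0])    # fire v_0 once: it sends one chip each to v_1, v_2
assert D1 == [-2, 0, 3]
assert sum(D1) == sum(D)             # the total number of chips is invariant
```

Since the rows of $\vec{Q}$ sum to zero, every move preserves the total number of chips, and borrowing is the exact inverse of firing: applying the strategy $-f$ undoes $f$.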
The main motivation for this consideration is given in the following lemma which, interpreted combinatorially, characterizes strongly connected digraphs in terms of which firings leave a divisor unaffected. \begin{lemma} \label{lem:leftkernel} A directed graph $\vec{G}$ is strongly connected if and only if there exists a vector $R \in {\mathbb N}^{n+1}$, unique up to multiplication by a real constant, such that ${\vec{Q}}^TR=0$. \end{lemma} \begin{proof} { Suppose $\vec{G}$ is strongly connected. For the sake of contradiction suppose there exists $R \not \geq 0$ such that ${\vec{Q}}^TR=0$. Let $V^+$ be the set of vertices $v$ of $\vec{G}$ with $R(v) > 0$. Let $D=\vec{Q}^TR$. Since the net amount of chips leaving $V^+$ is positive, there must exist some $v \in V^+$ such that $D(v)<0$, a contradiction. Now assume there exist two linearly independent vectors $R_1$ and $R_2$ with ${\vec{Q}}^TR_1={\vec{Q}}^TR_2=0$; then it is easy to see that there exists a linear combination of $R_1$ and $R_2$, say $R$, such that $R \not \geq 0$. This proves the uniqueness. Note that we can take $R$ to be an integral vector. Conversely, suppose $\vec{G}$ is not strongly connected. Let $V_1, \dots, V_t$ be the decomposition of vertices of $\vec{G}$ into maximal strongly connected components. Without loss of generality, let $V_1$ be a set of vertices such that there are no edges from $u$ to $v$ where $u \in V_i$, $2 \leq i \leq t$ and $v \in V_1$. As above there exists $v \in V_1$ such that $({\vec{Q}}^TR)(v)<0$, a contradiction. } \end{proof} \subsubsection{\label{reduceddiv_sec}Reduced Divisors} Let $f, f' \in \Bbb Z^{n+1}$ be firing strategies. We define an equivalence relation $\approx$ on $\Bbb Z^{n+1}$ by declaring $f \approx f'$ if $\vec{Q}^T(f-f')=\vec{\bf 0}$. For any set $S \subseteq V(\vec{G})$, the {\it characteristic vector of} $S$, denoted by $\chi_{S}$, is the vector $\sum_{v_i \in S}e_i$.
We say a vector $f \in \Bbb Z^{n+1}$ is a {\it natural} firing strategy if $f \leq R$, and $f \not \leq {\vec{\bf 0}}$. We say a nonzero vector $f \in \Bbb Z^{n+1}$ is a {\it valid} firing strategy with respect to $v_0$ if $f(v_0)=0$, and ${\vec{\bf 0}} \leq f \leq R$. The following lemma is an immediate consequence of Lemma~\ref{lem:leftkernel}. \begin{lemma} \label{natural_lemma} Let $f \in \Bbb Z^{n+1}$ be a nonzero firing strategy; then there exists a unique $f' \in \Bbb Z^{n+1}$ such that $f \approx f'$ and $f'$ is a natural firing strategy. \end{lemma} \begin{definition} { \label{reduced_def} Let $\vec{G}$ be a directed graph. We call a divisor $D$ $v_0$-reduced if the following two conditions hold: \begin{enumerate} \item[(i)] for all $v \in V(\vec{G})\setminus \{v_0\}, D(v)\geq 0$, \item[(ii)] for every valid firing strategy $f$ with respect to $v_0$, there exists a vertex $v \in V(\vec{G}) \setminus \{v_0\}$ such that $(D-\vec{Q}^Tf)(v) <0$. \end{enumerate} } \end{definition} The following remark immediately follows from Definition~\ref{reduced_def}. \begin{remark} \label{reduced_rem} If $D' \sim D$ is a $v_0$-reduced divisor then for all $k \in {\mathbb Z}$, $D'+k\chi_{\{v_0\}}$ is a $v_0$-reduced divisor and $D'+k\chi_{\{v_0\}} \sim D+k\chi_{\{v_0\}}$. \end{remark} \begin{lemma} { \label{firing_reduced_lemma}Let $D$ be a $v_0$-reduced divisor and let $f$ be a firing strategy such that $f(v_0) \leq 0$ and $f(v) >0$ for some vertex $v \in V(\vec{G}) \setminus \{v_0\}$. Then there exists $v \in V(\vec{G})\setminus \{v_0\}$ such that $(D-\vec{Q}^Tf)(v)<0$. } \end{lemma} \begin{proof} { Lemma~\ref{natural_lemma} implies that there exists a natural firing strategy $f' \approx f$ with $f'(v_0)\leq f(v_0)=0$. Suppose $f^+$ and $f^-$ are the positive and negative parts of $f'$, so that $f'=f^++f^-$ with $f^+ \geq {\vec{\bf 0}}$ and $f^- \leq {\vec{\bf 0}}$. It is easy to see that $f^+$ is a valid firing strategy with respect to $v_0$. Hence there exists a vertex $v \in V(\vec{G}) \setminus \{ v_0\}$ such that $(D-\vec{Q}^Tf^+)(v)<0$.
Therefore, $$(D-\vec{Q}^Tf)(v)= (D-\vec{Q}^Tf')(v)= (D-\vec{Q}^T f^+ -\vec{Q}^T f^-)(v) \leq (D- \vec{Q}^T f^+)(v) <0.$$ } \end{proof} \begin{lemma} { \label{reduce_exist_lemma} Let $\vec{G}$ be a directed graph and let $D$ be a divisor. Then there exists a divisor $D' \sim D$ such that $D'$ is $v_0$-reduced. } \end{lemma} \begin{proof} { The proof that we present here is similar to the proof presented by Baker and Norine~\cite{BN07} (\S 3.1). The process of obtaining a $v_0$-reduced divisor $D' \sim D$ has two steps: first we bring every $v \in V(\vec{G}) \setminus \{v_0\}$ out of debt, so that it satisfies the first condition of Definition~\ref{reduced_def}, and then we ``reduce'' the divisor with respect to $v_0$, in order to satisfy the second condition of Definition~\ref{reduced_def}. To perform the first step, define $d(v)$, for all $v \in V(\vec{G}) \setminus \{v_0\}$, to be the length of the shortest directed path from $v_0$ to $v$. Let $d=\max_{v \in V(\vec{G}) \setminus \{v_0\}} d(v)$. For all $1 \leq i \leq d$, define $A_i=\{v \in V(\vec{G}): d(v)=i\}$. Now we bring the $A_i$'s out of debt consecutively, starting at $A_d$. We recursively define sequences of integers $b_i$ and divisors $D_i$ as follows. Let $b_d=\max\left(\{-D(v): v \in A_d, D(v) \leq 0\} \cup \{0\}\right)$. Define $D_d=D-\vec{Q}^Tf_d$ where $f_d$ is the all-zero vector except that $f_d(v_j)=b_d$ for $v_j \not \in A_d$. It is easy to see that $D_d(v_j) \geq 0$ for all $v_j \in A_d$. Now suppose $1 \leq i \leq d-1$, and define $b_i=\max\left(\{-D_{i+1}(v): v \in A_i, D_{i+1}(v) \leq 0\} \cup \{0\}\right)$. Define $D_i=D_{i+1}-\vec{Q}^Tf_i$ where $f_i$ is the all-zero vector except that $f_i(v_j)=b_i$ for $v_j \not \in \bigcup_{k=i}^d A_k$. It is easy to see that $D_i(v_j) \geq 0$ for all $v_j \in A_i$ and $D_i(v_j)=D_{i+1}(v_j)$ for all $v_j \in \bigcup_{k=i+1}^d A_k$. Since the procedure consists of $d$ steps, it terminates.
It is easy to verify that $D_1 \sim D$ is a divisor such that no vertex other than $v_0$ is in debt. This completes the description of the first step. Now we are going to explain the second step. Let $D'=D_1$ be the divisor obtained from the first step. While there exists a valid firing strategy $f$ with respect to $v_0$ such that $(D'-\vec{Q}^Tf)(v) \geq 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$, replace $D'$ by $D'-\vec{Q}^Tf$. If we show that the procedure terminates, it is obvious that $D'$ is a $v_0$-reduced divisor. Since $f(v_0)=0$ for any valid firing strategy with respect to $v_0$, the vertex $v_0$ never fires; as the total number of chips is conserved and every other vertex remains out of debt, $v_0$ must stop receiving money at some point. After this point none of the vertices with an edge into $v_0$ fires, so they must eventually stop receiving money. By iterating this argument we see that, since $v_0$ is reachable from every vertex, each vertex must stop receiving money at some point. Hence, the above procedure terminates at a $v_0$-reduced divisor. } \end{proof} \begin{corollary} { \label{positive_divisor_reduction_cor} Let $D$ be a divisor satisfying property (i) in Definition~\ref{reduced_def}. Then there exists a sequence of valid firings $f_1, \dots, f_k$ with respect to $v_0$ such that $D'=D-\vec{Q}^T(\sum_{i=1}^k f_i)$ is $v_0$-reduced. } \end{corollary} \begin{lemma} { \label{r_0_distinct_v_0_lem} For any divisor $D$, there exist exactly $r_0$ distinct $v_0$-reduced divisors equivalent to $D$. } \end{lemma} \begin{proof} { First, we show that there exist at most $r_0$ distinct reduced divisors equivalent to $D$. Suppose not; then by the pigeonhole principle, there exist two distinct reduced divisors $D'=D-\vec{Q}^Tf'$ and $D''=D-\vec{Q}^Tf''$ with $f'(v_0) \equiv f''(v_0) \pmod{r_0}$. Pick $k \in {\mathbb Z}$ so that $(f'-f''-kR)(v_0)=0$ and let $f^*=f'-f''-kR$. By our assumption $D' \neq D''$ and so $\vec{Q}^T(f'-f'') \neq 0$. Hence by Lemma~\ref{lem:leftkernel}, either $f^*$ or $-f^*$ satisfies the assumptions of Lemma~\ref{firing_reduced_lemma}.
Without loss of generality, suppose $f^*$ satisfies the assumption of Lemma~\ref{firing_reduced_lemma}. But $D'=D''-\vec{Q}^Tf^*$ is a $v_0$-reduced divisor, contradicting Definition~\ref{reduced_def}(i). Now, we show that there exist at least $r_0$ distinct reduced divisors equivalent to $D$. Lemma~\ref{reduce_exist_lemma} implies that there exists at least one $v_0$-reduced divisor equivalent to $D$, so if $r_0 =1$ we are done. Therefore for the rest of the proof we will assume that $r_0 > 1$. Take a $v_0$-reduced divisor $D'\sim D$ and observe that $D''=D'-\vec{Q}^T(\chi_{\{v_0\}})$ satisfies the condition (i) of Definition~\ref{reduced_def}. Hence Corollary~\ref{positive_divisor_reduction_cor} implies that $D''$ can be reduced without firing $v_0$ to achieve a new reduced divisor from $D'$. We can acquire $r_0$ $v_0$-reduced divisors equivalent to $D$ by repeated application of this method. We claim that all of the $v_0$-reduced divisors obtained are distinct. Suppose there exist $0\leq i < j <r_0$ and firing strategies $f'$ and $f''$ such that $f'(v_0)=i$, $f''(v_0)=j$, and $D^*=D'-\vec{Q}^Tf'=D'-\vec{Q}^Tf''$ is $v_0$-reduced. This implies that $\vec{Q}^T(f''-f')={\vec{\bf 0}}$ but $0<(f''-f')(v_0)<r_0$, contradicting the statement of Lemma~\ref{lem:leftkernel}. } \end{proof} \begin{corollary} { \label{unique_firing_reduced_cor} Let $\vec{G}$ be a directed graph and let $D$ be a divisor. There exist $r_0$ $v_0$-reduced divisors $D_i=D-\vec{Q}^Tf_i$ where $f_i(v_0)=i$ for all $0 \leq i \leq r_0-1$. } \end{corollary} \begin{lemma} { \label{extreme_reduced_effective lem} Let $\vec{G}$ be a directed graph and let $D$ be a divisor. Then \begin{itemize} \item[(i)] $D$ is equivalent to an effective divisor if and only if there exists a $v_0$-reduced divisor $D' \sim D$ such that $D'$ is effective; \item[(ii)] Suppose $D$ is not equivalent to an effective divisor. 
Then $D$ is an extreme divisor if and only if for any $v \in V(\vec{G})$, there exists a $v$-reduced divisor $D' \sim D$ such that $D'(v)=-1$. \end{itemize} } \end{lemma} \begin{proof} $(i)$: One direction is obvious. So assume $D$ is equivalent to an effective divisor, call it $D''$. If $D''$ is $v_0$-reduced then we are done. Otherwise, Corollary~\ref{positive_divisor_reduction_cor} implies that there exists a valid firing strategy $f$ with respect to $v_0$ such that $D''-\vec{Q}^Tf$ is $v_0$-reduced. Since $D''$ is effective and $f$ is valid with respect to $v_0$, $D''-\vec{Q}^Tf$ is effective. $(ii)$: First assume that $D$ is an extreme divisor. The assertion of part (i) implies that for all $v \in V(\vec{G})$, if $D' \sim D$ is a $v$-reduced divisor, then $D'(v) \leq -1$. Suppose there exists $v \in V(\vec{G})$ such that for every $v$-reduced divisor $D' \sim D$ we have $D'(v) <-1$. Then by Remark~\ref{reduced_rem}, for every $v$-reduced divisor $D' \sim D$, the divisor $D'+\chi_{\{v\}}$ is $v$-reduced and not effective. So by part (i), $D+\chi_{\{v\}}$ is not equivalent to an effective divisor, a contradiction. To prove the other direction, it is enough to show that for all $v \in V(\vec{G})$, $D+\chi_{\{v\}}$ is equivalent to an effective divisor. So let $v$ be a vertex and let $D' \sim D$ be a $v$-reduced divisor such that $D'(v)=-1$. Then $D'+\chi_{\{v\}}$ is effective, and so $D+\chi_{\{v\}}$ is equivalent to an effective divisor. \end{proof} \subsubsection{Dhar's Algorithm \label{subsec:Dhar_alg}} Dhar~\cite{Dhar90}, while studying the sandpile model, found a simple algorithm for checking whether a given divisor in an undirected graph $G$ is $v_0$-reduced or not. We discuss the directed sandpile model in the next section. Here we generalize his algorithm so that it applies to an arbitrary directed graph $\vec{G}$. The authors found this generalization independently of Speer~\cite{Spe93}. The input of the algorithm is a divisor $D$ satisfying condition (i) of Definition~\ref{reduced_def}.
The output of the algorithm is a finite sequence $f_i$ of firing strategies which is decreasing with respect to the $\leq$ relation. The description of the algorithm is as follows. We construct a sequence of firing strategies $f_i$ recursively. Set $f_0=R$. For $t \geq 0$, if there exists some $v\in V(\vec{G})\setminus \{v_0\}$ such that \begin{equation} \label{dharmain_eq} (D-\vec{Q}^Tf_t)(v) \leq -1, \end{equation} pick one such vertex $v$ and set $f_{t+1}=f_t -\chi_{\{v\}}$. If for all $v \in V(\vec{G})\setminus \{v_0\}, (D-\vec{Q}^Tf_t)(v) \geq 0$ and $f_t(v_0)>0$, set $f_{t+1}=f_t-\chi_{\{v_0\}}$. Otherwise the algorithm terminates and the output of the algorithm is the decreasing sequence of $f_i$'s. We call the above algorithm the {\it generalized Dhar's Algorithm}. \begin{theorem} { \label{dhar_alg_thm} Let $D$ be a divisor satisfying condition (i) in Definition~\ref{reduced_def}. Then \begin{enumerate} \item[(i)] the divisor $D$ is $v_0$-reduced if and only if the generalized Dhar's Algorithm terminates at $f_{{\vec{\bf 1}} \cdot R}={\vec{\bf 0}}$. \item[(ii)] if $D$ is a $v_0$-reduced divisor then for each $0 \leq t \leq {\vec{\bf 1}} \cdot R -1$ such that $f_{t+1}=f_t-\chi _{\{v_0\}}$, $D-\vec{Q}^Tf_t$ is a $v_0$-reduced divisor. \end{enumerate} } \end{theorem} \begin{proof} { $(i)$: Clearly if $D$ is $v_0$-reduced then the algorithm terminates at $f_{{\vec{\bf 1}} \cdot R}={\vec{\bf 0}}$. Conversely, assume that the algorithm, run on $D$, terminates at $f_{{\vec{\bf 1}} \cdot R}={\vec{\bf 0}}$. Take a valid firing strategy $f$ with respect to $v_0$ and pick $t$ as large as possible such that $f_t \geq f$. The choice of $t$ implies that $f_{t+1}= f_t -\chi _{\{v\}}$ for some vertex $v \in V(\vec{G})\setminus\{v_0\}$ since $f(v_0)=0$. Therefore $f_t=f+f'$ where $f' \geq 0$ and $f'(v)=0$. Hence $(D-\vec{Q}^Tf)(v) = (D-\vec{Q}^Tf_t + \vec{Q}^Tf' )(v)\leq (D-\vec{Q}^Tf_t)(v)<0$, so the divisor $D$ satisfies the second condition of Definition~\ref{reduced_def}. Hence $D$ is $v_0$-reduced.
$(ii)$: For the sake of contradiction, let $t$ be such that $f_{t+1}=f_t-\chi _{\{v_0\}}$ and $D-\vec{Q}^Tf_t$ is not a $v_0$-reduced divisor. There exists a valid firing strategy $f$ with respect to $v_0$ such that $((D-\vec{Q}^Tf_t)-\vec{Q}^Tf)(v) \geq 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$. Let $f'=f_t +f$; then we have two cases. If there exists $v_i \in V(\vec{G})\setminus \{v_0\}$ such that $f'(v_i)>r_i$, then $f''=f'-R$ is a firing strategy which satisfies the conditions of Lemma~\ref{firing_reduced_lemma}, contradicting the fact that for all $v \in V(\vec{G}) \setminus \{v_0\}$, $(D-\vec{Q}^Tf')(v)\geq 0$. Otherwise $f' \leq R$, and we can choose $s$ as large as possible such that $f_s \geq f'$. The choice of $s$ implies that there exists $v \in V(\vec{G})$ such that $f_s(v)=f'(v)$ and $f_{s+1}=f_s-\chi_{\{v\}}$. If $v=v_0$, since $t>s$, $f_{s+1} \geq f_t$ but $f_{s+1}(v_0) < f_t(v_0)$, a contradiction. Hence $v \in V(\vec{G}) \setminus \{v_0\}$ and $(D-\vec{Q}^Tf_{s})(v) < 0$. But $(D-\vec{Q}^Tf')(v) \leq (D-\vec{Q}^Tf_s)(v)<0$ and this contradicts the choice of $f$ and $f_t$. } \end{proof} We conclude this section with the following definition which will appear in each of the subsequent sections. \begin{definition} \label{def:natural_RR} Let $\vec{G}$ be a directed graph with the Riemann-Roch property. Then $\vec{G}$ has the natural Riemann-Roch property if its canonical divisor $K$ has $i$th entry ${\rm deg}^+(v_i)-2$ for $0\leq i \leq n$. \end{definition} \subsubsection{The Sandpile Model \label{subsec:sandpile}} The sandpile model for a directed graph is a constrained version of the ``row'' chip-firing game. We define a divisor $D$ to be a {\it $v_0$-sandpile configuration} if $D$ satisfies condition (i) from Definition~\ref{reduced_def}. The vertex $v_0$ does not participate in this game and a vertex $v \in V(\vec{G}) \setminus \{v_0\}$ may only fire if it has at least as many chips as its out-degree (so that $v$ does not go in debt), and it never borrows.
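To make the constrained dynamics concrete, the firing rule just described can be sketched in a few lines of Python. This is an illustrative sketch rather than anything from the text: vertex $0$ plays the role of $v_0$, the strongly connected digraph below is an arbitrary example of ours, and we repeatedly fire eligible vertices under two different orderings.

```python
# A sketch (our own example) of the v_0-sandpile model: vertices are
# 0..n, vertex 0 plays the role of v_0 and never fires, and a vertex
# v != 0 may fire only when it has at least deg^+(v) chips.

def stabilize(adj, D, pick):
    """Fire vertices other than 0 until no vertex can fire.

    adj[v] lists the out-neighbours of v (parallel edges repeated),
    so deg^+(v) == len(adj[v]); `pick` chooses which eligible vertex
    fires next.
    """
    D = list(D)
    while True:
        eligible = [v for v in range(1, len(adj)) if D[v] >= len(adj[v])]
        if not eligible:
            return D
        v = pick(eligible)
        D[v] -= len(adj[v])      # v loses deg^+(v) chips ...
        for u in adj[v]:         # ... and sends one along each out-edge
            D[u] += 1

# Strongly connected digraph with edges 0->1, 1->2, 2->0, 2->1.
adj = [[1], [2], [0, 1]]
D0 = [0, 3, 4]
first = stabilize(adj, D0, pick=min)
last = stabilize(adj, D0, pick=max)
```

Both runs terminate in the same configuration, and the total number of chips is conserved, since a firing in the row game merely moves chips along out-edges.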
Moreover, we say that two configurations are the same if they agree at all vertices other than $v_0$. This model has been studied in~\cite{HLMPPW08, Lev11, Spe93}. The goal of this section is to show a connection between the sandpile model and the Riemann-Roch property for the row chip-firing game on a strongly connected directed graph. To do this we will first show a connection between this model and $v_0$-reduced divisors. We begin with some necessary definitions for the sandpile model. We call a $v_0$-sandpile configuration {\it $v_0$-stable} if no vertex $v \in V(\vec{G}) \setminus \{v_0\}$ can fire. We note that while some authors require $v_0$ to be a global sink (in order to guarantee that a divisor will eventually stabilize), we simply insist that $v_0$ never fires. We say that a $v_0$-sandpile configuration $D'$ {\it stabilizes} to $D$, a $v_0$-stable configuration, if $D$ is $v_0$-sandpile achievable from $D'$. To see that any $v_0$-sandpile configuration will eventually stabilize to a $v_0$-stable configuration, one may follow an argument similar to the one from Lemma~\ref{reduce_exist_lemma}. We note that, as the language suggests, $D$ is unique, i.e., stabilization is independent of the choice of firings; a simple proof by induction on $k$, the length of the sequence of firings, gives this fact. A $v_0$-stable configuration $D$ is said to be {\it $v_0$-reachable} from another $v_0$-sandpile configuration $D'$ if there exists an effective divisor $E$ such that $D'+E$ stabilizes to $D$. A $v_0$-stable configuration is {\it $v_0$-recurrent} if it is $v_0$-reachable from any other $v_0$-sandpile configuration. \begin{lemma} \label{lem:recurrentdom} A divisor $D$ is $v_0$-recurrent if and only if there exists a divisor $D'$ such that $D'(v)\geq {\rm deg}^+(v)$ for all $v \in V(\vec{G}) \setminus \{v_0\}$ and $D'$ stabilizes to $D$. \end{lemma} \begin{proof} We begin with the easier of the two directions.
Assume that $D$ is $v_0$-recurrent and let $D''$ be some divisor such that $D''(v)\geq {\rm deg}^+(v)$ for all $v \in V(\vec{G}) \setminus \{v_0\}$. By definition, $D$ is $v_0$-reachable from $D''$; therefore there exists some effective divisor $E$ such that $D''+E=D'$ stabilizes to $D$. This gives the existence of the $D'$ in the statement of the lemma. Conversely, given some $v_0$-sandpile configuration $D'$ such that $D'(v)\geq {\rm deg}^+(v)$ for all $v \in V(\vec{G}) \setminus \{v_0\}$, which stabilizes to $D$, we will show that $D$ is $v_0$-recurrent. Take some $D''$, a $v_0$-sandpile configuration. We will show that $D$ is $v_0$-reachable from $D''$. First let $D''$ stabilize to the configuration $D'''$. Now $D'''\leq D'$, so that $D$ is $v_0$-reachable from $D'''$. Let $D'-D'''=E\geq 0$. We claim that $D'' +E$ stabilizes to $D$. By the observation made above, that stabilization is independent of a choice of firings, it is sufficient to show that there exists a sequence of firings which brings $D'' +E$ to $D$. Because $D'' +E \geq D''$ we can perform the sequence of firings which brought $D''$ to $D'''$. This sequence of firings brings $D'' +E$ to $D'''+E=D'$, and this now stabilizes to $D$. \end{proof} The following definition is for the unconstrained row chip-firing game introduced in the previous section. We say that a divisor $D$ is $v_0$-{\it negatively achievable} from $D'$ if there exists a sequence of borrowings by individual vertices such that at each step the vertex which borrows has a negative number of chips prior to borrowing. \begin{lemma} \label{lem:red_borrow_neg} A divisor $\nu$ is $v_0$-reduced if and only if there exists a divisor $D$ with $D(v) < 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$ such that $\nu$ is $v_0$-negatively achievable from $D$. \end{lemma} \begin{proof} We will first show that if $\nu$, a $v_0$-sandpile divisor, is $v_0$-negatively achievable from $D$ with $D(v) < 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$, then $\nu$ is $v_0$-reduced.
We now introduce some notation which will be useful for this proof. Let $S:v_{a_1}, \dots, v_{a_k}$ be the sequence of vertices which borrow and let $f_{S} \leq 0$ be the corresponding firing strategy, so that $D-\vec{Q}^Tf_{S}=\nu$. Let $f_{S,j}$ be the firing strategy defined as $f_{S,j}(v)=-|\{i:v_{a_i}=v, i\leq j \}|$ for $1 \leq j \leq k$, with $f_{S,0}={\vec{\bf 0}}$; thus $f_{S,k}=f_S$. Assume that $\nu$ is not $v_0$-reduced and let $f\neq {\vec{\bf 0}}$ be a valid firing strategy with respect to $v_0$ such that $\nu-\vec{Q}^Tf=\nu'$ is a $v_0$-sandpile divisor. If $f+ f_{S}\nleq 0$ then there exists a maximal connected subset $A$ of $V(\vec{G}) \setminus \{v_0\}$ such that $(f+ f_{S})(v)>0$ for all $v\in A$; but then the set $A$ loses a net positive amount of money via the firing $f+ f_{S}$, contradicting the fact that $D-\vec{Q}^T(f+ f_{S})=\nu'$ is a $v_0$-sandpile configuration and $D(v)<0$ for all $v \in A$. Because $f+ f_{S}\leq 0$ we may take $j$ maximum so that $f_{S,j}\geq f+ f_{S}$ but $f_{S,j+1}\ngeq f+ f_{S}$. This shows that $0\leq \nu'(v_{a_{j+1}})=(D-\vec{Q}^T( f+ f_{S}))(v_{a_{j+1}})\leq (D-\vec{Q}^Tf_{S,j})(v_{a_{j+1}})<0$, a contradiction. We now show that for any $v_0$-reduced divisor $\nu$ there exists some $D$ with $D(v) < 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$ such that $\nu$ is $v_0$-negatively achievable from $D$. Take $\nu$ and greedily fire vertices $v \in V(\vec{G}) \setminus \{v_0\}$ with a nonnegative number of chips until we obtain $D$ with $D(v) < 0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$. To see that this process will eventually terminate, adapt the argument given in Lemma~\ref{reduce_exist_lemma} for why greedy reduction of a divisor terminates. We claim that $D$ is the desired divisor. If now, as above, we greedily let vertices $v \in V(\vec{G}) \setminus \{v_0\}$ which are in debt borrow, we will stop at a $v_0$-reduced divisor $\nu'$. To see that this process eventually terminates, again mimic the argument from Lemma~\ref{reduce_exist_lemma}. The fact that $\nu'$ is $v_0$-reduced was proven above.
The divisor $\nu'$ is clearly equivalent to $\nu$, and $v_0$ did not participate in the above process, hence the divisor obtained is equal to $\nu$. \end{proof} The authors, independently of Speer~\cite{Spe93}, discovered the following theorem. \begin{theorem} \label{thm:recurrent_reduced} A $v_0$-sandpile configuration $D$ is $v_0$-recurrent if and only if the divisor $\nu$ is a $v_0$-reduced divisor, where $\nu(v_i)={\rm deg}^+(v_i)-1-D(v_i)$ for all $0 \leq i \leq n$. \end{theorem} \begin{proof} Let $K$ be the divisor such that $K(v_i)={\rm deg}^+(v_i)-2$. We first note that the map $\phi (D)= K+{\vec{\bf 1}}-D$ is a bijection between divisors $D$ such that $D(v)\geq {\rm deg}^+(v)$ for all $v \in V(\vec{G}) \setminus \{v_0\}$ and divisors $D$ such that $D(v)<0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$. The theorem then follows by observing that $\nu$ is $v_0$-negatively achievable from $D$ with $D(v) <0$ for all $v \in V(\vec{G}) \setminus \{v_0\}$ if and only if $\phi(\nu)$ is $v_0$-sandpile achievable from $\phi(D)$ with $\phi(D)(v)\geq {\rm deg}^+(v)$ for all $v \in V(\vec{G}) \setminus \{v_0\}$. \end{proof} We note that, using the notion of equivalence given by the unconstrained row chip-firing game, the previous theorem shows that there are exactly $r_0$ $v_0$-recurrent divisors in each equivalence class. This is different from the case of undirected graphs or directed graphs with $v_0$ a global sink, where the recurrent state in each equivalence class is unique. We define a divisor $D$ to be minimally $v_0$-recurrent if, ignoring the value of $D(v_0)$, it is minimal with respect to dominance among all $v_0$-recurrent divisors. Using this definition we have a new way of describing the natural Riemann-Roch property in terms of the sandpile model for strongly connected directed graphs.
\begin{theorem} A directed graph $\vec{G}$ has the natural Riemann-Roch property if and only if for each minimal $v_0$-recurrent divisor $D$ there exist $k\in {\mathbb Z}$ and effective divisors $E_0, \dots, E_n$ with $E_i(v_i)=0$ and $E_i(v_j)>0$ for $j \neq i$, such that $D'=D+k\chi_{\{v_0\}}$ satisfies $D' \sim E_i$ for all $0 \leq i \leq n$, and each such $D'$ is of fixed degree $g-1\in {\mathbb N}$. \end{theorem} \begin{proof} Clearly $D$ is minimally $v_0$-recurrent if and only if, by Theorem \ref{thm:recurrent_reduced}, we may fix $D'$ as in the statement of the theorem such that $\nu=K-D' +{\vec{\bf 1}}$ is extreme $v_0$-reduced. Hence, $\vec{G}$ has the natural Riemann-Roch property if and only if $\nu'= D' - {\vec{\bf 1}} \in Ext(\Sigma({\Lambda}))$ and is of fixed degree $g-1$, which occurs precisely when $D' \in Ext(\Sigma_{{\mathbb R}}({\Lambda}))$ and is of fixed degree $g-1$. By Lemma \ref{extreme_sigma_closure_thm}, the theorem follows. \end{proof} \subsection{Column Chip-Firing Game, $\vec{G}$-Parking Functions, and Riemann-Roch Theory \label{subsec:G-parking}} In this section we present a chip-firing game which comes from the columns of the Laplacian matrix. \begin{definition} { \label{g-parking_def} We call a divisor $D$ a directed $\vec{G}$-parking function (or simply $\vec{G}$-parking) with respect to $v_0$ if the following two conditions hold: \begin{enumerate} \item[(i)] for all $v \in V(\vec{G})\setminus \{v_0\}, D(v)\geq 0$, \item[(ii)] for every nonempty set $A \subseteq V(\vec{G}) \setminus \{v_0\}$, there exists some $v \in A$ such that $|\{(v,u) \in E(\vec{G}): u \notin A\}| > D(v)$. \end{enumerate} } \end{definition} We introduce the following ``column'' chip-firing game wherein if a vertex $v$ {\it fires}, it loses ${\rm deg}^+(v)$ chips and sends a chip along each incoming edge $(u,v) \in E(\vec{G})$ ({\it borrowing} is defined as the inverse of firing). Note that the total number of chips is not preserved by firing, in contrast to the previous ``row'' chip-firing game.
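For concreteness, here is a small Python sketch of a single move $D \mapsto D-\vec{Q}f$ of this column game; the three-vertex digraph and all numbers are our own ad-hoc example, not taken from the text. It illustrates that the plain chip count may change under a column firing, while the count weighted by a positive left null vector $R$ of the Laplacian (so $R^T\vec{Q}={\vec{\bf 0}}^T$) is preserved.

```python
# Hedged sketch (our own example): one move D -> D - Q f of the "column"
# chip-firing game on the digraph with edges 0->1, 1->2, 2->0, 2->1.
# Q[i][i] = deg^+(v_i), Q[i][j] = -(number of edges from v_i to v_j).
Q = [[ 1, -1,  0],
     [ 0,  1, -1],
     [-1, -1,  2]]

def column_fire(D, f):
    """Each vertex i fires f[i] times in the column game: return D - Q f."""
    n = len(D)
    return [D[i] - sum(Q[i][j] * f[j] for j in range(n)) for i in range(n)]

# A positive left null vector R (R^T Q = 0), found here by inspection.
R = [1, 2, 1]
assert all(sum(R[i] * Q[i][j] for i in range(3)) == 0 for j in range(3))

D = [3, 0, 1]
D2 = column_fire(D, [0, 1, 0])                     # fire v_1 once
plain = (sum(D), sum(D2))                          # plain count changes
weighted = (sum(r * d for r, d in zip(R, D)),
            sum(r * d for r, d in zip(R, D2)))     # R-weighted count fixed
```

Here $v_1$ ends up in debt, which is allowed in the unconstrained game; the weighted count $\sum_i r_iD(v_i)$ is unchanged because $R^T(\vec{Q}f)=(R^T\vec{Q})f=0$.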
It is not hard to see that if all vertices in a set $A$ fire once then a vertex $v \in A$ will lose as many chips as it has edges leaving $A$, i.e., $|\{(v,u): u \notin A\}|$, while a vertex $u \not \in A$ will gain as many chips as it has edges entering it from $A$, i.e., $|\{(v,u): v \in A\}|$. One may view this game as a walk through the lattice spanned by the columns of the Laplacian of $\vec{G}$, and it follows immediately that if $D$ is a divisor then $(D-\vec{Q}\chi_{A})(v)=D(v)-|\{(v,u): u \notin A\}|$ if $v \in A$ and $(D-\vec{Q}\chi_{A})(u)=D(u)+|\{(v,u): v \in A\}|$ if $u \notin A$. Because $\vec{Q}$ is orthogonal to $\vec{\bf 1}$, i.e., $\vec{Q}\vec{\bf 1}=\vec{\bf 0}$, we have that for any firing strategy $f$ there exists some firing strategy $f'$ such that $\vec{Q}(f-f')=\vec{\bf 0}$ and $f'\leq \chi_A$ for some $A \subseteq V(\vec{G}) \setminus \{ v_0 \} $. It is also worth mentioning that if $R=(r_0, \dots, r_n) \in {\mathbb N}^{n+1}$ is the vector guaranteed by Lemma~\ref{lem:leftkernel} such that $R^T\vec{Q}={\vec{\bf 0}}^T$, then ${\rm deg}_R(\vec{Q}f)=0$ for all $f \in {\mathbb Z}^{n+1}$, i.e., the total number of chips is preserved in the ``column'' chip-firing game with respect to ${\rm deg}_R(\cdot)$. One may also interpret this fact combinatorially by assigning to each vertex $v_i$ its own ``chip currency'' worth $r_i$ of a ``universal chip currency''. Similar notions of ``currencies'' and ``exchange rates'' are employed when discussing chip-firing on arithmetical graphs in Section~\ref{AGrpahs}. Thus a $\vec{G}$-parking function with respect to $v_0$ is a divisor $D$ such that $D(v)\geq 0$ for all $v\in V(\vec{G})\setminus \{v_0\}$ and for each nonempty set $A\subseteq V(\vec{G})\setminus \{v_0\}$ there exists some vertex $v\in A$ such that $|\{(v,u): u \notin A\}|>D(v)$; this is precisely analogous to the definition of $v_0$-reduced divisors from the ``row'' chip-firing game.
More specifically, if we change $\vec{Q}^T$ to $\vec{Q}$ in the definition of a $v_0$-reduced divisor (Definition~\ref{reduced_def}), then we get the definition of a $\vec{G}$-parking function with respect to $v_0$ (Definition~\ref{g-parking_def}). Hence, Dhar's algorithm introduced in~\cite{BN07, Dhar90} applies to verifying whether $D$ is a $\vec{G}$-parking function with respect to $v_0$. Note that for undirected graphs the notion of a $v_0$-reduced divisor and a $G$-parking function agree, as the Laplacian is symmetric, i.e., the ``row'' and ``column'' chip-firing games are identical. It is a well-known fact, and has several combinatorial proofs, that the $\vec{G}$-parking functions are in bijection with the set of rooted directed spanning trees~\cite{CP05}. An {\it Eulerian} directed graph $\vec{H}$ is a directed graph such that ${\rm deg}^+(v)={\rm deg}^-(v)$ for each $v \in V(\vec{H})$. The name is derived from the fact that they are exactly those directed graphs which possess a directed Eulerian circuit. \begin{theorem} \label{thm:RR_column_chip} Let $\vec{G}$ be a strongly connected directed graph with Laplacian $\vec{Q}$ and let $\vec{G'}$ be the Eulerian directed graph with Laplacian $\vec{Q}^T\mathcal{R}$, where $\mathcal{R}=diag(r_0, \dots, r_n)$ and $R=\mathcal{R}{\vec{\bf 1}}$ is the vector with $R^T\vec{Q}={\vec{\bf 0}}^T$ guaranteed by Lemma~\ref{lem:leftkernel}. The directed graph $\vec{G}$ has the Riemann-Roch property for the column chip-firing game if and only if the directed graph $\vec{G'}$ has the Riemann-Roch property for the row chip-firing game. \end{theorem} \begin{proof} Let ${\Lambda}'_{\vec{G}}=\{\vec{Q}f: f \in {\mathbb Z}^{n+1}\}$ be the lattice spanned by the columns of $\vec{Q}$. It follows by Theorem~\ref{thm:RR_R_one} that ${\Lambda}'_{\vec{G}}$ has the Riemann-Roch property if and only if $\mathcal{R}{\Lambda}'_{\vec{G}}$ does. This is the lattice spanned by the rows of $\vec{Q}^T\mathcal{R}$, completing the proof.
\end{proof} We note that the column chip-firing game for an Eulerian digraph is the same game as the row chip-firing game played on the same directed graph with the orientations of all of the arrows reversed. This explains why we are passing to the transpose of the Laplacian in the proof. Amini and Manjunath~\cite{AM09} have some results related to Eulerian directed graphs (which they call regular digraphs). By the previous theorem, all of these results extend to the column chip-firing game on strongly connected directed graphs. We also remark that for testing whether a divisor is $v_0$-reduced, the burning algorithm of Dhar may be applied (burning along incoming edges) and this algorithm can be used to obtain several of the results of Amini and Manjunath related to Eulerian directed graphs. \section{Arithmetical Graphs \label{AGrpahs}} Let $G$ be a connected undirected multigraph, choose an ordering $\{v_0, \dots, v_n\}$ of the vertices of $G$, and let $A$ be the corresponding {\it adjacency matrix} of $G$. Let $R=(r_0, \dots, r_n)^T \in \Bbb N^{n+1}$ be such that $\gcd(r_0, r_1, \dots, r_n)=1$ and let $\delta_0, \dots, \delta_{n} \in \Bbb N$ be such that $(\mathcal{D}-A)R=\vec{\bf 0}$, where $\mathcal{D}=diag(\delta_0, \dots, \delta_{n})$. We say $(G,R)$ is an {\it arithmetical graph} with Laplacian $Q=\mathcal{D}-A$ and corresponding multiplicity vector $R$, where for all $0 \leq i \leq n$ the value $r_i$ is the {\it multiplicity} of the vertex $v_i$. Note that an undirected graph $G$ can be considered as an arithmetical graph $(G,{\vec{\bf 1}})$. Consider the following chip-firing game played on the vertices of an arithmetical graph $(G,R)$. Suppose we have a ``universal chip currency'' and each vertex $v_i$ has its own ``$v_i$-chip currency'' such that each $v_i$-chip is worth $r_i$ of the ``universal chip currency''.
If a vertex $v_i$ {\it fires}, it loses $\delta_i$ of its own $v_i$-chips and sends $m_{i,j}$ $v_j$-chips to each $v_j$ adjacent to $v_i$, where $m_{i,j}$ is the number of edges between $v_i$ and $v_j$. We define {\it borrowing} to be the inverse of firing. Let ${\Lambda}_{(G,R)}$ be the lattice spanned by the columns of $Q$. It is easy to see that moves in this chip-firing game correspond to translations of some divisor $D$ by a lattice point $l \in {\Lambda}_{(G,R)}$. This observation allows us to make use of definitions and theorems from Section 2 when discussing the chip-firing game. Let $(G,R)$ be an arithmetical graph and $\mathcal{R}=diag(r_0, \dots, r_{n})$. Let $\vec{G}_R$ be the directed graph obtained from $(G,R)$ by replacing each undirected edge $(v_i,v_j)$ with $r_j$ edges directed from $v_i$ to $v_j$ and $r_i$ edges directed from $v_j$ to $v_i$. The chip-firing game for $(G,R)$ corresponds to the row chip-firing game for $\vec{G}_R$ by converting each vertex's currency to the universal chip currency. If we define $\vec{Q}_R$ to be the Laplacian of $\vec{G}_R$, we may observe that $\vec{Q}_R^T =\mathcal{R}Q$. By Theorem~\ref{thm:RR_R_one}, it follows that the chip-firing game on $(G,R)$ will have the Riemann-Roch property if and only if the row chip-firing game on $\vec{G}_R$ has the Riemann-Roch property. The row chip-firing game on $\vec{G}_R$ is strictly ``finer'' than the chip-firing game on $(G,R)$ in the sense that a vertex $v_i$ need not have a multiple of $r_i$ universal chips, although by the previous observation this difference does not affect whether the Riemann-Roch property holds. In our discussion of the chip-firing game for arithmetical graphs we will borrow several definitions and methods from the row chip-firing game, whose interpretation will be clear from the context in which they are used. In particular, the definition of a $v_0$-reduced divisor and the generalized Dhar's algorithm will be frequently employed.
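The identity $\vec{Q}_R^T=\mathcal{R}Q$ is easy to check by machine. The following Python sketch does so for a small arithmetical graph of our own choosing (the path on three vertices with $R=(1,2,1)$ and $\delta=(2,1,2)$); it is an illustration under these assumptions, not code from the paper.

```python
# Hedged sketch: build G_R from an arithmetical graph (G, R) and verify
# the identity Q_R^T = diag(R) Q.  The example is ours: the path v0-v1-v2
# with multiplicities R = (1, 2, 1) and delta = (2, 1, 2).

A = [[0, 1, 0],          # adjacency matrix of the undirected path
     [1, 0, 1],
     [0, 1, 0]]
R = [1, 2, 1]            # multiplicities, gcd 1
delta = [2, 1, 2]        # chosen so that (D - A) R = 0

Q = [[(delta[i] if i == j else 0) - A[i][j] for j in range(3)] for i in range(3)]
assert all(sum(Q[i][j] * R[j] for j in range(3)) == 0 for i in range(3))  # QR = 0

# G_R: each undirected edge (v_i, v_j) becomes r_j copies of i->j and
# r_i copies of j->i; its Laplacian carries out-degrees on the diagonal.
edges_out = [[A[i][j] * R[j] for j in range(3)] for i in range(3)]
QR_dir = [[(sum(edges_out[i]) if i == j else 0) - edges_out[i][j]
           for j in range(3)] for i in range(3)]

# Compare the transpose of the directed Laplacian with diag(R) Q.
QR_T = [[QR_dir[j][i] for j in range(3)] for i in range(3)]
RQ = [[R[i] * Q[i][j] for j in range(3)] for i in range(3)]
```

The same check can be run for any arithmetical graph once $A$, $R$, and $\delta$ satisfying $(\mathcal{D}-A)R={\vec{\bf 0}}$ are supplied.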
\begin{theorem} \label{asdf} Let $(G,R)$ be an arithmetical graph with Laplacian $Q$ and let $\vec{G}_R$ be the associated directed graph. Then $\vec{G}_R$ has the Riemann-Roch property for the column chip-firing game. \end{theorem} \begin{proof} By Theorem~\ref{thm:RR_column_chip} it is equivalent to ask the question for the row chip-firing game on the directed graph $\vec{H}$ whose Laplacian is $\mathcal{R}\vec{Q'}$ where $\vec{Q'}$ is the Laplacian for $\vec{G}_R$. But $\vec{Q'}$ is simply $Q\mathcal{R}$ and so $\vec{H}$ has Laplacian $\mathcal{R}Q\mathcal{R}$, which, as one can easily check, is the Laplacian of the undirected graph obtained from $G$ by replacing each edge $(v_i, v_j)$ with $r_ir_j$ edges. By the theorem of Baker and Norine~\cite{BN07}, this graph has the Riemann-Roch property, and this completes the proof. \end{proof} Let $\mathcal{N}=\{D \in Ext(\Sigma({\Lambda}_{(G,R)})): {\rm deg}_R(D)=g_{\max}-1\}$. For each $0 \leq i \leq n$, let $N(v_i)$ denote the family of vertices which are adjacent to $v_i$, counting their multiplicities. We call $|N(v_i)|$ the {\it degree} of the vertex $v_i$ and we denote it by ${\rm deg}(v_i)$. Recall the definition of $g_0$, the number such that $2g_0-2=\sum_{i=0}^n r_i(\delta_i-2)$. It is not hard to verify, and is noted in~\cite{Lor89}, that $g_0$ is an integer. It is also easy to see that by firing all of the vertices of $G$, we get $\sum_{i=0}^n r_i\delta_i=\sum_{i=0}^n r_i{\rm deg}(v_i)$. Therefore $2g_0-2=\sum_{i=0}^n r_i({\rm deg}(v_i)-2)$. \begin{theorem} { \label{gmaxg_0}Let $(G,R)$ be an arithmetical graph. Then $g_{\max}\leq g_0$. } \end{theorem} \begin{proof} { The following proof is an averaging argument employing the generalized Dhar's algorithm and gives a bound twice as good as the naive bound. If one looks closely at the proof, it becomes apparent that arithmetical graphs are precisely those ``directed graphs'' for which such an averaging argument is effective. Let $D \in \mathcal{N}$.
Choose a $v_0$-reduced divisor $D' \sim D$ such that $D'(v_0)$ is as large as possible. To prove the theorem, it is enough to show that ${\rm deg}_R(D') \leq g_0-1$. Apply the generalized Dhar's algorithm to $D'$. For all $0 \leq i \leq n$ and $1 \leq k \leq r_i$, define $\mathcal{F}_{i,k}$ to be the firing strategy obtained from the generalized Dhar's algorithm such that $\mathcal{F}_{i,k}(v_i)=k$ and the successor of $\mathcal{F}_{i,k}$ is the firing strategy $\mathcal{F}_{i,k}-\chi_{\{v_i\}}$. For each $v_i \in V(\vec{G}) \setminus \{v_0\}$ we obtain $r_i$ inequalities as follows: for each $k$ with $1 \leq k \leq r_i$, we have \begin{equation} \label{othervertex_max-ing} D'(v_i) \leq k\delta_i - \left(\sum_{v_j \in N(v_i)} \mathcal{F}_{i,k}(v_j)\right) -1, \end{equation} which follows from the fact that $(D'-Q\mathcal{F}_{i,k})(v_i) <0$ by choice of $\mathcal{F}_{i,k}$. For the vertex $v_0$, we know that for all $1 \leq k \leq r_0$, $$k\delta_0 - \sum_{v_j \in N(v_0)} \mathcal{F}_{0,k}(v_j) \geq 0,$$ by the choice of $D'$ and the second assertion of Theorem~\ref{dhar_alg_thm}. Because $D' \in \mathcal{N}$, by (ii) of Lemma~\ref{extreme_reduced_effective lem} we have that $D'(v_0)< 0$. Hence, for all $1 \leq k \leq r_0$, \begin{equation} \label{v_0max-inq} D'(v_0) \leq k\delta_0 - \left(\sum_{v_j \in N(v_0)} \mathcal{F}_{0,k}(v_j)\right)-1. \end{equation} Note that $\sum_{i=0}^{n} \sum_{k=1}^{r_i} D'(v_i)=D' \cdot R={\rm deg}_R(D')$. Now, taking the sum over all inequalities in~(\ref{othervertex_max-ing}) and~(\ref{v_0max-inq}), we have: \begin{equation} \label{main_ieq} \sum_{i=0}^{n} \sum_{k=1}^{r_i} D'(v_i)\leq \sum_{i=0}^{n}r_i((r_i+1)\delta_i -2)/2 - \sum_{i=0}^{n} \sum_{k=1}^{r_i} \sum_{v_j \in N(v_i)}\mathcal{F}_{i,k}(v_j). \end{equation} We will now restrict our attention to $\sum_{i=0}^{n} \sum_{k=1}^{r_i} \sum_{v_j \in N(v_i)}\mathcal{F}_{i,k}(v_j)$.
By reordering the sums, we have $$\sum_{i=0}^{n} \sum_{k=1}^{r_i} \sum_{v_j \in N(v_i)}\mathcal{F}_{i,k}(v_j)=\sum_{i<j, ~ v_iv_j \in E(G)}\left( \sum_{k=1}^{r_i} \mathcal{F}_{i,k}(v_j) + \sum_{{\ell}=1}^{r_j} \mathcal{F}_{j,{\ell}}(v_i)\right).$$ We claim that if $v_iv_j \in E(G)$ then $\sum_{k=1}^{r_i} \mathcal{F}_{i,k}(v_j) + \sum_{{\ell}=1}^{r_j} \mathcal{F}_{j,{\ell}}(v_i) = r_ir_j$. We prove the claim by induction on $r_i+r_j$. If $r_i+r_j=2$, then the claim holds trivially, since $r_i=r_j=1$. Now suppose $r_i+r_j=m \geq 3$. Without loss of generality, assume $\mathcal{F}_{i,r_i}$ is generated before $\mathcal{F}_{j,r_j}$ in the run of the generalized Dhar's algorithm on $D'$. Hence $$\sum_{k=1}^{r_i} \mathcal{F}_{i,k}(v_j) + \sum_{{\ell}=1}^{r_j} \mathcal{F}_{j,{\ell}}(v_i) = r_j + \sum_{k=1}^{r_i-1} \mathcal{F}_{i,k}(v_j) + \sum_{{\ell}=1}^{r_j} \mathcal{F}_{j,{\ell}}(v_i) = r_j+(r_i-1)r_j=r_ir_j.$$ The equality $\sum_{k=1}^{r_i-1} \mathcal{F}_{i,k}(v_j) + \sum_{{\ell}=1}^{r_j} \mathcal{F}_{j,{\ell}}(v_i)=(r_i-1)r_j$ follows from the induction hypothesis. This completes the proof of the claim. So $$\sum_{i<j, ~ v_iv_j \in E(G)}\left( \sum_{k=1}^{r_i} \mathcal{F}_{i,k}(v_j) + \sum_{\ell=1}^{r_j} \mathcal{F}_{j,\ell}(v_i)\right) = \sum_{i<j, ~ v_iv_j \in E(G)} r_ir_j={1 \over 2}\left(\sum _{i=0}^{n}r_i \sum_{v_j \in N(v_i)} r_j\right).$$ Since $QR=0$, for all $0 \leq i \leq n$, $\sum_{v_j \in N(v_i)} r_j=r_i\delta_i$. Hence \begin{equation} \label{iductionobtained_eq} \sum_{i=0}^{n} \sum_{k=1}^{r_i} \sum_{v_j \in N(v_i)}\mathcal{F}_{i,k}(v_j)={1 \over 2}\left(\sum _{i=0}^{n}r^2_i\delta_i \right).
\end{equation} Now by substituting~(\ref{iductionobtained_eq}) into inequality~(\ref{main_ieq}), we have: $${\rm deg}_R(D')\leq \sum _{i=0}^{n}r_i((r_i+1)\delta_i -2)/2 - {1 \over 2}\left(\sum _{i=0}^{n}r^2_i\delta_i \right) = \sum _{i=0}^{n}r_i(\delta_i -2)/2= g_0-1.$$ } \end{proof} So the above theorem shows that if, in a configuration of the game identified by $D \in Div((G,R))$, ${\rm deg}_R(D) \geq g_0$, then $D$ has a winning configuration. \begin{corollary} { \label{g=g_0cor} $g_{\max}=g_0$ if and only if all inequalities in~(\ref{othervertex_max-ing}) and~(\ref{v_0max-inq}) obtained in a run of the generalized Dhar's algorithm on a $v_0$-reduced divisor $D \in \mathcal{N}$ are tight, i.e., if $(f_t)$ is the sequence of firing strategies obtained from the run of the generalized Dhar's algorithm on a $v_0$-reduced divisor $D \in \mathcal{N}$, then for all $0 \leq t \leq {\vec{\bf 1}} \cdot R-1$, if $f_{t+1}=f_t - \chi_{\{v\}}$ then $(D-Q(f_t))(v)=-1$. } \end{corollary} It is clear, and demonstrated below, that if $D\in \mathcal{N}$ and ${\rm deg}(D)=g_{\max}-1$, then for each $v \in V(G)$ and $D' \sim D$ such that $D'$ is $v$-reduced, we have $D'(v)=-1$. The following theorem shows that the converse is also true. \begin{theorem} { Let $D \in \mathcal{N}$. Then ${\rm deg}(D)=g_{\max}-1$ if and only if for each $v \in V(G)$ and each $D' \sim D$ such that $D'$ is a $v$-reduced divisor, $D'(v)=-1$. } \end{theorem} \begin{proof} { Suppose $D\in \mathcal{N}$ with ${\rm deg}(D)=g_{\max}-1$. Take $v \in V(\vec{G})$. By applying (ii) of Lemma~\ref{extreme_reduced_effective lem} we may pick $D'\sim D$ to be a $v$-reduced divisor such that $D'(v)=-1$. Corollary~\ref{g=g_0cor} implies that all the inequalities are tight, so for every $v$-reduced divisor $D''\sim D$, $D''(v)=-1$. Conversely, assume that $D\in \mathcal{N}$ is $v_0$-reduced and suppose that for each $v$ and each $D' \sim D$ which is a $v$-reduced divisor, $D'(v)=-1$. We wish to show that ${\rm deg}(D)=g_{\max}-1$.
Apply the generalized Dhar's algorithm to $D$, and define $\mathcal{F}_{i,k}$ to be the firing strategy obtained from the generalized Dhar's algorithm such that $\mathcal{F}_{i,k}(v_i)=k$ and the successor of $\mathcal{F}_{i,k}$ is the firing strategy $\mathcal{F}_{i,k}-\chi_{\{v_i\}}$. As in the proof of Theorem~\ref{gmaxg_0}, for each $v_i \in V(\vec{G}) \setminus v_0$ and $1 \leq k \leq r_i$ we have: \begin{equation} \label{copyothervertex_max-ing} D(v_i) \leq k\delta_i - \left(\sum_{v_j \in N(v_i)} \mathcal{F}_{i,k}(v_j)\right) -1, \end{equation} which follows from the fact that $(D-Q\mathcal{F}_{i,k})(v_i) <0$ by the choice of $\mathcal{F}_{i,k}$. By the previous corollary, to show that ${\rm deg}(D)=g_{\max}-1$, it is enough to show that each of the inequalities from (\ref{copyothervertex_max-ing}) holds with equality. For the vertex $v_0$, we know that for all $1 \leq k \leq r_0$, $$k\delta_0 - \sum_{v_j \in N(v_0)} \mathcal{F}_{0,k}(v_j) \geq 0;$$ this follows from the choice of $D$ and the second assertion of Lemma~\ref{dhar_alg_thm}. Because $D$ is extreme, by (ii) of Lemma~\ref{extreme_reduced_effective lem} we have that $D(v_0)< 0$. Hence for all $1 \leq k \leq r_0$, \begin{equation} \label{copyv_0max-inq} D(v_0) \leq k\delta_0 - \left(\sum_{v_j \in N(v_0)} \mathcal{F}_{0,k}(v_j)\right)-1. \end{equation} By assumption all of the inequalities for $v_0$ above hold with equality. So take $v_i \in V(\vec{G})\setminus v_0$ and $1\leq k \leq r_i$. To finish the proof, we will show that $(D- Q(\mathcal{F}_{i,k}))(v_i)=-1$. Let the firing strategy $f$ be such that $D - Qf$ is $v_i$-reduced and $f(v_i)=k$, where the existence of $f$ is guaranteed by Corollary~\ref{unique_firing_reduced_cor}. Assume $f' \approx f$ is a natural firing strategy. Let $(f_t)$ be the sequence of firing strategies obtained from a run of the generalized Dhar's algorithm on $D$. Take $j$ as large as possible such that $f_j \geq f'$. Let $v \in V(\vec{G})$ be such that $f_{j+1}=f_j-\chi_{\{v\}}$. Let the firing strategy $f''$ be such that $f'=f_j-f''$ where $f'' \geq \vec{\bf 0}$ and $f''(v)=0$.
We claim that $v=v_i$. If $v \notin \{v_0,v_i\}$ then $(D-Qf')(v)=(D-Q(f_j-f''))(v)\leq (D-Q(f_j))(v)<0$, contradicting the fact that $D-Qf'$ is a $v_i$-reduced divisor. If $v=v_0$, then $(D-Qf')(v_0)=(D-Q(f_j-f''))(v_0) \leq (D-Q(f_j))(v_0)=-1$ since $D-Qf_j$ is a $v_0$-reduced divisor by the second part of Theorem~\ref{dhar_alg_thm}. But this again contradicts the fact that $D-Qf'$ is a $v_i$-reduced divisor. Hence $v=v_i$ and this finishes the proof of the claim. Therefore $f_j=\mathcal{F}_{i,k}$ and we have: $$-1=(D-Qf')(v_i)=(D-Q(f_j-f''))(v_i)=(D-Q(\mathcal{F}_{i,k}-f''))(v_i)\leq (D-Q(\mathcal{F}_{i,k}))(v_i) \leq -1.$$ Hence $(D-Q(\mathcal{F}_{i,k}))(v_i)=-1$ as desired. } \end{proof} We note that a more general version of the previous theorem can be stated for strongly connected directed graphs and might have been included in the section on Dhar's algorithm, but because we do not have a statement like Corollary \ref{g=g_0cor} for all strongly connected directed graphs, the statement of this more general theorem would have been awkwardly phrased. \begin{theorem} { \label{g=g_0canonical_thm} Let $K = (\delta_0-2, \dots, \delta_n-2)$ be a vector in ${\mathbb Z}^{n+1}$. If $g_{\max}=g_0$ then $D \in \mathcal{N}$ if and only if $K-D \in \mathcal{N}$. } \end{theorem} \begin{proof} { Without loss of generality, we may assume $D$ is a $v_0$-reduced divisor. Apply the generalized Dhar's algorithm to $D$ and let $(f_i)$ be the output sequence. Let $\mathcal{F}_{i,k}$ be the firing strategies defined in the proof of Theorem~\ref{gmaxg_0}. Define the divisor $D'$ such that for all $0 \leq i \leq n$, $$D'(v_i)= k\delta_i-\left(\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k})(v_j)\right)-1.$$ We claim that $D'$ is well-defined. To prove the claim, it is enough to show that for all $0 \leq i \leq n$, the value of $D'(v_i)$ does not depend upon $k$. We will show $D'=K-D$.
Since $g_{\max}=g_0$, Corollary~\ref{g=g_0cor} implies that for all $0 \leq i \leq n$, $\sum_{v_j \in N(v_i)} \mathcal{F}_{i,r_i+1-k}(v_j)=(r_i+1-k)\delta_i-D(v_i)-1$. For all $0 \leq i \leq n$, we have: $$\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k})(v_j)=\left(\sum_{v_j \in N(v_i)}r_j\right)-\left((r_i+1-k)\delta_i-D(v_i)-1\right)=-\delta_i+k\delta_i+D(v_i)+1.$$ Therefore, $$D'(v_i)= k\delta_i-\left(\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k})(v_j)\right)-1=k\delta_i-(-\delta_i+k\delta_i+D(v_i)+1)-1=\delta_i-2-D(v_i).$$ Since ${\rm deg}_R(K-D)=g_0-1$, to finish the proof we only need to show that $K-D$ is not equivalent to an effective divisor. Assume to the contrary that $D'$ is equivalent to some effective divisor $E$ and let $f$ be such that $D'-Qf=E$. Let $f' \approx f$ be a natural firing strategy guaranteed by Lemma~\ref{natural_lemma}. Define a ``reverse sequence'' of firing strategies $f'_i=R-f_{\vec{\bf 1} \cdot R-i}$ for all $0 \leq i \leq \vec{\bf 1} \cdot R$. Take $t$ as large as possible such that $f'_t \geq f'$. So there exists $v_i \in V(\vec{G})$ such that $f'(v_i)=f'_t(v_i)$. By the definition of the reverse sequence, there exists $1 \leq k \leq r_i$ such that $f'_t=R-\mathcal{F}_{i,r_i+1-k}+\chi_{\{v_i\}}$. Therefore, $$E(v_i) \leq (D'-Qf'_t)(v_i)$$ $$=k\delta_i-\left(\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k})(v_j)\right)-1-\left(r_i-(r_i+1-k)+1\right)\delta_i+\left(\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k} + \chi_{\{v_i\}})(v_j)\right)$$ $$=k\delta_i-k\delta_i-1=-1.$$ Note that $\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k} + \chi_{\{v_i\}})(v_j)=\sum_{v_j \in N(v_i)} (R-\mathcal{F}_{i,r_i+1-k})(v_j)$, since $v_i \notin N(v_i)$. This contradicts the choice of $E$. Hence $D'=K-D$ is not equivalent to an effective divisor. } \end{proof} We should mention that Theorem~\ref{gmaxg_0} and Theorem~\ref{g=g_0canonical_thm} are due to Lorenzini~\cite{Lor09}. His approach in proving these theorems is purely algebraic.
As mentioned in~\cite{Lor09}, he was interested in a combinatorial proof of these facts, which could be the one presented in this paper. \begin{theorem} { \label{g_0g_ming_max_equal}Let $(G,R)$ be an arithmetical graph. If $g_0=g_{\min}=g_{\max}$, then $(G,R)$ has the Riemann-Roch property. Moreover, the corresponding directed graph has the natural Riemann-Roch property. } \end{theorem} \begin{proof} { The first part of the theorem follows as an immediate consequence of Theorem~\ref{RR_formula_equiv_U_RI_thm} and Theorem~\ref{g=g_0canonical_thm}. The second part of the theorem follows by Corollary \ref{cor:canonical_1_R}, which in this context says that if $g_0=g_{\min}=g_{\max}$, then the canonical divisor for the corresponding digraph $\vec{G_R}$ has $i$th entry ${\rm deg}^+(v_i)-2$, i.e., $\vec{G_R}$ satisfies Definition \ref{def:natural_RR} for the row chip-firing game. Moreover, we note that $(\delta_0-2, \dots, \delta_n-2) \sim ({\rm deg}(v_0)-2, \dots, {\rm deg}(v_n)-2)$, as is easily observed by computing $Q \vec 1$. } \end{proof} \begin{corollary} { \label{unique_extreme} Let $(G,R)$ be an arithmetical graph. If ${\Lambda}_{(G,R)}$ has a unique class of extreme divisors, i.e. $Ext(\Sigma({\Lambda}_{(G,R)}))=\{\nu+{\ell} :{\ell} \in {\Lambda}_{(G,R)}\}$, then ${\Lambda}_{(G,R)}$ has the Riemann-Roch property. } \end{corollary} \subsection{Arithmetical Graphs with the Riemann-Roch Property} \begin{theorem} { \label{g_0_less_than_one} Let $(G,R)$ be an arithmetical graph. If $g_0\leq 1$ then $(G,R)$ has the Riemann-Roch property. } \end{theorem} \begin{proof} { Let $v_0$ be a vertex such that $r_0 \leq r_i$ for all $1 \leq i \leq n$. Let $D$ be an extreme $v_0$-reduced divisor with $D(v_0)=-1$. By Theorem~\ref{gmaxg_0}, $g_{\max} \leq g_0$, so ${\rm deg}(D) \leq g_{\max}-1 \leq 0$.
Now we have two cases: \begin{itemize} \item[(i)] If $D(v_i)=0$ for all $1 \leq i \leq n$, then part (ii) of Lemma~\ref{extreme_reduced_effective lem} and the choice of $r_0$ imply that $D$ is the unique extreme $v_0$-reduced divisor, and the assertion of the theorem holds by Corollary~\ref{unique_extreme}. Note that in this case $g_{\max} \neq g_0$ unless $g_0=0$ and $r_0=1$. \item[(ii)] If there exists $1 \leq i \leq n$ such that $D(v_i) > 0$, then since ${\rm deg}(D) \leq 0$, $r_i=r_0$ and $v_i$ is the only vertex with $D(v_i)>0$. This implies that the divisor $D'$ with $D'(v_0)=-1$ and $D'(v_j)=0$ for all $1 \leq j \leq n$ is not an extreme divisor. Hence, $g_0=g_{\min}=g_{\max}=1$, and the assertion of the theorem follows from Theorem~\ref{g_0g_ming_max_equal}. \end{itemize} } \end{proof} Using the definition of $g_0$, the following is an immediate consequence of Theorem~\ref{g_0_less_than_one}. \begin{corollary} { Let $(G,R)$ be an arithmetical graph with all $\delta_i$'s equal to two or all ${\rm deg}(v_i)$'s equal to two. Then $(G,R)$ has the Riemann-Roch property. } \end{corollary} The former arithmetical graphs are those arising from connections with Lie algebras or elliptic curves, which have been classified~\cite{CSM95}; the latter are the arithmetical graphs whose underlying graph is a cycle. The following two examples show that both cases described in the proof of Theorem~\ref{g_0_less_than_one} occur. \begin{example} { Let $(G,R)$ be an arithmetical graph where $G$ is the even cycle $v_0, \dots, v_{2n-1}$ for $n \geq 2$, and for all $0 \leq i \leq n-1$, the multiplicities of the vertices $v_{2i}$ and $v_{2i+1}$ are $1$ and $2$, respectively. Then $g_{\min}=g_{\max}=g_0=1$, and in particular $(G,R)$ has the Riemann-Roch property. } \end{example} \begin{proof} { We claim that the set of extreme $v_0$-reduced divisors for $(G,R)$ is the set of divisors $D_i=\chi_{\{v_{2i}\}} -\chi_{\{{v_0}\}}$ for all $1\leq i \leq n-1$.
Assume $1 \leq i \leq n-1$, and the vector $f$ is a valid firing strategy with respect to $v_0$ such that $D_i-Qf \geq {\vec{\bf 0}}$. Observe that if $f(v_{2i})=1$, then in order to have $(D_i-Qf)(v_{2i}) \geq 0$ we must have $f(v_{2i-1})+f(v_{2i+1}) \geq 3$. By symmetry, assume that $f(v_{2i-1}) \geq 2$. Since $(D_i-Qf)(v_{2i-1}) \geq 0$, we have $f(v_{2i-2}) =1$. By repeating the argument, we conclude that $f(v_0)=1$, a contradiction. This shows that $D_i$ is $v_0$-reduced and since $r_0=1$, (i) of Lemma~\ref{extreme_reduced_effective lem} implies that $D_i$ is not equivalent to an effective divisor. To prove that $D_i$ is an extreme divisor, it is enough to show that $D_i+\chi_{\{v_j\}}$ is equivalent to an effective divisor, for all $0\leq j \leq 2n-1$. It is easy to see that $g_0=1$. If $0 \leq j \leq 2n-1$ is odd, then the divisor $D_i+\chi_{\{v_j\}}$ has degree $2 > g_0$, thus Theorem~\ref{gmaxg_0} implies that $D_i+\chi_{\{v_j\}}$ is equivalent to an effective divisor. We claim that for all $0 \leq j \leq i \leq n-1$, the divisor $D_i+\chi_{\{v_{2j}\}}$ is equivalent to an effective divisor. We prove the claim by induction on $j$. If $j=0$, then the assertion of the claim trivially holds. So, assume $j >0$ and let $f=\chi_{\{v_{2j-1},\dots,v_{2i+1}\}}$. A simple computation gives that $D_i+\chi_{\{v_{2j}\}}-Qf=D_{i+1}+\chi_{\{v_{2j-2}\}}$. The induction hypothesis implies that $D_{i+1}+\chi_{\{v_{2j-2}\}}$ is equivalent to an effective divisor, and hence so is $D_i+\chi_{\{v_{2j}\}}$. This shows that the $D_i$'s are extreme $v_0$-reduced divisors. Now assume that $D$ is an extreme $v_0$-reduced divisor. Part (ii) of Lemma~\ref{extreme_reduced_effective lem} implies that $D(v_0)=-1$. If $D(v_{2i+1})=1$ for some $0 \leq i \leq n-1$, then $D$ is not a $v_0$-reduced divisor. The above argument shows that if $D(v_{2i})=2$ or $D(v_{2i})=D(v_{2j})=1$ for some $0 \leq i \ne j \leq n-1$, the divisor $D$ is equivalent to an effective divisor.
Obviously $D \neq -\chi_{\{v_0\}}$, and this completes the proof of the claim. Since each extreme $v_0$-reduced divisor $D_i$, $1\leq i \leq n-1$, has degree zero, $g_{\min}=g_{\max}=g_0$. Theorem~\ref{g_0g_ming_max_equal} implies that $(G,R)$ has the Riemann-Roch property. } \end{proof} \begin{example} { Let $(G,R)$ be an arithmetical graph where $G$ is a cycle $v_1, \dots, v_n$ for $n \geq 3$ and the multiplicity of the vertex $v_i$ is $i$ for all $1 \leq i \leq n$. Then $(G,R)$ has the Riemann-Roch property. } \end{example} \begin{proof} { It is easy to see that $g_0=1$. Now assume $D$ is an extreme $v_1$-reduced divisor. Part (ii) of Lemma~\ref{extreme_reduced_effective lem} implies that $D(v_1)=-1$. If there exists $2 \leq i \leq n$ such that $D(v_i) \geq 1$, then the degree of $D$ is at least one. Thus, Theorem~\ref{gmaxg_0} implies that $D$ is equivalent to an effective divisor. This shows that $D=-\chi_{\{v_1\}}$ is the unique extreme $v_1$-reduced divisor and the assertion follows from Corollary~\ref{unique_extreme}. } \end{proof} The following example introduced in~\cite{Lor09} has the Riemann-Roch property. \begin{example} { Let $(G,R)$ be an arithmetical graph where $G$ is a graph with vertex set $\{v_0,v_1\}$ such that $v_0$ is connected to $v_1$ with $r_0r_1$ edges, where $r_0$ and $r_1$ are the multiplicities of the vertices $v_0$ and $v_1$, respectively. Then $(G,R)$ has the Riemann-Roch property. } \end{example} \begin{proof} { The proof follows from Corollary~\ref{unique_extreme}, since there exists a unique extreme $v_0$-reduced divisor, $D=-\chi_{\{v_0\}}+(r^2_0-1)\chi_{\{v_1\}}$. Hence $g_{\min}=g_{\max}=g_{0}$. } \end{proof} Given any two positive integers $r_0 > r_1$, we can recursively construct a decreasing sequence $(r_i)$ where $r_{i+1}=\delta_ir_i-r_{i-1}$, $r_{i+1} < r_i$ and $\delta_i \in {\mathbb N}$ for all $i\geq 1$. We call such a sequence the {\it Euclidean sequence generated by $r_0$ and $r_1$}.
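For concreteness, the recursion just defined can be computed greedily: the requirement $0 < r_{i+1} < r_i$ forces $\delta_i = \lceil r_{i-1}/r_i \rceil$, and the sequence terminates as soon as the last term divides its predecessor. The following sketch illustrates this (the function name and interface are ours; the returned list of quotients consists of $\delta_1, \dots, \delta_{m-1}$ appearing in the recursion):

```python
from math import ceil, gcd

def euclidean_sequence(r0, r1):
    """Greedy computation of the Euclidean sequence generated by r0 > r1 >= 1.

    The constraint 0 < r_{i+1} = delta_i * r_i - r_{i-1} < r_i pins down
    delta_i = ceil(r_{i-1} / r_i); the sequence stops once the last term
    divides its predecessor, since then no valid delta_i remains.
    """
    assert r0 > r1 >= 1
    seq, deltas = [r0, r1], []
    while seq[-2] % seq[-1] != 0:          # a valid delta_i still exists
        prev, cur = seq[-2], seq[-1]
        delta = ceil(prev / cur)           # unique choice with 0 < next < cur
        deltas.append(delta)
        seq.append(delta * cur - prev)
    return seq, deltas

# e.g. euclidean_sequence(8, 5) returns ([8, 5, 2, 1], [2, 3]);
# the last term of the sequence is always gcd(r0, r1).
```

This is the ``negative remainder'' variant of Euclid's algorithm: at each step the remainder is taken below the next multiple rather than above it.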
Note that the Euclidean sequence generated by $r_0$ and $r_1$ is finite and it comes from a simple variation of Euclid's algorithm. Let $(G,R)$ be an arithmetical graph. We define a {\it Euclidean chain leaving $v_0$ generated by $r_0$ and $r_1$} to be an induced path $C=v_0,v_1, \dots,v_n$ with $n+1\geq 2$ vertices in $G$ such that ${\rm deg}_G(v_{n})=1$, where the corresponding sequence of multiplicities, $r_0,r_1, \dots, r_n$, is the Euclidean sequence generated by $r_0$ and $r_1$. Note that $r_n=\gcd(r_i,r_{i+1})$ for all $0 \leq i \leq n-1$. If $v_0$, $r_0$ and $r_1$ are clear from the context, we may simply refer to the path as a {\it Euclidean chain}. Lorenzini~\cite{Lor89} uses a slight variation of the Euclidean chain for building arithmetical graphs. We also use Euclidean chains to construct an arithmetical graph with the Riemann-Roch property. A {\it Euclidean star generated by} $r_0$ and $r_1$ is an arithmetical graph $(G,R)$ consisting of a vertex $v_0$ with multiplicity $r_0$ and $r_0$ identical Euclidean chains leaving $v_0$ generated by $r_0$ and $r_1$; we call $v_0$ the {\it center} vertex. When $r_0$ and $r_1$ are clear from the context, we will simply say {\it Euclidean star}. We will show that every Euclidean star generated by $r_0$ and $r_1$ with $\gcd(r_0,r_1)=1$ has the Riemann-Roch property. \begin{definition} Let $r_0 > r_1$ be two positive integers with $\gcd(r_0,r_1)=1$. Assume $r_0, r_1, \dots, r_m$ is the Euclidean sequence generated by $r_0$ and $r_1$. Given a nonnegative integer $x$, we say $x$ has a good representation with respect to $r_0$ and $r_1$ if there exist $0 \leq t_i \leq \delta_i-1$, for all $1 \leq i \leq m$, such that ${x=\sum_{i=1}^m t_ir_i}$, and there exist no $1 \leq i<j \leq m$ such that $t_i=\delta_i-1$, $t_j=\delta_j-1$ and for all $i<k<j$, $t_k =\delta_k-2$. \end{definition} \begin{lemma} { \label{unique_representation_lem} Let $r_0$ and $r_1$ be positive integers with $\gcd(r_0,r_1)=1$.
Given a nonnegative integer $x$, $x$ has a good representation with respect to $r_0$ and $r_1$ if and only if $0 \leq x \leq r_0-1$. Moreover, if $0 \leq x \leq r_0-1$, such a representation is unique. } \end{lemma} \begin{proof} { Assume $r_0, r_1, \dots, r_m$ is the Euclidean sequence generated by $r_0$ and $r_1$. We proceed by induction on $m$. If $m=1$, the assertion of the lemma is obvious. Now assume $m \geq 2$ and $x$ is an arbitrary nonnegative integer. It is easy to see that $t_1 \leq \lfloor {x \over r_1} \rfloor$. If $t_1 < \lfloor {x \over r_1} \rfloor$, then $x-t_1r_1 \geq r_1$, so by the induction hypothesis $x-t_1r_1$ does not have a good representation with respect to $r_1$ and $r_2$, because $\gcd(r_1,r_2)=1$ and the Euclidean sequence obtained from $r_1$ and $r_2$ is $r_1,r_2, \dots, r_m$. Hence, we may assume $t_1 = \lfloor {x \over r_1} \rfloor$, so by the induction hypothesis $x-t_1r_1$ has a good representation with respect to $r_1$ and $r_2$. If $t_1 \leq \delta_1-2$, then the good representation of $x-t_1r_1$ with respect to $r_1$ and $r_2$ extends to a good representation of $x$ with respect to $r_0$ and $r_1$. If $t_1 = \delta_1-1$, then $x-(\delta_1-1)r_1=x-r_0-r_2+r_1 < r_1-r_2$, therefore $x-t_1r_1+r_2=\sum_{i=2}^mt_ir_i$ is the unique good representation with respect to $r_1$ and $r_2$. We claim $t_2 \geq 1$. If $t_2=0$ then $x-t_1r_1+r_2$ has a good representation with respect to $r_2$ and $r_3$, therefore by induction $x-t_1r_1+r_2 < r_2$, so $x-t_1r_1<0$, a contradiction. Therefore $(t_2-1)r_2+\sum_{i=3}^mt_ir_i$ is the unique good representation of $x-t_1r_1$ with respect to $r_1$ and $r_2$. We claim that $t_1r_1+(t_2-1)r_2+\sum_{i=3}^mt_ir_i$ is the unique good representation of $x$ with respect to $r_0$ and $r_1$. Uniqueness has been established, so it remains to show that the representation is good. Assume the representation is not good.
It follows that there exists $i \geq 3$ such that $t_i=\delta_i-1$ and for all $2 < k < i$, $t_k=\delta_k-2$, and $t_2-1=\delta_2-2$. Therefore, $t_2=\delta_2-1$, which implies $\sum_{i=2}^mt_ir_i$ is not a good representation of $x-t_1r_1+r_2$ with respect to $r_1$ and $r_2$, a contradiction. Suppose there exists an integer $x \geq r_0$ such that $x$ has a good representation with respect to $r_0$ and $r_1$, $x=\sum_{i=1}^mt_ir_i$. If $t_1 \leq \delta_1-2$ then $x-t_1r_1 \geq x-(r_0+r_2)+2r_1 \geq r_1$. So by the induction hypothesis $x-t_1r_1$ does not have a good representation with respect to $r_1$ and $r_2$, a contradiction. Hence $t_1=\delta_1-1$ and $x-t_1r_1 < r_1$. This implies that $x-t_1r_1 = x-(r_0+r_2)+r_1 \geq r_1-r_2$. Let $x-t_1r_1=\sum_{i=2}^m t_ir_i$ be the good representation of $x-t_1r_1$ with respect to $r_1$ and $r_2$. By the induction hypothesis, $x-t_1r_1+r_2 \geq r_1$ does not have a good representation with respect to $r_1$ and $r_2$. Either there exists $3 \leq j \leq m$ such that $t_j=\delta_j-1$, $t_2+1=\delta_2-1$ and $t_i=\delta_i-2$ for all $2<i<j$, or $t_2+1=\delta_2$, both of which contradict the fact that $\sum_{i=1}^mt_ir_i$ is a good representation of $x$ with respect to $r_0$ and $r_1$ because $t_1=\delta_1-1$. } \end{proof} \begin{lemma} { \label{v_0_reduced_good_rep_lem} Let $(G,R)$ be a Euclidean star generated by $r_0$ and $r_1$ with center vertex $v_0$. Then the set of all $v_0$-reduced divisors is the set of divisors $D$ such that for any Euclidean chain $C=v_0,v_1, \dots,v_m$ leaving $v_0$, $x=\sum_{i=1}^mD(v_i)r_i$ is a good representation with respect to $r_0$ and $r_1$. } \end{lemma} \begin{proof} { Let $D$ be a $v_0$-reduced divisor and $C=v_0,v_1, \dots,v_m$ be a Euclidean chain leaving $v_0$. It is clear that if $x=\sum_{i=1}^mD(v_i)r_i$ is not a good representation with respect to $r_0$ and $r_1$ then $D$ is not a $v_0$-reduced divisor.
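As an aside, Lemma~\ref{unique_representation_lem} lends itself to a brute-force check for small parameters. The following sketch (our own illustrative code, not part of any proof) enumerates all candidate digit tuples and filters out the forbidden patterns; here we take the final quotient $\delta_m$ to be $r_{m-1}/r_m$, consistent with the degree-one end vertex of a Euclidean chain:

```python
from itertools import product
from math import ceil, gcd

def euclid_data(r0, r1):
    """Sequence r_0,...,r_m and quotients delta_1,...,delta_m.

    delta_1,...,delta_{m-1} come from the recursion r_{i+1} = delta_i*r_i - r_{i-1};
    delta_m is taken to be r_{m-1}/r_m (the degree-one end of a Euclidean chain).
    """
    assert r0 > r1 >= 1 and gcd(r0, r1) == 1
    seq, deltas = [r0, r1], []
    while seq[-2] % seq[-1] != 0:
        deltas.append(ceil(seq[-2] / seq[-1]))
        seq.append(deltas[-1] * seq[-1] - seq[-2])
    deltas.append(seq[-2] // seq[-1])
    return seq, deltas

def good_representations(x, r0, r1):
    """All digit tuples (t_1,...,t_m) that are good representations of x."""
    seq, deltas = euclid_data(r0, r1)
    reps = []
    for t in product(*[range(d) for d in deltas]):   # 0 <= t_i <= delta_i - 1
        if sum(ti * ri for ti, ri in zip(t, seq[1:])) != x:
            continue
        # forbid t_i = delta_i - 1 and t_j = delta_j - 1 with
        # t_k = delta_k - 2 for every k strictly between i and j
        bad = any(t[i] == deltas[i] - 1 and t[j] == deltas[j] - 1
                  and all(t[k] == deltas[k] - 2 for k in range(i + 1, j))
                  for i in range(len(t)) for j in range(i + 1, len(t)))
        if not bad:
            reps.append(t)
    return reps
```

With this reading, every $x$ with $0 \leq x \leq r_0-1$ is found to have exactly one good representation and every $x \geq r_0$ none, in agreement with the lemma.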
Conversely, let $D$ be a divisor such that for every Euclidean chain $C=v_0,v_1, \dots,v_m$ leaving $v_0$, $x=\sum_{i=1}^mD(v_i)r_i$ is a good representation with respect to $r_0$ and $r_1$, but $D$ is not a $v_0$-reduced divisor. Let $f \geq {\vec{\bf 0}}$ be a firing strategy such that $f(v_0) = 0$ and $D'=D-Qf$ is a $v_0$-reduced divisor. Note that the existence of $f$ is guaranteed by Corollary~\ref{positive_divisor_reduction_cor}. Let $C=v_0,v_1, \dots,v_m$ be a Euclidean chain leaving $v_0$. Without loss of generality we may assume $f' \neq {\vec{\bf 0}}$, where $f'$ is the projection of $f$ onto the first $m+1$ coordinates. If $f'(v_1) > 0$ then $\sum_{i=1}^mD'(v_i)r_i <0$, therefore there exists $1 \leq i \leq m$ such that $D'(v_i) < 0$, a contradiction. Hence, $\sum_{i=1}^mD'(v_i)r_i=\sum_{i=1}^mD(v_i)r_i$. Since $f' \neq {\vec{\bf 0}}$, by Lemma~\ref{lem:leftkernel} and the uniqueness of the representation of $\sum_{i=1}^mD(v_i)r_i$ implied by Lemma~\ref{unique_representation_lem}, $\sum_{i=1}^mD'(v_i)r_i$ is not a good representation. Therefore $D'$ is not $v_0$-reduced, a contradiction. } \end{proof} \begin{definition} Let $(G,R)$ be a Euclidean star generated by $r_0$ and $r_1$ with the center vertex $v_0$. We say a divisor $S$ is a {\it staircase divisor} if there exists a labeling $C_0, \dots, C_{r_0-1}$ of the Euclidean chains leaving $v_0$, where $v_{0},v_{i,1}, \dots, v_{i,m}$ are the vertices of $C_i$, such that $\sum_{j=1}^m S(v_{i,j})r_j$ is the good representation of $i$, for all $0 \leq i \leq r_0-1$, and $S(v_0)=-1$. \end{definition} \begin{lemma} { \label{extreme_divisor_star_lem} Let $(G,R)$ be a Euclidean star generated by $r_0$ and $r_1$ with the center vertex $v_0$. A divisor $D$ is an extreme $v_0$-reduced divisor if and only if $D$ is a staircase divisor.
} \end{lemma} \begin{proof} { Let $S$ be a staircase divisor and $C_0, \dots, C_{r_0-1}$ be a labeling of the Euclidean chains leaving $v_0$ where $v_{0},v_{i,1}, \dots, v_{i,m}$ are the vertices of $C_i$. We claim that $S$ is not equivalent to an effective divisor. To prove the claim, it is enough to show that all $v_0$-reduced divisors equivalent to $S$ are staircase divisors. Let $1 \leq k \leq r_0$ and $f_k$ be the firing strategy guaranteed by Corollary~\ref{unique_firing_reduced_cor}, such that $f_k(v_0)=k$ and $S_k=S-Qf_k$ is a $v_0$-reduced divisor. Note that by Lemma~\ref{v_0_reduced_good_rep_lem}, the divisor $S$ is $v_0$-reduced. So, as an application of part (ii) of Theorem~\ref{dhar_alg_thm}, we may assume $f_k \geq \vec{\bf 0}$. It is clear from the proof of Lemma~\ref{v_0_reduced_good_rep_lem} that $\sum_{j=1}^mS_k(v_{i,j})r_j$ is a good representation of $(i+kr_1) \bmod r_0$ for all $0 \leq i \leq r_0-1$. Note that $S_k$ is a staircase divisor and $S_k(v_0)=-1$. So (i) of Lemma~\ref{extreme_reduced_effective lem} implies that $S_k$ is not equivalent to an effective divisor. Now, we prove that for any $v_0$-reduced divisor $D$ not equivalent to an effective divisor, there exist a staircase divisor $S$ and $D' \sim D$ such that $D' \leq S$. Let $C_0, \dots, C_{r_0-1}$ be a labeling of the Euclidean chains leaving $v_0$, where $v_{0},v_{i,1}, \dots, v_{i,m}$ are the vertices of $C_i$, such that $\sum_{j=1}^mD(v_{i,j})r_j \leq \sum_{j=1}^mD(v_{i+1,j})r_j$ for all $0 \leq i \leq r_0-2$. Let $S$ be the staircase divisor defined by the same labeling of the Euclidean chains leaving $v_0$. If for all $0 \leq i \leq r_0-1$, $\sum_{j=1}^mD(v_{i,j})r_j \leq i$, then $D \leq S$, so we may assume that there exists $0 \leq i \leq r_0-1$ such that $\sum_{j=1}^mD(v_{i,j})r_j > i$. Let $k$ be such that $kr_1 \equiv r_0-i-1 \pmod{r_0}$.
By Corollary~\ref{unique_firing_reduced_cor} there exist firing strategies $f_D$ and $f_S$ such that $f_D(v_0)=f_S(v_0)=k$ and the divisors $D_k=D-Qf_D$ and $S_k=S-Qf_S$ are $v_0$-reduced. We claim that $D_k$ is effective, in particular $D_k(v_0)=0$. We have $f_D(v_{{\ell},1})=f_S(v_{{\ell},1})=\lfloor {kr_1 \over r_0}\rfloor$ for all $0 \leq {\ell} \leq i-1$ and $f_D(v_{{\ell},1})=f_S(v_{{\ell},1})=\lceil {kr_1 \over r_0}\rceil$ for all $i+1 \leq {\ell} \leq r_0-1$, but $f_D(v_{i,1})=\lceil {kr_1 \over r_0}\rceil$ while $f_S(v_{i,1})=\lfloor {kr_1 \over r_0}\rfloor$. This proves the claim and completes the proof of the lemma. } \end{proof} \begin{theorem} { \label{euc_star_thm} Let $(G,R)$ be a Euclidean star. Then $(G,R)$ has the Riemann-Roch property. } \end{theorem} \begin{proof} { By Lemma~\ref{extreme_divisor_star_lem}, we know that the set of staircase divisors is the set of extreme $v_0$-reduced divisors, hence $$g_{\min}-1=g_{\max}-1=\left(\sum_{i=0}^{r_0-1}i\right)-r_0=r_0(r_0-3)/2.$$ Let $V(\vec{G})=\{v_0, \dots, v_n\}$. Using the formula for $g_0$, we have $$g_0-1=\sum_{i=0}^n r_i({\rm deg}(v_i)-2)/2=r_0(r_0-3)/2={r_0-1 \choose 2}-1.$$ Now the assertion of the theorem follows from Theorem~\ref{g_0g_ming_max_equal}. } \end{proof} \subsection{Arithmetical Graphs without the Riemann-Roch Property} It follows from Theorem~\ref{RR_formula_equiv_U_RI_thm} that an arithmetical graph $(G,R)$ fails to have the Riemann-Roch property if $(G,R)$ is not uniform or is not reflection invariant. The following examples show that each of the three possibilities (failing both properties, being uniform but not reflection invariant, and being reflection invariant but not uniform) can occur. \begin{example} \label{withot_RR_NU_NR_exa} { Let $(G,R)$ be an arithmetical graph, where $G$ is the graph obtained by adding two edges connecting $v_0$ to $v_{3}$ to the $6$-cycle $v_0, \dots, v_{5}$, and the multiplicity of the vertex $v_i$ is $1$ if $i \in \{0,2,4\}$ and is $2$ otherwise. Then $(G,R)$ is neither uniform nor reflection invariant.
} \end{example} \begin{proof} { Let $\nu_1=-\chi_{\{v_0\}}+\chi_{\{v_{2},v_{3},v_{4}\}}$, $\nu_2=-\chi_{\{v_0\}}+\chi_{\{v_{2}\}}+2\chi_{\{v_{4}\}}$ and $\nu_3=-\chi_{\{v_0\}}+2\chi_{\{v_{2}\}}+\chi_{\{v_{4}\}}$. We claim that $\mathcal{E}=\{\nu_1,\nu_2,\nu_3\}$ is the set of extreme $v_0$-reduced divisors of $(G,R)$. Note that ${\rm deg}_R(\nu_1)=3$ and ${\rm deg}_R(\nu_2)={\rm deg}_R(\nu_3)=2$. To prove the claim, we start by showing that $\nu_1$ is $v_0$-reduced. Let $f$ be a valid firing strategy with respect to $v_0$ such that $(\nu_1-Qf)(v_i) \geq 0$, for all $1 \leq i \leq 5$. If $f(v_{2})=1$, since $(\nu_1-Qf)(v_{2}) \geq 0$, we have $f(v_1)+f(v_{3}) \geq 3$. If $f(v_1) =2$, since $(\nu_1-Qf)(v_{1}) \geq 0$ we must have $f(v_0) \geq 1$, a contradiction. So $f(v_{3})=2$ and this implies that in order to have $(\nu_1-Qf)(v_{3}) \geq 0$ we must have $f(v_{4}) = 3$, a contradiction. This shows that $f(v_2)=0$, and by symmetry $f(v_4)=0$; it follows that $f(v_1)=f(v_5)=0$, which shows that $f(v_3)=0$. This shows that $f={\vec{\bf 0}}$, which contradicts the fact that $f$ is a valid strategy with respect to $v_0$. Hence, $\nu_1$ is $v_0$-reduced, as desired. By applying a similar argument, we can see that $\nu_2$ and $\nu_3$ are $v_0$-reduced divisors. Note that since $r_0=1$, by Lemma~\ref{extreme_reduced_effective lem}(i), the $v_0$-reduced divisors $\nu_1,\nu_2,\nu_3$ are not equivalent to effective divisors and they are pairwise inequivalent. It is easy to compute that ${\rm deg}_R(\nu_1)=3=g_0-1$, so for each $0 \leq i \leq 5$ the divisor $\nu_1+\chi_{\{v_i\}}$ has ${\rm deg}_R$ at least $g_0$, and Theorem~\ref{gmaxg_0} implies that $\nu_1$ is extreme. Hence, by symmetry, we only need to prove that $\nu_2$ is extreme. To prove this fact, it is enough to show that $D=\nu_2+\chi_{\{v_i\}}$ is equivalent to an effective divisor for all $0 \leq i \leq 5$. If $i \not \in \{0,2,4\}$, then the degree of $D$ is $4 = g_0$, so Theorem~\ref{gmaxg_0} implies that $D$ is equivalent to an effective divisor. If $i=0$, then $D$ is trivially effective.
If $i=2$, then we have a firing strategy $f={\vec{\bf 1}}-\chi_{\{v_0\}}$ such that $D-Qf=3\chi_{\{v_0\}} \geq {\vec{\bf 0}}$. Also if $i=4$, then we have $f=\chi_{\{v_4,v_5\}}$ such that $D-Qf=\chi_{\{v_2,v_3\}} \geq {\vec{\bf 0}}$. This completes the proof of the fact that $\nu_1, \nu_2, \nu_3$ are extreme $v_0$-reduced divisors. Suppose $\nu$ is an extreme $v_0$-reduced divisor. It is easy to see that $\nu(v_2) \leq 2$ (by symmetry $\nu(v_4) \leq 2$), since otherwise $\nu-Qf \geq 0$, where $f=\chi_{\{v_1,v_2\}}$. Note that $\nu(v_1)=\nu(v_5)=0$ and $\nu(v_3) \leq 1$. It follows that $\mathcal{E}$ is the set of extreme $v_0$-reduced divisors and this completes the proof of the claim. This demonstrates that $(G,R)$ is not uniform. Now, we are going to show that $(G,R)$ is not reflection invariant. Let ${\Lambda}$ be the lattice spanned by the Laplacian of $(G,R)$. By applying Lemma~\ref{reduce_exist_lemma} and (ii) of Lemma~\ref{extreme_reduced_effective lem}, we conclude that $Ext(\Sigma({\Lambda}))=\{\nu+{\ell}: \ell \in {\Lambda}, \nu \in \mathcal{E}\}$. Corollary~\ref{extreme_L_ciritical_cor} implies $Crit({\Lambda})=\mathcal{P}+{\Lambda}$, where $\mathcal{P}=\{\pi(\nu+{\vec{\bf 1}}): \nu \in \mathcal{E}\}$. Let $p_i=\pi(\nu_i+\vec{\bf 1})=(\nu_i+\vec{\bf 1})-\left({(\nu_i+\vec{\bf 1}) \cdot R \over R \cdot R}\right)R$. An easy computation shows that $p_1={1 \over 5}(-4,-3,6,2,6,-3), p_2={1 \over 15}(-11,-7,19,-7,34,-7)$ and $p_3={1 \over 15}(-11,-7,34,-7,19,-7)$. Seeking a contradiction, assume there exists $v \in {\mathbb R}^{6}$ such that $-Crit({\Lambda})=Crit({\Lambda})+v$. Either there exist ${\ell},{\ell}',{\ell}'' \in {\Lambda}$ such that $-p_1=p_1+{\ell}+v$, $-p_2=p_2+{\ell}'+v$ and $-p_3=p_3+{\ell}''+v$; in this case $2(p_i-p_j) \in {\Lambda}$ for all $1 \leq i \neq j \leq 3$.
Or, there exist ${\ell},{\ell}' \in {\Lambda}$ and $\{i,j,k\}=\{1,2,3\}$ such that $-p_i=p_j+{\ell}+v$ and $-p_k=p_k+{\ell}'+v$; in this case $-p_j=p_i+{\ell}+v$ and we must have $-2p_k+p_i+p_j \in {\Lambda}$. Note that ${\Lambda} \subseteq {\mathbb Z}^6$, so an easy computation shows that none of the above cases occur. This proves that $(G,R)$ is not reflection invariant. } \end{proof} \begin{example} { \label{exa:uni_not_ref} Let $(G,R)$ be an arithmetical graph, where $G$ is a graph obtained from $K_4$, where $V(K_4)=\{v_0,v_1,v_2,v_3\}$, by subdividing the edge $v_2v_3$ twice. The multiplicities of the vertices $v_0$ and $v_1$ are $2$ and $4$, respectively, and the multiplicity of each of the other vertices is $3$. Then $(G,R)$ is uniform but not reflection invariant. } \end{example} \begin{proof} { Let $P=v_2v_4v_5v_3$ be the induced path connecting $v_2$ to $v_3$, i.e., the path obtained by subdividing the edge $v_2v_3$ in the graph $K_4$. Let $\nu_1=-\chi_{\{v_0\}}+\chi_{\{v_2,v_4\}}$, $\nu_2=-\chi_{\{v_0\}}+2\chi_{\{v_2\}}$ and $\nu_3=-\chi_{\{v_0\}}+2\chi_{\{v_3\}}$. We claim that $\mathcal{E}=\{\nu_1,\nu_2,\nu_3\}$ is the set of extreme $v_0$-reduced divisors of $(G,R)$. By running the generalized Dhar's algorithm on each $\nu_i$, $1 \leq i \leq 3$, it is not hard to see that $\nu_1 \sim -\chi_{\{v_0\}}+\chi_{\{v_3,v_5\}}$, $\nu_2 \sim -\chi_{\{v_0\}}+\chi_{\{v_3,v_4\}}$ and $\nu_3 \sim -\chi_{\{v_0\}}+\chi_{\{v_2,v_5\}}$. We will leave the details of the fact that each $\nu_i$, $1 \leq i \leq 3$, is $v_0$-reduced to the reader. (It follows from Lemma~\ref{dhar_alg_thm}, or a case analysis similar to the one used in the proof of Example~\ref{withot_RR_NU_NR_exa}.) It is easy to compute that $g_0=7$, and for all $\nu \in \mathcal{E}$ and $0 \leq i \leq 5$, ${\rm deg}_R(\nu+\chi_{\{v_i\}}) \geq 7$. Now, Theorem~\ref{gmaxg_0} implies that $\nu+\chi_{\{v_i\}}$ is equivalent to an effective divisor. This shows that each $\nu_i$, $1 \leq i \leq 3$, is an extreme $v_0$-reduced divisor.
To finish the proof of the claim, it is enough to show that if $\nu$ is an extreme $v_0$-reduced divisor, then $\nu \in \mathcal{E}$. Note that $\nu(v_1)=0$ since otherwise $\nu-Qf \geq 0$ where $f=\chi_{\{v_0\}}+3\chi_{\{v_1\}}+2\chi_{\{v_2,v_3,v_4,v_5\}}$. Also, note that if $\nu(v_2) \geq 1$ and $\nu(v_3) \geq 1$, then $\nu-Qf \geq \chi_{\{v_1\}}$ where $f=\chi_{\{v_0, \dots, v_5\}}$. This shows that there exists $1 \leq i \leq 3$ such that $\nu=\nu_i$ or $\nu \sim \nu_i$. The uniformity of $(G,R)$ immediately follows from the fact that for all $\nu \in \mathcal{E}$, ${\rm deg}_R(\nu)=4$. To prove that $(G,R)$ is not reflection invariant, we apply an argument similar to the one used in the proof of Example~\ref{withot_RR_NU_NR_exa}. Let $\mathcal{P}=\{p_1,p_2,p_3\}$ be the same set as defined in Example~\ref{withot_RR_NU_NR_exa}. An easy computation shows that $p_1={1 \over 3}(-2,-1,4,-1,4,-1)$, $p_2={1 \over 3}(-2,-1,7,-1,1,-1)$, and $p_3={1 \over 5}(-4,-3,1,7,1,-3)$. Seeking a contradiction, assume there exists $v \in {\mathbb R}^{6}$ such that $-Crit({\Lambda})=Crit({\Lambda})+v$. Either there exist ${\ell},{\ell}',{\ell}'' \in {\Lambda}$ such that $-p_1=p_1+{\ell}+v$, $-p_2=p_2+{\ell}'+v$ and $-p_3=p_3+{\ell}''+v$, in which case $2(p_i-p_j) \in {\Lambda}$ for all $1 \leq i \neq j \leq 3$. Or, there exist ${\ell},{\ell}' \in {\Lambda}$ and $\{i,j,k\}=\{1,2,3\}$ such that $-p_i=p_j+{\ell}+v$ and $-p_k=p_k+{\ell}'+v$, in which case $-p_j=p_i+{\ell}+v$ and we must have $-2p_k+p_i+p_j \in {\Lambda}$. Note that ${\Lambda} \subseteq {\mathbb Z}^6$, so an easy computation shows that neither of the above cases occurs. This proves that $(G,R)$ is not reflection invariant. } \end{proof} \begin{example} { Let $R=(r_0,r_1,r_2)=(1,2,3)$, and let $(G,R)$ be an arithmetical graph where $G$ is the graph with vertex set $\{v_0,v_1,v_2\}$ such that the multiplicity of $v_i$ is $r_i$ and $v_i$ is connected to $v_j$ by $r_ir_j$ edges for all $0 \leq i \neq j \leq 2$.
Then $(G,R)$ is not uniform but it is reflection invariant. } \end{example} \begin{proof} { We claim that $\nu_1=-\chi_{\{v_0\}}+3\chi_{\{v_1\}}+2\chi_{\{v_2\}}$ and $\nu_2=-\chi_{\{v_0\}}+\chi_{\{v_1\}}+3\chi_{\{v_2\}}$ are the only extreme $v_0$-reduced divisors. Suppose $\nu$ is an extreme $v_0$-reduced divisor. Lemma~\ref{extreme_reduced_effective lem} (ii) implies that $\nu(v_0)=-1$. It is not hard to see that $\nu(v_1) \leq 3$ and $\nu(v_2) \leq 3$, since otherwise $\nu-Qf$ is effective, where $f=\chi_{\{v_1,v_2\}}$ and $f=\chi_{\{v_1\}}+2\chi_{\{v_2\}}$, respectively. Moreover, if $D=-\chi_{\{v_0\}}+2\chi_{\{v_1\}}+3\chi_{\{v_2\}}$, then $D-Qf'$ is effective, where $f'=2\chi_{\{v_1\}}+3\chi_{\{v_2\}}$. Therefore the only possible extreme divisors are $\nu_1$ and $\nu_2$. By running the generalized Dhar's algorithm on $\nu_1$ and $\nu_2$, and applying Lemma~\ref{dhar_alg_thm}, one can check that $\nu_1$ and $\nu_2$ are $v_0$-reduced and therefore they are not equivalent to effective divisors. Note that in the above computation we already checked some of the possible firing strategies arising in a run of the generalized Dhar's algorithm on $\nu_1$ and $\nu_2$. Now, we claim that if an arithmetical graph $(G,R)$ has exactly two extreme $v_0$-reduced divisors, then $(G,R)$ is reflection invariant. Let ${\Lambda}$ be the lattice spanned by the Laplacian of $(G,R)$ and let $\mathcal{E}$ be the set of extreme divisors of ${\Lambda}$. By applying Lemma~\ref{reduce_exist_lemma} and (ii) of Lemma~\ref{extreme_reduced_effective lem}, we conclude that $Ext(\Sigma({\Lambda}))=\{\nu+{\ell}: \ell \in {\Lambda}, \nu \in \mathcal{E}\}$. Corollary~\ref{extreme_L_ciritical_cor} implies $Crit({\Lambda})=\mathcal{P}+{\Lambda}$ where $\mathcal{P}=\{\pi(\nu+{\vec{\bf 1}}): \nu \in \mathcal{E}\}$. Let $\nu_1$ and $\nu_2$ be the only extreme $v_0$-reduced divisors of $(G,R)$ and let $p_1=\pi(\nu_1+{\vec{\bf 1}})$ and $p_2=\pi(\nu_2+{\vec{\bf 1}})$.
To prove the claim, it is enough to show that $-Crit({\Lambda})=Crit({\Lambda})+v$ where $v=-p_1-p_2$. Assume $p \in Crit({\Lambda})$; then there exist $1 \leq i \leq 2$ and $\ell \in {\Lambda}$ such that $p=p_i+\ell$. Now, it is easy to see that $p_i+\ell+v=-p_j+\ell=-(p_j-\ell)$ where $j=-i+3$ and $p_j-\ell \in Crit({\Lambda})$. This completes the proof of the claim. So, by an argument similar to the one in the proof of Example~\ref{exa:uni_not_ref}, $(G,R)$ is reflection invariant. Since ${\rm deg}_R(\nu_1)=11$ and ${\rm deg}_R(\nu_2)=10$, we have $g_{\max}=12$ and $g_{\min}=11$. This shows that $(G,R)$ is not uniform. } \end{proof} \addcontentsline{toc}{section}{Acknowledgments} \section*{Acknowledgments} We would like to thank Matthew Baker for introducing the problem to us and for guiding us with helpful discussions and suggestions of potential approaches toward solving the problem. We also thank Dino Lorenzini, Farbod Shokrieh and Robin Thomas for valuable conversations and useful comments. \addcontentsline{toc}{section}{References}
\section{Introduction} As usual, a {\it convex body} of the Euclidean space $\mathbb{E}^d$ is a compact convex set with non-empty interior. Let $\mathbf{C}\subset\mathbb{E}^d$ be a convex body, and let $H\subset\mathbb{E}^d$ be a hyperplane. Then the distance $w(\mathbf{C} , H)$ between the two supporting hyperplanes of $\mathbf{C}$ parallel to $H$ is called the {\it width of $\mathbf{C}$ parallel to $H$}. Moreover, the smallest width of $\mathbf{C}$ parallel to hyperplanes of $\mathbb{E}^d$ is called the {\it minimal width} of $\mathbf{C}$ and is denoted by $w(\mathbf{C})$. Recall that in the 1930's, Tarski posed what came to be known as the plank problem. A {\it plank} $\mathbf{P}$ in $\mathbb{E}^d$ is the (closed) set of points between two distinct parallel hyperplanes. The {\it width} $w(\mathbf{P})$ of $\mathbf{P}$ is simply the distance between the two boundary hyperplanes of $\mathbf{P}$. Tarski conjectured that if a convex body of minimal width $w$ is covered by a collection of planks in $\mathbb{E}^d$, then the sum of the widths of these planks is at least $w$. This conjecture was proved by Bang in his memorable paper \cite{Ba51}. (In fact, the proof presented in that paper is a simplification and generalization of the proof published by Bang somewhat earlier in \cite{Ba50}.) Thus, we call the following statement Bang's plank theorem. \begin{theorem}\label{Bang-plank-th} If the convex body $\mathbf{C}$ is covered by the planks $\mathbf{P}_1, \mathbf{P}_2, \dots , \mathbf{P}_n$ in $\mathbb{E}^d, d\ge 2$ (i.e., $\mathbf{C}\subset \mathbf{P}_1\cup \mathbf{P}_2\cup \dots \cup \mathbf{P}_n\subset\mathbb{E}^d$), then $\sum_{i=1}^n w(\mathbf{P}_i)\ge w(\mathbf{C})$. \end{theorem} In \cite{Ba51}, Bang raised the following stronger version of Tarski's plank problem called the affine plank problem. We phrase it via the following definition. 
Let $\mathbf{C}$ be a convex body and let $\mathbf{P}$ be a plank with boundary hyperplanes parallel to the hyperplane $H$ in $\mathbb{E}^d$. We define the {\it $\mathbf{C}$-width} of the plank $\mathbf{P}$ as $\frac{w(\mathbf{P}) }{w(\mathbf{C} , H) }$ and label it $w_{\mathbf{C}}(\mathbf{P})$. (This notion was introduced by Bang \cite{Ba51} under the name ``relative width''.) \begin{conjecture}\label{Bang-conjecture} If the convex body $\mathbf{C}$ is covered by the planks $\mathbf{P}_1, \mathbf{P}_2, \dots ,$ $\mathbf{P}_n$ in $\mathbb{E}^d, d\ge 2$, then $\sum_{i=1}^n w_{\mathbf{C}}(\mathbf{P}_i)\ge 1$. \end{conjecture} The special case of Conjecture \ref{Bang-conjecture}, when the convex body to be covered is centrally symmetric, has been proved by Ball in \cite{Bal91}. Thus, the following is Ball's plank theorem. \begin{theorem}\label{Ball-plank-th} If the centrally symmetric convex body $\mathbf{C}$ is covered by the planks $\mathbf{P}_1, \mathbf{P}_2, \dots , \mathbf{P}_n$ in $\mathbb{E}^d, d\ge 2$, then $\sum_{i=1}^n w_{\mathbf{C}}(\mathbf{P}_i)\ge 1$. \end{theorem} It was Alexander \cite{Al68} who noticed that Conjecture \ref{Bang-conjecture} is equivalent to the following generalization of a problem of Davenport. \begin{conjecture}\label{Alexander--Davenport} If a convex body $\mathbf{C}$ in $\mathbb{E}^d, d\ge 2$ is sliced by $n-1$ hyperplane cuts, then there exists a piece that covers a translate of $\frac{1}{n}\mathbf{C}$. \end{conjecture} We note that the paper \cite{BeBe96} of A. Bezdek and the author proves Conjecture \ref{Alexander--Davenport} for successive hyperplane cuts (i.e., for hyperplane cuts when each cut divides one piece). Also, the same paper (\cite{BeBe96}) introduced two additional equivalent versions of Conjecture \ref{Bang-conjecture}. As they seem to be of independent interest we recall them following the terminology used in \cite{BeBe96}. 
Let $\mathbf{C}$ and $\mathbf{K}$ be convex bodies in $\mathbb{E}^d$ and let $H$ be a hyperplane of $\mathbb{E}^d$. The {\it $\mathbf{C}$-width of $\mathbf{K}$ parallel to $H$} is denoted by $ w_{\mathbf{C}}(\mathbf{K} , H)$ and is defined as $\frac{w(\mathbf{K} , H)}{w(\mathbf{C} , H) }$. The {\it minimal $\mathbf{C}$-width of $\mathbf{K}$} is denoted by $ w_{\mathbf{C}}(\mathbf{K})$ and is defined as the minimum of $ w_{\mathbf{C}}(\mathbf{K} , H)$, where the minimum is taken over all possible hyperplanes $H$ of $\mathbb{E}^d$. Recall that the inradius of $\mathbf{K}$ is the radius of the largest ball contained in $\mathbf{K}$. It is quite natural then to introduce the {\it $\mathbf{C}$-inradius of $\mathbf{K}$} as the factor of the largest positive homothetic copy of $\mathbf{C}$, a translate of which is contained in $\mathbf{K}$. We need to do one more step to introduce the so-called successive $\mathbf{C}$-inradii of $\mathbf{K}$ as follows. Let $r$ be the $\mathbf{C}$-inradius of $\mathbf{K}$. For any $0<\rho\le r$ let the {\it $\rho\mathbf{C}$-rounded body of $\mathbf{K}$} be denoted by ${\mathbf{K}}^{\rho\mathbf{C}}$ and be defined as the union of all translates of $\rho\mathbf{C}$ that are covered by $\mathbf{K}$. Now, take a fixed integer $m\ge 1$. On the one hand, if $\rho>0$ is sufficiently small, then $ w_{\mathbf{C}}({\mathbf{K}}^{\rho\mathbf{C}})>m\rho$. On the other hand, $ w_{\mathbf{C}}({\mathbf{K}}^{r\mathbf{C}})=r\le mr$. As $ w_{\mathbf{C}}({\mathbf{K}}^{\rho\mathbf{C}})$ is a decreasing continuous function of $\rho>0$ and $m\rho$ is a strictly increasing continuous function of $\rho$, there exists a uniquely determined $\rho>0$ such that $$ w_{\mathbf{C}}({\mathbf{K}}^{\rho\mathbf{C}})=m\rho.$$ This uniquely determined $\rho$ is called the {\it $m$th successive $\mathbf{C}$-inradius of $\mathbf{K}$} and is denoted by $r_{\mathbf{C}}(\mathbf{K} , m)$. 
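To make the definition concrete, here is a minimal numerical sketch for a hypothetical instance chosen for this illustration (it is not an example from the text): $\mathbf{K}$ an $a\times b$ rectangle and $\mathbf{C}$ the unit square. Here every translate of $\rho\mathbf{C}$ with $\rho\le\min(a,b)$ fits in $\mathbf{K}$ and the union of such translates is all of $\mathbf{K}$, so $\mathbf{K}^{\rho\mathbf{C}}=\mathbf{K}$, and the defining equation $w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})=m\rho$ gives $r_{\mathbf{C}}(\mathbf{K},m)=\min(a,b)/m$.

```python
import math

# Hypothetical concrete instance: K = [0,a] x [0,b], C = [0,1]^2.
a, b = 2.0, 5.0

def width(dims, theta):
    # support width of an axis-aligned box in the direction whose
    # normal makes angle theta with the x-axis
    w, h = dims
    return w * abs(math.cos(theta)) + h * abs(math.sin(theta))

# minimal C-width of K: minimize w(K,H)/w(C,H) over directions
thetas = [k * math.pi / 10000 for k in range(10000)]
wC_K = min(width((a, b), t) / width((1, 1), t) for t in thetas)
assert abs(wC_K - min(a, b)) < 1e-6   # the minimum is min(a,b), at theta = 0

# Since K^{rho C} = K for every 0 < rho <= min(a,b) in this model, the
# defining equation w_C(K^{rho C}) = m*rho yields r_C(K, m) = min(a,b)/m;
# the sequence m * r_C(K, m) is then constant, equal to w_C(K), which is
# consistent with it being increasing with limit w_C(K).
for m in range(1, 6):
    r = wC_K / m
    assert r <= min(a, b) and abs(m * r - wC_K) < 1e-9
```

For bodies whose $\rho\mathbf{C}$-rounded body genuinely shrinks as $\rho$ grows (e.g., $\mathbf{K}$ a triangle), the equation $w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})=m\rho$ must be solved rather than read off, but the monotonicity argument in the text applies unchanged.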
Now, the two equivalent versions of Conjecture \ref{Bang-conjecture} and Conjecture \ref{Alexander--Davenport} introduced in \cite{BeBe96} can be phrased as follows. \begin{conjecture}\label{Bezdek--Bezdek-1} If a convex body $\mathbf{K}$ in $\mathbb{E}^d, d\ge 2$ is covered by the planks $\mathbf{P}_1, \mathbf{P}_2,$ $\dots , \mathbf{P}_n$, then $\sum_{i=1}^n w_{\mathbf{C}}(\mathbf{P}_i)\ge w_{\mathbf{C}}(\mathbf{K})$ for any convex body $\mathbf{C}$ in $\mathbb{E}^d$. \end{conjecture} \begin{conjecture}\label{Bezdek--Bezdek-2} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$. If $\mathbf{K}$ is sliced by $n-1$ hyperplanes, then the minimum of the greatest $\mathbf{C}$-inradius of the pieces is equal to the $n$th successive $\mathbf{C}$-inradius of $\mathbf{K}$, i.e., it is $r_{\mathbf{C}}(\mathbf{K} , n)$. \end{conjecture} Recall that Theorem \ref{Ball-plank-th} gives a proof of (Conjecture \ref{Bezdek--Bezdek-1} as well as) Conjecture \ref{Bezdek--Bezdek-2} for centrally symmetric convex bodies $\mathbf{K}$ in $\mathbb{E}^d, d\ge 2$ (with $\mathbf{C}$ being an arbitrary convex body in $\mathbb{E}^d, d\ge 2$). Another approach that leads to a partial solution of Conjecture \ref{Bezdek--Bezdek-2} was published in \cite{BeBe96}. Namely, in that paper A. Bezdek and the author proved the following theorem that (under the condition that $\mathbf{C}$ is a ball) answers a question raised by Conway (\cite{BeBe95}) as well as proves Conjecture \ref{Bezdek--Bezdek-2} for successive hyperplane cuts. \begin{theorem}\label{Bezdek-Bezdek-Conway} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d$, $d\ge 2$. If $\mathbf{K}$ is sliced into $n\ge 1$ pieces by $n-1$ successive hyperplane cuts (i.e., when each cut divides one piece), then the minimum of the greatest $\mathbf{C}$-inradius of the pieces is the $n$th successive $\mathbf{C}$-inradius of $\mathbf{K}$ (i.e., $r_{\mathbf{C}}(\mathbf{K} , n)$). 
An optimal partition is achieved by $n-1$ parallel hyperplane cuts equally spaced along the minimal $\mathbf{C}$-width of the $r_{\mathbf{C}}(\mathbf{K} , n)\mathbf{C}$-rounded body of $\mathbf{K}$. \end{theorem} Akopyan and Karasev (\cite{AkKa12}) have very recently proved a related partial result on Conjecture~\ref{Bezdek--Bezdek-1}. Their theorem is based on a nice generalization of successive hyperplane cuts. The exact details are as follows. By the {\it convex partition} $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ of $\mathbb{E}^d$ we mean the family $\mathbf{V}_1, \mathbf{V}_2, \dots, \mathbf{V}_n$ of closed convex sets having pairwise disjoint non-empty interiors in $\mathbb{E}^d$ with $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n=\mathbb{E}^d$. Then we say that the convex partition $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ of $\mathbb{E}^d$ is an {\it inductive partition} of $\mathbb{E}^d$ if for any $1\le i\le n$, there exists an inductive partition $\mathbf{W}_1\cup\dots\cup\mathbf{W}_{i-1}\cup\mathbf{W}_{i+1}\cup\dots\cup\mathbf{W}_n$ of $\mathbb{E}^d$ such that $\mathbf{V}_j\subset\mathbf{W}_j$ for all $j\neq i$. A partition into one part $\mathbf{V}_1=\mathbb{E}^d$ is assumed to be inductive. We note that if $\mathbb{E}^d$ is sliced into $n$ pieces by $n-1$ successive hyperplane cuts (i.e., when each cut divides one piece), then the pieces generate an inductive partition of $\mathbb{E}^d$. Also, the Voronoi cells of finitely many points of $\mathbb{E}^d$ generate an inductive partition of $\mathbb{E}^d$. Now, the main theorem of \cite{AkKa12} can be phrased as follows. \begin{theorem}\label{Akopyan-Karasev} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ be an inductive partition of $\mathbb{E}^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$.
Then $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K}, 1)\ge r_{\mathbf{C}}(\mathbf{K} , 1)$. \end{theorem} \section{Extensions to Successive Inradii} First, we state the following stronger version of Theorem~\ref{Bezdek-Bezdek-Conway}. Its proof is an extension of the proof of Theorem~\ref{Bezdek-Bezdek-Conway} published in \cite{BeBe96}. \begin{theorem}\label{Bezdek-Bezdek-Conway-generalized} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d$, $d\ge 2$ and let $m$ be a positive integer. If $\mathbf{K}$ is sliced into $n\ge 1$ pieces by $n-1$ successive hyperplane cuts (i.e., when each cut divides one piece), then the minimum of the greatest $m$th successive $\mathbf{C}$-inradius of the pieces is the $(mn)$th successive $\mathbf{C}$-inradius of $\mathbf{K}$ (i.e., $r_{\mathbf{C}}(\mathbf{K} , mn)$). An optimal partition is achieved by $n-1$ parallel hyperplane cuts equally spaced along the minimal $\mathbf{C}$-width of the $r_{\mathbf{C}}(\mathbf{K} , mn)\mathbf{C}$-rounded body of $\mathbf{K}$. \end{theorem} Second, the method of Akopyan and Karasev (\cite{AkKa12}) can be extended to prove the following stronger version of Theorem \ref{Akopyan-Karasev}. In fact, that approach also extends the relevant additional theorems of Akopyan and Karasev stated in \cite{AkKa12} and used in their proof of Theorem~\ref{Akopyan-Karasev}. However, in this paper, following the recommendation of the referee, we derive the next theorem directly from Theorem~\ref{Akopyan-Karasev}. \begin{theorem}\label{Akopyan-Karasev-Bezdek} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let $m$ be a positive integer. If $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ is an inductive partition of $\mathbb{E}^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K}, m)\ge r_{\mathbf{C}}(\mathbf{K} , m)$.
\end{theorem} \begin{corollary}\label{corollary of Akopyan-Karasev-Bezdek} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$. If $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ is an inductive partition of $\mathbb{E}^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then $\sum_{i=1}^{n}w_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K})\ge w_{\mathbf{C}}(\mathbf{K})$. \end{corollary} For the sake of completeness we mention that in two dimensions one can state a bit more. Namely, recall that Akopyan and Karasev (\cite{AkKa12}) proved the following: Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^2$ and let $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n=\mathbf{K}$ be a partition of $\mathbf{K}$ into convex bodies $\mathbf{V}_i$, $1\le i\le n$. Then $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i, 1)\ge r_{\mathbf{C}}(\mathbf{K} , 1)$. Now, in exactly the same way as Theorem~\ref{Akopyan-Karasev-Bezdek} is derived from Theorem~\ref{Akopyan-Karasev}, it follows that $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i, m)\ge r_{\mathbf{C}}(\mathbf{K} , m)$ holds for any positive integer $m$. Finally, we close this section by stating that Conjectures~\ref{Bang-conjecture}, ~\ref{Alexander--Davenport}, ~\ref{Bezdek--Bezdek-1}, and ~\ref{Bezdek--Bezdek-2} are all equivalent to the following two conjectures: \begin{conjecture}\label{Bezdek--Bezdek-11} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let $m$ be a positive integer. If $\mathbf{K}$ is covered by the planks $\mathbf{P}_1, \mathbf{P}_2,$ $\dots , \mathbf{P}_n$ in $\mathbb{E}^d$, then $\sum_{i=1}^n r_{\mathbf{C}}(\mathbf{P}_i , m)\ge r_{\mathbf{C}}(\mathbf{K} , m)$ or equivalently, $\sum_{i=1}^n w_{\mathbf{C}}(\mathbf{P}_i)\ge m r_{\mathbf{C}}(\mathbf{K} , m)$.
\end{conjecture} \begin{conjecture}\label{Bezdek--Bezdek-22} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let the positive integer $m$ be given. If $\mathbf{K}$ is sliced by $n-1$ hyperplanes, then the minimum of the greatest $m$th successive $\mathbf{C}$-inradius of the pieces is the $(mn)$th successive $\mathbf{C}$-inradius of $\mathbf{K}$, i.e., it is $r_{\mathbf{C}}(\mathbf{K} , mn)$. \end{conjecture} In the rest of the paper we prove the claims of this section. \section{Proof of Theorem \ref{Bezdek-Bezdek-Conway-generalized}} \subsection{On Coverings of Convex Bodies by Two Planks} On the one hand, the following statement is an extension to higher dimensions of Theorem 4 in \cite{Al68}. On the other hand, the proof presented below is based on Theorem 4 of \cite{Al68}. \begin{lemma}\label{Bezdek-Bezdek-Alexander-inequality} If a convex body $\mathbf{K}$ in $\mathbb{E}^d, d\ge 2$ is covered by the planks $\mathbf{P}_1$ and $\mathbf{P}_2$, then $w_{\mathbf{C}}(\mathbf{P}_1)+w_{\mathbf{C}}(\mathbf{P}_2)\ge w_{\mathbf{C}}(\mathbf{K})$ for any convex body $\mathbf{C}$ in $\mathbb{E}^d$. \end{lemma} \proof Let $H_1$ (resp., $H_2$) be one of the two hyperplanes which bound the plank $\mathbf{P}_1$ (resp., $\mathbf{P}_2$). If $H_1$ and $H_2$ are translates of each other, then the claim is obviously true. Thus, without loss of generality we may assume that $L:=H_1\cap H_2$ is a $(d-2)$-dimensional affine subspace of $\mathbb{E}^d$. Let $\mathbb{E}^2$ be the $2$-dimensional linear subspace of $\mathbb{E}^d$ that is orthogonal to $L$. If $(\cdot)'$ denotes the (orthogonal) projection of $\mathbb{E}^d$ parallel to $L$ onto $\mathbb{E}^2$, then obviously, $w_{\mathbf{C'}}(\mathbf{P}_1')=w_{\mathbf{C}}(\mathbf{P}_1)$, $w_{\mathbf{C'}}(\mathbf{P}_2')=w_{\mathbf{C}}(\mathbf{P}_2)$ and $w_{\mathbf{C'}}(\mathbf{K'})\ge w_{\mathbf{C}}(\mathbf{K})$. 
Thus, it is sufficient to prove that $$ w_{\mathbf{C'}}(\mathbf{P}_1')+w_{\mathbf{C'}}(\mathbf{P}_2')\ge w_{\mathbf{C'}}(\mathbf{K'}). $$ In other words, it is sufficient to prove Lemma \ref{Bezdek-Bezdek-Alexander-inequality} for $d=2$. Hence, in the rest of the proof, $\mathbf{K}, \mathbf{C}, \mathbf{P}_1, \mathbf{P}_2, H_1 $, and $H_2$ mean the sets introduced and defined above, however, for $d=2$. Now, we can make the following easy observation $$ w_{\mathbf{C}}(\mathbf{P}_1)+w_{\mathbf{C}}(\mathbf{P}_2)=\frac{w(\mathbf{P}_1)}{w(\mathbf{C}, H_1)}+\frac{w(\mathbf{P}_2)}{w(\mathbf{C}, H_2)} $$ $$ =\frac{w(\mathbf{P}_1)}{w(\mathbf{K}, H_1)}\frac{w(\mathbf{K}, H_1)}{w(\mathbf{C}, H_1)}+\frac{w(\mathbf{P}_2)}{w(\mathbf{K}, H_2)}\frac{w(\mathbf{K}, H_2)}{w(\mathbf{C}, H_2)} $$ $$ \ge \left(\frac{w(\mathbf{P}_1)}{w(\mathbf{K}, H_1)}+ \frac{w(\mathbf{P}_2)}{w(\mathbf{K}, H_2)} \right)w_{\mathbf{C}}(\mathbf{K}) $$ $$ =\left(w_{\mathbf{K}}(\mathbf{P}_1)+ w_{\mathbf{K}}(\mathbf{P}_2)\right)w_{\mathbf{C}}(\mathbf{K}). $$ Then recall that Theorem 4 in \cite{Al68} states that if a convex set in the plane is covered by two planks, then the sum of their relative widths is at least $1$. Thus, using our terminology, we have that $w_{\mathbf{K}}(\mathbf{P}_1)+ w_{\mathbf{K}}(\mathbf{P}_2)\ge 1$, finishing the proof of Lemma \ref{Bezdek-Bezdek-Alexander-inequality}. \endproof \subsection{Minimizing the Greatest $m$th Successive $\mathbf{C}$-Inradius} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d$, $d\ge 2$. We prove Theorem \ref{Bezdek-Bezdek-Conway-generalized} by induction on $n$. It is trivial to check the claim for $n=1$. So, let $n\ge 2$ be given and assume that Theorem \ref{Bezdek-Bezdek-Conway-generalized} holds for at most $n-2$ successive hyperplane cuts and based on that we show that it holds for $n-1$ successive hyperplane cuts as well. The details are as follows. 
Let $H_1, \dots, H_{n-1}$ denote the hyperplanes of the $n-1$ successive hyperplane cuts that slice $\mathbf{K}$ into $n$ pieces such that the greatest $m$th successive $\mathbf{C}$-inradius of the pieces is the smallest possible say, $\rho$. Then take the first cut $H_1$ that slices $\mathbf{K}$ into the pieces $\mathbf{K}_1$ and $\mathbf{K}_2$ such that $\mathbf{K}_1$ (resp., $\mathbf{K}_2$) is sliced into $n_1$ (resp., $n_2$) pieces by the successive hyperplane cuts $H_2, \dots, H_{n-1}$, where $n=n_1+n_2$. The induction hypothesis implies that $\rho\ge r_{\mathbf{C}}(\mathbf{K}_1 , mn_1)=:\rho_1$ and $\rho\ge r_{\mathbf{C}}(\mathbf{K}_2 , mn_2)=:\rho_2$ and therefore \begin{equation}\label{induction-hypothesis-1} w_{\mathbf{C}}({\mathbf{K}_1}^{\rho\mathbf{C}})\le w_{\mathbf{C}}({\mathbf{K}_1}^{\rho_1\mathbf{C}})=mn_1\rho_1\le mn_1\rho ; \end{equation} moreover, \begin{equation}\label{induction-hypothesis-2} w_{\mathbf{C}}({\mathbf{K}_2}^{\rho\mathbf{C}})\le w_{\mathbf{C}}({\mathbf{K}_2}^{\rho_2\mathbf{C}})=mn_2\rho_2\le mn_2\rho . \end{equation} Now, we need to define the following set. \begin{definition} Assume that the origin $\mathbf{o}$ of $\mathbb{E}^d$ belongs to the interior of the convex body $\mathbf{C}\subset\mathbb{E}^d$. Consider all translates of $\rho\mathbf{C}$ which are contained in the convex body $\mathbf{K}\subset\mathbb{E}^d$. The set of points in the translates of $\rho\mathbf{C}$ that correspond to $\mathbf{o}$ form a convex set called the inner $\rho\mathbf{C}$-parallel body of $\mathbf{K}$ denoted by $\mathbf{K}_{-\rho\mathbf{C}}$. \end{definition} Clearly, $$ (\mathbf{K}_1)_{-\rho\mathbf{C}}\cup(\mathbf{K}_2)_{-\rho\mathbf{C}}\subset\mathbf{K}_{-\rho\mathbf{C}} \ {\rm with}\ (\mathbf{K}_1)_{-\rho\mathbf{C}}\cap(\mathbf{K}_2)_{-\rho\mathbf{C}}=\emptyset . 
$$ Also, it is easy to see that there is a plank $\mathbf{P}$ with $w_{\mathbf{C}}(\mathbf{P})=\rho$ such that it is parallel to $H_1$ and contains $H_1$ in its interior; moreover, $$ \mathbf{K}_{-\rho\mathbf{C}}\subset(\mathbf{K}_1)_{-\rho\mathbf{C}}\cup(\mathbf{K}_2)_{-\rho\mathbf{C}}\cup \mathbf{P} . $$ \noindent Now, let $H_1^+$ (resp., $H_1^-$) be the closed halfspace of $\mathbb{E}^d$ bounded by $H_1$ and containing $\mathbf{K}_1$ (resp., $\mathbf{K}_2$) and let $\mathbf{P}^+:=\mathbf{P}\cap H_1^+$ (resp., $\mathbf{P}^-:=\mathbf{P}\cap H_1^-$). Moreover, let $\mathbf{K}_{-\rho\mathbf{C}}^+:=\mathbf{K}_{-\rho\mathbf{C}}\cap H_1^+$ (resp., $\mathbf{K}_{-\rho\mathbf{C}}^-:=\mathbf{K}_{-\rho\mathbf{C}}\cap H_1^-$). Hence, applying Lemma \ref{Bezdek-Bezdek-Alexander-inequality} to $\mathbf{K}_{-\rho\mathbf{C}}$ partitioned into $\mathbf{K}_{-\rho\mathbf{C}}^+\cup\mathbf{K}_{-\rho\mathbf{C}}^-$ and to $\mathbf{K}_{-\rho\mathbf{C}}^+$ covered by the plank $\mathbf{P}^+$ and the plank generated by the minimal $\mathbf{C}$-width of $(\mathbf{K}_1)_{-\rho\mathbf{C}}$ as well as to $\mathbf{K}_{-\rho\mathbf{C}}^-$ covered by the plank $\mathbf{P}^-$ and the plank generated by the minimal $\mathbf{C}$-width of $(\mathbf{K}_2)_{-\rho\mathbf{C}}$ we get that \begin{equation}\label{width-inequality-for-inner-parallel-bodies} w_{\mathbf{C}}\left(\mathbf{K}_{-\rho\mathbf{C}}\right)\le w_{\mathbf{C}}\left(\mathbf{K}_{-\rho\mathbf{C}}^+\right)+w_{\mathbf{C}}\left(\mathbf{K}_{-\rho\mathbf{C}}^-\right) \le w_{\mathbf{C}}\left((\mathbf{K}_1)_{-\rho\mathbf{C}}\right)+\rho+w_{\mathbf{C}}\left((\mathbf{K}_2)_{-\rho\mathbf{C}}\right). 
\end{equation} \noindent By definition $w_{\mathbf{C}}\left((\mathbf{K}_1)_{-\rho\mathbf{C}}\right)$ $=w_{\mathbf{C}}({\mathbf{K}_1}^{\rho\mathbf{C}})-\rho$, $w_{\mathbf{C}}\left((\mathbf{K}_2)_{-\rho\mathbf{C}}\right)$ $=w_{\mathbf{C}}({\mathbf{K}_2}^{\rho\mathbf{C}})-\rho$ and $w_{\mathbf{C}}\left(\mathbf{K}_{-\rho\mathbf{C}}\right)=w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})-\rho$. Hence, (\ref{width-inequality-for-inner-parallel-bodies}) is equivalent to \begin{equation}\label{width-inequality-for-rounded-bodies} w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})\le w_{\mathbf{C}}({\mathbf{K}_1}^{\rho\mathbf{C}})+w_{\mathbf{C}}({\mathbf{K}_2}^{\rho\mathbf{C}}). \end{equation} Finally, (\ref{induction-hypothesis-1}),(\ref{induction-hypothesis-2}), and (\ref{width-inequality-for-rounded-bodies}) yield that \begin{equation}\label{final-inductive-inequality} w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})\le mn_1\rho+mn_2\rho=mn\rho. \end{equation} Thus, (\ref{final-inductive-inequality}) clearly implies that $r_{\mathbf{C}}(\mathbf{K} , mn)\le \rho$. As the case, when the optimal partition is achieved, follows directly from the definition of the $mn$th successive $\mathbf{C}$-inradius of $\mathbf{K}$, the proof of Theorem \ref{Bezdek-Bezdek-Conway-generalized} is complete. \section{Proof of Theorem~\ref{Akopyan-Karasev-Bezdek}} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let $m$ be a positive integer. It follows from the definition of $r_{\mathbf{C}}(\mathbf{K} , m)$ that $r_{\mathbf{C}}(\mathbf{K} , m)$ is a translation invariant, positively $1$-homogeneous, inclusion-monotone functional over the family of convex bodies $\mathbf{K}$ in $\mathbb{E}^d$ for any fixed $\mathbf{C} $ and $m$. 
On the other hand, if $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ is an inductive partition of $\mathbb{E}^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then Theorem~\ref{Akopyan-Karasev} applied to $\mathbf{C}=\mathbf{K}$ yields the existence of translation vectors $\mathbf{t}_1, \mathbf{t}_2, \dots , \mathbf{t}_n$ and positive reals $\mu_1,\mu_2,\dots ,\mu_n$ such that $\mathbf{t}_i+\mu_i \mathbf{K}\subset \mathbf{V}_i\cap\mathbf{K}$ for all $1\le i\le n$ satisfying $\sum_{i=1}^{n}\mu_i\ge 1$. Therefore $$r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K} , m)\ge r_{\mathbf{C}}(\mathbf{t}_i+\mu_i \mathbf{K} , m)=r_{\mathbf{C}}(\mu_i \mathbf{K} , m)=\mu_i r_{\mathbf{C}}(\mathbf{K} , m)$$ holds for all $1\le i\le n$, finishing the proof of Theorem~\ref{Akopyan-Karasev-Bezdek}. \section{Proof of Corollary~\ref{corollary of Akopyan-Karasev-Bezdek}} Let $1\le m_1\le m_2$ be positive integers. Recall that if $\rho_1$ (resp., $\rho_2$) denotes the $m_1$th (resp., $m_2$th) successive $\mathbf{C}$-inradius of $\mathbf{K}$, then by definition $ w_{\mathbf{C}}({\mathbf{K}}^{\rho_1\mathbf{C}})=m_1\rho_1$ (resp., $ w_{\mathbf{C}}({\mathbf{K}}^{\rho_2\mathbf{C}})=m_2\rho_2$). As $ w_{\mathbf{C}}({\mathbf{K}}^{\rho\mathbf{C}})$ is a decreasing continuous function of $\rho>0$, it follows that $$m_1r_{\mathbf{C}}(\mathbf{K} , m_1)=m_1\rho_1\le m_2\rho_2=m_2r_{\mathbf{C}}(\mathbf{K} , m_2)\ .$$ Thus, the sequence $mr_{\mathbf{C}}(\mathbf{K} , m), m=1,2,\dots$ is an increasing one with $$\lim_{m\to+\infty}mr_{\mathbf{C}}(\mathbf{K} , m)=w_{\mathbf{C}}(\mathbf{K})\ .$$ Hence, Corollary~\ref{corollary of Akopyan-Karasev-Bezdek} follows from Theorem \ref{Akopyan-Karasev-Bezdek}. 
\section{The equivalence of Conjectures~\ref{Bang-conjecture}, ~\ref{Alexander--Davenport}, ~\ref{Bezdek--Bezdek-1}, ~\ref{Bezdek--Bezdek-2}, ~\ref{Bezdek--Bezdek-11}, and ~\ref{Bezdek--Bezdek-22}} Recall that according to \cite{BeBe96} Conjectures~\ref{Bang-conjecture}, ~\ref{Alexander--Davenport}, ~\ref{Bezdek--Bezdek-1}, and ~\ref{Bezdek--Bezdek-2} are equivalent to each other. So, it is sufficient to show that Conjecture~\ref{Bezdek--Bezdek-1} implies Conjecture~\ref{Bezdek--Bezdek-11} and Conjecture~\ref{Bezdek--Bezdek-11} implies Conjecture~\ref{Bezdek--Bezdek-22}; moreover, Conjecture~\ref{Bezdek--Bezdek-22} implies Conjecture~\ref{Bezdek--Bezdek-2}. Since, according to the previous section, the sequence $mr_{\mathbf{C}}(\mathbf{K} , m)$, $m=1,2,\dots$ is increasing with $\lim_{m\to+\infty}mr_{\mathbf{C}}(\mathbf{K} , m)=w_{\mathbf{C}}(\mathbf{K})$, Conjecture~\ref{Bezdek--Bezdek-1} implies Conjecture~\ref{Bezdek--Bezdek-11}. Next, it is obvious that Conjecture~\ref{Bezdek--Bezdek-22} implies Conjecture~\ref{Bezdek--Bezdek-2}. So, we are left to show that Conjecture~\ref{Bezdek--Bezdek-11} implies Conjecture~\ref{Bezdek--Bezdek-22}. In order to do so we introduce the following equivalent description for $r_{\mathbf{C}}(\mathbf{K} , m)$. If $\mathbf{C}$ is a convex body in $\mathbb{E}^d$, then $$\mathbf{t}+\mathbf{C}, \mathbf{t}+\lambda_2\mathbf{v}+\mathbf{C}, \dots , \mathbf{t}+\lambda_m\mathbf{v}+\mathbf{C}$$ is called a {\it linear packing} of $m$ translates of $\mathbf{C}$ positioned parallel to the line $\{\lambda\mathbf{v}\ |\ \lambda\in\mathbb{R}\}$ with direction vector $\mathbf{v}\neq\mathbf{o}$ if the $m$ translates of $\mathbf{C}$ are pairwise non-overlapping, i.e., if $$( \mathbf{t}+\lambda_i\mathbf{v}+{\rm int}\mathbf{C}) \cap ( \mathbf{t}+\lambda_j\mathbf{v}+{\rm int}\mathbf{C})=\emptyset$$ holds for all $1\le i\neq j\le m$ (with $\lambda_1=0$).
Furthermore, the line $l\subset \mathbb{E}^d$ passing through the origin $\mathbf{o}$ of $\mathbb{E}^d$ is called a {\it separating direction} for the linear packing $$\mathbf{t}+\mathbf{C}, \mathbf{t}+\lambda_2\mathbf{v}+\mathbf{C}, \dots , \mathbf{t}+\lambda_m\mathbf{v}+\mathbf{C}$$ if $${\rm Pr}_{l}( \mathbf{t}+\mathbf{C}), {\rm Pr}_{l}( \mathbf{t}+\lambda_2\mathbf{v}+\mathbf{C}), \dots , {\rm Pr}_{l}( \mathbf{t}+\lambda_m\mathbf{v}+\mathbf{C})$$ are pairwise non-overlapping intervals on $l$, where ${\rm Pr}_l: \mathbb{E}^d\to l$ denotes the orthogonal projection of $\mathbb{E}^d$ onto $l$. It is easy to see that every linear packing $$\mathbf{t}+\mathbf{C}, \mathbf{t}+\lambda_2\mathbf{v}+\mathbf{C}, \dots , \mathbf{t}+\lambda_m\mathbf{v}+\mathbf{C}$$ possesses at least one separating direction in $\mathbb{E}^d$. Finally, let $\mathbf{K}$ be a convex body in $\mathbb{E}^d$ and let $m\ge 1$ be a positive integer. Then let $\overline{\rho}>0$ be the largest positive real with the following property: for every line $l$ passing through the origin $\mathbf{o}$ in $\mathbb{E}^d$ there exists a linear packing of $m$ translates of $\overline{\rho}\mathbf{C}$ lying in $\mathbf{K}$ and having $l$ as a separating direction. It is straightforward to show that $$\overline{\rho}=r_{\mathbf{C}}(\mathbf{K} , m).$$ Now, let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb{E}^d, d\ge 2$ and let the positive integer $m$ be given. Assume that the origin $\mathbf{o}$ of $\mathbb{E}^d$ lies in the interior of $\mathbf{C}$. Furthermore, assume that $\mathbf{K}$ is sliced by $n-1$ hyperplanes say, $H_1, H_2, \dots , H_{n-1}$ and let $\rho$ be the greatest $m$th successive $\mathbf{C}$-inradius of the pieces of $\mathbf{K}$ obtained in this way. Then let $\mathbf{P}_i:=\bigcup_{\mathbf{p}\in H_i}\left(\mathbf{p}+(-m\rho)\mathbf{C}\right)$, $1\le i\le n-1$. 
Based on the above description of $m$th successive $\mathbf{C}$-inradii, it is easy to see that $\mathbf{K}_{-m\rho\mathbf{C}}\subset \bigcup_{i=1}^{n-1}\mathbf{P}_i$ with $w_{\mathbf{C}}(\mathbf{P}_i )=m\rho$ for all $1\le i\le n-1$. Thus, Conjecture~\ref{Bezdek--Bezdek-11} implies that $(n-1)m\rho=\sum_{i=1}^{n-1} w_{\mathbf{C}}(\mathbf{P}_i )\ge m r_{\mathbf{C}}( \mathbf{K}_{-m\rho\mathbf{C}} , m)=m\left(r_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}}, m)-\rho\right)$ and so, $mn\rho\ge w_{\mathbf{C}}(\mathbf{K}^{\rho\mathbf{C}})$. Hence, $\rho\ge r_{\mathbf{C}}(\mathbf{K} , mn)$, which finishes the proof of Conjecture~\ref{Bezdek--Bezdek-22}. \section{Conclusion} Theorems~\ref{Akopyan-Karasev} and ~\ref{Akopyan-Karasev-Bezdek} have covering analogues. Namely, recall that Akopyan and Karasev (\cite{AkKa12}) introduced the following definition. Under the {\it convex covering} $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ of $\mathbb{E}^d$ we understand the family $\mathbf{V}_1, \mathbf{V}_2, \dots, \mathbf{V}_n$ of closed convex sets in $\mathbb{E}^d$ with $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n=\mathbb{E}^d$. Then we say that the convex covering $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ of $\mathbb{E}^d$ is an {\it inductive covering} of $\mathbb{E}^d$ if for any $1\le i\le n$, there exists an inductive covering $\mathbf{W}_1\cup\dots\cup\mathbf{W}_{i-1}\cup\mathbf{W}_{i+1}\cup\dots\cup\mathbf{W}_n$ of $\mathbb{E}^d$ such that $\mathbf{W}_j\subset\mathbf{V}_j\cup \mathbf{V}_i$ for all $j\neq i$. A covering by one set $\mathbf{V}_1=\mathbb{E}^d$ is assumed to be inductive. 
\cite{AkKa12} proves that if $\mathbf{K}$ and $\mathbf{C}$ are convex bodies in $\mathbb{E}^d, d\ge 2$ and $\mathbf{V}_1\cup\mathbf{V}_2\cup\dots\cup\mathbf{V}_n$ is an inductive covering of $\mathbb{E}^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K}, 1)\ge r_{\mathbf{C}}(\mathbf{K} , 1)$. Now, exactly the same way as Theorem~\ref{Akopyan-Karasev-Bezdek} is derived from Theorem~\ref{Akopyan-Karasev}, it follows that \begin{equation} \label{Akopyan-Karasev-II} \sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K}, m)\ge r_{\mathbf{C}}(\mathbf{K} , m) \end{equation} holds for any positive integer $m$. This raises the following rather natural question (see also Conjecture~\ref{Bezdek--Bezdek-11}). \begin{problem} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb E^d$, $d\geq 2$ and let $m$ be a positive integer. Prove or disprove that if $\mathbf{V}_1 \cup \mathbf{V}_2 \cup \ldots \cup \mathbf{V}_n$ is a convex partition (resp., covering) of $\mathbb E^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then $\sum_{i=1}^{n}r_{\mathbf{C}}(\mathbf{V}_i\cap\mathbf{K}, m)\ge r_{\mathbf{C}}(\mathbf{K} , m)$. \end{problem} Next observe that (\ref{Akopyan-Karasev-II}) implies in a straightforward way that if $\mathbf{K}$ and $\mathbf{C}$ are convex bodies in $\mathbb E^d$ and $\mathbf{V}_1 \cup \mathbf{V}_2 \cup \ldots \cup \mathbf{V}_n$ is an inductive covering of $\mathbb E^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then the greatest $m$th successive $\mathbf{C}$-inradius of the pieces $\mathbf{V}_i\cap\mathbf{K}, i=1, 2, \dots , n$ is at least $\frac{1}{n}r_{\mathbf{C}}(\mathbf{K} , m)$. 
Since the sequence $mr_{\mathbf{C}}(\mathbf{K} , m), m=1,2,\dots$ is increasing, we have $\frac{1}{n}r_{\mathbf{C}}(\mathbf{K} , m)\le r_{\mathbf{C}}(\mathbf{K} , mn)$, which raises the following question (see also Conjecture~\ref{Bezdek--Bezdek-22}). \begin{problem} Let $\mathbf{K}$ and $\mathbf{C}$ be convex bodies in $\mathbb E^d$, $d\geq 2$ and let $m$ be a positive integer. Prove or disprove that if $\mathbf{V}_1 \cup \mathbf{V}_2 \cup \ldots \cup \mathbf{V}_n$ is a convex partition (resp., covering) of $\mathbb E^d$ such that ${\rm int}(\mathbf{V}_i\cap\mathbf{K})\neq\emptyset$ for all $1\le i\le n$, then the greatest $m$th successive $\mathbf{C}$-inradius of the pieces $\mathbf{V}_i\cap\mathbf{K}, i=1, 2, \dots , n$ is at least $r_{\mathbf{C}}(\mathbf{K} , mn)$. \end{problem} \bibliographystyle{amsplain}
\newcommand{\Section}[1]{\section{#1} \setcounter{equation}{0}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \begin{document} \title{On the hidden mechanism behind non-uniqueness for the anisotropic Calder\'on problem with data on disjoint sets} \author{Thierry Daud\'e \footnote{Research supported by the French National Research Projects AARG, No. ANR-12-BS01-012-01, and Iproblems, No. ANR-13-JS01-0006} $^{\,1}$, Niky Kamran \footnote{Research supported by NSERC grant RGPIN 105490-2011} $^{\,2}$ and Francois Nicoleau \footnote{Research supported by the French National Research Project NOSEVOL, No. ANR-2011 BS0101901} $^{\,3}$\\[12pt] $^1$ \small D\'epartement de Math\'ematiques. UMR CNRS 8088, Universit\'e de Cergy-Pontoise, \\ \small 95302 Cergy-Pontoise, France. \\ \small Email: [email protected] \\ $^2$ \small Department of Mathematics and Statistics, McGill University,\\ \small Montreal, QC, H3A 2K6, Canada. \\ \small Email: [email protected] \\ $^3$ \small Laboratoire de Math\'ematiques Jean Leray, UMR CNRS 6629, \\ \small 2 Rue de la Houssini\`ere BP 92208, F-44322 Nantes Cedex 03. \\ \small Email: [email protected] } \maketitle \begin{abstract} We show that there is generically non-uniqueness for the anisotropic Calder\'on problem at fixed frequency when the Dirichlet and Neumann data are measured on disjoint sets of the boundary of a given domain. More precisely, we first show that given a smooth compact connected Riemannian manifold with boundary $(M,g)$ of dimension $n\geq 3$, there exist in the conformal class of $g$ infinitely many Riemannian metrics $\tilde{g}$ such that their corresponding DN maps at a fixed frequency coincide when the Dirichlet data $\Gamma_D$ and Neumann data $\Gamma_N$ are measured on disjoint sets and satisfy $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$. 
The conformal factors that lead to these non-uniqueness results for the anisotropic Calder\'on problem satisfy a nonlinear elliptic PDE of Yamabe type on the original manifold $(M,g)$ and are associated to a natural but subtle gauge invariance of the anisotropic Calder\'on problem with data on disjoint sets. We then construct a large class of counterexamples to uniqueness in dimension $n\geq 3$ to the anisotropic Calder\'on problem at fixed frequency with data on disjoint sets and \emph{modulo this gauge invariance}. This class consists of cylindrical Riemannian manifolds with boundary having two ends (meaning that the boundary has two connected components), equipped with a suitably chosen warped product metric. \vspace{0.5cm} \noindent \textit{Keywords}. Inverse problems, Anisotropic Calder\'on problem, Nonlinear elliptic equations of Yamabe type. \noindent \textit{2010 Mathematics Subject Classification}. Primary 81U40, 35P25; Secondary 58J50. \end{abstract} \tableofcontents \Section{Introduction} \subsection{The anisotropic Calder\'on problem} The anisotropic Calder\'on problem on smooth compact connected Riemannian manifolds with boundary is a model example of an inverse problem which consists in recovering the physical properties of a medium (like its electrical conductivity) by making only electrical measurements at its boundary. In this paper, we consider the case where the Dirichlet and Neumann data are measured on \emph{disjoint} subsets of the boundary, an inverse problem which is important from a practical point of view and which is still largely open \cite{GT2, IUY2, KS1, KS2, KLO, LO1, LO2}. In order to state our results, we first recall the geometric formulation of the Calder\'on problem due to Lee and Uhlmann \cite{LeU}. We refer to the surveys \cite{GT2, KS2, Sa, U1} for the current state of the art on the anisotropic Calder\'on problem and also to \cite{DSFKSU, DSFKLS, GSB, GT1, KS1, LaTU, LaU, LeU} for important contributions to the subject. 
Let $(M, g)$ be an $n$-dimensional smooth compact connected Riemannian manifold with smooth boundary $\partial M$. Let us denote by $\Delta_{LB}$ the positive Laplace-Beltrami operator on $(M,g)$. In a local coordinate system $(x^i)_{i = 1,\dots,n}$, the Laplace-Beltrami operator $\Delta_{LB}$ is given by $$ \Delta_{LB}= -\Delta_g = -\frac{1}{\sqrt{|g|}} \partial_i \left( \sqrt{|g|} g^{ij} \partial_j \right), $$ where $|g| = \det \left(g_{ij}\right)$ is the determinant of the metric tensor $(g_{ij})$, where $\left(g^{ij}\right)$ is the inverse of $(g_{ij})$ and where we use the Einstein summation convention. We recall that the Laplace-Beltrami operator $-\Delta_g$ with Dirichlet boundary conditions is selfadjoint on $L^2(M, dVol_g)$ and has pure point spectrum $\{ \lambda_j\}_{j \geq 1}$ with $0 < \lambda_1 < \lambda_2 \leq \dots \leq \lambda_j \to +\infty$ (see for instance \cite{KKL}). We consider the Dirichlet problem at a frequency $\lambda \in \R$ on $(M,g)$ such that $\lambda \notin \{ \lambda_j\}_{j \geq 1}$. We are thus interested in the solutions $u$ of \begin{equation} \label{Eq00} \left\{ \begin{array}{cc} -\Delta_g u = \lambda u, & \textrm{on} \ M, \\ u = \psi, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} It is well known (see for instance \cite{Sa, Ta1}) that for any $\psi \in H^{1/2}(\partial M)$, there exists a unique weak solution $u \in H^1(M)$ of (\ref{Eq00}). This allows us to define the Dirichlet-to-Neumann (DN) map as the operator $\Lambda_{g}(\lambda)$ from $H^{1/2}(\partial M)$ to $H^{-1/2}(\partial M)$ defined for all $\psi \in H^{1/2}(\partial M)$ by \begin{equation} \label{DN-Abstract} \Lambda_{g}(\lambda) (\psi) = \left( \partial_\nu u \right)_{|\partial M}, \end{equation} where $u$ is the unique solution of (\ref{Eq00}) and $\left( \partial_\nu u \right)_{|\partial M}$ is its normal derivative with respect to the unit outer normal vector $\nu$ on $\partial M$. 
Here $\left( \partial_\nu u \right)_{|\partial M}$ is interpreted in the weak sense as an element of $H^{-1/2}(\partial M)$ by $$ \left\langle \Lambda_{g}(\lambda) \psi | \phi \right \rangle = \int_M \langle du, dv \rangle_g \, dVol_g, $$ for any $\psi \in H^{1/2}(\partial M)$ and $\phi \in H^{1/2}(\partial M)$ such that $u$ is the unique solution of (\ref{Eq00}) and $v$ is any element of $H^1(M)$ such that $v_{|\partial M} = \phi$. If $\psi$ is sufficiently smooth, we can check that $$ \Lambda_{g}(\lambda) \psi = g(\nu, \nabla u)_{|\partial M} = du(\nu)_{|\partial M} = \nu(u)_{|\partial M}, $$ where $\nu$ represents the unit outer normal vector to $\partial M$, so that an expression in local coordinates for the normal derivative is thus given by \begin{equation} \label{DN-Coord} \partial_\nu u = \nu^i \partial_i u. \end{equation} We shall be interested in the \emph{partial} DN maps defined as follows. Let $\Gamma_D$ and $\Gamma_N$ be two open subsets of $\partial M$. We define the partial DN map $\Lambda_{g,\Gamma_D,\Gamma_N}(\lambda)$ as the restriction of the global DN map $\Lambda_g(\lambda)$ to Dirichlet data given on $\Gamma_D$ and Neumann data measured on $\Gamma_N$. Precisely, consider the Dirichlet problem \begin{equation} \label{Eq0} \left\{ \begin{array}{cc} -\Delta_g u = \lambda u, & \textrm{on} \ M, \\ u = \psi, & \textrm{on} \ \Gamma_D, \\ u = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} We define $\Lambda_{g,\Gamma_D,\Gamma_N}(\lambda)$ as the operator acting on the functions $\psi \in H^{1/2}(\partial M)$ with $\textrm{supp}\,\psi \subset \Gamma_D$ by \begin{equation} \label{Partial-DNmap} \Lambda_{g,\Gamma_D,\Gamma_N}(\lambda) (\psi) = \left( \partial_\nu u \right)_{|\Gamma_N}, \end{equation} where $u$ is the unique solution of (\ref{Eq0}). 
In its simplest form, the anisotropic partial Calder\'on problem can be stated as follows: \emph{Does the knowledge of the partial DN map $\Lambda_{g,\Gamma_D, \Gamma_N}(\lambda)$ at a fixed frequency $\lambda$ determine uniquely the metric $g$}? The answer to the above question is negative because of a number of natural gauge invariances that are inherent to the problem. Indeed, it follows from the definition (\ref{Eq0}) - (\ref{Partial-DNmap}) that in any dimension, the partial DN map $\Lambda_{g, \Gamma_D, \Gamma_N}(\lambda)$ is invariant under pullback of the metric by the diffeomorphisms of $M$ that restrict to the identity on $\Gamma_D \cup \Gamma_N$, \textit{i.e.} \begin{equation} \label{Inv-Diff} \forall \phi \in \textrm{Diff}(M) \ \textrm{such that} \ \phi_{|\Gamma_D \cup \Gamma_N} = Id, \quad \Lambda_{\phi^*g, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, \Gamma_D, \Gamma_N}(\lambda). \end{equation} In the two-dimensional case and for zero frequency $\lambda = 0$, there is an additional gauge invariance of the DN map due to the fact that the Laplace-Beltrami operator transforms by a scaling under conformal changes of the metric. More precisely, recall that if $\dim M=2$, then $$ \Delta_{cg} = \frac{1}{c} \Delta_g, $$ for any smooth function $c >0$. Therefore, we have in dimension $2$ \begin{equation} \label{Inv-Conf} \forall c \in C^\infty(M) \ \textrm{such that} \ c >0 \ \textrm{and} \ c_{|\Gamma_N} = 1, \quad \Lambda_{c g, \Gamma_D, \Gamma_N}(0) = \Lambda_{g, \Gamma_D, \Gamma_N}(0), \end{equation} since the unit outer normal vectors $\nu_{cg}$ and $\nu_g$ coincide on $\Gamma_N$ in that case. It therefore follows that the appropriate question to address (called the \emph{anisotropic Calder\'on conjecture}) is the following. \\ \noindent \textbf{(Q1)}: \emph{Let $M$ be a smooth compact connected manifold with smooth boundary $\partial M$ and let $g,\, \tilde{g}$ be smooth Riemannian metrics on $M$. 
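The two-dimensional scaling identity $\Delta_{cg} = \frac{1}{c}\Delta_g$ comes from the fact that in dimension $2$ one has $\sqrt{|cg|} = c\sqrt{|g|}$ and $(cg)^{ij} = c^{-1}g^{ij}$, so the two factors of $c$ cancel inside the divergence. The following symbolic sketch checks this cancellation for a flat background metric; the conformal factor $c$ and the test function $u$ are arbitrary illustrative choices, not taken from the paper.

```python
import sympy as sp

x, y = sp.symbols('x y')
# arbitrarily chosen positive conformal factor and test function (illustration only)
c = sp.exp(x) + y**2 + 1
u = sp.sin(x) * sp.exp(y)

coords = (x, y)
# In dimension 2: sqrt|cg| = c and (cg)^{ij} = c^{-1} delta^{ij}, so
# Delta_{cg} u = (1/c) d_i ( c * (1/c) * d_i u ) = (1/c) Delta_g u
lap_cg = sum(sp.diff(c * (1 / c) * sp.diff(u, v), v) for v in coords) / c
lap_g = sum(sp.diff(u, v, 2) for v in coords)

assert sp.simplify(lap_cg - lap_g / c) == 0
```

The same computation in dimension $n \neq 2$ leaves a factor $c^{n/2-1}$ inside the divergence, which is exactly why this extra gauge invariance is special to surfaces.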
Let $\Gamma_D, \Gamma_N$ be any open subsets of $\partial M$ and assume that $\lambda \in \R$ does not belong to $\sigma(-\Delta_g) \cup \sigma(-\Delta_{\tilde{g}})$. If $$ \Lambda_{g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda), $$ is it true that $$ g = \tilde{g}, $$ up to the gauge invariance (\ref{Inv-Diff}) if $\dim M \geq 3$ and up to the gauge invariances (\ref{Inv-Diff}) - (\ref{Inv-Conf}) if $\dim M = 2$ and $\lambda = 0$}? \\ There are three subcases of the above problem which are of particular interest: \begin{itemize} \item \textbf{Full data}: $\Gamma_D = \Gamma_N = \partial M$. In that case, we denote the DN map simply by $\Lambda_g(\lambda)$. \item \textbf{Local data}: $\Gamma_D = \Gamma_N = \Gamma$, where $\Gamma$ can be any nonempty open subset of $\partial M$. In that case, we denote the DN map by $\Lambda_{g, \Gamma}(\lambda)$. \item \textbf{Data on disjoint sets}: $\Gamma_D$ and $\Gamma_N$ are disjoint open sets of $\partial M$. \end{itemize} If $\dim M \geq 3$, one may also consider a simpler inverse problem by assuming that the Riemannian manifolds $(M,g)$ and $(M,\tilde{g})$ belong to the same conformal class, that is $\tilde{g} = c g$ for some smooth strictly positive function $c$. In that case, $g$ is considered as a given known background metric and the problem consists in determining the unknown scalar function $c$ from the DN map $\Lambda_{c g,\Gamma_D, \Gamma_N}(\lambda)$. In this setting, the anisotropic Calder\'on problem becomes: \\ \noindent \textbf{(Q2)}: \emph{Let $(M,g)$ be a smooth compact connected Riemannian manifold of dimension $n\geq 3$ with smooth boundary $\partial M$ and let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$. Let $c$ be a smooth strictly positive function on $M$ and assume that $\lambda \in \R$ does not belong to $\sigma(-\Delta_g) \cup \sigma(-\Delta_{c g})$. 
If $$ \Lambda_{c g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g,\Gamma_D, \Gamma_N}(\lambda), $$ does there exist a diffeomorphism $\phi: \, M \longrightarrow M$ with $\phi_{| \, \Gamma_D \cup \Gamma_N} = Id$ such that} \begin{equation} \label{Inv-Conformal} \phi^* g = c g? \end{equation} Since any diffeomorphism $\phi: \, M \longrightarrow M$ which satisfies $\phi^* g = c g$ and $\phi_{|\Gamma} = Id$ for a non-empty open subset $\Gamma$ of $\partial M$ must be the identity \cite{Li}\footnote{Although Proposition 3.3 in \cite{Li} has been stated in the case $\Gamma = \partial M$, the result remains true when $\Gamma$ is replaced by any non-empty open subset of $\partial M$ }, we see that there is no ambiguity arising from diffeomorphisms in the solution of the anisotropic Calder\'on problem \textbf{(Q2)}. The condition (\ref{Inv-Conformal}) may therefore be replaced by the condition \begin{equation} \label{Inv-Conformal-1} c = 1, \quad \textrm{on} \ M. \end{equation} A third version of the anisotropic Calder\'on problem which is somewhat related to \textbf{(Q2)}, but involves now an external potential, is given by the following. Consider the solution of the Schr\"odinger equation on $(M,g)$ with potential $V \in L^\infty(M)$ \begin{equation} \label{Eq0-Schrodinger} \left\{ \begin{array}{cc} (-\Delta_g + V) u = \lambda u, & \textrm{on} \ M, \\ u = \psi, & \textrm{on} \ \Gamma_D, \\ u = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} It is well known (see for example \cite{DSFKSU, Sa}) that if $\lambda$ does not belong to the Dirichlet spectrum of $-\Delta_g +V$, then for any $\psi \in H^{1/2}(\partial M)$, there exists a unique weak solution $u \in H^1(M)$ of (\ref{Eq0-Schrodinger}). 
This allows us to define the partial Dirichlet-to-Neumann map $\Lambda_{g, V, \,\Gamma_D, \Gamma_N}(\lambda)$ for all $\psi \in H^{1/2}(\partial M)$ with supp $\psi \subset \Gamma_D$ by \begin{equation} \label{DN-Abstract-Schrodinger} \Lambda_{g, V,\Gamma_D, \Gamma_N}(\lambda) (\psi) = \left( \partial_\nu u \right)_{|\Gamma_N}, \end{equation} where $u$ is the unique solution of (\ref{Eq0-Schrodinger}) and $\left( \partial_\nu u \right)_{|\Gamma_N}$ is its normal derivative with respect to the unit outer normal vector $\nu$ on $\Gamma_N$. We assume again here that $g$ is a given background metric and the problem consists in determining the unknown potential $V \in L^\infty(M)$ from the DN map $\Lambda_{g, V, \,\Gamma_D, \Gamma_N}(\lambda)$. Precisely, the question is: \\ \noindent \textbf{(Q3)}: \emph{Let $(M,g)$ be a smooth compact connected Riemannian manifold with smooth boundary $\partial M$ and let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$. Let $V_1$ and $V_2$ be potentials in $L^\infty(M)$ and assume that $\lambda \in \R$ does not belong to the Dirichlet spectra of $-\triangle_g + V_1$ and $-\triangle_g + V_2$. If $$ \Lambda_{g, V_1, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, V_2, \Gamma_D, \Gamma_N}(\lambda), $$ is it true that} $$ V_1 = V_2? $$ If $\dim M \geq 3$, there is a straightforward link between \textbf{(Q2)} and \textbf{(Q3)} that is based on the transformation law for the Laplace-Beltrami operator under conformal changes of metric, \begin{equation} \label{ConformalScaling} -\Delta_{c^4 g} u = c^{-(n+2)} \left( -\Delta_g + q_{g,c} \right) \left( c^{n-2} u \right), \end{equation} where \begin{equation} \label{q} q_{g,c} = c^{-n+2} \Delta_{g} c^{n-2}. \end{equation} We have: \begin{prop} \label{Link-c-to-V} Let $\lambda \in \R$ be fixed. Assume that $c$ is a smooth strictly positive function on $M$ such that $c = 1$ on $\Gamma_D \cup \Gamma_N$. \\ 1. 
If $\Gamma_D \cap \Gamma_N = \emptyset$, then \begin{equation} \label{Link} \Lambda_{c^4 g, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, V_{g,c,\lambda}, \Gamma_D,\Gamma_N}(\lambda), \end{equation} where \begin{equation} \label{Vgc} V_{g,c,\lambda} = q_{g,c} + \lambda(1-c^4), \quad q_{g,c} = c^{-n+2} \Delta_{g} c^{n-2}. \end{equation} 2. If $\Gamma_D \cap \Gamma_N \ne \emptyset$ and $\partial_{\nu} c = 0$ on $\Gamma_N$, then (\ref{Link}) also holds. \end{prop} \begin{proof} Given a function $c$ satisfying the assumptions of the Proposition, consider the Dirichlet problem at fixed frequency $\lambda$ associated to the metric $c^4 g$, \textit{i.e.} \begin{equation} \label{z1} \left\{ \begin{array}{cc} -\Delta_{c^4 g} u = \lambda u, & \textrm{on} \ M, \\ u = \psi, & \textrm{on} \ \Gamma_D, \\ u = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} Using (\ref{ConformalScaling}) and setting $v = c^{n-2} u$, the Dirichlet problem (\ref{z1}) is equivalent to \begin{equation} \label{z3} \left\{ \begin{array}{cc} (-\Delta_{g} + q_{g,c} + \lambda (1-c^4)) v = \lambda v, & \textrm{on} \ M, \\ v = c^{n-2} \psi, & \textrm{on} \ \Gamma_D, \\ v = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} Since $c = 1$ on $\Gamma_D$, we see that the function $v$ satisfies \begin{equation} \label{z4} \left\{ \begin{array}{cc} (-\Delta_{g} + V_{g,c,\lambda}) v = \lambda v, & \textrm{on} \ M, \\ v = \psi, & \textrm{on} \ \Gamma_D, \\ v = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} where $V_{g,c,\lambda}$ is given by (\ref{Vgc}). In other words, $v$ is the unique solution of the Dirichlet problem (\ref{z4}) at frequency $\lambda$ associated to the Schr\"odinger operator $-\triangle_g + V_{g,c,\lambda}$. Let us show now that $\Lambda_{c^4 g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, V_{g,c,\lambda}, \Gamma_D, \Gamma_N}(\lambda)$ in the different cases stated in the Proposition. 
On one hand, since the conformal factor $c$ satisfies $c = 1$ on $\Gamma_N$, the unit outgoing normal vector $\tilde{\nu}$ associated to $\tilde{g} = c^4 g$ is equal to the unit outgoing normal vector $\nu$ associated to $g$ on $\Gamma_N$. Thus by definition of the partial DN map, we have \begin{equation} \label{z5} \Lambda_{c^4 g,\Gamma_D,\Gamma_N}(\lambda) \psi = (\partial_\nu u)_{|\Gamma_N}, \end{equation} where $u$ is the unique solution of (\ref{z1}). On the other hand, since $v = c^{n-2} u$ is the unique solution of (\ref{z4}), we have $$ \Lambda_{g, V_{g,c,\lambda},\Gamma_D,\Gamma_N}(\lambda) \psi = (\partial_\nu v)_{|\Gamma_N} = \big((\partial_\nu c^{n-2}) u + c^{n-2} \partial_\nu u \big)_{|\Gamma_N}. $$ Since $c = 1$ and $u = \psi$ on $\Gamma_N$, we thus obtain \begin{equation} \label{z6} \Lambda_{g,V_{g,c,\lambda},\Gamma_D,\Gamma_N}(\lambda) \psi = \big((\partial_\nu c^{n-2}) \psi + \partial_\nu u \big)_{|\Gamma_N}. \end{equation} If $\Gamma_D \cap \Gamma_N = \emptyset$, which is Case 1 in our Proposition, we have $\psi = 0$ on $\Gamma_N$. Hence we obtain \begin{equation} \label{z7} \Lambda_{g,V_{g,c,\lambda},\Gamma_D,\Gamma_N}(\lambda) \psi = \big(\partial_\nu u \big)_{|\Gamma_N} = \Lambda_{c^4 g,\Gamma_D,\Gamma_N}(\lambda) \psi. \end{equation} If $\Gamma_D \cap \Gamma_N \ne \emptyset$ and $\partial_\nu c = 0$ on $\Gamma_N$, which is Case 2, we also get \begin{equation} \label{z8} \Lambda_{g,V_{g,c,\lambda},\Gamma_D,\Gamma_N}(\lambda) \psi = \big(\partial_\nu u \big)_{|\Gamma_N} = \Lambda_{c^4 g,\Gamma_D,\Gamma_N}(\lambda) \psi. \end{equation} \end{proof} Proposition \ref{Link-c-to-V} gives a clear link between the anisotropic Calder\'on problems \textbf{(Q2)} and \textbf{(Q3)}. As an application and by way of a conclusion for this sub-section, let us show for instance how \textbf{(Q3)} implies \textbf{(Q2)} in the case of local data, \textit{i.e.} $\Gamma_D = \Gamma_N = \Gamma$ any open subset in $\partial M$. 
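The conformal transformation law (\ref{ConformalScaling}), on which the proof above rests, can be verified symbolically in the simplest setting of a flat background metric in dimension $n=3$, where $\Delta_{c^4 g} u = c^{-2n}\,\partial_i\!\left(c^{2n-4}\partial_i u\right)$. The conformal factor $c$ and the test function $u$ below are arbitrary choices made only for the check.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
n = 3  # dimension; flat background metric g assumed for the check

# arbitrarily chosen positive conformal factor and test function
c = x**2 + y**2 + z**2 + 2
u = x * y * z + sp.sin(x)

coords = (x, y, z)
lap = lambda f: sum(sp.diff(f, v, 2) for v in coords)  # flat Delta_g

# -Delta_{c^4 g} u, written out for the metric c^4 * delta:
lhs = -sum(sp.diff(c**(2 * n - 4) * sp.diff(u, v), v) for v in coords) / c**(2 * n)

# c^{-(n+2)} ( -Delta_g + q_{g,c} ) ( c^{n-2} u )  with  q_{g,c} = c^{-(n-2)} Delta_g c^{n-2}
q = lap(c**(n - 2)) / c**(n - 2)
rhs = c**(-(n + 2)) * (-lap(c**(n - 2) * u) + q * c**(n - 2) * u)

assert sp.simplify(lhs - rhs) == 0
```

The cancellation of the first-order term $\nabla c \cdot \nabla u$ between the two sides is what makes the substitution $v = c^{n-2}u$ turn the conformally rescaled equation into a Schr\"odinger equation with potential $q_{g,c}$.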
\begin{prop} \label{Q3-to-Q2} If $\Gamma_D = \Gamma_N = \Gamma$ is any open set in $\partial M$ and $\lambda \in \R$, then \textbf{(Q3)} implies \textbf{(Q2)}. \end{prop} \begin{proof} Assume that \textbf{(Q3)} holds and assume that for two metrics $g$ and $c^4g$, we have \begin{equation} \label{u1} \Lambda_{c^4 g, \Gamma}(\lambda) = \Lambda_{g, \Gamma}(\lambda), \end{equation} where $\Lambda_{c^4 g, \Gamma}(\lambda)$ stands for $\Lambda_{c^4 g, \Gamma, \Gamma}(\lambda)$. Then by local boundary determination (see \cite{DSFKSU, KY, LeU}), we can conclude that $c_{|\Gamma} = 1$ and $\left( \partial_{\nu} c \right)_{|\Gamma} = 0$. Hence, we can use (\ref{Link}) to show that (\ref{u1}) is equivalent to (with the previously defined notations) \begin{equation} \label{u2} \Lambda_{g, V_{g,c,\lambda}, \Gamma}(\lambda) = \Lambda_{g, 0, \Gamma}(\lambda), \end{equation} with $V_{g,c,\lambda}$ given by (\ref{Vgc}). Finally, our hypothesis that \textbf{(Q3)} holds true now implies that $V_{g,c,\lambda} = 0$, or in other words that $$ \Delta_g c^{n-2} + \lambda (1-c^4) c^{n-2} = 0. $$ Since $c^{n-2}_{|\Gamma} = 1$, $\left( \partial_{\nu} c^{n-2} \right)_{|\Gamma} = 0$ and $c$ is bounded, the unique continuation principle for second order elliptic PDEs on a smooth manifold with smooth boundary (see \cite{Ho}, Section 28 or \cite{Ta}, Theorem 4) shows that $c = 1$ on $M$ and \textbf{(Q2)} is proved. \end{proof} \subsection{A brief survey of known results on the anisotropic Calder\'on problem} The most comprehensive results known on the anisotropic Calder\'on problems \textbf{(Q1)}, \textbf{(Q2)} and \textbf{(Q3)} pertain to the case of \emph{zero frequency}, that is $\lambda = 0$, under the hypotheses of full data ($\Gamma_D = \Gamma_N = \partial M$) or local data ($\Gamma_D = \Gamma_N = \Gamma$ with $\Gamma$ any open subset of $M$). 
In dimension $2$, the anisotropic Calder\'on problem \textbf{(Q1)} for global and local data with $\lambda = 0$ has been given a positive answer for compact connected Riemannian surfaces in \cite{LaU, LeU}. We also refer to \cite{ALP} for similar results answering \textbf{(Q1)} for global and local data in the case of anisotropic conductivities which are only $L^\infty$ on bounded domains of $\R^n$. A positive answer to \textbf{(Q1)} for global and local data and zero frequency $\lambda = 0$ in dimension $3$ or higher has been given for compact connected real analytic Riemannian manifolds with real analytic boundary, satisfying certain topological assumptions, in \cite{LeU}. These assumptions were later weakened in \cite{LaU, LaTU}. Similarly, \textbf{(Q1)} has been answered positively for compact connected Einstein manifolds with boundary in \cite{GSB}. The general anisotropic Calder\'on problem \textbf{(Q1)} in dimension $n\geq 3$ with full or local data is still a major open problem. Some important results on the special cases covered by questions \textbf{(Q2)} and \textbf{(Q3)} have been obtained recently in \cite{DSFKSU, DSFKLS, KS1} for classes of smooth compact connected Riemannian manifolds with boundary that are called \emph{admissible}. Such manifolds $(M,g)$ are \emph{conformally transversally anisotropic}, meaning that $$ M \subset \subset \R \times M_0, \quad g = c ( e \oplus g_0), $$ where $(M_0,g_0)$ is an $(n-1)$-dimensional smooth compact connected Riemannian manifold with boundary, $e$ is the Euclidean metric on the real line and $c$ is a smooth strictly positive function in the cylinder $\R \times M_0$. 
Furthermore, the transverse manifold $(M_0, g_0)$ is assumed to be \emph{simple}\footnote{A compact manifold $(M_0,g_0)$ is said to be simple if any two points in $M_0$ can be connected by a unique geodesic depending smoothly on the endpoints, and if $\partial M_0$ is strictly convex as a submanifold of $(M,g) = c ( e \oplus g_0)$, meaning that its second fundamental form is positive definite.}. It has been shown in \cite{DSFKSU, DSFKLS} that for admissible manifolds, the conformal factor $c$ is uniquely determined from the knowledge of the DN map at zero frequency $\lambda = 0$, so that both \textbf{(Q2)} and \textbf{(Q3)} have positive answers in this context. These results have been further extended to the case of partial data in \cite{KS1} (see below). We also refer to \cite{GT1, Is, IUY1} for additional results in the case of local data and to the surveys \cite{GT2, KS2} for further references. There are also positive results for problem \textbf{(Q3)} in the case of bounded domains $\Omega$ of $\R^n, \ n \geq 3$ equipped with the Euclidean metric, for data measured on distinct subsets $\Gamma_D, \Gamma_N$ of $\partial \Omega$ which are not assumed to be disjoint \cite{KSU}. The requirement here is that the sets $\Gamma_D, \Gamma_N$ where the measurements are made must overlap, in the sense that $\Gamma_D \subset \partial \Omega$ can possibly have very small measure, in which case $\Gamma_N$ must have slightly larger measure than $\partial \Omega \setminus \Gamma_D$. These results have been generalized in \cite{KS1} to the case of admissible Riemannian manifolds, where use is made of the fact that admissible manifolds admit \emph{limiting Carleman weights}\footnote{We refer to \cite{DSFKSU} for the definition and properties of limiting Carleman weights on manifolds and their applications.} $\varphi$. 
Thanks to the existence of $\varphi$, we can decompose the boundary of $M$ as $$ \partial M = \partial M_+ \cup \partial M_{\textrm{tan}} \cup \partial M_-, $$ where $$ \partial M_\pm = \{ x \in \partial M: \ \pm \partial_\nu \varphi(x) > 0 \}, \quad \partial M_{\textrm{tan}} = \{ x \in \partial M: \ \partial_\nu \varphi(x) = 0 \}. $$ In essence, the authors of \cite{KS1} show that the answer to \textbf{(Q3)} is positive\footnote{In fact, additional geometric assumptions on the transverse manifold $(M_0,g_0)$ are needed to give a full proof of this result. We refer to \cite{KS1} Theorem 2.1 for the precise statement.} if the set of Dirichlet data $\Gamma_D$ contains $\partial M_- \cup \Gamma_a$ and the set of Neumann measurements $\Gamma_N$ contains $\partial M_+ \cup \Gamma_a$ where $\Gamma_a$ is some open subset of $\partial M_{\textrm{tan}}$. Hence in particular, the sets $\Gamma_D$ and $\Gamma_N$ must overlap in order to have uniqueness. The only exception occurs in the case where $\partial M_{\textrm{tan}}$ has zero measure, in which case it is enough to take $\Gamma_D = \partial M_-$ and $\Gamma_N = \partial M_+$ to have uniqueness in \textbf{(Q3)} (see Theorem 2.3 of \cite{KS1}). Note in this case that $\Gamma_D \cap \Gamma_N = \partial M_- \cap \partial M_+ = \emptyset$. Only a few results are known in the case of data measured on \emph{disjoint sets}, and these apply to the case of zero frequency $\lambda = 0$. Besides the paper \cite{KS1} which concerns a certain subclass of admissible Riemannian manifolds, the only other result we are aware of is due to Imanuvilov, Uhlmann and Yamamoto \cite{IUY2}, which applies to the $2$-dimensional case, and concerns the potential of a Schr\"odinger equation on a two-dimensional domain homeomorphic to a disc. 
It is shown that when the boundary is partitioned into eight clockwise-ordered arcs $\Gamma_1, \Gamma_2, \dots, \Gamma_8$, then the potential is determined by boundary measurements with sources supported on $S = \Gamma_2 \cup \Gamma_6$ and fields observed on $R = \Gamma_4 \cup \Gamma_8$, hence answering \textbf{(Q3)} positively in this special setting. Finally, we mention some related papers by Rakesh \cite{Rak}, by Oksanen, Lassas \cite{LO1, LO2} and by Kurylev, Oksanen, Lassas \cite{KLO}, which are concerned with the \emph{hyperbolic} anisotropic Calder\'on problem, which amounts to the case in which the partial DN map is assumed to be known at all frequencies $\lambda$. We refer to \cite{KKL} for a detailed discussion of the hyperbolic anisotropic Calder\'on problem and to \cite{KKLM} for the link between the hyperbolic DN map and the elliptic DN map at all frequencies. We also mention the work of Rakesh \cite{Rak}, who proved that the coefficients of a wave equation on a one-dimensional interval are determined by boundary measurements with sources supported on one end of the interval and the waves observed on the other end. Here again, the uniqueness result requires knowledge of the hyperbolic DN map, or equivalently of the DN map at all frequencies. \subsection{Main results} In our previous paper \cite{DKN2}, we showed that the answers to \textbf{(Q2)} (and thus \textbf{(Q1)}) as well as \textbf{(Q3)} were negative when the Dirichlet and Neumann data are measured on disjoint sets of the boundary. Within the class of \emph{rotationally invariant toric cylinders} of dimensions $2$ and $3$, we constructed an infinite number of pairs of non isometric metrics and potentials having the same partial DN maps when $\Gamma_D \cap \Gamma_N = \emptyset$ and for any fixed frequency $\lambda$ not belonging to the Dirichlet spectra of the corresponding Laplace-Beltrami or Schr\"odinger operators. 
With respect to the inverse problems \textbf{(Q1)} and \textbf{(Q2)}, an interesting fact was that any pair of such metrics turned out to belong to the same conformal class, where the corresponding conformal factor had to satisfy a certain nonlinear ODE. In Section \ref{1}, we explain the hidden mechanism behind the results of \cite{DKN2} and, as a consequence, construct counterexamples to uniqueness for the anisotropic Calder\'on problem for any smooth compact connected Riemannian manifold with boundary, of dimension at least $3$, with Dirichlet data and Neumann data measured on disjoint subsets $\Gamma_D$ and $\Gamma_N$ such that $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$. More precisely, we highlight a subtle gauge invariance admitted by the anisotropic Calder\'on problem with disjoint sets satisfying the above assumption. This gauge invariance is given by certain conformal rescalings of a fixed metric $g$ by a conformal factor that satisfies a nonlinear elliptic PDE of Yamabe type with appropriate boundary conditions (see Theorem \ref{Main-1}). We are able to find smooth positive solutions of this nonlinear equation of Yamabe type using the standard technique of lower and upper solutions. We emphasize that this technique works thanks to the crucial assumption $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$, which gives us the freedom to choose the boundary conditions appearing in the nonlinear equation. The main results of Section \ref{1} are Theorem \ref{Main-1} and Definition \ref{Gauge0}. In Section \ref{2}, we pursue our analysis by considering the anisotropic Calder\'on problem \textbf{(Q3)} with disjoint sets. We first show that the gauge invariance for the anisotropic Calder\'on problem \textbf{(Q2)} turns out not to be a gauge invariance for the problem \textbf{(Q3)} through the link established in Proposition \ref{Link-c-to-V}.
In fact, given a fixed potential $V = V_{g,c,\lambda}$ as in (\ref{Vgc}), there exist infinitely many conformal factors $\tilde{c}$ such that $V_{g,\tilde{c},\lambda} = V$. We show that this family of conformal factors $\tilde{c}$ precisely corresponds to the whole gauge associated to the metric $c^4 g$ in the sense of Definition \ref{Gauge0}. Second, recall that despite the lack of gauge invariance for the problem \textbf{(Q3)}, nontrivial counterexamples to uniqueness for the problem \textbf{(Q3)} were found in \cite{DKN2} within the class of rotationally invariant toric cylinders. In the main part of Section \ref{2}, we improve our previous construction and find a large class of new counterexamples to uniqueness for the problem \textbf{(Q3)}. This class consists of cylindrical Riemannian manifolds having two ends, \textit{i.e.} whose boundary consists of two connected components, equipped with a warped product metric. We show non-uniqueness for \textbf{(Q3)} when the Dirichlet and Neumann data belong to distinct connected components of the boundary, a requirement which turns out to be crucial. This is done in Theorem \ref{NonUniquenessQ3}. In Section \ref{3}, we come back to the anisotropic Calder\'on problem \textbf{(Q2)} and use the counterexamples to uniqueness for the problem \textbf{(Q3)} found in Section \ref{2} to construct counterexamples to uniqueness for the problem \textbf{(Q2)} which do not arise from the gauge invariance defined in Section \ref{1}. To do this, we make crucial use of the link between \textbf{(Q2)} and \textbf{(Q3)} stated in Proposition \ref{Link-c-to-V}. The main point here is to construct, from a fixed frequency $\lambda$ and a fixed potential $V$ satisfying certain conditions, a conformal factor $c$ such that $V = V_{g,c,\lambda}$ as in (\ref{Vgc}). This amounts to solving a nonlinear elliptic equation of Yamabe type of the same kind as the one considered in Section \ref{1}.
This is done once again using the lower and upper solutions technique. We stress the fact that the counterexamples to uniqueness for the problem \textbf{(Q2)} obtained in this way are still cylindrical Riemannian manifolds having two ends and that the Dirichlet and Neumann data are measured on distinct connected components of the boundary. The main result in this Section is Theorem \ref{NonUniquenessQ4}. Finally, in Section \ref{4}, we summarize our results and conjecture some additional results concerning the anisotropic Calder\'on problem with disjoint sets, depending on whether or not the boundary is connected. \Section{The gauge invariance for the anisotropic Calder\'on problem in dimension $n\geq 3$} \label{1} Throughout this Section, we assume that $\dim M \geq 3$. The result of the following proposition relies on the simple observation that there is a subtle gauge invariance behind the anisotropic Calder\'on problem when the Dirichlet and Neumann data are measured on disjoint sets. This gauge invariance is given by certain conformal rescalings of a fixed metric $g$ by a strictly positive smooth function that satisfies a nonlinear elliptic PDE of Yamabe type (see (\ref{Main-EDP})). \begin{prop} \label{Main} Let $(M,g)$ be a smooth compact connected Riemannian manifold of dimension $n\geq 3$ with smooth boundary $\partial M$ and let $\lambda \in \R$ not belong to the Dirichlet spectrum $\sigma(-\Delta_g)$. Let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$. If there exists a smooth strictly positive function $c$ satisfying \begin{equation} \label{Main-EDP} \left\{ \begin{array}{cc} \Delta_{g} c^{n-2} + \lambda ( c^{n-2} - c^{n+2}) = 0, & \textrm{on} \ M, \\ c = 1, & \textrm{on} \ \Gamma_D \cup \Gamma_N, \end{array} \right. \end{equation} then the conformally rescaled Riemannian metric $\tilde{g} = c^4 g$ satisfies $$ \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g,\Gamma_D, \Gamma_N}(\lambda).
$$ \end{prop} \begin{proof} Consider the Dirichlet problem at fixed frequency $\lambda$ associated to $\tilde{g} = c^4 g$, \textit{i.e.} \begin{equation} \label{a1} \left\{ \begin{array}{cc} -\Delta_{\tilde{g}} u = \lambda u, & \textrm{on} \ M, \\ u = \psi, & \textrm{on} \ \Gamma_D, \\ u = 0, & \textrm{on} \ \partial M \setminus \Gamma_D. \end{array} \right. \end{equation} As in the proof of Proposition \ref{Link-c-to-V} and thanks to our assumptions on $\Gamma_D$ and $\Gamma_N$, it is straightforward to check that the function $v = c^{n-2} u$ satisfies \begin{equation} \label{a3} \left\{ \begin{array}{cc} (-\Delta_{g} + V_{g,c,\lambda}) v = \lambda v, & \textrm{on} \ M, \\ v = c^{n-2} \psi, & \textrm{on} \ \Gamma_D, \\ v = 0, & \textrm{on} \ \partial M \setminus \Gamma_D, \end{array} \right. \end{equation} where $V_{g,c,\lambda}$ is given by (\ref{Vgc}). Assume now that there exists a smooth positive function $c: M \longrightarrow \R^{+*}$ satisfying \begin{equation} \label{Cond-c} \left\{ \begin{array}{rcl} V_{g,c,\lambda} & = & 0, \ \textrm{on} \ M, \\ c & = & 1 \ \textrm{on} \ \Gamma_D \cup \Gamma_N. \end{array} \right. \end{equation} Using (\ref{Vgc}), these conditions can be written as the nonlinear Dirichlet problem for $w = c^{n-2}$ \begin{equation} \label{EDPc} \left\{ \begin{array}{cc} \Delta_{g} w + \lambda (w - w^{\frac{n+2}{n-2}}) = 0, & \textrm{on} \ M, \\ w = \eta, & \textrm{on} \ \partial M, \end{array} \right. \end{equation} where $\eta = 1$ on $\Gamma_D \cup \Gamma_N$. Note that (\ref{EDPc}) is nothing but the PDE (\ref{Main-EDP}) in the statement of the Proposition. Assuming the existence of a positive solution $w$ of (\ref{EDPc}) and thus of the corresponding conformal factor $c = w^{\frac{1}{n-2}}$ of (\ref{Cond-c}), the function $v = c^{n-2} u$ satisfies \begin{equation} \label{a4} \left\{ \begin{array}{cc} -\Delta_{g} v = \lambda v, & \textrm{on} \ M, \\ v = \psi, & \textrm{on} \ \Gamma_D, \\ v = 0, & \textrm{on} \ \partial M \setminus \Gamma_D.
\end{array} \right. \end{equation} Therefore, the function $v$ is the unique solution of the Dirichlet problem (\ref{a4}), which is precisely the analogue of (\ref{a1}) at fixed frequency $\lambda$ for the metric $g$. We conclude that $$ \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g,\Gamma_D, \Gamma_N}(\lambda), $$ as in the proof of Proposition \ref{Link-c-to-V}. \end{proof} \begin{rem} Using the well-known fact that the potential $q_{g,c}$ in (\ref{q}) can be expressed as \begin{equation} \label{ScalarCurvature} q_{g,c} = \frac{n-2}{4(n-1)} \left( Scal_g - c^4 \, Scal_{c^4 g} \right), \end{equation} where $Scal_g$ and $Scal_{c^4 g}$ denote the scalar curvatures associated to $g$ and $\tilde{g} = c^4 g$ respectively, the nonlinear PDE (\ref{Main-EDP}) satisfied by the conformal factor $c$ may be re-expressed in more geometric terms by observing that $c$ will satisfy (\ref{Main-EDP}) if and only if \begin{equation} \label{GeometricInterpretation} Scal_{c^4 g} = \frac{Scal_g + \frac{4(n-1)}{n-2} \lambda (1-c^4)}{c^4}. \end{equation} \end{rem} In view of Proposition \ref{Main}, we see that in order to construct counterexamples to uniqueness for the anisotropic Calder\'on problem on a smooth compact Riemannian manifold $(M,g)$ of dimension $n\geq 3$ with smooth boundary $\partial M$, where the Dirichlet and Neumann data are measured on disjoint subsets of the boundary, it is sufficient to find a conformal factor $c$ satisfying the nonlinear PDE of Yamabe type (\ref{Main-EDP}) and such that $c \ne 1$ on $M$ (see \ref{Inv-Conformal-1}). We shall see below that this can be done by using the well-known technique of lower and upper solutions. Indeed, recall that we are interested in solutions $w = c^{n-2}$ of the nonlinear elliptic PDE (see (\ref{EDPc})): \begin{equation} \label{Eqw} \left\{ \begin{array}{cc} \Delta_g w + f(w) =0 , & \textrm{on} \ M, \\ w = \eta, & \textrm{on} \ \partial M, \end{array} \right.
\end{equation} where $f(w) = \lambda (w-w^{\frac{n+2}{n-2}})$ and $\eta$ is a smooth function on $\partial M$ such that $\eta = 1$ on $\Gamma_D \cup \Gamma_N$. We may thus more generally consider the nonlinear Dirichlet problem \begin{equation} \label{GeneralDP} \left\{ \begin{array}{cc} \Delta_g w + f(x,w) =0 , & \textrm{on} \ M, \\ w = \eta, & \textrm{on} \ \partial M, \end{array} \right. \end{equation} where $f$ is a smooth function on $M \times \R$ and $\eta$ is a smooth function on $\partial M$. We recall the definitions of an upper solution and a lower solution of (\ref{GeneralDP}). \begin{defi} An upper solution ${\overline{w}}$ is a function in $C^2(M) \cap C^0(\overline{M})$ satisfying \begin{equation}\label{upper} \Delta_g {\overline{w}}+ f(x,{\overline{w}}) \leq 0 \ \textrm{on} \ M, \quad \textrm{and} \quad {\overline{w}}_{|\partial M} \geq \eta. \end{equation} Similarly, a lower solution ${\underline{w}}$ is a function in $C^2(M) \cap C^0(\overline{M})$ satisfying \begin{equation}\label{under} \Delta_g {\underline{w}}+ f(x,{\underline{w}}) \geq 0 \ \textrm{on} \ M, \quad \textrm{and} \quad {\underline{w}}_{|\partial M} \leq \eta. \end{equation} \end{defi} It is well-known (see \cite{Sat}, Theorem 2.3.1, or \cite{Ta2}, Section 14.1) that if we can find a lower solution ${\underline{w}}$ and an upper solution ${\overline{w}}$ satisfying ${\underline{w}} \leq {\overline{w}}$ on $M$, then there exists a solution $w \in C^{\infty}(\overline{M})$ of (\ref{GeneralDP}) such that ${\underline{w}} \leq w \leq {\overline{w}}$ on $M$. For completeness, let us briefly sketch the construction of such a solution: we pick $\mu>0$ such that $|\partial_w f(x,w)| \leq \mu$ for $w \in [\min \ {\underline{w}} , \max \ {\overline{w}}]$. Then, we define recursively a sequence $(w_k)$ by $w_0 = {\underline{w}}$, $w_{k+1} = \Phi(w_k)$ where $\Phi(w) = \varphi$ is given by solving \begin{equation} \Delta_g \varphi - \mu \varphi = -\mu w - f(x,w) \ ,\ \varphi_{|\partial M} = \eta.
\end{equation} Using the maximum principle, we see that this sequence satisfies \begin{equation}\label{sequence} {\underline{w}}=w_0 \leq w_1 \leq \cdots \leq w_k \leq \cdots \leq {\overline{w}}. \end{equation} We deduce that $w = \displaystyle\lim_{k \to \infty} w_k$ is a solution of (\ref{GeneralDP}). The details of the construction are given in the above references \cite{Sat, Ta2}. Now, we can establish the following elementary result. \begin{prop} \label{NonlinearDirichletPb} For all $\lambda \geq 0$ (resp. for all $\lambda < 0$), and for all smooth positive functions $\eta$ such that $\eta \ne 1$ on $\partial M$ (resp. $\eta \lneq 1$ on $\partial M$), there exists a positive solution $w \in C^{\infty}(\overline{M})$ of (\ref{Eqw}) satisfying $w \ne 1$ on $M$. \end{prop} \begin{proof} 1. Assume first that $\lambda \geq 0$. \\ a) If $\eta \gneq 1$, then ${\underline{w}}=1$ is a lower solution and ${\overline{w}}= \max \eta$ is an upper solution of (\ref{Eqw}). Moreover, they clearly satisfy ${\underline{w}} \leq {\overline{w}}$. \\ b) Likewise, if $0 < \eta \lneq 1$, then ${\underline{w}}= \min \eta$ is a lower solution and ${\overline{w}}= 1$ is an upper solution of (\ref{Eqw}). They still satisfy ${\underline{w}} \leq {\overline{w}}$. \\ c) Finally, if $0 < \min \eta < 1 < \max \eta$, then ${\underline{w}}= \min \eta$ is a lower solution and ${\overline{w}}= \max \eta$ is an upper solution of (\ref{Eqw}). Moreover, they satisfy ${\underline{w}} \leq {\overline{w}}$. \\ 2. Assume now that $\lambda < 0$ and $0 < \eta \lneq 1$. \\ We define ${\underline{w}}$ as the unique solution of the Dirichlet problem \begin{equation} \label{Dir1} \left\{ \begin{array}{cc} \Delta_g \underline{w} + \lambda \underline{w} =0 , & \textrm{on} \ M, \\ \underline{w} = \eta, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} Since $\lambda < 0$, the strong maximum principle implies that $0 < \underline{w} \leq \max \eta$ on $M$ (see \cite{GT}, Corollary 3.2 and Theorem 3.5).
Moreover, $$\triangle_g \underline{w} + \lambda (\underline{w} - (\underline{w})^{\frac{n+2}{n-2}}) = -\lambda (\underline{w})^{\frac{n+2}{n-2}} \geq 0.$$ It follows that $\underline{w}$ is a lower solution of (\ref{Eqw}). We then define ${\overline{w}}$ as the unique solution of the Dirichlet problem \begin{equation} \label{Dir2} \left\{ \begin{array}{cc} \Delta_g \overline{w} + \lambda \overline{w} = \lambda (\max \eta)^{\frac{n+2}{n-2}}, & \textrm{on} \ M, \\ \overline{w} = \eta, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} According to the maximum principle, we also have $0 \leq \overline{w}$ on $M$. Now, setting $v= \overline{w}- \max \eta$, and since $\eta \leq 1$, we have \begin{equation} \label{Dir11} \left\{ \begin{array}{cc} \Delta_g v + \lambda v = \lambda \left( (\max \eta)^{\frac{n+2}{n-2}} - \max \eta \right) \geq 0, & \textrm{on} \ M, \\ v = \eta - \max \eta \leq 0, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} So, according to the maximum principle again, we deduce that $v \leq 0$ on $M$, or equivalently $\overline{w} \leq \max \eta$. We deduce as previously that $\overline{w}$ is an upper solution of (\ref{Eqw}). Finally, $\overline{w} - \underline{w}$ satisfies \begin{equation} \label{Dir3} \left\{ \begin{array}{cc} \Delta_g (\overline{w} - \underline{w}) + \lambda (\overline{w} - \underline{w}) = \lambda (\max \eta)^{\frac{n+2}{n-2}} < 0 , & \textrm{on} \ M, \\ \overline{w} - \underline{w} = 0, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} Then, the maximum principle implies again that $\overline{w} \geq \underline{w}$, which finishes the proof. \end{proof} In order to use the existence results of Proposition \ref{NonlinearDirichletPb} for the construction of a conformal factor $c$ satisfying (\ref{Main-EDP}) and $c \ne 1$ on $M$, we need to be able to choose $\eta \ne 1$ on $\partial M$.
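For concreteness, the inequalities required in case 1.a) of the proof above can be checked in one line (a routine verification, using only $\max \eta \geq 1$ and $\lambda \geq 0$):

```latex
% Lower solution \underline{w} = 1 of (\ref{Eqw}):
\Delta_g \underline{w} + \lambda \bigl( \underline{w} - \underline{w}^{\frac{n+2}{n-2}} \bigr)
  = \lambda (1 - 1) = 0 \geq 0,
\qquad \underline{w}_{|\partial M} = 1 \leq \eta.
% Upper solution \overline{w} = \max \eta \geq 1:
\Delta_g \overline{w} + \lambda \bigl( \overline{w} - \overline{w}^{\frac{n+2}{n-2}} \bigr)
  = \lambda \, \max \eta \, \bigl( 1 - (\max \eta)^{\frac{4}{n-2}} \bigr) \leq 0,
\qquad \overline{w}_{|\partial M} = \max \eta \geq \eta.
```

Cases 1.b) and 1.c) follow from the same computation, with the roles of $\min \eta$ and $\max \eta$ exchanged as appropriate.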
We thus make the crucial assumption on the disjoint Dirichlet and Neumann data that \begin{equation} \label{Main-Hyp} \overline{\Gamma_D \cup \Gamma_N} \ne \partial M. \end{equation} Putting together the results of Proposition \ref{Main} and Proposition \ref{NonlinearDirichletPb}, we have proved \begin{thm} \label{Main-1} Let $(M,g)$ be a smooth compact connected Riemannian manifold of dimension $n\geq 3$ with smooth boundary $\partial M$. Let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$ and $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$. Consider a conformal factor $c \ne 1$ on $M$ whose existence is given in Proposition \ref{NonlinearDirichletPb}, defined as a smooth solution of the nonlinear Dirichlet problem \begin{equation} \label{Main-EDP-1} \left\{ \begin{array}{cc} \Delta_{g} c^{n-2} + \lambda (c^{n-2} - c^{n+2}) = 0, & \textrm{on} \ M, \\ c^{n-2} = \eta, & \textrm{on} \ \partial M, \end{array} \right. \end{equation} where $\eta$ is a suitable smooth positive function on $\partial M$ satisfying $\eta = 1$ on $\Gamma_D \cup \Gamma_N$ and $\eta \ne 1$ on $\partial M \setminus (\Gamma_D \cup \Gamma_N)$. Then the Riemannian metric $\tilde{g} = c^4 g$ with $c \ne 1$ on $M$ satisfies $$ \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g,\Gamma_D, \Gamma_N}(\lambda). $$ \end{thm} This gauge invariance for the anisotropic Calder\'on problem with disjoint data can be formalized in the following way. \begin{defi}[Gauge invariance] \label{Gauge0} Let $(M,g)$ and $(M,\tilde{g})$ be smooth compact connected Riemannian manifolds of dimension $n\geq 3$ with smooth boundary $\partial M$. Let $\lambda \in \R$ not belong to the union of the Dirichlet spectra of $-\Delta_g$ and $-\Delta_{\tilde{g}}$. Let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$ and $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$.
We say that $g$ and $\tilde{g}$ are gauge related if there exists a smooth positive conformal factor $c$ such that: \\ \begin{equation} \label{Gauge} \left\{ \begin{array}{rl} \tilde{g} & = c^4 g, \\ \Delta_{g} c^{n-2} + \lambda (c^{n-2} - c^{n+2}) & = 0, \textrm{on} \ M, \\ c & = 1, \textrm{on} \ \Gamma_D \cup \Gamma_N, \\ c & \ne 1, \textrm{on} \ \partial M \setminus (\Gamma_D \cup \Gamma_N). \end{array} \right. \end{equation} In that case, we have: $\Lambda_{\tilde{g}, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, \Gamma_D, \Gamma_N}(\lambda)$. \end{defi} \begin{rem} In dimension $2$, the gauge invariance described in Definition \ref{Gauge0} for the anisotropic Calder\'on problem with disjoint data is not relevant except for the case of zero frequency. Indeed, the nonlinear PDE in (\ref{Gauge}) that the conformal factor $c$ should satisfy becomes \begin{equation} \label{EDP-Dim2} \lambda (1 - c^4) = 0, \ \textrm{on} \ M. \end{equation} In other words, $c$ must be identically equal to $1$ if $\lambda \ne 0$. Recalling that in dimension $2$ and for zero frequency, a conformal transformation is already known to be a gauge invariance of the anisotropic Calder\'on problem, we see that our construction will not lead to new counterexamples to uniqueness in dimension $2$, for any frequency $\lambda$. \end{rem} We conclude this Section by stating a version of the anisotropic Calder\'on conjecture with disjoint data modulo the previously defined gauge invariance. \\ \noindent \textbf{(Q4)} \emph{Let $M$ be a smooth compact connected manifold with smooth boundary $\partial M$ and let $g,\, \tilde{g}$ be smooth Riemannian metrics on $M$. Let $\Gamma_D, \Gamma_N$ be any open sets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$ and $\lambda \in \R$ not belong to $\sigma(-\Delta_g) \cup \sigma(-\Delta_{\tilde{g}})$.
If $\Lambda_{g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda)$, is it true that $g = \tilde{g}$ up to the gauge invariances: \\ 1. (\ref{Inv-Diff}) in any dimension, \\ 2. (\ref{Inv-Conf}) if $\dim M = 2$ and $\lambda = 0$}, \\ 3. (\ref{Gauge}) if $\dim M \geq 3$ and $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$? \Section{The anisotropic Calder\'on problem for Schr\"odinger operators in dimension $n\geq 2$} \label{2} In this Section, we consider the anisotropic Calder\'on problem \textbf{(Q3)} for Schr\"odinger operators on a fixed smooth compact connected Riemannian manifold $(M,g)$ of dimension $n\geq 2$, with smooth boundary $\partial M$, under the assumption that the Dirichlet and Neumann data are measured on disjoint subsets of the boundary. We first show that the previously constructed counterexamples to uniqueness for the anisotropic Calder\'on problem \textbf{(Q2)} in dimension at least $3$ cannot be used to construct counterexamples to uniqueness for \textbf{(Q3)} through the link (\ref{Link}). To this effect, we start by proving the following elementary lemma: \begin{lemma}\label{lemmafactor} Let $(M,g)$ be a smooth compact connected Riemannian manifold of dimension $n\geq 3$ with smooth boundary $\partial M$. Consider two smooth conformal factors $c_1$ and $c_2$ such that $c:= \frac{c_2}{c_1}$ satisfies \begin{equation}\label{factor} \Delta_{c_1^4 g} c^{n-2} + \lambda (c^{n-2} - c^{n+2}) = 0 \ \textrm{on} \ M. \end{equation} Then, \begin{equation} \label{c2} V_{g,c_1,\lambda} = V_{g,c_2,\lambda}. \end{equation} \end{lemma} \begin{proof} Using (\ref{ConformalScaling}) and (\ref{q}) with the conformal factor $c_1$, we obtain easily \begin{equation} (\Delta_g -q_{g,c_1}) c_2^{n-2} + \lambda \left( c_1^4 c_2^{n-2}-c_2^{n+2} \right) =0.
\end{equation} So, using (\ref{q}) again with the conformal factor $c_2$, we get \begin{equation} (q_{g, c_2} - q_{g,c_1}) + \lambda \left( c_1^4 -c_2^4\right) =0, \end{equation} or equivalently $$ V_{g,c_1,\lambda} = V_{g,c_2,\lambda}. $$ \end{proof} \vspace{0.2cm} As a consequence, let $\Gamma_D, \Gamma_N$ be open subsets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$. Consider two smooth conformal factors $c_1$ and $c_2$ such that the metrics $G=c_1^4 g $ and $\tilde{G}=c_2^4 g$ are gauge equivalent in the sense of Definition \ref{Gauge0}, \textit{i.e.} $\Lambda_{G, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{G}, \Gamma_D, \Gamma_N}(\lambda)$. Then, we obtain from (\ref{Link}) that $$ \Lambda_{g, V_{g,c_1,\lambda}, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, V_{g,c_2,\lambda}, \Gamma_D, \Gamma_N}(\lambda), $$ but Lemma \ref{lemmafactor} implies that $ V_{g,c_1,\lambda} = V_{g,c_2,\lambda}$. Thus, the gauge invariance for the anisotropic Calder\'on problem \textbf{(Q2)} with disjoint data highlighted in Section \ref{1} is not a gauge invariance for the corresponding anisotropic Calder\'on problem \textbf{(Q3)}. In other words, we just showed that the gauge invariance for \textbf{(Q2)} corresponds in fact to all the possible conformal factors $c$ satisfying $V_{g,c,\lambda} = V_{g,c_0,\lambda} = q$ for a fixed conformal factor $c_0$, or equivalently for a fixed potential $q$. \vspace{0.5cm} Nevertheless, we exhibited in \cite{DKN2} some constructive counterexamples to uniqueness for the anisotropic Calder\'on problem \textbf{(Q3)} with disjoint sets on smooth compact connected Riemannian toric cylinders equipped with a warped product metric in dimensions $2$ or $3$. More precisely, we refer to Theorems 3.2 and 3.4 in \cite{DKN2} for counterexamples to \textbf{(Q2)} and \textbf{(Q3)} respectively in dimension $2$ and to Theorem 4.7 in \cite{DKN2} for counterexamples to \textbf{(Q3)} in dimension $3$.
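For the reader's convenience, let us record the computation that underlies both Proposition \ref{Main} and Lemma \ref{lemmafactor}. In the notation of (\ref{ConformalScaling}), (\ref{q}) and (\ref{Vgc}) (the explicit formulas for $q_{g,c}$ and $V_{g,c,\lambda}$ written below are the ones consistent with (\ref{EDPc}) and (\ref{ScalarCurvature})), if $u$ solves $-\Delta_{\tilde{g}} u = \lambda u$ for $\tilde{g} = c^4 g$, then $v = c^{n-2} u$ satisfies:

```latex
% Conformal scaling of the Laplacian, with q_{g,c} = c^{-(n-2)} \Delta_g c^{n-2}:
\Delta_{c^4 g}\, u \;=\; c^{-(n+2)} \left( \Delta_g - q_{g,c} \right) \left( c^{n-2} u \right).
% Multiplying -\Delta_{\tilde{g}} u = \lambda u by c^{n+2} and writing v = c^{n-2} u:
\left( -\Delta_g + q_{g,c} \right) v \;=\; \lambda c^4 v
\quad \Longleftrightarrow \quad
\left( -\Delta_g + V_{g,c,\lambda} \right) v \;=\; \lambda v,
\qquad V_{g,c,\lambda} \;=\; q_{g,c} + \lambda \left( 1 - c^4 \right).
% In particular, V_{g,c,\lambda} = 0 is equivalent to the Yamabe-type PDE (\ref{Main-EDP}):
\Delta_g c^{n-2} + \lambda \left( c^{n-2} - c^{n+2} \right) \;=\; 0.
```

This is exactly the substitution used to pass from (\ref{a1}) to (\ref{a3}) and, when $V_{g,c,\lambda} = 0$, from (\ref{a3}) to (\ref{a4}).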
In this Section, we generalize the results of \cite{DKN2} and show that the same type of constructive counterexamples to uniqueness can be obtained for any smooth compact connected Riemannian cylinder $M$ having two ends (meaning that the boundary $\partial M$ consists of two connected components), equipped with a warped product metric. More precisely, we consider the general model in which $M = [0,1]\times K$, where $K$ is an arbitrary $(n-1)$-dimensional closed manifold, equipped with a Riemannian metric of the form \begin{equation} \label{Metric} g = f^4(x) [dx^2 + g_K], \end{equation} where $f$ is a smooth strictly positive function on $[0,1]$ and $g_K$ denotes a smooth Riemannian metric on $K$. Clearly, $(M,g)$ is an $n$-dimensional warped product cylinder and the boundary $\partial M$ has two connected components, namely $\partial M = \Gamma_0 \cup \Gamma_1$ where $\Gamma_0 = \{0\} \times K$ and $\Gamma_1 = \{1\} \times K$ correspond to the two ends of $(M,g)$. The positive Laplace-Beltrami operator on $(M,g)$ has the expression \begin{equation} \label{Laplacian} -\Delta_g = f^{-(n+2)} \left( -\partial_x^2 - \triangle_K + q_f(x) \right) f^{n-2}, \end{equation} where $-\triangle_K$ denotes the positive Laplace-Beltrami operator on $(K,g_K)$ and $q_f = \frac{(f^{n-2})''}{f^{n-2}}$. Let us consider a potential $V = V(x) \in L^\infty(M)$ and $\lambda \in \R$ such that $\lambda \notin \{ \lambda_j\}_{j \geq 1}$, where $\{ \lambda_j\}_{j \geq 1}$ is the Dirichlet spectrum of $-\Delta_g + V$. We are interested in the unique solution $u$ of the Dirichlet problem \begin{equation} \label{eq0} \left\{ \begin{array}{rcl} (-\triangle_g + V) u & = & \lambda u, \ \textrm{on} \ M, \\ u & = & \psi, \ \textrm{on} \ \partial M. \end{array} \right.
\end{equation} Thanks to (\ref{Laplacian}) and setting $v = f^{n-2} u$, this can be written as \begin{equation} \label{Eq1} \left\{ \begin{array}{rcl} \left[ -\partial^2_x - \triangle_K + q_f + (V-\lambda) f^4 \right] v & = & 0, \ \textrm{on} \ M, \\ v & = & f^{n-2} \psi, \ \textrm{on} \ \partial M. \end{array} \right. \end{equation} In order to construct the DN map corresponding to the problem (\ref{eq0}), we shall use the following notation. Since the boundary $\partial M$ of $M$ has two disjoint components $\partial M = \Gamma_0 \cup \Gamma_1$, we can decompose the Sobolev spaces $H^s(\partial M)$ as $H^s(\partial M) = H^s(\Gamma_0) \oplus H^s(\Gamma_1)$ for any $s \in \R$ and we shall use the vector notation $$ \varphi = \left( \begin{array}{c} \varphi^0 \\ \varphi^1 \end{array} \right), $$ to denote the elements $\varphi$ of $H^s(\partial M) = H^s(\Gamma_0) \oplus H^s(\Gamma_1)$. The DN map is a linear operator from $H^{1/2}(\partial M)$ to $H^{-1/2}(\partial M)$ and thus has the structure of an operator-valued $2 \times 2$ matrix $$ \Lambda_g(\lambda) = \left( \begin{array}{cc} \Lambda_{g,\Gamma_0,\Gamma_0}(\lambda) & \Lambda_{g,\Gamma_1, \Gamma_0}(\lambda) \\ \Lambda_{g,\Gamma_0, \Gamma_1}(\lambda) & \Lambda_{g,\Gamma_1,\Gamma_1}(\lambda) \end{array} \right), $$ whose components are operators from $H^{1/2}(K)$ to $H^{-1/2}(K)$. Now we use the warped product structure of $(M,g)$ and the fact that $V = V(x)$ to find a simple expression of the DN map by decomposing all the relevant quantities onto a Hilbert basis of harmonics $(Y_k)_{k \geq 0}$ of the Laplace-Beltrami operator $-\triangle_K$ on the closed manifold $K$. We first write $\psi = (\psi^0, \psi^1) \in H^{1/2}(\Gamma_0) \times H^{1/2}(\Gamma_1)$ using the Fourier expansions $$ \psi^0 = \sum_{k \geq 0} \psi^0_k Y_k, \quad \psi^1 = \sum_{k \geq 0} \psi^1_k Y_k.
$$ Note that for any $s \in \R$, the space $H^{s}(K)$ can be described as $$ H^{s}(K) = \left\{ \varphi \in \D'(K): \ \varphi = \sum_{k \geq 0} \varphi_k Y_k, \quad \sum_{k \geq 0} (1 + \mu_k)^{s} |\varphi_k|^2 < \infty \right\}, $$ where $0 = \mu_0 < \mu_1 \leq \mu_2 \leq \dots$ are the eigenvalues of $-\triangle_K$. Now the unique solution $v$ of (\ref{Eq1}) takes the form $$ v = \sum_{k \geq 0} v_k(x) Y_k(\omega), $$ where the functions $v_k$ are the unique solutions of the boundary value problems given by \begin{equation} \label{Eq2} \left\{ \begin{array}{c} -v_k'' + [ q_f + (V-\lambda) f^4] v_k = -\mu_k v_k, \ \textrm{on} \ [0,1], \\ v_k(0) = f^{n-2}(0) \psi^0_k, \quad v_k(1) = f^{n-2}(1) \psi^1_k. \end{array} \right. \end{equation} Moreover the DN map can be diagonalized in the Hilbert basis $\{ Y_k \}_{k \geq 0}$ and thus shown to take the following convenient expression \begin{equation} \label{DN1} \Lambda_{g,V}(\lambda)_{|<Y_k>} = \Lambda^k_{g,V}(\lambda) = \left( \begin{array}{c} \frac{(n-2) f'(0)}{f^{n+1}(0)} v_k(0) - \frac{v_k'(0)}{f^n(0)} \\ -\frac{(n-2) f'(1)}{f^{n+1}(1)} v_k(1) + \frac{v_k'(1)}{f^n(1)} \end{array} \right) . \end{equation} Let us now express the derivatives $v_k'(0)$ and $v_k'(1)$ appearing in (\ref{DN1}) in terms of the boundary values $v_k(0)$ and $v_k(1)$. For this, we introduce the characteristic and Weyl-Titchmarsh functions of the boundary value problem \begin{equation} \label{Eq3} \left\{ \begin{array}{c} -v'' + [q_{f}(x) +(V-\lambda) f^4(x)] v = - \mu v, \\ v(0) = 0, \quad v(1) = 0. \end{array}\right. \end{equation} Note that the equation (\ref{Eq3}) is nothing but equation (\ref{Eq2}) in which the angular momentum $-\mu_k$ is written as $-\mu$ and is interpreted as the \emph{spectral parameter} of the equation.
Since the potential $q_f + (V-\lambda) f^4$ is real and belongs to $L^1([0,1])$, we can define for all $\mu \in \C$ two fundamental systems of solutions of (\ref{Eq3}) $$ \{ c_0(x,\mu), s_0(x,\mu)\}, \quad \{ c_1(x,\mu), s_1(x,\mu)\}, $$ by imposing the Cauchy conditions \begin{equation} \label{FSS} \left\{ \begin{array}{cccc} c_0(0,\mu) = 1, & c_0'(0,\mu) = 0, & s_0(0,\mu) = 0, & s_0'(0,\mu) = 1, \\ c_1(1,\mu) = 1, & c'_1(1,\mu) = 0, & s_1(1,\mu) = 0, & s'_1(1,\mu) = 1. \end{array} \right. \end{equation} \begin{rem} \label{Utile1} In terms of the Wronskian $W(u,v) = uv' - u'v$, we have $$ W(c_0,s_0) = 1, \quad W(c_1,s_1) = 1. $$ Moreover, we remark (see \cite{PT}) that the functions $\mu \mapsto c_j(x,\mu), \, s_j(x,\mu)$ and their derivatives with respect to $x$ are entire functions of order $\frac{1}{2}$. \end{rem} Following \cite{DKN2}, we define the characteristic function of (\ref{Eq3}) by \begin{equation} \label{Char} \Delta_{g,V}(\mu) = W(s_0, s_1), \end{equation} and the Weyl-Titchmarsh functions by \begin{equation} \label{WT} M_{g,V}(\mu) = - \frac{W(c_0, s_1)}{\Delta_{g,V}(\mu)}, \quad N_{g,V}(\mu) = - \frac{W(c_1, s_0)}{\Delta_{g,V}(\mu)}. \end{equation} \begin{rem} \label{Utile2} 1. Since the function $\Delta_{g,V}$ is entire, its zeros form a discrete set in $\C$. We denote this set by $(\alpha_k)_{k \geq 1}$ and remark that they correspond to ``minus''\footnote{since the spectral parameter of (\ref{Eq3}) is $-\mu$.} the Dirichlet spectrum of the 1D Schr\"odinger operator $-\frac{d^2}{dx^2} + [q_f + (V-\lambda) f^4]$. Moreover, these zeros are simple (see Theorem 2, p. 30 of \cite{PT}).\\ 2. The functions $M_{g,V}$ and $N_{g,V}$ are meromorphic with poles given by $(\alpha_k)_{k \geq 1}$. Under our assumption that $\lambda$ does not belong to the Dirichlet spectrum of $-\triangle_g + V$, we can show that the eigenvalues $(\mu_k)_{k \geq 0}$ of $-\triangle_K$ cannot be poles of $M_{g,V}$ and $N_{g,V}$. In particular, $0$ is not a pole of $M_{g,V}$ and $N_{g,V}$.
We refer to \cite{DKN2}, Remark 3.1, for the detailed proof of this assertion. \end{rem} Writing the solution $v_k$ of (\ref{Eq2}) as $$ v_k(x) = \alpha \,c_0(x,\mu_k) + \beta \,s_0(x,\mu_k) = \gamma \,c_1(x,\mu_k) + \delta \,s_1(x,\mu_k), $$ for some constants $\alpha,\beta,\gamma,\delta$, a straightforward calculation as in \cite{DKN2}, Section 4, shows that the DN map $\Lambda_{g,V}^k(\lambda)$ on each harmonic $Y_k, \, k \geq 0$ has the expression \begin{equation} \label{DN-Partiel} \Lambda^k_{g,V}(\lambda) = \left( \begin{array}{cc} \frac{(n-2)f'(0)}{f^3(0)} - \frac{M_{g,V}(\mu_k)}{f^2(0)} & -\frac{f^{n-2}(1)}{f^n(0) \Delta_{g,V}(\mu_k)} \\ -\frac{f^{n-2}(0)}{f^n(1) \Delta_{g,V}(\mu_k)} & -\frac{(n-2)f'(1)}{f^3(1)} - \frac{N_{g,V}(\mu_k)}{f^2(1)} \end{array} \right) . \end{equation} Hence, on each harmonic $Y_k$, the DN map $\Lambda_{g,V}^k(\lambda)$ acts simply as multiplication by a $2 \times 2$ matrix. Its coefficients are expressed in terms of boundary values of the metric $g$ and of its first normal derivative $\partial_\nu g$, as well as of the characteristic function $\Delta_{g,V}$ (for the anti-diagonal components) and the Weyl-Titchmarsh functions $M_{g,V}$ and $N_{g,V}$ (for the diagonal components), evaluated at the eigenvalues $\{\mu_k\}_{k \geq 0}$ of the Laplacian $-\triangle_K$ on the closed manifold $K$. Note that the nonlocality of the DN map is seen through the multiplication by the functions $\Delta_{g,V}(\mu_k), M_{g,V}(\mu_k)$ and $N_{g,V}(\mu_k)$, since they depend on the whole potential $q_f + (V-\lambda) f^4$ and thus on the whole metric $g$ and potential $V$. Let us now come back to the study of the anisotropic Calder\'on problem \textbf{(Q3)} for metrics (\ref{Metric}) and potentials $V = V(x)$ when the Dirichlet and Neumann data are measured on disjoint sets of the boundary. More precisely, assume that $\Gamma_D, \Gamma_N$ are open subsets of $\partial M$ that belong to distinct connected components of $\partial M$.
For instance, if we choose $\Gamma_D \subset \Gamma_0$ and $\Gamma_N \subset \Gamma_1$, then the measured partial DN map $\Lambda_{g,V,\Gamma_D, \Gamma_N}(\lambda)$ is given by \begin{equation} \label{DN2} \Lambda_{g,V,\Gamma_D, \Gamma_N}(\lambda) \psi = - \left( \sum_{k \geq 0} \frac{f^{n-2}(0)}{f^n(1) \Delta_{g,V}(\mu_k)} \psi_k Y_k \right)_{|\Gamma_N}, \end{equation} where $\psi = \sum_{k \geq 0} \psi_k Y_k$ and supp$\,\psi \subset \Gamma_D$. It is clear from the expression (\ref{DN2}) that the characteristic function $\Delta_{g,V}$ is the essential quantity that determines uniquely $\Lambda_{g,V,\Gamma_D, \Gamma_N}(\lambda)$ when $\Gamma_D$ and $\Gamma_N$ belong to distinct connected components of the boundary. We thus consider the following question: can we find potentials $\tilde{V}$ distinct from $V$ and such that $\Delta_{g,V}(\mu) = \Delta_{g,\tilde{V}}(\mu)$ for all $\mu \in \C$? In the positive case, we will thus have found counterexamples to uniqueness for the Calder\'on problem \textbf{(Q3)} with disjoint data. The answer is yes and is provided by the following key Lemma. \begin{lemma} \label{Link-Iso} Let $g$ be a fixed metric as in (\ref{Metric}) and $V = V(x), \tilde{V} = \tilde{V}(x) \in L^\infty(M)$. Then $$ \Delta_{g,V}(\mu) = \Delta_{g,\tilde{V}}(\mu), \quad \forall \mu \in \C, $$ if and only if $$ q_f + (V-\lambda)f^4 \ \textrm{and} \ q_f + (\tilde{V}-\lambda)f^4 \ \textrm{are isospectral for} \ (\ref{Eq3}). $$ \end{lemma} \begin{proof} We recall first from Remark \ref{Utile1} that the FSS $(c_j(x,\mu), s_j(x,\mu)), \ j=0,1$ are entire of order $\frac{1}{2}$ with respect to $\mu$. Hence we deduce easily from (\ref{Char}) that $\Delta_{g,V}, \Delta_{g,\tilde{V}}$ are also entire of order $\frac{1}{2}$. Moreover, we know from Remark \ref{Utile2} that $0$ is not a zero of $\Delta_{g,V}$ and $\Delta_{g,\tilde{V}}$.
It follows then from the Hadamard factorization Theorem (see for instance \cite{Lev}) that \begin{equation} \label{r1} \Delta_{g,V}(\mu) = C \prod_{k \geq 1} \left( 1 - \frac{\mu}{\alpha_k} \right), \quad \Delta_{g,\tilde{V}}(\mu) = \tilde{C} \prod_{k \geq 1} \left( 1 - \frac{\mu}{\tilde{\alpha}_k} \right), \end{equation} where $(\alpha_k)_{k \geq 1}, \ (\tilde{\alpha}_k)_{k \geq 1}$ denote ``minus'' the Dirichlet spectra of the 1D Schr\"odinger operators $-\frac{d^2}{dx^2} + [q_f + (V-\lambda) f^4]$ and $-\frac{d^2}{dx^2} + [q_f + (\tilde{V}-\lambda) f^4]$ respectively (see Remark \ref{Utile2} again) and $C, \tilde{C}$ are constants. Second, it turns out that $\Delta_{g,V}$ and $\Delta_{g,\tilde{V}}$ have universal asymptotics when $\mu \to \infty$. Precisely, we know from \cite{PT} and \cite{DKN2}, Corollary 2.1 that \begin{equation} \label{r2} \Delta_{g,V}(\mu), \, \Delta_{g,\tilde{V}}(\mu) \sim \frac{\sinh(\sqrt{\mu})}{\sqrt{\mu}}, \quad \mu \to \infty. \end{equation} As a consequence, we deduce from (\ref{r1}) that if $\Delta_{g,V}(\mu) = \Delta_{g,\tilde{V}}(\mu)$ for all $\mu \in \C$, then $\alpha_k = \tilde{\alpha}_k$ for all $k \geq 1$. This means precisely that the potentials $q_f + (V-\lambda)f^4$ and $q_f + (\tilde{V}-\lambda)f^4$ are isospectral for the boundary value problem (\ref{Eq3}). Conversely, if we assume that $q_f + (V-\lambda)f^4$ and $q_f + (\tilde{V}-\lambda)f^4$ are isospectral for (\ref{Eq3}), then $\alpha_k = \tilde{\alpha}_k$ for all $k \geq 1$. This means, using (\ref{r1}), that $\Delta_{g,V}(\mu) = \frac{C}{\tilde{C}} \Delta_{g,\tilde{V}}(\mu)$ for all $\mu \in \C$. But the universal asymptotics (\ref{r2}) then imply that $C = \tilde{C}$. Hence $\Delta_{g,V} = \Delta_{g,\tilde{V}}$. \end{proof} Thanks to the fundamental results of P\"oschel and Trubowitz \cite{PT}, Theorem 5.2, we have a complete description of the class of isospectral potentials for the Schr\"odinger operator with Dirichlet boundary conditions (\ref{Eq3}).
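Lemma \ref{Link-Iso} and the universal asymptotics (\ref{r2}) can also be illustrated numerically. The following sketch (the test potential, the deformation parameter, and the tolerances are choices made for the illustration, not taken from the text) evaluates the characteristic function by a shooting method, using that $\Delta(\mu) = s_0(1,\mu)$ by (\ref{FSS}) and (\ref{Char}) since the Wronskian is independent of $x$:

```python
# Sketch: for  -u'' + Q u = -mu u  on [0,1], the characteristic function is
# Delta(mu) = W(s0, s1) = s0(1, mu), because s1(1) = 0, s1'(1) = 1 and the
# Wronskian is independent of x.  s0 solves u'' = (Q + mu) u, u(0)=0, u'(0)=1.
import numpy as np
from scipy.integrate import solve_ivp

def char_fn(Q, mu):
    sol = solve_ivp(lambda x, y: [y[1], (Q(x) + mu) * y[0]],
                    (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]          # s0(1, mu)

mus = (4.0, 100.0, 900.0)
free = lambda mu: np.sinh(np.sqrt(mu)) / np.sqrt(mu)   # Delta for Q = 0

# Free case: the shooting value reproduces sinh(sqrt(mu))/sqrt(mu) exactly.
err_free = max(abs(char_fn(lambda x: 0.0, m) - free(m)) / free(m) for m in mus)

# Isospectral deformation of Q = 0 built on the first Dirichlet eigenfunction
# phi_1(x) = sqrt(2) sin(pi x) (Darboux/Poeschel-Trubowitz type, parameter t):
#   theta(x) = 1 + (e^t - 1) * int_x^1 phi_1(s)^2 ds,   Q_t = -2 (log theta)''.
t = 1.0
E = np.exp(t) - 1.0
theta = lambda x: 1.0 + E * ((1.0 - x) + np.sin(2 * np.pi * x) / (2 * np.pi))
def Q_t(x):
    tp = -E * 2.0 * np.sin(np.pi * x) ** 2             # theta'
    tpp = -E * 2.0 * np.pi * np.sin(2 * np.pi * x)     # theta''
    return -2.0 * (tpp / theta(x) - (tp / theta(x)) ** 2)

# Q_t has the same Dirichlet spectrum as Q = 0, so by the Lemma its
# characteristic function is unchanged at every mu, not only at the zeros.
err_iso = max(abs(char_fn(Q_t, m) - free(m)) / free(m) for m in mus)
```

Both residuals are small in this model case, mirroring the two ingredients of the proof: the exact free formula realizes the asymptotic profile, and the deformed potential leaves $\Delta$ unchanged pointwise.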
This result shows that for each eigenfunction $\phi_k, \ k \geq 1$ of (\ref{Eq3}), we can find a one-parameter family of explicit potentials isospectral to $Q(x) = q_f + (V-\lambda) f^4 \in L^2([0,1])$ by the formula \begin{equation} \label{Iso1} Q_{k,t}(x) = Q(x) - 2 \frac{d^2}{dx^2} \log \theta_{k,t}(x), \quad \quad \forall t \in \R, \end{equation} where \begin{equation} \label{Iso2} \theta_{k,t}(x) = 1 + (e^t - 1) \int_x^1 \phi_k^2(s) ds. \end{equation} Using the definition $Q(x) = q_f + (V-\lambda) f^4$, we get the explicit one-parameter families of potentials $\tilde{V}$ \begin{equation} \label{IsoPot} \tilde{V}_{k,t}(x) = V(x) - \frac{2}{f^4(x)} \frac{d^2}{dx^2} \log \theta_{k,t}(x), \quad \forall k \geq 1, \quad \forall t \in \R, \end{equation} where $\theta_{k,t}$ is given by (\ref{Iso2}). Using Lemma \ref{Link-Iso} and (\ref{DN2}), we have proved \begin{thm} \label{NonUniquenessQ3} Let $(M,g)$ be a cylindrical warped product as in (\ref{Metric}), $V=V(x) \in L^\infty(M)$ and $\lambda \in \R$ not belonging to the Dirichlet spectrum of $-\triangle_g + V$. Then the family of potentials $\tilde{V}_{k,t}$ defined in (\ref{IsoPot}) for all $k \geq 1$ and $t \in \R$ satisfies $$ \Lambda_{g,V,\Gamma_D,\Gamma_N}(\lambda) = \Lambda_{g,\tilde{V}_{k,t},\Gamma_D,\Gamma_N}(\lambda), $$ whenever $\Gamma_D$ and $\Gamma_N$ are open sets that belong to different connected components of $\partial M$. \end{thm} We emphasize that the non-uniqueness result of the Theorem holds when $\Gamma_D = \Gamma_0$ and $\Gamma_N = \Gamma_1$, hence when $\overline{\Gamma_D \cup \Gamma_N} = \partial M$. \begin{rem} \label{Rem-Iso} \begin{itemize} \item The potentials $\tilde{V}_{k,t}$ have the same regularity properties as $V$ on $[0,1]$ for all $k \geq 1$ and for all $t \in \R$. Indeed, the normalized eigenfunctions $\phi_k(x)$ are smooth on $[0,1]$ by elliptic regularity.
Hence, the functions $\theta_{k,t}$ are also smooth and never vanish on $[0,1]$ for all $k \geq 1$ and for all $t \in \R$ by (\ref{Iso2}). In particular, if $V$ is smooth on $[0,1]$, then $\tilde{V}_{k,t}$ is also smooth by (\ref{IsoPot}). \item For all $k \geq 1$ and for all $t \in \R$, $\tilde{V}_{k,t}(0) = V(0)$ and $\tilde{V}_{k,t}(1) = V(1)$. This follows from a short calculation using (\ref{Iso2}) and (\ref{IsoPot}). \item If moreover $V > 0$ (resp. $V<0$), then for all $k \geq 1$, there exists $T_k > 0$ such that $\tilde{V}_{k,t} >0$ (resp. $\tilde{V}_{k,t} < 0$) for all $-T_k < t < T_k$. Indeed, it is clear that for a fixed $k \geq 1$, the function $2 \frac{d^2}{dx^2} \log \theta_{k,t}(x)$ can be made arbitrarily small as $t \to 0$ uniformly w.r.t. $x \in [0,1]$. The result follows thanks to (\ref{IsoPot}). \end{itemize} \end{rem} \begin{rem} \label{WT-vs-Char} The preceding construction fails when $\Gamma_D, \Gamma_N$ belong to the same connected component of the boundary $\partial M$. This is due to the fact that on each harmonic $Y_k$, the associated partial DN map $\Lambda_{g,\Gamma_D,\Gamma_N}(\lambda)$ acts essentially as an operator of multiplication by the Weyl-Titchmarsh functions $M_{g,V}(\mu_k)$ or $N_{g,V}(\mu_k)$ (see (\ref{DN-Partiel})) instead of the characteristic function $\Delta_{g,V}(\mu_k)$. But as is well known in 1D inverse spectral theory, the Weyl-Titchmarsh functions contain much more information than the characteristic function. This is the object of the Borg-Marchenko Theorem (see \cite{Be, Bo1, Bo2, ET, FY, GS, KST}).
In particular, for rotationally invariant toric cylinders of dimensions 2 and 3, we showed (\cite{DKN2}, Theorems 3.4 and 4.6) that if $\Gamma_D$ and $\Gamma_N$ belong to the same connected component of the boundary $\partial M$\footnote{with a technical assumption on the size of $\Gamma_N$}, then $\Lambda_{g,V,\Gamma_D,\Gamma_N}(\lambda) = \Lambda_{g,\tilde{V},\Gamma_D,\Gamma_N}(\lambda)$ implies $V = \tilde{V}$. \end{rem} \Section{Counterexamples to uniqueness for the anisotropic Calder\'on problem with disjoint data in dimension $n\geq 3$, modulo the gauge invariance} \label{3} In this Section, we show that the counterexamples to uniqueness given in Theorem \ref{NonUniquenessQ3} for the anisotropic Calder\'on problem \textbf{(Q3)} lead to non-trivial counterexamples to uniqueness for the anisotropic Calder\'on problem \textbf{(Q2)} in dimension $n\geq 3$ modulo the gauge invariance introduced in Section \ref{1}, Definition \ref{Gauge0}. To do this, we have in mind Proposition \ref{Link-c-to-V}, which gives a clear link between the anisotropic Calder\'on problems \textbf{(Q2)} and \textbf{(Q3)} when $\Gamma_D \cap \Gamma_N = \emptyset$. More precisely, we fix $(M,g)$ a cylindrical warped product as in (\ref{Metric}), $V=V(x) \in C^\infty(M)$ and $\lambda \in \R$ not belonging to the Dirichlet spectrum of $-\triangle_g + V$. Given a potential $\tilde{V}$ as in (\ref{IsoPot}), we would like to construct conformal factors $c$ and $\tilde{c}$ in such a way that (see (\ref{Vgc}) for the notations) $$ V_{g,c,\lambda} = V, \quad V_{g,\tilde{c},\lambda} = \tilde{V}, $$ and $$ c, \tilde{c} = 1 \ \textrm{on} \ \Gamma_D \cup \Gamma_N. $$ If we manage to construct such conformal factors $c$ and $\tilde{c}$, then Theorem \ref{NonUniquenessQ3} and Proposition \ref{Link-c-to-V} would imply immediately that $$ \Lambda_{c^4 g, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{c}^4 g, \Gamma_D, \Gamma_N}(\lambda) $$ whenever $\Gamma_D \cap \Gamma_N = \emptyset$.
Moreover, the metrics $c^4 g$ and $\tilde{c}^4 g$ would not be gauge related in the sense of Definition \ref{Gauge0} since they are associated to different potentials $V \ne \tilde{V}$ (see Lemma \ref{lemmafactor} and the paragraph just after). Considering only the problem of finding $c > 0$ satisfying $V_{g,c,\lambda} = V$, $c = 1$ on $\Gamma_D \cup \Gamma_N$, we see from (\ref{Vgc}) that it is sufficient to find a smooth positive solution $w$ of the nonlinear Dirichlet problem \begin{equation} \label{DirichletPb-q} \left\{ \begin{array}{rl} \triangle_g w + (\lambda - V)w - \lambda w^{\frac{n+2}{n-2}} & = 0, \ \textrm{on} \ M, \\ w & = \eta, \ \textrm{on} \ \partial M, \end{array} \right. \end{equation} where $\eta = 1$ on $\Gamma_D \cup \Gamma_N$ and $\eta > 0$ on $\partial M$. For zero frequency $\lambda = 0$, the nonlinear Dirichlet problem (\ref{DirichletPb-q}) becomes linear, so that the usual existence and uniqueness Theorem for a Dirichlet problem on a Riemannian manifold with boundary as well as the strong maximum principle can be used to prove \begin{prop}[Zero frequency] \label{q-to-c-0} Assume that $\lambda = 0$ and $V \geq 0$ on $M$. Then for each positive smooth function $\eta$ on $\partial M$ such that $\eta = 1$ on $\Gamma_D \cup \Gamma_N$, there exists a unique smooth positive solution $w$ of (\ref{DirichletPb-q}) such that $0 < w \leq \max \eta$ on $M$. \end{prop} We now turn to the case of frequency $\lambda \in \R$, and prove the following: \begin{prop}[General case] \label{q-to-c-1} 1. If $\lambda >0$ and $0 <V(x) <\lambda$ on $M$, then for each positive function $\eta$ on $\partial M$ such that $\max \eta \geq 1$ on $\partial M$, there exists a smooth positive solution $w$ of (\ref{DirichletPb-q}). \\ 2. If $\lambda \leq 0$ and $V(x) \geq 0$ on $M$, then for each positive function $\eta$ on $\partial M$ such that $\eta \leq 1$ on $\partial M$, there exists a smooth positive solution $w$ of (\ref{DirichletPb-q}).
\\ \end{prop} \begin{proof} 1. We use again the technique of lower and upper solutions. We define ${\underline{w}}= \epsilon$ where $\epsilon>0$ is small enough. We have \begin{equation} \Delta_g \underline{w} + (\lambda-V) \underline{w} -\lambda (\underline{w})^{\frac{n+2}{n-2}} = \epsilon \left( (\lambda-V) -\lambda \epsilon^{{\frac{n+2}{n-2}} -1} \right) >0, \end{equation} so $\underline{w}$ is a lower solution. In the same way, we define $\overline{w} = \max \eta$ and we have \begin{equation} \Delta_g \overline{w} + (\lambda-V) \overline{w} -\lambda (\overline{w})^{\frac{n+2}{n-2}} = \lambda \left( \max \eta - (\max \eta)^{\frac{n+2}{n-2}} \right) - V \max \eta \leq 0. \end{equation} It follows that $\overline{w}$ is an upper solution and clearly $\underline{w} \leq \overline{w}$. \vspace{0.2cm} 2. In the case $\lambda \leq 0$, $V \geq 0$ and $\eta \leq 1$, we define ${\underline{w}}$ as the unique solution of the Dirichlet problem \begin{equation} \label{Dir4} \left\{ \begin{array}{cc} \Delta_g \underline{w} + (\lambda-V) \underline{w} = 0 , & \textrm{on} \ M, \\ \underline{w} = \eta, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} Since $(\lambda - V) \leq 0$, the strong maximum principle implies that $0 < \underline{w} \leq \max \eta$ on $M$. Moreover, $\triangle_g \underline{w} + (\lambda - V) \underline{w} - \lambda (\underline{w})^{\frac{n+2}{n-2}} = -\lambda (\underline{w})^{\frac{n+2}{n-2}} \geq 0$. Hence $\underline{w}$ is a lower solution of (\ref{DirichletPb-q}). \vspace{0.2cm} \noindent Now, we define ${\overline{w}}$ as the unique solution of the Dirichlet problem \begin{equation} \label{Dir5} \left\{ \begin{array}{cc} \Delta_g \overline{w} + (\lambda - V) \overline{w} = (\lambda-V) (\max \eta)^{\frac{n+2}{n-2}} , & \textrm{on} \ M, \\ \overline{w} = \eta, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} According to the maximum principle, we also have $\overline{w} \geq 0$ on $M$.
Setting $v = \overline{w}- \max \eta$, we see that \begin{equation} \Delta_g v + (\lambda-V) v = (\lambda-V)((\max \eta)^{\frac{n+2}{n-2}} - \max \eta) \geq 0, \end{equation} since $\eta \leq 1$. Hence, the maximum principle implies that $v \leq 0$ on $M$, or equivalently $ \overline{w} \leq \max \eta$. \vspace{0.2cm} We deduce that \begin{equation} \Delta_g \overline{w} + (\lambda - V) \overline{w} - \lambda \overline{w}^{\frac{n+2}{n-2}} = (\lambda -V) ((\max \eta)^{\frac{n+2}{n-2}} - \overline{w}^{\frac{n+2}{n-2}}) - V \overline{w}^{\frac{n+2}{n-2}} \leq 0, \end{equation} since $V$ is non-negative. Thus, $\overline{w}$ is an upper solution of (\ref{DirichletPb-q}). Finally, $\overline{w} - \underline{w}$ satisfies \begin{equation} \label{Dir6} \left\{ \begin{array}{cc} \Delta_g (\overline{w} - \underline{w}) + (\lambda - V) (\overline{w} - \underline{w}) = (\lambda-V) (\max \eta)^{\frac{n+2}{n-2}} \leq 0 , & \textrm{on} \ M, \\ \overline{w} - \underline{w} = 0, & \textrm{on} \ \partial M. \end{array} \right. \end{equation} Then, the maximum principle again implies $\overline{w} \geq \underline{w}$. Hence, according to the lower and upper solutions technique, there exists a smooth positive solution $w$ of (\ref{DirichletPb-q}) such that $\underline{w} \leq w \leq \overline{w}$ on $M$. \\ \end{proof} \vspace{0.2cm} Let us now come back to the geometric setting of Theorem \ref{NonUniquenessQ3}. Here $M = [0,1] \times K$ is equipped with a warped product metric $g$ as in (\ref{Metric}). First, let us fix a frequency $\lambda \in \R$. \vspace{0.2cm} 1. Assume that $\lambda >0$. Consider a potential $V = V(x) \in C^\infty(M)$ such that $0<V(x)<\lambda$ and such that $\lambda$ does not belong to the Dirichlet spectrum of $-\Delta_g +V$. This is always possible since the discrete spectrum of $-\Delta_g + V$ is unstable under small perturbations of $V$. Now, consider a potential $\tilde{V} = \tilde{V}_{k,t}(x)$ as in (\ref{IsoPot}) and such that $0<\tilde{V}(x) < \lambda$.
Observe that this can always be achieved for small enough $-\epsilon < t < \epsilon$ thanks to the definition (\ref{IsoPot}) of $\tilde{V}_{k,t}$ (see Remark \ref{Rem-Iso}). Finally, consider a smooth positive function $\eta$ on $\partial M$ such that $\eta =1$ on $\Gamma_D \cup \Gamma_N$ and such that $\max \eta \geq 1$. Then, Proposition \ref{q-to-c-1} implies the existence of smooth positive conformal factors $c$ and $\tilde{c}$ such that $$ V_{g,c,\lambda} = V, \quad c = 1 \ \textrm{on} \ \Gamma_D \cup \Gamma_N, $$ and $$ V_{g,\tilde{c},\lambda} = \tilde{V}, \quad \tilde{c} = 1 \ \textrm{on} \ \Gamma_D \cup \Gamma_N. $$ But from Theorem \ref{NonUniquenessQ3}, we have $$ \Lambda_{g, V, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{g, \tilde{V}, \Gamma_D, \Gamma_N}(\lambda). $$ Therefore from Proposition \ref{Link-c-to-V}, we conclude that $$ \Lambda_{c^4 g, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{c}^4 g, \Gamma_D, \Gamma_N}(\lambda). $$ \vspace{0.2cm} 2. Assume that $\lambda \leq 0$. Consider a potential $V(x)>0$ and a smooth positive function $\eta$ on $\partial M$ such that $\eta =1$ on $\Gamma_D \cup \Gamma_N$ and such that $\eta \leq 1$. Clearly, $\lambda$ does not belong to the Dirichlet spectrum of $-\Delta_g +V$. Then, we follow the same strategy as in the previous case. \vspace{0.2cm} We emphasize that the metrics $c^4 g$ and $\tilde{c}^4 g$ are not connected by the gauge invariance of Section \ref{1} since they correspond to different potentials $V = V_{g,c,\lambda}$ and $\tilde{V} = V_{g,\tilde{c},\lambda}$. Hence we have constructed a large class of counterexamples to uniqueness for the anisotropic Calder\'on problem when the Dirichlet and Neumann data are measured on disjoint sets of the boundary \emph{modulo this gauge invariance}. Therefore we have proved: \begin{thm} \label{NonUniquenessQ4} Let $M = [0,1] \times K$ be a cylindrical manifold having two ends equipped with a warped product metric $g$ as in (\ref{Metric}).
Let $\Gamma_D, \Gamma_N$ be open sets that belong to different connected components of $\partial M$. Let $\lambda \in \R$ be a fixed frequency. Then there exist infinitely many smooth positive conformal factors $c$ and $\tilde{c}$ on $M$ which are not gauge equivalent in the sense of Definition \ref{Gauge0} such that $$ \Lambda_{c^4 g, \Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{c}^4 g, \Gamma_D, \Gamma_N}(\lambda). $$ \end{thm} \Section{Conclusions and open problems} \label{4} In this paper, we have highlighted a natural gauge invariance for the anisotropic Calder\'on problem on smooth compact connected Riemannian manifolds, which arises in the case of disjoint data. We refer to Definition \ref{Gauge0} for the definition of the gauge invariance that led to the formulation \textbf{(Q4)} of the anisotropic Calder\'on conjecture. Moreover, we managed to construct some explicit counterexamples to uniqueness for \textbf{(Q4)}, \textit{i.e.} modulo this gauge invariance, within the class $(M,g)$ of cylindrical manifolds with two ends equipped with a warped product metric. This was done in Theorem \ref{NonUniquenessQ3} for Schr\"odinger operators in dimensions $\geq 2$ and in Theorem \ref{NonUniquenessQ4} for the usual anisotropic Calder\'on problem in dimensions $\geq 3$. The latter counterexamples to uniqueness rely crucially on the fact that the boundary of $(M,g)$ has more than one connected component and that the Dirichlet and Neumann data are measured on distinct connected components of the boundary. This can easily be seen from the expression (\ref{DN2}) of the associated DN map. On the one hand, the expression of the partial DN map when $\Gamma_D, \Gamma_N$ belong to the same connected component of $\partial M$ depends essentially on the Weyl-Titchmarsh functions (\ref{WT}).
On the other hand, the expression of the partial DN map when $\Gamma_D, \Gamma_N$ do not belong to the same connected component of $\partial M$ depends essentially on the characteristic function (\ref{Char}). The latter contains much less information than the former (this fact is encoded in the Borg-Marchenko theorem, see \cite{Be, Bo1, Bo2, ET, FY, GS, KST}) and allows us to construct the above-mentioned counterexamples when $\Gamma_D$ and $\Gamma_N$ belong to different connected components of $\partial M$. Finally, we stress the fact that if $\Gamma_D$ and $\Gamma_N$ were disjoint but belonged to the same connected component of $\partial M$, then we would have uniqueness for the anisotropic Calder\'on problem \textbf{(Q3)} and thus also for \textbf{(Q2)} modulo the gauge invariance (see Remark \ref{WT-vs-Char}). Therefore, we see that the connectedness or non-connectedness of the boundary $\partial M$ plays a critical role in the anisotropic Calder\'on problem with disjoint data. More precisely, we conjecture: \\ \noindent \textbf{(Q5)}: \emph{Let $M$ be a smooth compact connected manifold with smooth boundary $\partial M$ and let $g,\, \tilde{g}$ be smooth Riemannian metrics on $M$. Let $\Gamma_D, \Gamma_N$ be any open sets of $\partial M$ such that $\Gamma_D \cap \Gamma_N = \emptyset$ and suppose that $\lambda \in \R$ does not belong to $\sigma(-\Delta_g) \cup \sigma(-\Delta_{\tilde{g}})$. \\ 1. If $\partial M$ is connected and $\Lambda_{g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda)$, then $g = \tilde{g}$ up to the gauge invariances: \begin{itemize} \item (\ref{Inv-Diff}) in any dimension, \item (\ref{Inv-Conf}) if $\dim M = 2$ and $\lambda = 0$, \item (\ref{Gauge}) if $\dim M \geq 3$ and $\overline{\Gamma_D \cup \Gamma_N} \ne \partial M$. \end{itemize} 2.
If $\partial M$ is not connected, then there exist metrics $g$ and $\tilde{g}$ not related by one of the above gauge invariances for which $\Lambda_{g,\Gamma_D, \Gamma_N}(\lambda) = \Lambda_{\tilde{g},\Gamma_D, \Gamma_N}(\lambda)$, at least when $\Gamma_D$ and $\Gamma_N$ belong to distinct connected components of the boundary.} \\ \vspace{0.8cm} \noindent \textbf{Acknowledgements}: The authors would like to warmly thank Yves Dermenjian for suggesting the crucial role of the transformation law of the Laplacian under conformal scaling in the gauge invariance for the Calder\'on problem with disjoint data, and also Gilles Carron for his help in solving the nonlinear PDE of Yamabe type encountered in Sections \ref{1} and \ref{3}. \\
\section{Introduction} \label{sec:intro} Exceptionally high gas-to-stellar mass ratio systems are of particular interest in extragalactic astronomy as they represent one extreme of galaxy formation, namely some of the lowest mass objects that succeed in forming any stars. Blind radio surveys of neutral hydrogen (H\,{\sc i}) have uncovered a plethora of gas-rich systems that have few, or perhaps no, stars \citep{Saul+2012,Adams+2013,Taylor+2013,Cannon+2015}. However, distinguishing those which may be genuine extreme low-mass dwarf galaxies from other classes of objects \citep{Cannon+2015}, such as tidal debris and high velocity clouds \citep{Adams+2016}, is a challenging process owing to the faintness of any associated stellar counterpart to these objects \citep[e.g.][]{Janesh+2019}, as well as confusion with foreground Milky Way H\,{\sc i} \ emission, which often dominates the velocity range where candidates are expected to be detectable. However, attempts to distinguish these objects have led to surprising discoveries, most notably SECCO~1 \citep[also called AGC~226067;][]{Adams+2013,Bellazzini+2015,Sand+2015,Adams+2015,Beccari+2017,Sand+2017,Bellazzini+2018} and AGC~226178 \citep{Cannon+2015,Junais+2021,Jones+2022}. These are two young, blue, extremely low-mass ($M_\ast \sim 10^5$~\Msol), gas-rich, metal-rich, actively star-forming stellar systems in the Virgo cluster. AGC~226178 has a gas-to-stellar mass ratio $(1.4M_\mathrm{HI}/\mathrm{M_\ast})\sim1000$, while SECCO~1 has a ratio of $\sim$150.\footnote{Here a factor of 1.4 is used to account for helium in the gas mass.} The properties of both systems imply that they formed via in situ star formation (SF) in gaseous debris stripped from a much larger object. In the case of AGC~226178, the likely parent object has been identified as the nearby galaxy VCC~2034, to which it is connected via a tenuous, low column density, 70~kpc-long H\,{\sc i} \ bridge \citep{Jones+2022}. 
However, it is unclear whether this gas was stripped by a high-speed tidal encounter, or by ram pressure from the intracluster medium (ICM). In the case of SECCO~1, despite it being relatively close to the Virgo cluster center, it is still sufficiently isolated that its origin is uncertain, and multiple possible parent objects have been suggested \citep{Sand+2017,Bellazzini+2018}. As alluded to by \citet{Sand+2017} and \citet{Jones+2022}, these two objects are not unique but instead appear to be part of a larger population of similar objects in Virgo. SECCO~1 and AGC~226178 were originally identified through their H\,{\sc i} \ line emission, thereby guaranteeing gas richness. However, with the latest and deepest wide field imaging surveys it is possible to visually identify objects in the Virgo cluster with similar optical and UV properties, though not necessarily equivalently gas-rich. In this work, we present comprehensive observations of a sample of isolated, blue stellar systems in the Virgo cluster as part of a campaign to understand their physical properties and origins. These additional candidate objects, along with AGC~226067/SECCO~1 and AGC~226178, were followed up with Hubble Space Telescope (HST) F606W and F814W imaging with the Advanced Camera for Surveys (ACS), and H\,{\sc i} \ observations with the Jansky Very Large Array (VLA) and the Green Bank Telescope (GBT). Additional observations with the MUSE (Multi Unit Spectroscopic Explorer) integral field spectrograph on the VLT (Very Large Telescope) are presented in a companion paper, \citet{Bellazzini+2022}, hereafter \citetalias{Bellazzini+2022}. The sample identification is described in \S\ref{sec:sample} and their follow-up observations in \S\ref{sec:obs}. The results, H\,{\sc i} \ and stellar masses, star formation rates (SFRs), and metallicity measurements are presented in \S\ref{sec:results}.
In \S\ref{sec:discuss} we search for possible points of origin of these objects and we discuss potential formation scenarios in \S\ref{sec:formation}. Finally, in \S\ref{sec:fate} \& \S\ref{sec:future} we discuss the fate of these objects and future directions of investigation, before drawing our conclusions in \S\ref{sec:conclude}. We adopt 16.5~Mpc \citep{Mei+2007} as the distance to the Virgo cluster throughout. \section{Target identification} \label{sec:sample} We performed a visual search for isolated, blue stellar systems, similar in optical appearance to SECCO~1, using the $\sim$100~deg$^2$ of NGVS \citep[Next Generation Virgo cluster Survey,][]{Ferrarese+2012} $ugi$ imaging of the Virgo cluster, along with GALEX \citep[Galaxy Evolution Explorer,][]{Martin+2005} UV imaging when available. Characteristic systems display an over-density of compact blue sources with strong associated UV emission. They also lack a diffuse red component typical of Virgo dwarf galaxies, even when they have ongoing SF. Partial results from this search were presented in \citet{Sand+2017}. In total, five isolated, blue stellar system candidates (or BCs), which we number 1--5, were identified. All five were followed up with observations with HST, the VLA, and MUSE/VLT. The coordinates of these five targets are listed in Table \ref{tab:BCs}, and their locations relative to the Virgo cluster are shown in Figure \ref{fig:BC_locs}. The object we refer to as BC3 is an independent re-identification (based on optical appearance) of the H\,{\sc i}-selected object AGC~226178 from the ALFALFA survey \citep{Haynes+2011,Cannon+2015}. This object has already been studied in detail \citep{Cannon+2015,Junais+2021,Jones+2022} and is the BC most similar to SECCO~1. As discussed in the remainder of this paper, we are now confident that four of the five BCs are genuine SECCO~1 analogs.
\begin{table} \centering \caption{BC coordinates and H\,{\sc i} \ velocities} \begin{tabular}{cccc} \hline \hline Object & R.A. & Dec. & $v_\mathrm{HI}/\mathrm{km\,s^{-1}}$\\ \hline BC1 & 12:39:02.0 & +12:12:16.7 & \\ BC2$^\ddag$ & 12:44:27.9 & +12:37:13.4 & \\ BC3 & 12:46:42.5 & +10:22:04.8 & 1581 \\ BC4 & 12:26:25.7 & +14:23:12.2 & \\ BC5 & 12:26:30.9 & +15:10:26.2 & \\ SECCO1 & 12:21:53.9 & +13:27:37.0 & $-142^\dag$ \\ \hline \end{tabular} \tablenotetext{}{Columns: (1) object name; (2 \& 3) coordinates (J2000) of the main body of each object; (4) heliocentric velocity of H\,{\sc i} \ emission \citep{Haynes+2011}. $^\dag$Value for the main body from \citet{Adams+2015}. $^\ddag$BC2 is a spurious object (\S\ref{sec:morph}). } \label{tab:BCs} \end{table} \section{Observations and reduction}\ \label{sec:obs} After the initial identification of our target BCs using NGVS and GALEX we had little information about their properties except that they were similar to SECCO~1 in optical appearance (extremely blue, faint and clumpy) and that their UV emission indicated some recent or ongoing SF. We therefore pursued a three-pronged observational strategy to uncover their nature: 1) HST imaging to better understand their detailed morphology and stellar populations; 2) Observations with MUSE/VLT to measure their redshifts via the H$\alpha$ line and obtain metallicity measurements; 3) VLA D-array and GBT observations to search for any associated H\,{\sc i} \ line emission and quantify their neutral gas content. \subsection{HST observations} Each of the five candidates was observed with ACS in the F606W and F814W filters as part of program 15183 (PI: D.~Sand). Each target was observed for a total of 2120~s and 2180~s in the two filters respectively, except BC4, which was observed for 2000~s in each filter. \texttt{DOLPHOT}'s \citep{Dolphin2000,dolphot} ACS module was used to align the exposures and perform point source photometry of the resolved stellar population. 
The dust maps of \citet{Schlegel+1998} and $R_\mathrm{F606W}$ and $R_\mathrm{F814W}$ values of \citet{Schlafly+2011} were used to correct for Galactic extinction at the position of each source. Stars were selected from the resulting \texttt{DOLPHOT} catalog following a similar approach to \citet{Jones+2022}. Briefly, we select all point-like (type 1 and 2) objects with no photometry flags from the \texttt{DOLPHOT} source catalog. We removed sources with greater than 1 mag of crowding (combined, from the two filters). Finally, the combined (in quadrature) absolute sharpness value was enforced to be below $\sqrt{0.075}$ and a roundness threshold of less than 1 (in both filters) was set. Completeness limits were also estimated as in \citet{Jones+2022}, based on artificial stars added evenly over both images in each field. The measured 90\% completeness limits were fit with the combination of a horizontal line and a one-sided parabola (e.g. Figure \ref{fig:BC1_HST_GLX}, bottom panels), and the 50\% limits were fit with straight lines. In addition to the point source photometry in \S\ref{sec:stellarmasses} we also perform aperture photometry on the combined, drizzled images in each band to measure the integrated magnitudes and colors of the systems. This was performed using the \texttt{Astropy} package \texttt{Photutils} \citep{photutils} and a combination of manually-constructed circular and elliptical apertures enclosing the various clumps of each source. In each case the sky background was subtracted based on the median value within an annulus (circular or elliptical) surrounding the aperture. \subsection{MUSE/VLT observations} To robustly identify H\,{\sc ii} \ regions, obtain optical redshifts and basic kinematics, and measure metallicities, we observed all BCs with MUSE/VLT \citep{Bacon+2014}. These observations were carried out as part of program 0101.B-0376A (P.I: R. Mu\~noz). 
They covered the spectral range 4650-9300~\AA \ and a $\simeq1.0\arcmin \times 1.0\arcmin$ field centered on each target. These observations are discussed in detail in \citetalias{Bellazzini+2022}, and here we present an outline of the data reduction process. The reduction and analysis of these data followed \citet{Beccari+2017}. The individual dithered exposures were calibrated separately and then combined into a single stacked data cube for each target. H$\alpha$ (and integrated light) peaks at least 3$\sigma$ above the background were identified using \texttt{Sextractor} \citep{Bertin+1996}. The flux of each of these detected sources was measured using a 1.5\arcsec \ (radius) aperture and a 1D spectrum (with a step size of 1.25~\AA) of each source was produced. Redshifts were measured for all detected H$\alpha$ clumps, and line fluxes for H$\beta$, [N{\sc ii}], and [O{\sc iii}] were measured wherever possible \citepalias[Tables 2 \& 3 of][]{Bellazzini+2022}. In Section \ref{sec:MUSE_results} we summarize the findings of these measurements and their implications for the origins of BCs. \subsection{GALEX data} \label{sec:galex_data} We searched for archival NUV and FUV data from GALEX at the location of each BC (and SECCO~1). Most of the BCs are within the footprint of the GALEX Ultraviolet Virgo Cluster Survey \citep[GUViCS,][]{Boselli+2011}, however, this is not always the deepest data available. For BCs 2, 3, 4 and SECCO~1 we use tiles from GUViCS (typically $\sim$1.6~ks in both bands), but no FUV data are available for either BC4 or SECCO~1. For BC1 we use tiles ``Virgo\_Epoque\_MOS05" ($\sim$16~ks) and ``NGA\_Virgo\_MOS04" ($\sim$1.6~ks) for NUV and FUV, respectively. For BC5 we use tiles ``NGA\_NGC4421" ($\sim$2~ks) and ``GI1\_079012\_Group5" ($\sim$1.6~ks). In \S\ref{sec:SFRs} we perform aperture photometry on these GALEX tiles and estimate the SFR in each object. 
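The magnitude-to-SFR step of \S\ref{sec:SFRs} can be sketched numerically. The block below is a minimal illustration under stated assumptions, not the measurement pipeline used here: the FUV calibration constant ($-9.51$, the value commonly quoted from the \citet{Iglesias-Paramo+2006} calibration with luminosities in solar units) and the example magnitude are placeholders.

```python
# Illustrative sketch only (not the paper's pipeline): convert an
# extinction-corrected GALEX AB magnitude into an SFR via an absolute
# magnitude and a luminosity in solar units (M_bol,sun = 4.74, as in the text).
import numpy as np

D_MPC = 16.5      # adopted Virgo distance (Mei et al. 2007)
M_BOL_SUN = 4.74  # bolometric solar absolute magnitude
CAL_FUV = -9.51   # assumed FUV calibration: log SFR = log(L/Lsun) + CAL

def uv_sfr(m_uv, cal=CAL_FUV, d_mpc=D_MPC):
    """Return the SFR in Msun/yr implied by an apparent UV magnitude."""
    dist_mod = 5.0 * np.log10(d_mpc * 1e6) - 5.0   # m - M, for a distance in pc
    log_lum = (M_BOL_SUN - (m_uv - dist_mod)) / 2.5
    return 10.0 ** (log_lum + cal)

sfr = uv_sfr(21.0)   # e.g. a faint FUV source at the Virgo distance
```

Under these assumptions, a source with $m_\mathrm{FUV} \simeq 21$ at the Virgo distance corresponds to an SFR of a few $\times 10^{-4}$~\Msol\,yr$^{-1}$, the low-SFR regime relevant for such faint systems.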
The flux within each aperture was measured from the corresponding background-subtracted GALEX tile. Uncertainties were estimated by placing 10000 circular apertures (equal in area to the target apertures) randomly across the GALEX tile after masking the brightest 1\% of pixels. Magnitudes were calculated following the conversions of \citet{Morrissey+2007} and extinction corrections used $R_\mathrm{NUV} = 8.20$ and $R_\mathrm{FUV} = 8.24$ \citep{Wyder+2007}. Finally, these magnitude measurements were converted to SFRs following \citet{Iglesias-Paramo+2006}, using 4.74 as the bolometric solar absolute magnitude. \subsection{VLA observations} BC3 was observed previously as part of the ALFALFA ``Almost Dark" galaxies sample \citep[VLA program 13A-028, PI: J.~Cannon;][]{Cannon+2015}. These data were obtained in D-configuration, have a channel width of 7.81~kHz ($\sim$1.65~\kms), a total bandwidth of 8~MHz, and a total on-source integration time of approximately 1.6~h. These data were re-reduced by \citet{Jones+2022} using standard reduction methods in the Common Astronomy Software Applications package \citep[\texttt{CASA},][]{CASA}. The final imaging used Briggs robust=0.5 weighting to provide a compromise between sensitivity and angular resolution for the detected H\,{\sc i} \ emission. The channels were averaged and re-binned to a velocity resolution of 5~\kms. The remaining 4 candidates were observed in the VLA program 18A-185 (PI: K.~Spekkens). Each target was observed on-source for approximately 1.5~h in D-configuration. The initial observations of both BC1 and BC2 suffered from severe interference and were subsequently re-observed, greatly improving the data quality. As the redshifts of the objects were not known prior to the observations, we used a 32~MHz bandwidth (from 1394.416 to 1426.416~MHz, or approximately $-$1250 to 5500~\kms) to search for any H\,{\sc i} \ emission associated with the optical candidates.
This range was split up into 3072 channels of 10.42~kHz ($\sim$2.2~\kms), which were averaged over 4 channels during the data reduction, resulting in a velocity resolution of 8.8~\kms. Initially, the entire bandwidth of the data was reduced to search for H\,{\sc i} \ emission. However, after the redshifts for all candidates were obtained from MUSE spectroscopy with the VLT, a narrow sub-band was re-reduced (spanning $\sim$1000~\kms), allowing for improved local continuum subtraction. The reduction was performed with a \texttt{Python} and \texttt{CASA}-based pipeline that will be presented in full in Jones et al. (in prep.). The most severe interference was flagged manually, and the \texttt{tfcrop} flagging algorithm was also run. For BC1 we also used \texttt{rflag}, after initial calibrations, as there were no bright lines which might be mistaken for interference (other than Milky Way emission). Imaging used Briggs robust=2 weighting in order to maximize our detection capabilities. Refer to Table \ref{tab:VLAobs} for details of the beam sizes and rms noise for each observation. \begin{table} \centering \caption{VLA data summary} \begin{tabular}{cccc} \hline \hline Object & Beam size & $\sigma_\mathrm{rms}/\mathrm{mJy \, beam^{-1}}$ & $\Delta v_\mathrm{chan}/$\kms \\ \hline BC1 & 60\arcsec$\times$51\arcsec & 1.9 & 8.8 \\ BC2 & 63\arcsec$\times$55\arcsec & 1.1 & 8.8 \\ BC3 & 56\arcsec$\times$45\arcsec & 1.2 & 5 \\ BC4 & 65\arcsec$\times$54\arcsec & 0.9 & 8.8 \\ BC5 & 65\arcsec$\times$54\arcsec & 1.0 & 8.8 \\ \hline \end{tabular} \tablenotetext{}{Columns: (1) object name; (2) synthesized beam size; (3) rms noise; (4) velocity resolution.} \label{tab:VLAobs} \end{table} \subsection{GBT observations} The large surface area and low system temperature of the GBT allow it to obtain much deeper H\,{\sc i} \ spectra than the VLA, providing a more stringent constraint on any neutral gas content.
However, after the redshifts of the candidates were known (from their H$\alpha$ emission) it was determined that only BC1 was suitable for single-dish follow-up, as BC4 and BC5 would be confused with Milky Way emission, BC3 had already been strongly detected with the VLA \citep{Cannon+2015}, and the HST imaging of BC2 indicated that it was a background galaxy group (\S\ref{sec:morph}). A director's discretionary time proposal (21A-433, PI: M.~Jones) was submitted to the GBT and BC1 was observed for a total of 3~h using $\mathrm{ON-OFF}$ position switching. The data were reduced using standard \texttt{GBTIDL} procedures. The resulting spectrum has an rms noise of 0.25~mJy (within $\pm300$~\kms \ of the redshift of BC1) after smoothing to a velocity resolution of 30~\kms. \section{Results} \label{sec:results} \begin{figure} \centering \includegraphics[width=\columnwidth]{BC_locations_in_Virgo_Xray.pdf} \caption{Locations of BCs (and SECCO~1) in the direction of Virgo overlaid on a ROSAT mosaic of hard (0.4-2.4~keV) X-ray emission \citep{Brown+2021}. Virgo members and possible members \citep[from the Extended Virgo Cluster Catalog;][]{Kim+2014} are plotted as faint black, unfilled circles. The area of each circle is proportional to the total $r$-band flux of the galaxy it represents. The BCs are shown with blue symbols (see legend) and SECCO~1 is shown as a purple cross. The symbol for BC2 is unfilled as this object is spurious (see \S\ref{sec:morph}).
The approximate virial radius \citep[taken to be 1.7~Mpc,][]{Kashibadze+2020} of the cluster is shown by a large dashed black circle.} \label{fig:BC_locs} \end{figure} \begin{figure*} \centering \includegraphics[width=\columnwidth]{BlueCand1_HST_RGB_aperture.pdf} \includegraphics[width=\columnwidth]{BlueCand1_GALEX_colour_aperture.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand1_CMD.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand1_CMD_BKGD.pdf} \caption{\textit{Top-left}: False color HST F606W+F814W image of BC1. The dashed green circle shows the region used to construct the CMD. At the distance of the Virgo cluster (16.5~Mpc) 20\arcsec \ is 1.6~kpc. \textit{Top-right}: GALEX NUV+FUV image showing the same field. \textit{Bottom-left}: CMD of the point sources within the aperture shown. The dashed line indicates the 90\% completeness limit and the dotted line the 50\% limit. The error bars indicate the typical uncertainties (from artificial star tests) in the F814W magnitude and F606W-F814W color, as a function of F814W magnitude. \textit{Bottom-right}: The CMD of a background region of the HST image away from bright sources. The aperture used was equal in area to the target aperture.} \label{fig:BC1_HST_GLX} \end{figure*} \begin{figure*} \centering \includegraphics[width=\columnwidth]{BlueCand2_HST_RGB_aperture.pdf} \includegraphics[width=\columnwidth]{BlueCand2_GALEX_colour_aperture.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand2_CMD.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand2_CMD_BKGD.pdf} \caption{\textit{Top-left}: False color HST F606W+F814W image of BC2. The dashed green circle shows the region used to construct the CMD. Unlike the other BCs, this HST image appears to indicate that this is a background galaxy group. \textit{Top-right}: GALEX NUV+FUV image showing the same field. There is only very weak NUV emission associated with BC2. \textit{Bottom}: CMD within the aperture shown (left) and a blank field aperture (right).
The dotted and dashed lines, and error bars, are the same as described in Figure \ref{fig:BC1_HST_GLX}, bottom panels. This CMD appears to be consistent with background, supporting the conclusion that this is a spurious blue stellar system candidate.} \label{fig:BC2_HST_GLX} \end{figure*} \begin{figure*} \centering \includegraphics[width=\columnwidth]{BlueCand3_HST_RGB_aperture.pdf} \includegraphics[width=\columnwidth]{BlueCand3_GALEX_colour_aperture.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand3_CMD.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand3_CMD_BKGD.pdf} \caption{\textit{Top-left}: False color HST F606W+F814W image of BC3. The dashed green ellipse and circles show the regions used to construct the CMD. \textit{Top-right}: GALEX NUV+FUV image showing the same field. \textit{Bottom}: CMD within the apertures shown (left) and a blank field aperture (right). See Figure \ref{fig:BC1_HST_GLX} caption for further details.} \label{fig:BC3_HST_GLX} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.5\columnwidth]{BlueCand4_HST_RGB_large_aperture.pdf} \includegraphics[width=1.5\columnwidth]{BlueCand4_GALEX_colour_aperture.pdf} \\ \includegraphics[width=0.5\columnwidth]{BlueCand4_CMD.pdf} \includegraphics[width=0.5\columnwidth]{BlueCand4_CMD_BKGD.pdf} \caption{\textit{Top}: False color HST F606W+F814W image of BC4. The dashed green ellipses and circles show the regions used to construct the CMD. \textit{Middle}: GALEX NUV image showing the same field. \textit{Bottom}: CMD within the apertures shown (left) and a blank field aperture (right). 
See Figure \ref{fig:BC1_HST_GLX} caption for further details.} \label{fig:BC4_HST_GLX} \end{figure*} \begin{figure*} \centering \includegraphics[width=\columnwidth]{BlueCand5_HST_RGB_aperture.pdf} \includegraphics[width=\columnwidth]{BlueCand5_GALEX_colour_aperture.pdf} \includegraphics[width=0.49\columnwidth]{BlueCand5_CMD.pdf} \includegraphics[width=0.49\columnwidth]{BlueCand5_CMD_BKGD.pdf} \caption{\textit{Top-left}: False color HST F606W+F814W image of BC5. The dashed green ellipse and circle show the regions used to construct the CMD. The component BC5c was identified via H$\alpha$ emission \citepalias{Bellazzini+2022} to be at the same velocity as the main body, but may only be a single cluster of stars. \textit{Top-right}: GALEX NUV+FUV image showing the same field. \textit{Bottom}: CMD within the apertures shown (left) and a blank field aperture (right). See Figure \ref{fig:BC1_HST_GLX} caption for further details.} \label{fig:BC5_HST_GLX} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{BC1_GBT_VLA_spec.pdf} \caption{H\,{\sc i} \ spectra of BC1 from the GBT (top) and the VLA (bottom). The VLA spectrum was extracted using an aperture equal in area to the synthesized beam. The vertical dashed lines correspond to the H$\alpha$ velocity measurement from MUSE. No significant signal is detected in either spectrum.} \label{fig:BC1_VLA_GBT_specs} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{BlueCand3_VLAHIspec_masked.pdf} \caption{H\,{\sc i} \ spectra of BC3 from ALFALFA and the VLA. The ALFALFA spectrum is the public spectrum from \citet{Haynes+2018} and the VLA spectrum was created using the extended source mask of \citet{Jones+2022}. The vertical dashed line corresponds to the H$\alpha$ velocity measurement from MUSE. BC3 is detected at high signal-to-noise ratio in both spectra and both agree with the H$\alpha$ velocity. 
However, the VLA measures a somewhat lower flux, with most of the missing emission lying on the approaching side of the line profile. This likely indicates the presence of extended emission below the surface brightness limit of the VLA observations \citep{Cannon+2015,Jones+2022}.} \label{fig:BC3_VLA_specs} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{BlueCand4_VLAHIspec_narrow.pdf} \includegraphics[width=\columnwidth]{BlueCand5_VLAHIspec_narrow.pdf} \caption{H\,{\sc i} \ spectra in the directions of BC4 (top) and BC5 (bottom) extracted from the VLA H\,{\sc i} \ data cubes within an aperture equal in area to the synthesized beam. The vertical dashed lines correspond to the H$\alpha$ velocity measurements from MUSE. Neither shows a significant H\,{\sc i} \ line signal, although they could be contaminated with Milky Way H\,{\sc i} \ emission. The apparent peak coincident with the H$\alpha$ velocity of BC5 is below 3$\sigma$ and is likely a noise spike.} \label{fig:BC45_VLA_specs} \end{figure} In this section we present the results of our multi-wavelength investigation of BCs, providing a description of their morphology, colors, redshifts, stellar masses, metallicities, and gas content. These physical properties will then be used as the basis for a search for candidate parent objects in the following section and a discussion of potential formation pathways in \S\ref{sec:formation}. We include SECCO~1 in this sample throughout and either use quantities measured in previous work or (re)measure them as needed (e.g. to provide equivalent values across the whole sample). \subsection{Morphology and location} \label{sec:morph} The HST images of the BCs are shown in Figures \ref{fig:BC1_HST_GLX}, \ref{fig:BC2_HST_GLX}, \ref{fig:BC3_HST_GLX}, \ref{fig:BC4_HST_GLX}, \ref{fig:BC5_HST_GLX}, while that for SECCO~1 can be found in \citet{Sand+2017}. 
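Throughout this section, angular sizes are converted to physical scales at the adopted Virgo distance of 16.5~Mpc via the small-angle approximation; a minimal sketch of the conversion:

```python
import math

ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi  # ~206265 arcsec per radian

def arcsec_to_kpc(theta_arcsec, distance_mpc=16.5):
    """Physical size from angular size via the small-angle
    approximation: s = theta [rad] * D."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_mpc * 1.0e3

# e.g. 30 arcsec at 16.5 Mpc -> ~2.4 kpc; 1.5 arcmin -> ~7 kpc
```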
In all cases, except BC2, the BCs appear to be very blue, highly irregular, and frequently broken up into multiple components. Their stellar populations also appear to be partially resolved, with some individual stars discernible. These are almost exclusively blue and likely only represent the youngest, brightest stars, not the underlying stellar population. However, their extremely blue appearance (discussed further in \S\ref{sec:stellarpops} and \S\ref{sec:stellarmasses}) also suggests that any redder, underlying population is likely minimal. The largest single component of any BC is BC3a (Figure \ref{fig:BC3_HST_GLX}, top-left), which is approximately 30\arcsec \ across its major axis (2.4~kpc at the distance of Virgo). However, what constitutes a single component is quite subjective; for example, BC4a, b, and c could justifiably be considered as a single object \citepalias{Bellazzini+2022}. The smallest components (e.g. BC3c, BC5b, BC5c) are less than 5\arcsec \ ($\sim$400~pc) across and may only consist of a single cluster of stars. The six components of BC4 are spread over $\sim$1.5\arcmin \ ($\sim$7~kpc) and may indicate that this is either a very young collection of objects formed in a gas-rich stream, or a somewhat older object that has become gravitationally unbound. Even for the other BCs, which are mostly defined by one or two components, their highly irregular and clumpy structure points to them being extremely low-mass and potentially unbound. Although the individual components of the BCs were identified visually, based primarily on the HST images, nearly all of these clumps have corresponding UV and (usually) H$\alpha$ emission \citepalias{Bellazzini+2022}, which indicates ongoing SF. In the case of the latter, the components are all kinematically associated (see \S\ref{sec:MUSE_results}). In the case of BC2, the HST image (Figure \ref{fig:BC2_HST_GLX}, top-left) is quite distinct from the other BCs and indicates that this is a spurious candidate.
It appears to be a distant background group of galaxies rather than a nearby young object. Unlike the other BCs there is also minimal UV emission (particularly FUV) associated with this candidate (Figure \ref{fig:BC2_HST_GLX}, top-right), and it was undetected in H$\alpha$ by MUSE (\S\ref{sec:MUSE_results}). Furthermore, almost no stars were identified in its CMD (Figure \ref{fig:BC2_HST_GLX}, bottom-left), which is consistent with background (Figure \ref{fig:BC2_HST_GLX}, bottom-right). Henceforth, we will not regard BC2 as a genuine blue stellar system and statements regarding the global properties of BCs should be assumed to include SECCO~1, but not BC2. Figure \ref{fig:BC_locs} shows the locations of the BCs on the sky in relation to Virgo cluster galaxies and the cluster virial radius. BC2 is shown as an unfilled symbol. All of the BCs are within the virial radius of the cluster. However, none are in the very cluster center, within $\sim$2$^\circ$ ($\sim$575~kpc) of M~87 (the central galaxy in the Virgo cluster, Figure \ref{fig:BC_locs}). BC1 is the closest, with a projected separation of approximately 600~kpc. This may indicate that the parent objects of BCs are recent additions to the cluster. \subsection{H$\alpha$ velocities and metallicities} \label{sec:MUSE_results} \begin{table} \centering \caption{Metallicities of BCs} \begin{tabular}{ccccc} \hline \hline Object & $v_{\mathrm{H}\alpha}/\mathrm{km\,s^{-1}}$ & $N_{\mathrm{H}\alpha}$ & $N_\mathrm{O/H}$ & $\langle 12 + \log \mathrm{O/H} \rangle$\\ \hline BC1 & $1117 \pm 6$ & 18 & 2 & $8.35 \pm 0.15$ \\ BC3 & $1584 \pm 4$ & 15 & 5 & $8.29 \pm 0.17$ \\ BC4 & $-60 \pm 19$ & 16 & 6 & $8.73 \pm 0.15$ \\ BC5 & $-74 \pm 5$ & 4 & 2 & $8.70 \pm 0.14$ \\ SECCO1$^\dagger$ & $-153.2 \pm 1.4$ & 33 & 9 & $8.38 \pm 0.11$ \\ \hline \end{tabular} \tablenotetext{}{H$\alpha$ redshift and metallicity measurements from \citetalias{Bellazzini+2022}. 
Columns: (1) object name; (2) mean velocity (and standard deviation) of H$\alpha$ clumps detected with MUSE; (3) number of clumps detected in H$\alpha$; (4) number of clumps detected in H$\alpha$, H$\beta$, [N{\sc ii}], and [O{\sc iii}] (suitable for deriving an O/H estimate); (5) mean oxygen abundance and uncertainties (standard deviation of clumps and scatter in O3N2 calibration, 0.14~dex). $^\dagger$Values from \citet{Beccari+2017}.} \label{tab:metallicity} \end{table} MUSE detected H$\alpha$ emission in all BCs, identifying between 4 and 18 distinct clumps of emission in each object \citepalias{Bellazzini+2022}. The mean velocity (and standard deviation) of these clumps in each source is shown in Table \ref{tab:metallicity}. Only BC3 (and SECCO~1) has a prior velocity from an H\,{\sc i} \ detection (Table \ref{tab:BCs}), which closely matches the H$\alpha$ velocity for that object. All the objects have velocities which are consistent with Virgo cluster membership \citep[$-500 < cz_\odot/\mathrm{km\,s^{-1}} < 3000$, e.g.][]{Mei+2007}, and all are (projected) within the virial radius of the cluster (Figure \ref{fig:BC_locs}). We note that although BC4, BC5, and SECCO~1 all have negative radial velocities, they are in the vicinity of M~86 ($cz_\odot = -224$~\kms), a region of the Virgo cluster where negative radial velocities are common. As described in \citetalias{Bellazzini+2022}, the average oxygen abundance of each BC was estimated from the N2 and O3N2 indices \citep[following][]{Pettini+2004}, with the line fluxes corrected for extinction based on the relative strengths of H$\alpha$ and H$\beta$. The resulting metallicity estimates are shown in Table \ref{tab:metallicity}. All the BCs have extremely high metallicities given their very low stellar masses (\S\ref{sec:stellarpops}), which suggests that they formed from gas pre-enriched in more massive objects.
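The strong-line estimates are straightforward to reproduce: the \citet{Pettini+2004} calibrations map the N2 and O3N2 line-ratio indices directly to oxygen abundance. A minimal sketch (the coefficients are the standard PP04 values; the inputs would be the extinction-corrected MUSE line fluxes):

```python
import math

def metallicity_O3N2(f_oiii5007, f_hbeta, f_nii6584, f_halpha):
    """12 + log(O/H) from the PP04 O3N2 calibration:
    O3N2 = log10( ([OIII]5007/Hbeta) / ([NII]6584/Halpha) )."""
    o3n2 = math.log10((f_oiii5007 / f_hbeta) / (f_nii6584 / f_halpha))
    return 8.73 - 0.32 * o3n2

def metallicity_N2(f_nii6584, f_halpha):
    """12 + log(O/H) from the PP04 N2 calibration:
    N2 = log10([NII]6584/Halpha)."""
    return 8.90 + 0.57 * math.log10(f_nii6584 / f_halpha)
```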
Of particular note are BC4 and 5, both of which are found to be marginally super-solar in metallicity \citep[$12 + \log (\mathrm{O/H})_\odot = 8.69$,][]{Asplund+2009}. The details of the kinematics and metallicity spreads of the clumps within the BCs are discussed in \citetalias{Bellazzini+2022}. \subsection{H\,{\sc i} \ mass \& limits} \label{sec:hi_mass} \begin{table} \centering \caption{H\,{\sc i} \ masses of BCs} \begin{tabular}{ccc} \hline \hline Object & $M_\mathrm{HI}/\mathrm{M_\odot}$ & Telescope\\ \hline BC1 & $<1.6 \times 10^6$ & GBT \\ BC3 & $4.0 \times 10^7$ & Arecibo$^\dag$ \\ BC4 & $<2.9 \times 10^6$ & VLA \\ BC5 & $<3.2 \times 10^6$ & VLA \\ SECCO1 & $1.5 \times 10^7$ & Arecibo$^\ddag$ \\ \hline \end{tabular} \tablenotetext{}{Columns: (1) object name; (2) H\,{\sc i} \ mass or 3$\sigma$ upper limit; (3) telescope for the stated value. $^\dag$\citet{Haynes+2011}. $^\ddag$\citet{Adams+2015}.} \label{tab:HImasses} \end{table} SECCO~1 is the prototype BC, first detected via its H\,{\sc i} \ emission \citep{Adams+2013}, having a total H\,{\sc i} \ mass of $1.5\times10^{7}$~\Msol \ \citep{Adams+2015}. The low resolution of H\,{\sc i} \ observations means that the main and secondary body of SECCO~1 appear as one source in H\,{\sc i}. However, \citet{Adams+2015} also identified an additional H\,{\sc i}-only component slightly to the north as well as another potential optical component, also to the north, but not coincident with any H\,{\sc i}. Owing to the similar optical/UV appearance of BCs 1-5 it was anticipated that they would also be H\,{\sc i}-rich, which motivated our VLA follow-up program. In Figures \ref{fig:BC1_VLA_GBT_specs}, \ref{fig:BC3_VLA_specs}, and \ref{fig:BC45_VLA_specs} we present the VLA (and GBT) H\,{\sc i} \ spectra of BCs 1, 3, 4, and 5 (the spectrum of BC2 is discussed in Appendix \ref{sec:BC2spec}). 
The VLA H\,{\sc i} \ spectra were extracted from the data cubes using an aperture equal to the synthesized beam size, centered on the location of the main body of each BC (Table \ref{tab:BCs}). In addition to these spectra the data cubes were visually inspected channel by channel and \texttt{SoFiA} was run to search for significant emission features that might be extended spatially or spectrally. Like SECCO~1, BC3 was known a priori to contain a significant H\,{\sc i} \ reservoir as it was originally identified in the ALFALFA survey \citep{Haynes+2011}. However, among BCs 1-5 this is the only object that was detected in our VLA observations. Based on the VLA spectrum (Figure \ref{fig:BC3_VLA_specs}, extracted using the \texttt{SoFiA} source mask), and an assumed distance of 16.5~Mpc, BC3 has an H\,{\sc i} \ mass of $\log M_\mathrm{HI}/\mathrm{M_\odot} = 7.3$. This value is 0.3~dex lower than that measured by ALFALFA \citep{Haynes+2011}, suggesting that the VLA has not recovered all the extended flux \citep[this was also noted by][]{Cannon+2015,Jones+2022}. \citet{Jones+2022} show that when viewed in the ALFALFA data cube (which has better column density sensitivity for extended emission than the VLA observations) the H\,{\sc i} \ emission coincident with BC3 appears to connect to the galaxy VCC~2034, approximately 70~kpc to the SW. This galaxy is almost certainly the source of the gas that formed BC3 \citep[discussed further in \S\ref{sec:discuss}, and][]{Jones+2022}. If the other BCs had comparable H\,{\sc i} \ masses to BC3 and SECCO~1, then they would have been detected in our VLA observations, but none were (Figures \ref{fig:BC1_VLA_GBT_specs} \& \ref{fig:BC45_VLA_specs}). The slight caveat is that, because of their low radial velocities, BC4 and BC5 might be blended with MW H\,{\sc i} \ emission. The spectrum of BC5 (Figure \ref{fig:BC45_VLA_specs}, bottom) also appears to have a peak coincident with the H$\alpha$ velocity of BC5.
However, this peak is below 3$\sigma$ and extremely narrow, and is likely a noise spike. All the BCs that are undetected in H\,{\sc i} \ have optical redshift measurements from MUSE H$\alpha$ observations (BC3 and SECCO~1 do also) and the available H\,{\sc i} \ data can therefore be used with confidence to set upper limits on their H\,{\sc i} \ masses. For BC4 and BC5 the deepest data are those from the VLA, which have rms noise values of 0.9 and 1.0~mJy/beam (at 8.8~\kms \ resolution), respectively, at the velocities of the H$\alpha$ emission. Assuming that any H\,{\sc i} \ emission would fit within one synthesized beam (Table \ref{tab:VLAobs}) and would have a velocity width of 30~\kms, then these equate to 3$\sigma$ upper limits of $\log M_\mathrm{HI}/\mathrm{M_\odot} < 6.46$ and 6.51, respectively, assuming a fiducial distance of 16.5~Mpc in both cases. For BC1 the GBT follow-up spectrum is by far the more sensitive. With an rms of 0.28~mJy (at 30~\kms \ resolution) this gives the 3$\sigma$ upper limit as $\log M_\mathrm{HI}/\mathrm{M_\odot} < 6.2$, again assuming a fiducial distance of 16.5~Mpc. These limits are listed in Table \ref{tab:HImasses}.
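These limits follow from the standard relation $M_\mathrm{HI}/\mathrm{M_\odot} = 2.356\times10^{5}\,(D/\mathrm{Mpc})^{2}\int S\,dv/(\mathrm{Jy\,km\,s^{-1}})$, with the 3$\sigma$ integrated-flux limit for a top-hat line of width $W$ at channel width $\Delta v$ taken as $3\sigma_\mathrm{rms}\sqrt{W\,\Delta v}$. A minimal sketch reproducing the quoted values (assuming these conventions; small differences arise from rounding):

```python
import math

def hi_mass_msun(int_flux_jy_kms, distance_mpc):
    """M_HI = 2.356e5 * D^2 * S_int (D in Mpc, S_int in Jy km/s)."""
    return 2.356e5 * distance_mpc**2 * int_flux_jy_kms

def hi_mass_limit_log(rms_mjy, chan_kms, width_kms=30.0,
                      distance_mpc=16.5, nsigma=3.0):
    """log10 of the n-sigma H I mass upper limit for a non-detection,
    assuming a top-hat line: S_lim = nsigma * rms * sqrt(W * dv)."""
    s_lim = nsigma * rms_mjy * 1.0e-3 * math.sqrt(width_kms * chan_kms)
    return math.log10(hi_mass_msun(s_lim, distance_mpc))

# BC4: rms = 0.9 mJy at 8.8 km/s -> log M_HI < ~6.46
# BC1 (GBT): rms = 0.28 mJy at 30 km/s -> log M_HI < ~6.2
```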
\subsection{Stellar populations} \label{sec:stellarpops} \begin{table} \centering \caption{Magnitudes and stellar mass estimates} \begin{tabular}{cccc} \hline \hline Object & F814W & F606W-F814W & $M_\ast/\mathrm{M_\odot}$ \\ \hline BC1 & $20.29 \pm 0.38$ & $0.08 \pm 0.41$ & $\sim5 \times 10^{4}$ \\ BC3 & $20.23 \pm 0.15$ & $-0.23 \pm 0.17$ & $\sim5 \times 10^{4}$ \\ BC4 & $19.86 \pm 0.26$ & $-0.26 \pm 0.29$ & $\sim1 \times 10^{5}$ \\ BC5 & $20.56 \pm 0.10$ & $0.06 \pm 0.12$ & $\sim5 \times 10^{4}$ \\ SECCO1 & $20.39 \pm 0.41$ & $-0.23 \pm 0.46$ & $\sim4 \times 10^{4}$ \\ \hline \end{tabular} \tablenotetext{}{Columns: (1) object name; (2) F814W magnitude (extinction corrected); (3) F606W-F814W color (extinction corrected); (4) stellar mass estimate (\S\ref{sec:stellarpops}).} \label{tab:Mstar} \end{table} \begin{figure*} \centering \includegraphics[width=0.49\columnwidth]{BlueCand1_CMD_isochrones_16.5Mpc.pdf} \includegraphics[width=0.49\columnwidth]{BlueCand3_CMD_isochrones_16.5Mpc.pdf} \includegraphics[width=0.49\columnwidth]{BlueCand4_CMD_isochrones_16.5Mpc.pdf} \includegraphics[width=0.49\columnwidth]{BlueCand5_CMD_isochrones_16.5Mpc.pdf} \caption{Reproduced CMDs of BCs 1, 3, 4, and 5, with \texttt{PARSEC} isochrones for different stellar population ages overlaid, assuming a distance of 16.5~Mpc to all objects. The isochrones for BC1 and 3 use a metallicity of $[M/H] = -0.35$, which is approximately the value for both objects. For BC4 and 5 the value is $[M/H] = 0.05$. In the latter case it is possible that the two objects are associated, while the similar metallicities of BC1 and 3 are likely by chance. 
In the leftmost panel we also plot an isochrone indicating where an old (10~Gyr) RGB population would reside in these diagrams, below the completeness limit in the lower right corner and barely visible.} \label{fig:isochrones} \end{figure*} The HST (and GALEX) images and associated CMDs for all BCs are shown in Figures \ref{fig:BC1_HST_GLX}, \ref{fig:BC2_HST_GLX}, \ref{fig:BC3_HST_GLX}, \ref{fig:BC4_HST_GLX}, and \ref{fig:BC5_HST_GLX}. The blue optical colors and UV emission indicate that the BCs have predominantly young, blue stellar populations. Furthermore, the detection of H$\alpha$ in all BCs indicates that the youngest stars must be $\leq$10~Myr old. As discussed by \citet{Jones+2022}, the CMD of BC3 (Figure \ref{fig:BC3_HST_GLX}, bottom) is most similar to that of SECCO~1 \citep{Sand+2017}, apparently made up almost entirely of blue main sequence (MS) and helium burning stars (F814W~$\gtrsim$~24.5, F606W-F814W~$\lesssim$~0) and red helium burning (RHeB) stars (23.5~$\lesssim$~F814W~$\lesssim$~26.5 and F606W-F814W~$\gtrsim$~0.6~mag), with almost no candidates for red giant branch (RGB) stars, highlighting the young age of the population. BC1's CMD (Figure \ref{fig:BC1_HST_GLX}, bottom) is similar, but the brightest RHeB stars are more numerous and fainter than in BC3. These slight differences likely indicate that BC1 is somewhat older than BC3 \citep[as RHeB peak brightness is a function of age, e.g.][]{McQuinn+2011}, which would be consistent with its non-detection in H\,{\sc i}, if sufficient time has passed for its neutral gas to have been evaporated or stripped. The CMDs of BC4 and BC5 (Figures \ref{fig:BC4_HST_GLX} \& \ref{fig:BC5_HST_GLX}, bottom panels) are again similar, but the RHeB stars are even fainter and continue to the completeness limit. This likely indicates that BC4 and BC5 are the oldest objects in the sample. The color spread between the blue and RHeB stars is also wider for BC4 and BC5 than for any of the other BCs. 
In Figure \ref{fig:isochrones} we overplot \texttt{PARSEC} isochrones \citep[PAdova and TRieste Stellar Evolution Code,][]{Bressan+2012} on the CMDs of each object for a variety of stellar population ages. As pointed out by \citet{Jones+2022}, the faintest RHeB stars in BC3 appear consistent with the 50~Myr isochrone, likely indicating that the stellar population in this object cannot be much older than 50~Myr. In the case of the other BCs, as mentioned above, their CMDs imply that their oldest stars are somewhat older (although they must still have formed young stars within the past 10~Myr, as they contain H\,{\sc ii} \ regions), but the proximity of the RHeB stars to the completeness limit prevents them from being used to estimate ages. The isochrones also explain the different color gap between the bluest and reddest stars in the CMDs of BC4 and BC5 versus BC1 and BC3. This spread is approximately reproduced in the isochrones and is a function of the higher metallicity of these two objects, which, despite their feeble appearance, is marginally super-solar (Table \ref{tab:metallicity}). The CMD of SECCO~1 was presented and discussed in \citet{Sand+2017} and \citet{Bellazzini+2018}. The general appearance is similar to the other BCs. The spread between the reddest and bluest stars is most similar to BC1 and BC3, again a reflection of the similar metallicities of these objects. \citet{Sand+2017} also simulate a mock stellar population and argue that SECCO~1 must be younger than $\sim$50~Myr based on the luminosity of the RHeB stars. This is roughly consistent with our estimate of 60~Myr in \S\ref{sec:stellarmasses} based on the integrated F814W magnitude and SFR of SECCO~1. If we compare the CMDs of the BCs to the low-mass, gas-rich dwarf Leo~P \citep{McQuinn+2015b}, then we see a striking difference. In addition to the young blue stars in Leo~P there is also a clear, well-populated RGB at a similar magnitude that is entirely absent from the BC CMDs.
This clear RGB is the result of the old underlying population in Leo~P, but for extremely young stellar populations (which BCs appear to be) no RGB population exists. Furthermore, any RGB stars would be significantly less luminous than the young stars that dominate the CMDs of the BCs. However, the proximity of Leo~P ($D=1.6$~Mpc) means that the depth of its CMD is a mismatch for those of the BCs, making it an unfair comparison, despite it being one of the most similar objects known in terms of SFR and stellar mass (but notably not metallicity). A fairer comparison can be made by considering a blue, irregular dwarf at the distance of Virgo, in this case VCC~1816 (KDG~177, $M_V = -15.2$), which has HST observations of similar depth to those of the BCs. The CMD of this galaxy \citep[Figure 4 of][]{Karachentsev+2014} shows both a blue population (at F606W-F814W$\sim$0) and a red population (at F606W-F814W$\sim$1), similar to BCs. The former is likely made up of blue helium burning stars and young MS stars, as in the BCs, while the latter is likely made up of a combination of asymptotic giant branch and RHeB stars. The number of stars in the CMD increases towards fainter F814W magnitudes (near F606W-F814W$\sim$1), probably indicating the presence of a well-populated RGB near the completeness limit, which is lacking in the BC CMDs. A similar lack of evidence for any RGB was noted for SECCO~1 by \citet{Sand+2017} and \citet{Bellazzini+2018}, but in comparison to a red dwarf spheroidal in Virgo, rather than a star-forming dwarf more in line with the appearance of BCs. At the distance of the Virgo cluster the tip of the red giant branch (TRGB) is expected to be at F814W~$\sim$~27~mag \citep[e.g.][]{Jiang+2019}, which would be borderline detectable with our HST observations. However, at high metallicities the TRGB becomes less defined and RGB stars become redder, both of which would impede the detectability of an RGB in our observations (Figure \ref{fig:isochrones}, leftmost panel).
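The expected apparent TRGB magnitude quoted above is simply the distance modulus applied to the absolute TRGB magnitude; a quick sketch (assuming a typical metal-poor value of $M_\mathrm{F814W}^\mathrm{TRGB} \approx -4.0$, which, as noted, shifts at high metallicity):

```python
import math

def distance_modulus(distance_mpc):
    """m - M = 5 log10(D / 10 pc)."""
    return 5.0 * math.log10(distance_mpc * 1.0e6 / 10.0)

M_TRGB_F814W = -4.0  # assumed typical (metal-poor) TRGB absolute magnitude

# At D = 16.5 Mpc, mu ~ 31.1, so the TRGB appears at F814W ~ 27.1,
# near the completeness limit of these observations.
trgb_apparent = M_TRGB_F814W + distance_modulus(16.5)
```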
Thus, it is not possible to conclusively rule out there being an RGB based on the CMDs of BCs. Despite this, we still view the existence of an underlying old population as extremely unlikely in these objects. They are extremely blue, to the point where stellar population models struggle to reproduce their colors, even when assuming very young ages (\S\ref{sec:stellarmasses}). In addition, BCs were specifically selected (\S\ref{sec:sample}) to be lacking any visible diffuse red component in the deep NGVS images. Together these points make it highly unlikely that there could be any significant underlying old population of stars, even though the CMDs themselves are insufficiently deep to reach any potential RGB. Overall, the CMDs of the BCs can be characterized as having a population of stars made up exclusively of young blue main sequence and (blue and red) helium burning stars, with no evidence of an RGB. The luminosities of the RHeB stars suggest that the youngest BCs (BC3 \& SECCO~1) are around 50~Myr old. Finally, the remarkably high metallicities measured with MUSE (\S\ref{sec:MUSE_results}) appear to be consistent with the color difference between the reddest and bluest helium burning branch stars in the CMDs. \subsection{Star formation rates} \label{sec:SFRs} SFRs (Table \ref{tab:SFRs}) were estimated for each candidate by measuring the NUV and FUV fluxes within the same apertures used to produce their CMDs (\S \ref{sec:stellarpops}), as described in \S\ref{sec:galex_data}. An additional uncertainty of 15\% was added to the error budget as this is the stated accuracy of the conversion in \citet{Iglesias-Paramo+2006}. The SFRs of all BCs fall in the range $-3.5 < \log \mathrm{SFR/M_\odot \, yr^{-1}} < -3$ and are generally quite consistent between NUV and FUV (where both images are available), likely indicating that their SFRs have not varied strongly over the past $\sim$100~Myr (or that they are younger than this).
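The conversion chain from an extinction-corrected GALEX magnitude to a SFR (apparent magnitude $\rightarrow$ absolute magnitude $\rightarrow$ luminosity in solar units via $M_\mathrm{bol,\odot} = 4.74$ $\rightarrow$ SFR) can be sketched as follows; the final calibration constants ($\log \mathrm{SFR} = \log L/L_\odot - 9.51$ for FUV and $-9.33$ for NUV) are our reading of the \citet{Iglesias-Paramo+2006} relations and should be treated as illustrative assumptions:

```python
import math

M_BOL_SUN = 4.74  # bolometric solar absolute magnitude

def log_sfr_uv(m_uv, distance_mpc=16.5, band="FUV"):
    """log10(SFR / Msun yr^-1) from an extinction-corrected GALEX
    magnitude. Calibration constants (log SFR = log L/Lsun - C)
    assumed to be C = 9.51 (FUV) and 9.33 (NUV)."""
    mu = 5.0 * math.log10(distance_mpc * 1.0e6 / 10.0)  # distance modulus
    abs_mag = m_uv - mu
    log_l = (M_BOL_SUN - abs_mag) / 2.5                 # L in solar units
    c = {"FUV": 9.51, "NUV": 9.33}[band]
    return log_l - c
```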
Although matching NUV and FUV SFR estimates could be the result of a bursty SF history over the past $\sim$100~Myr, with an average rate that equals that of the past $\sim$10~Myr, it seems highly unlikely that this could be the case for all BCs, and a constant SFR is a more natural explanation for this finding. The UV-based SFRs are also roughly consistent with the SFRs estimated from the integrated H$\alpha$ fluxes \citepalias{Bellazzini+2022}, with the slight exception of BC1 (for which the SFR may be beginning to decline), again supporting the assertion that the SFRs appear to have been relatively constant in the recent past. We note that had we adopted a different conversion scheme for our UV-based SFR estimates \citep[e.g.][]{McQuinn+2015a} then our SFR$_\mathrm{FUV}$ values could be up to 0.6~dex higher. However, given the general consistency between the H$\alpha$ and UV-based SFR estimates, the conversion scheme we originally selected appears appropriate for these objects. This range of SFRs is similar to that of the faintest dwarf irregular galaxies in the Local Volume \citep{Lee+2009}. However, the extremely low stellar masses of BCs (\S\ref{sec:stellarmasses}) make it difficult to directly compare to equivalent star-forming dwarf galaxies, as almost none are known at these masses. For example, even Leo~P \citep{Giovanelli+2013} has a stellar mass almost an order of magnitude higher than most BCs, but its SFR is around an order of magnitude lower \citep[$\log \mathrm{SFR/M_\odot \, yr^{-1}} = -4.4$,][]{McQuinn+2015b}. Leo~T \citep{Irwin+2007} is of comparable stellar mass to BCs \citep[$M_\ast = 1.4\times10^5$~\Msol,][]{Weisz+2014}, but is apparently no longer forming stars, or is between episodes \citep{Kennicutt+2008}. 
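As an illustrative sketch of how the tabulated UV fluxes map onto SFRs, the following applies the widely used \citet{Kennicutt1998} FUV calibration rather than the \citet{Iglesias-Paramo+2006} conversion actually adopted in this work, so the result should land a few tenths of a dex high, consistent with the discussion above. The GALEX FUV effective wavelength and the 16.5~Mpc distance are assumed:

```python
import math

# Assumed constants (illustrative): GALEX FUV effective wavelength and the
# canonical Virgo distance used throughout this paper.
LAM_FUV_CM = 1516e-8       # GALEX FUV effective wavelength [cm]
D_CM = 16.5e6 * 3.0857e18  # 16.5 Mpc in cm
C_CGS = 2.998e10           # speed of light [cm/s]

def log_sfr_fuv(f_lambda):
    """Kennicutt (1998) FUV SFR from a flux density in erg/s/cm^2/A.

    SFR [Msun/yr] = 1.4e-28 * L_nu [erg/s/Hz]. This calibration differs
    from the Iglesias-Paramo et al. (2006) relation used in the paper,
    so it should overestimate the quoted SFRs by a few tenths of a dex.
    """
    f_nu = f_lambda * 1e8 * LAM_FUV_CM**2 / C_CGS  # erg/s/cm^2/Hz
    l_nu = 4.0 * math.pi * D_CM**2 * f_nu          # erg/s/Hz
    return math.log10(1.4e-28 * l_nu)

# BC3: FUV flux of 4.02 in table units of 1e-16 erg/s/cm^2/A
print(round(log_sfr_fuv(4.02e-16), 2))  # -2.85, vs -3.18 in the SFR table
```

The $\sim$0.3~dex offset from the tabulated value is in the expected sense, given that alternative FUV conversions can run up to 0.6~dex higher than the one adopted here.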
If we put the SFRs of BCs in terms of their specific SFRs (sSFR) then they fall in the range $-8.2 < \log (\mathrm{SFR}/M_\ast)/\mathrm{yr^{-1}} < -7.7$, which would place them significantly higher than average, but within the scatter, of sSFR for low-mass, gas-rich, field galaxies \citep{Huang+2012,James+2015}. \begin{table*} \centering \caption{UV fluxes and SFR estimates} \begin{tabular}{cccccccccc} \hline \hline Object & & $\mathrm{SNR}_\mathrm{NUV}$ & NUV flux & $\log \frac{\mathrm{SFR_{NUV}}}{\mathrm{M_\odot\,yr^{-1}}}$ & $\mathrm{SNR}_\mathrm{FUV}$ & FUV flux & $\log \frac{\mathrm{SFR_{FUV}}}{\mathrm{M_\odot\,yr^{-1}}}$ & $\log \frac{\mathrm{SFR_{H\alpha}}}{\mathrm{M_\odot\,yr^{-1}}}$ & $\log \frac{M_\mathrm{HI}/\mathrm{SFR}}{\mathrm{yr}}$ \\ \hline BC1 & & 10.1 & $9.10 \pm 0.90$ & $-3.25 \pm 0.08$ & 13.7 & $1.98 \pm 0.14$ & $-3.42 \pm 0.07$ & $-$3.9 & $<9.5$ \\%& $5.5\pm0.2$ & \\ BC3 & & & $17.4 \pm 0.7$ & $-3.03 \pm 0.07$ & & $4.02 \pm 0.09$ & $-3.18 \pm 0.07$ & $-$3.1 & 10.3 \\%& $31.3\pm0.5$ & \\ & a & 23.8 & $14.1 \pm 0.6$ & $-3.13 \pm 0.07$ & 43.9 & $3.36 \pm 0.08$ & $-3.26 \pm 0.07$ & & \\%& & \\ & b & 9.3 & $2.78 \pm 0.30$ & $-3.83 \pm 0.08$ & 13.6 & $0.57 \pm 0.04$ & $-4.03 \pm 0.07$ & & \\%& & \\ & c & 4.6 & $0.57 \pm 0.12$ & $-4.52 \pm 0.11$ & 5.6 & $0.10 \pm 0.02$ & $-4.77 \pm 0.10$ & & \\%& & \\ BC4 & & & $13.1 \pm 0.7$ & $-3.10 \pm 0.07$ & & & & $-$3.2 & $<9.6$ \\%& $23.0\pm0.2$ & \\ & a & 16.5 & $4.07 \pm 0.25$ & $-3.60 \pm 0.07$ & & & & & \\%& & \\ & b & 4.7 & $1.04 \pm 0.22$ & $-4.20 \pm 0.11$ & & & & & \\%& & \\ & c & 11.8 & $2.37 \pm 0.20$ & $-3.84 \pm 0.08$ & & & & & \\%& & \\ & d & 6.1 & $1.49 \pm 0.24$ & $-4.04 \pm 0.10$ & & & & & \\%& & \\ & e & 7.4 & $3.56 \pm 0.48$ & $-3.66 \pm 0.09$ & & & & & \\%& & \\ & f & 4.1 & $0.55 \pm 0.13$ & $-4.47 \pm 0.13$ & & & & & \\%& & \\ BC5 & & & $6.13 \pm 0.31$ & $-3.48 \pm 0.07$ & & $1.20 \pm 0.05$ & $-3.69 \pm 0.07$ & $-$3.8 & $<10.0$ \\%& $5.7\pm0.2$ & \\ & a & 20.1 & $5.57 \pm 0.28$ & $-3.52 \pm 
0.07$ & 27.4 & $1.13 \pm 0.04$ & $-3.72 \pm 0.07$ & & \\%& & \\ & b & 2.8 & $0.38 \pm 0.14$ & $-4.68 \pm 0.16$ & 2.8 & $0.06 \pm 0.02$ & $-4.96 \pm 0.16$ & & \\%& & \\ & c & 3.7 & $0.18 \pm 0.05$ & $-4.99 \pm 0.12$ & 1.3 & $0.01 \pm 0.01$ & $-5.67 \pm 0.33$ & & \\%& & \\ SECCO1 & & & $10.4 \pm 0.8$ & $-3.14 \pm 0.07$ & & & & $-$3.2$^\dagger$ & $10.3$ \\%& & \\ & MB & 11.1 & $6.72 \pm 0.06$ & $-3.33 \pm 0.08$ & & & & & \\%& & \\ & SB & 7.6 & $3.63 \pm 0.48$ & $-3.60 \pm 0.09$ & & & & & \\%& & $-3.15^\dagger$\\ \hline \end{tabular} \tablenotetext{}{Columns: (1) object name and sub-component (where relevant); (2) SNR of NUV emission (see Section \ref{sec:SFRs} for details); (3) NUV flux in units of $10^{-17} \; \mathrm{erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$; (4) NUV-based SFR estimate; (5) SNR of FUV emission; (6) FUV flux in units of $10^{-16} \; \mathrm{erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$; (7) FUV-based SFR estimate; (8) H$\alpha$ SFR estimates from the integrated H$\alpha$ flux of each object in MUSE \citepalias{Bellazzini+2022} following the conversion of \citet{Kennicutt1998}; (9) gas consumption timescale using the larger of the NUV and FUV SFR estimates (we note that this quantity is distance independent). For uniformity, all objects are assumed to be at 16.5~Mpc. $^\dagger$H$\alpha$-based SFR estimate from \citet{Beccari+2017,Beccari+2017err}.} \label{tab:SFRs} \end{table*} \subsection{Stellar masses} \label{sec:stellarmasses} The integrated F606W and F814W magnitudes of the BCs were measured from the co-added images in each filter. The same apertures indicated in Figures \ref{fig:BC1_HST_GLX}, \ref{fig:BC3_HST_GLX}, \ref{fig:BC4_HST_GLX}, and \ref{fig:BC5_HST_GLX} were used to measure the total magnitude of each source after masking the few clear background galaxies contained within these apertures. Galactic extinction corrections were made using the dust maps of \citet{Schlegel+1998} and the reddening $R_\nu$ values of \citet{Schlafly+2011}. 
The final magnitudes are listed in Table \ref{tab:Mstar}. The uncertainties were estimated by placing 10 apertures across the full ACS FoV (avoiding bright stars and background galaxies) and using the standard deviation of the counts to approximate the uncertainty in the counts of each BC. In young stellar populations the emitted light is dominated by the youngest stars, but the mass is generally dominated by the oldest, most numerous stars. As BCs are apparently such young objects, the correct mass-to-light ratio to use is highly uncertain, and would depend strongly on the assumed age of each object. Thus, widely used mass-to-light ratio prescriptions \citep[e.g.][]{Zibetti+2009,Taylor+2011} cannot be used with confidence for such a young, irregular, low-mass, and metal-rich stellar population. We therefore adopt an unconventional strategy for estimating the stellar masses of the BCs. If the current SFRs are assumed to be reasonable representations of the SFRs over the (short) lifetimes of the BCs then the total stellar mass is simply the age of each object times its SFR. In order to estimate the age we build up the integrated F814W magnitude of a stellar population forming stars at a constant rate (in 10~Myr steps), based on the \texttt{PARSEC} \citep{Bressan+2012} population models. When the artificial F814W magnitude equals the measured magnitude, we obtain an age estimate for the BC in question (to the nearest 10~Myr). To estimate the ages, and subsequently the stellar masses (age $\times$ SFR), we used the NUV SFR measurements (Table \ref{tab:SFRs}) for each object, as these are available for all objects and reflect a slightly longer SF timescale. For BC1, BC3, and SECCO1 a metallicity of $[M/H] = -0.35$ was used, and $[M/H] = 0.05$ for BC4 and BC5. These values approximately correspond to their observed O/H values (Table \ref{tab:metallicity}). The age estimates\footnote{We note that these age estimates should be treated with caution. 
Ideally the full SF histories of the BCs would be calculated, but the currently existing data are inadequate to do this. These ages represent an approximation to the age of the oldest stellar population in each BC, based on the assumption of a roughly constant SFR.} for BCs 1, 3, 4, 5, and SECCO~1 are 90, 50, 110, 160, and 60~Myr, and the resulting stellar mass estimates are shown in Table \ref{tab:Mstar}. A significant caveat to this approach is that the \texttt{PARSEC} models are incapable of correctly reproducing the colors of the BCs \citep[as noted by][]{Sand+2017}. Although this issue is not fully addressed in this work, we chose to rely on the F814W magnitudes as the discrepancy is assumed to be most severe for the youngest, bluest stars. Hence the redder band is expected to be somewhat less impacted. We also note that there are encouraging trends in the values that we obtained, which at least indicate internal consistency. For example, BC3 and SECCO~1 are the only BCs detected in H\,{\sc i} \ and we estimate these are by far the youngest objects --- a finding that the CMDs of the BCs would also seem to support. The estimated ages of BC4 and BC5 are also the oldest and it seems plausible (\S\ref{sec:discuss_BC4}) that the two share a common origin. In addition, \citet{Junais+2021} independently estimated the stellar mass of BC3 by fitting the spectral energy distribution (from photometry in $ugriz$, H$\alpha$, NUV, and FUV) of each clump with a single stellar population, via a grid search over metallicity and population age, and found a near-identical value ($\sim 5 \times 10^4$~\Msol). 
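The constant-SFR age estimate described above can be sketched schematically as follows. The fading law for the F814W luminosity of each 10~Myr block of star formation is made up for illustration (a stand-in for the \texttt{PARSEC} tables), and the solar F814W absolute magnitude is likewise an assumed round number, not a value from this work:

```python
import math

DIST_PC = 16.5e6   # Virgo distance assumed in the paper [pc]
M_SUN_F814W = 4.1  # assumed solar absolute magnitude in F814W

def block_lum(t_myr):
    """Hypothetical F814W luminosity [Lsun] of the stars formed by
    1 Msun/yr of SF sustained for 10 Myr, seen t_myr after the block
    formed. Normalization and fading exponent are illustrative only."""
    return 2e9 * (max(t_myr, 10.0) / 10.0) ** -0.7

def constant_sfr_age(m814_obs, sfr, step=10, max_age=1000):
    """Age [Myr] at which a constant-SFR population reaches m814_obs."""
    for age in range(step, max_age + step, step):
        # total luminosity = sum over all 10 Myr blocks formed so far,
        # each faded according to its current age
        lum = sum(block_lum(age - t0) * sfr for t0 in range(0, age, step))
        m_abs = M_SUN_F814W - 2.5 * math.log10(lum)
        m_app = m_abs + 5.0 * math.log10(DIST_PC / 10.0)
        if m_app <= m814_obs:
            return age  # population is now as bright as observed
    return max_age
```

Brighter integrated magnitudes (or lower SFRs) map onto older ages, which is the sense in which the measured F814W magnitude and the NUV SFR together fix the age, and hence the stellar mass (age $\times$ SFR).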
\section{Points of origin} \label{sec:discuss} \begin{figure} \centering \includegraphics[width=\columnwidth]{BC_LV_Z.pdf} \caption{V-band luminosity versus metallicity (relative to solar) for BCs, Local Group dwarfs \citep{Kirby+2013}, Local Volume dwarfs \citep{Berg+2012}, TDGs \citep{Duc+1998,Weilbacher+2003,Duc+2007,Croxall+2009,Lee-Waddell+2018}, and extremely metal-poor galaxies \citep[XMPs,][]{Skillman+2013,McQuinn+2015b,Hirschauer+2016,Hsyu+2017,Izotov+2019,McQuinn+2020}. Metallicity is measured either from Fe/H or O/H as indicated in the legend. Both BCs and TDGs sit well above the luminosity--metallicity relation for dwarf galaxies of equivalent luminosities.} \label{fig:LV_Z} \end{figure} \begin{figure*} \centering \includegraphics[width=\columnwidth]{BC1_NN_2deg.pdf} \includegraphics[width=\columnwidth]{BC3_NN_2deg.pdf} \includegraphics[width=\columnwidth]{BC4_NN_2deg.pdf} \includegraphics[width=\columnwidth]{BC5_NN_2deg.pdf} \caption{All VCC, EVCC, and ALFALFA neighbors of BC1, 3, 4, and 5 (top-left to bottom-right) in a 4~sq~deg region centered on the BC (blue star in each panel) and within $\pm$500~\kms \ of their H$\alpha$ velocity (Table \ref{tab:metallicity}). The area of each circular marker corresponds to the apparent magnitude (in $g$-band) of the galaxy it represents. The color of the markers corresponds to the galaxy's $g-i$ color, with the narrow transition from blue to red occurring at $g-i=0.9$ (green). Objects circled with a dashed blue line were detected in ALFALFA and thus contain significant quantities of H\,{\sc i} \ (note that BC3 itself was detected in ALFALFA, but is not circled here). At the distance of Virgo 30\arcmin \ corresponds to $\sim$140~kpc. 
The scale bar in the bottom-right panel applies to all panels.} \label{fig:BC_NN} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{SECCO1_NN_2deg.pdf} \caption{As for Figure \ref{fig:BC_NN}, but for SECCO~1.} \label{fig:SECCO1_NN} \end{figure} The observations presented above reveal the surprising result that although all the BCs are actively forming stars, only BC3 and SECCO~1 have a detectable quantity of H\,{\sc i}. However, the typical values of the SFR estimates are of the order of $10^{-3} \; \mathrm{M_\odot \, yr^{-1}}$ (Table \ref{tab:SFRs}), which means that even below our H\,{\sc i} \ detection limits (\S\ref{sec:hi_mass}) these objects could still have gas consumption timescales in excess of 1~Gyr (although such long timescales are not uncommon for low-mass galaxies). In addition, the high metallicities of BCs (Table \ref{tab:metallicity}) clearly point to them all having formed from pre-enriched gas that originated in a larger galaxy, as has been shown explicitly to be the case for BC3 \citep{Jones+2022}, where the gas trail can still be traced back to its parent galaxy. Figure \ref{fig:LV_Z} compares the metallicities of BCs to other objects of comparable luminosity. As expected, tidal dwarf galaxies (TDGs) are similar to BCs, being of equivalent metallicity, but typically somewhat higher luminosities. This is a point to which we will return in the following section (\S \ref{sec:formation}), but BCs are likely too low-mass to be TDGs. On the opposite end of the metallicity spectrum we compare BCs to a small selection of extremely metal-poor galaxies (XMPs). These objects can appear superficially similar to BCs. Both are usually extremely blue, have clumpy morphologies, and the faintest XMPs are the same luminosity as BCs. However, their metallicities could scarcely be more different and the two populations are clearly distinct in origin. 
If we take the metallicities of the BCs and use them to infer a stellar mass from the mass--metallicity relation (MZR) then this should provide a reasonable estimate of the type of galaxies from which they formed. Using the MZR of \citet{Andrews+2013}, the metallicities of the BCs imply that their parent objects could have stellar masses anywhere in the range $8.3 \lesssim \log M_\ast/\mathrm{M_\odot} \lesssim 10.1$ (we note that because the MZR is an asymptotic relation, the lower bound of this range is much better constrained than the upper bound). This covers a broad range from dwarf galaxies almost to Milky Way-like galaxies \citep[$\log M_{\ast,\mathrm{MW}}/\mathrm{M_\odot} = 10.8$,][]{Licquia+2015}, but all are massive enough that, unless particularly low surface brightness (LSB), they should mostly be included in existing catalogs of Virgo cluster galaxies. Furthermore, it is also reasonable to assume that the parent objects are gas-bearing (or were in the recent past), as they must have been able to supply the gas that formed the young stellar populations of the BCs. The quoted range is for the average metallicity and does not account for metallicity variations within the parent galaxies, which could potentially expand the range if the material that formed a BC originated from a region that strongly deviated from the average metallicity. \subsection{Search for points of origin} We performed a detailed search considering all known Virgo members in the vicinity of (and at a similar velocity to) each BC, paying particular attention to gas-bearing galaxies detected in ALFALFA \citep{Haynes+2018}. Even though most BCs are undetected in H\,{\sc i}, they appear to have formed from stripped gas and must have contained gas in the recent past as they have all formed stars recently. Thus, nearby, gas-rich galaxies are good candidate progenitor systems. The galaxies neighboring each BC are shown in Figures \ref{fig:BC_NN} \& \ref{fig:SECCO1_NN}. 
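Returning to the MZR-based parent-mass estimate above, the inference can be sketched by inverting an asymptotic MZR of the \citet{Andrews+2013} form; the coefficients below (asymptotic abundance, turnover mass, and low-mass slope) are assumed representative values, not fits performed in this work:

```python
import math

# Assumed asymptotic-MZR coefficients, representative of the
# Andrews & Martini (2013) form: Z = Z_ASM - log10(1 + (M_TO/M*)^GAMMA)
Z_ASM = 8.798     # asymptotic 12 + log(O/H)
LOG_M_TO = 8.901  # log10 of the turnover stellar mass [Msun]
GAMMA = 0.640     # low-mass slope

def parent_logmass(z_obs):
    """Invert the MZR: log stellar mass whose mean abundance is z_obs."""
    if z_obs >= Z_ASM:
        raise ValueError("abundance at/above the asymptote: mass unbounded")
    return LOG_M_TO - math.log10(10.0 ** (Z_ASM - z_obs) - 1.0) / GAMMA
```

Because the relation flattens at high mass, an abundance close to the asymptote (as for BC4 and BC5) pins down only a lower limit on the parent mass, which is why the upper bound of the quoted $8.3 \lesssim \log M_\ast/\mathrm{M_\odot} \lesssim 10.1$ range is much less well constrained than the lower bound.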
Here we present the conclusions of our search, but the full details can be found in Appendix \ref{sec:origin_search}. Within the entire 4~deg$^2$ region shown around BC1 in Figure \ref{fig:BC_NN} (top-left), there is only one galaxy that contains H\,{\sc i} \ gas and is sufficiently massive to have formed a BC, NGC~4579. This galaxy is approximately 140~kpc (30\arcmin) to the SW of BC1 and separated from it in velocity by $\sim$400~\kms. Thus, BC1 would have needed a large ejection velocity ($>$500~\kms \ in total) for NGC~4579 to be its parent object. Furthermore, other than a slightly H\,{\sc i}-deficient disk\footnote{\citet{Chung+2009} define the H\,{\sc i}-deficiency of a galaxy's disk as the logarithmic decrement between the observed and expected mean H\,{\sc i} \ surface density within the optical disk, where for the latter they use the average value for isolated galaxies from \citet{Haynes+1984}.}, NGC~4579 shows little sign of recent disturbance in either its H\,{\sc i} \ or CO morphology and kinematics \citep{Chung+2009,Brown+2021}. Finally, NGC~4579 appears to be too metal-rich \citep[$12+\log(\mathrm{O/H}) = 8.87 \pm 0.05$,][]{DeVis+2019} to match the metallicity of BC1 ($\langle 12+\log(\mathrm{O/H}) \rangle = 8.35 \pm 0.15$). Thus, NGC~4579 does not seem to be a viable candidate point of origin for BC1, and the genuine point of origin must presumably be beyond the region shown in Figure \ref{fig:BC_NN} ($>280$~kpc away), but we are unable to identify any strong candidates this far away. BC3 (also called AGC~226178) was discussed in detail by \citet{Jones+2022}. This is an extremely complicated field with multiple foreground systems projected on it. BC3's nearest apparent neighbor is NGVS~3543 (also called AGC~229166), which \citet{Junais+2021} argued was a LSB galaxy at the same distance as BC3. 
However, based on the CMDs produced from HST imaging, \citet{Jones+2022} demonstrated that NGVS~3543 is a foreground object at $\sim$10~Mpc, while BC3 is consistent with being in Virgo at 16.5~Mpc. H\,{\sc i} \ observations with the VLA \citep{Cannon+2015} and Arecibo \citep{Giovanelli+2005,Haynes+2011,Minchin+2019} indicate a possible bridge between BC3 and a pair of galaxies, VCC~2034 and 2037. However, the closer (in projection) of these, VCC~2037, is actually another foreground object at approximately 10~Mpc \citep{Karachentsev+2014}. Thus, VCC~2034 ($cz_\odot = 1507$ \kms), $\sim$70~kpc to the SW, is the likely source of BC3's H\,{\sc i} \ gas. However, \citet{Jones+2022} were unable to determine whether ram pressure or tidal stripping was responsible for removing the gas from VCC~2034. BC4 and BC5 likely formed from the same parent object as they are only separated by 45\arcmin \ on the sky, are at almost the same velocity (Table \ref{tab:metallicity}), have nearly identical metallicity measurements (Table \ref{tab:metallicity}), and have similar age estimates (\S\ref{sec:stellarmasses}). NGC~4419 is in fairly close proximity to both BCs and, based on its estimated stellar mass and metallicity, is likely a close match for the metallicity of these BCs. In addition, there is strong evidence in both H\,{\sc i} \ \citep{Chung+2009} and CO \citep{Brown+2021} that this galaxy is being ram pressure stripped. However, as ram pressure tails only extend in one general direction (in the wake of a galaxy's motion through the ICM) and BC4 is to the south of NGC~4419 and BC5 is to the north, it is extremely unlikely that this is the point of origin of these BCs. The extension of the molecular gas distribution of NGC~4419 is roughly towards the south \citep{Brown+2021}, in the direction of BC4, but away from BC5. 
If NGC~4419 were simultaneously undergoing ram pressure and tidal stripping then it could plausibly have formed both BCs, but there is no evidence of this in the optical, H\,{\sc i}, or CO images. There are a few other gas-bearing galaxies within 1~deg of either BC4 or BC5, but these were all discounted due to a mismatch in properties or because of evidence showing ram pressure stripping in the wrong direction. Upon searching further afield, we immediately identified UGC~7695 (VCC~1450, IC~3476) as a strong candidate. This galaxy is a well-studied example of ram pressure stripping in action \citep{Boselli+2021}, and has a prominent bow-shaped wake extending in the approximate direction of BC4 and BC5. Existing measurements of the metallicity of UGC~7695 \citep{Hughes+2012,Boselli+2021} also approximately match those of BC4 and BC5, making this a promising candidate for their point of origin. As the separation between the BCs and UGC~7695 is approximately 450~kpc in projection, they would presumably require a very large (perhaps over 1000~\kms) ejection velocity, depending on when the stripping episode began. Finally, we consider SECCO~1. Figure \ref{fig:SECCO1_NN} shows the neighbors of SECCO~1 within a 4~deg$^2$ field and $\pm$500~\kms, and demonstrates the extraordinary isolation of this system given that it is within the virial radius of a cluster. The potential points of origin for SECCO~1 have already been discussed extensively by previous works \citep{Adams+2015,Sand+2017,Bellazzini+2018} and we will only review these briefly here. If formed by a stripping event then the most likely point of origin is either the M~86 subgroup of Virgo, about 350~kpc to the SE, which exhibits an enormous complex of stripped gas visible in X-rays and H$\alpha$ \citep[][and references therein]{Sand+2017}, or the group of dwarf galaxies $\sim$200~kpc to the NW \citep{Bellazzini+2018}. 
The proximity of VCC~322, 334, and 319 (compared to the M~86 sub-group) might favor this possibility. However, as we have discussed above, in some cases the separation between parent and BC may be quite large. A stronger argument is that the metallicities of VCC~322 and 334 are a close match to that of SECCO~1 \citep{Bellazzini+2018}. VCC~322 also has a stellar tail that extends in the general direction of SECCO~1. Although these galaxies are less massive than some of the others considered, we note that the apparent parent object of BC3 is also a dwarf galaxy and only a few times more massive than BC3 itself. However, the complex of stripped gas \citep[e.g.][]{Boselli+2018} in the M~86 sub-group is also a good candidate point of origin, for example, if NGC~4438 (VCC~1043, beyond the FoV shown in Figure \ref{fig:SECCO1_NN}) fell towards this sub-group via the location of SECCO~1. In this case, a combination of ram pressure and tidal forces could be responsible for SECCO~1 and the complex of stripped gas. We also noted the blue dwarf irregular IC~3355 near NGC~4438 (in the approximate direction of SECCO~1). However, the lower metallicity of this object \citep[$12 + \log (\mathrm{O/H}) \approx 8.0$,][]{DeVis+2019} suggests that it did not form from stripped gas. \subsection{Other origin scenarios} In the above discussion we considered that BCs were likely formed from a gas-bearing galaxy sufficiently massive to be included in existing catalogs of Virgo cluster galaxies. However, there are a few other scenarios that we briefly consider here. \citet{Junais+2021} suggested that BC3 might have formed from gas stripped from a LSB galaxy. Although this scenario is ruled out for BC3 itself \citep[as the LSB galaxy in question is actually a foreground object,][]{Jones+2022}, it is possible that LSB galaxies have been missed in our search above, as they are frequently absent from established catalogs of cluster members. 
In general this mechanism would imply that the LSB galaxy being stripped would be relatively close to the BC, as it would presumably have a smaller gas reservoir (that would evaporate more rapidly when stripped) than a larger galaxy. Therefore, even though LSB galaxies can be challenging to detect, it seems unlikely that a close neighbor would have been overlooked in multiple cases, and we do not consider this a likely formation pathway, but note that it is difficult to entirely exclude. An additional scenario that we considered is the possibility that the BCs could be dark objects that contained neutral gas for an extended period, but formed essentially no stars until very recently \citep[cf.][]{Kent+2009,Minchin+2019}. This scenario is highly unlikely for two main reasons. Firstly, the search for bona fide dark galaxies that cannot be explained as tidal or spurious objects has turned up few convincing results to date \citep[e.g.][]{Taylor+2013,Cannon+2015}, calling into question whether this scenario is valid. Secondly, the high metallicity of the BCs indicates that there have been multiple prior SF episodes that have enriched their gas, thus ruling out that they could be primordial dark objects \citep[cf.][]{Corbelli+2021}. \section{Formation mechanism} \label{sec:formation} As shown in Figure \ref{fig:LV_Z}, the universally high metallicities of BCs (in relation to their luminosities or stellar masses) mean that the only plausible mechanism for their formation is that they formed from material stripped from a larger galaxy. Their metallicities are a full order of magnitude higher than those of galaxies of the same V-band luminosity, and owing to their extremely young stellar populations, this discrepancy would be even larger if the samples were compared in terms of their stellar masses. Figure \ref{fig:LV_Z} also indicates that BCs are of slightly lower luminosity than TDGs, but of similar metallicity. 
Their stellar and H\,{\sc i} \ masses indicate that BCs are considerably less massive than long-lived TDGs. Despite the strong evidence that BCs formed from stripped material, it is unclear whether they formed through tidal or ram pressure stripping. As is the case for BC3, even when the H\,{\sc i} \ connection to the parent galaxy is still detectable \citep{Jones+2022}, it may not be possible to distinguish between ram pressure and tidal forces as the dominant mechanism stripping the gas. Indeed, it is possible that both are valid mechanisms. Regardless of the mechanism by which gas is stripped to form BCs, the parent objects must be new cluster members. \citet{Oman+2016} and \citet{Oman+2021} simulated the stripping and quenching of galaxies falling into clusters and found that essentially all new members are stripped of their gas and quenched during their first orbit, usually around pericenter passage \citep[see also][for reviews]{Cortese+2021,Boselli+2021b}. Thus, to have sufficient gas to form a BC, the parent galaxy must likely be on its first infall into the cluster. Although we only have a sample of five objects, this would also appear to agree with their spatial distribution, which lies inside the virial radius (where significant stripping is expected) but avoids the cluster center. In the remainder of this section we discuss the evidence for and against tidal and ram pressure formation scenarios, compare BCs to other classes of objects known to form from stripped gas, and give an overview of related simulation results. \subsection{Comparison to TDGs and the need for ram pressure stripping} The typical masses of long-lived TDGs are expected to be over $10^8$~\Msol \ \citep{Bournaud+2006}, as below this mass they generally cannot resist the tidal field of their parent galaxies for long enough to escape as bound objects. 
This threshold mass is considerably larger than any of the BCs, disfavoring a tidal formation pathway, as the most massive BCs (BC3 and SECCO~1) are a few times 10$^7$~\Msol. However, we note that if a lower mass TDG were to be ejected at a particularly high speed, it may be able to survive, as it would more rapidly escape the tidal field of its parent galaxy. We also note that the simulations of \citet{Bournaud+2006} assume that the parent objects of TDGs are broadly MW-like; however, VCC~2034 \citep[the apparent parent object of BC3,][]{Jones+2022} has a stellar mass of only $10^{8.2}$~\Msol. If BCs are tidal in origin then perhaps they formed from lower mass progenitors and are correspondingly lower mass than typical long-lived TDGs. However, such a mechanism could presumably only apply to those BCs (BC1, BC3, and SECCO~1) with slightly lower metallicities that correspond to similarly low-mass progenitors (via the MZR), unless the more metal-rich BCs (BC4 and BC5) formed from recently enriched gas that was stripped before it had sufficient time to mix with the rest of the interstellar medium in the parent galaxy. TDGs are typically ejected at around the circular velocity of the galaxy they originate from \citep{Bournaud+2006}. For a relatively massive galaxy this might mean an ejection velocity of $\sim$300~\kms. Even if this were aligned entirely perpendicular to the line-of-sight, it would still take a TDG $\sim$1~Gyr to traverse 300~kpc in projection. Thus, the isolation of BCs, coupled with their very young stellar populations, is difficult to explain via a tidal formation mechanism. In contrast, in the case of ram pressure stripping the velocity of the galaxies relative to the cluster can exceed 1000~\kms, and galaxies with the largest tails are generally found to be traveling at the highest speeds \citep{Jaffe+2018}. The fact that BCs have been identified in a cluster also points to ram pressure stripping as the most likely formation pathway. 
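The traversal-time estimate above is simple kinematics: for an ejection velocity of $\sim$300~\kms \ directed entirely in the plane of the sky,
\[
t = \frac{d}{v} = \frac{300\,\mathrm{kpc}}{300\,\mathrm{km\,s^{-1}}} = \frac{9.3\times10^{18}\,\mathrm{km}}{300\,\mathrm{km\,s^{-1}}} \approx 3.1\times10^{16}\,\mathrm{s} \approx 1\,\mathrm{Gyr},
\]
whereas at the same speed an object as young as the BCs ($\lesssim$160~Myr) would have traveled only $\sim$50~kpc in projection.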
All gas-rich galaxies falling with sufficient velocity into a cluster are expected to undergo some degree of ram pressure stripping, and while tidal interactions are certainly commonplace in clusters, these most frequently take the form of brief, high speed encounters (e.g. ``galaxy harassment"), which are less likely to strip large quantities of gas \citep[e.g.][]{Smith+2010} than the strong, drawn out interactions in galaxy groups (where TDGs are typically found). However, although the relative velocity between an infalling galaxy and the ICM can easily exceed 1000~\kms, this does not necessarily translate into an equivalent velocity for the stripped gas, as once stripped it does not immediately become stationary relative to the ICM. The ram pressure stripping simulations of \citet{Kapferer+2009} consider gas-rich galaxies falling at 1000~\kms \ relative to an ICM of varying densities (from $1\times10^{-28}$ to $5\times10^{-27}\;\mathrm{g\,cm^{-3}}$). They show that after 500~Myr of stripping the length of the plume of stripped gas in the wake of the parent galaxy is strongly dependent on the density of the surrounding ICM (e.g. their Figure 20). In this case the most distant gas clouds (for $\rho_\mathrm{ICM} \geq 1\times10^{-27}\;\mathrm{g\,cm^{-3}}$) are $\sim$400~kpc from their parent galaxy, indicating that their average relative velocity over the past 500~Myr has been $\sim$800~\kms. This is somewhat slower than the velocity of the parent galaxy relative to the ICM (and would be considerably slower still for lower ICM densities), but is still several times greater than the relative velocities typically expected for TDGs. For comparison, the electron number density of the ICM in Virgo \citep{Nulsen+1995} exceeds $10^{-2}\;\mathrm{cm^{-3}}$ ($\sim2\times10^{-26}\;\mathrm{g\,cm^{-3}}$) near M~87 and at a distance of 230~kpc has decreased to $6\times10^{-4}\;\mathrm{cm^{-3}}$ ($\sim1\times10^{-27}\;\mathrm{g\,cm^{-3}}$). 
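The $\sim$800~\kms \ average velocity quoted above follows directly from the simulated separation and elapsed time,
\[
\bar{v} \approx \frac{400\,\mathrm{kpc}}{500\,\mathrm{Myr}} = \frac{1.23\times10^{19}\,\mathrm{km}}{1.58\times10^{16}\,\mathrm{s}} \approx 780\,\mathrm{km\,s^{-1}},
\]
and the ICM mass densities quoted above can be recovered from the electron number densities via $\rho \approx \mu_e m_p n_e \approx 1.9\times10^{-24}\,n_e \; \mathrm{g\,cm^{-3}}$ (assuming a fully ionized plasma with $\mu_e \approx 1.14$), e.g. $n_e = 6\times10^{-4}\,\mathrm{cm^{-3}}$ corresponds to $\rho \approx 1\times10^{-27}\,\mathrm{g\,cm^{-3}}$.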
Thus ram pressure stripping (in Virgo) provides a more viable mechanism for rapidly achieving large separations between stripped material and its parent object. This is especially true within a few hundred kpc of the cluster center, but could be true almost anywhere within the cluster should an infalling galaxy collide with a dense pocket in the ICM \citep[which are known to exist in other clusters, e.g. ][]{Morandi+2014,Eckert+2015}. The most similar known objects to BCs are ``fireballs" \citep[e.g.][]{Cortese+2007,Yoshida+2008,Hester+2010}, clumps of SF seen in the wake of galaxies being actively ram pressure stripped. Indeed, as discussed by \citet{Bellazzini+2018}, many of the properties of BCs match well with those of fireballs \citep[e.g.][]{Fumagalli+2011}, including their metallicities \citep[e.g. fireballs in the wake of IC~3418 have $8.22 < 12 + \log (\mathrm{O/H}) < 8.38$,][]{Kenney+2014}. Other related objects include SF clumps in filamentary structures in the vicinity of NGC~1275 in the Perseus cluster \citep{Conselice+2001,Canning+2014} and in stripped material in Stephan's Quintet \citep{MendesdeOliveira+2004}. However, BCs are distinct from fireballs and similar objects, in that they are remarkably isolated (e.g. Figures \ref{fig:BC_NN} \& \ref{fig:SECCO1_NN}). Fireballs are found within a few 10s of kpc of their parent galaxy, where there can be little doubt over their point of origin, and where they may still eventually fall back onto their parent galaxy \citep[e.g.][]{Vollmer+2001,Tonnesen+2012}. To form BCs requires a mechanism which can carry neutral gas several 100s of kpc from a galaxy within the hostile environment of a cluster. 
\subsection{Properties of ram pressure stripped gas clumps in simulations} \citet{Lee+2022} argue that many of the molecular gas clouds seen in the tail of ram pressure stripped galaxies \citep[e.g.][]{Moretti+2018,Jachym+2019} could form in situ, by rapid cooling of warm ionized gas \citep[][also suggest a similar mechanism]{Tonneson+2012,Moretti+2020}. The metal-rich gas and absence of young stars in close proximity (unlike within the disks of most gas-rich galaxies) make conditions favorable for radiative cooling. Furthermore, \citet{Muller+2021} argue that magnetic sheathing could help to protect ram pressure stripped gas from evaporation in a cluster environment. In the radiative hydrodynamical simulations of \citet{Lee+2022} SF clumps are seen out to $\sim$100~kpc from the parent galaxy. In their models this SF in the distant tail occurs $\sim$200~Myr after the initial onset of ram pressure stripping, suggesting that the very young ages (50-150~Myr) of the stellar populations of BCs may underestimate how long ago their progenitor gas was stripped (although large velocities $>$500~\kms \ would likely still be required to explain their isolation). Finally they note that although bright H$\alpha$ clumps will only track SF activity, more diffuse H$\alpha$ emission (fainter than $6\times10^{38} \; \mathrm{erg\,s^{-1}\,kpc^{-2}}$) is expected throughout the ram pressure tail due to recombinations in the warm ionized gas. Therefore, H$\alpha$ observations significantly more sensitive than this threshold might be capable of robustly identifying the points of origin of BCs. The nominal 1$\sigma$ surface brightness sensitivity of the VESTIGE survey is $2\times10^{-18}\;\mathrm{ergs\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$ \citep{Boselli+2018}, which for a distance of 16.5~Mpc equates to $3.4\times10^{36} \; \mathrm{erg\,s^{-1}\,kpc^{-2}}$. Thus, such features, should they exist, would be detectable in VESTIGE. 
The hydrodynamic simulations of \citet{Kapferer+2009} also produce numerous gas clumps in the wakes of ram pressure stripped galaxies, but out to much greater distances ($\sim$400~kpc). They find that SF is only induced in these clumps if the wind speed exceeds 500~\kms \ and that it is enhanced by yet stronger ram pressure \citep[but note that][find the opposite trend]{Tonnesen+2012}. Ram pressure therefore appears to be a promising candidate for producing clumps of star forming gas far from their parent galaxies, but do the physical properties of these systems match with those observed in BCs? In the case of the above-mentioned ram pressure simulations, the gas clumps in the wakes of the stripped galaxies are generally presented in terms of gas density rather than masses of distinct clumps. However, in the simulations of \citet{Tonnesen+2021} the masses of such clumps are found to be on the order of $10^5$~\Msol, with the most massive distinct clouds being $\sim$10$^6$~\Msol. This matches quite well with the masses of gas clumps typically found in the immediate wakes of ram pressure stripped galaxies \citep[e.g.][]{Poggianti+2019}, but is more than an order of magnitude less massive than BC3 and SECCO~1. However, the earlier ram pressure stripping simulations of \citet{Kronberger+2008} do form bound objects, analogous to TDGs, with total masses of $\sim$10$^7$~\Msol, but these simulations are now thought to oversimplify fluid instabilities \citep[e.g.][]{Sijacki+2012}, calling into question the details of these results. In terms of metallicity it is generally assumed that, in either the tidal or ram pressure stripping scenario, the BC formed will exhibit the same metallicity as its parent galaxy. However, \citet{Tonnesen+2021} also find that all their simulated ram pressure stripped clouds rapidly mix with the ICM. Thus, they predict that the metallicity of ram pressure stripped clouds should decrease with distance from their parent galaxy. 
This seems to be directly contradicted by the high metallicities of BCs, given their relative isolation and large separations (e.g. $>$300~kpc in some cases) from their apparent points of origin. We also note that \citet{Calura+2020} find that more massive H\,{\sc i} \ clouds (similar to SECCO~1 and BC3) can survive intact for on the order of a Gyr, while moving rapidly through the ICM. It may be that the gas clouds from which BCs form are exceptional objects and not typical of the underlying population of gas clouds that are stripped in ram pressure events. For example, these could be some of the most loosely bound gas that is the first to be stripped, or they could be stripped by a denser clump of the ICM. This could explain the lack of similar objects in the simulations of \citet{Tonnesen+2021}. As a closing remark for this discussion, we also note that despite the apparently simple requirement for ram pressure stripping (i.e. sufficient ram pressure to overcome the gravitational attraction of the gas disk) and extensive efforts to simulate this process in increasing detail, there remain systems that are challenging to explain. In particular, the recent discovery of an enormous (apparently) ram pressure stripped H\,{\sc i} \ tail in a system outside of a cluster, where no significant intergalactic medium could be detected \citep{Scott+2022}, poses difficult questions regarding its origin, and could even suggest that some BC-like objects might exist outside of clusters. \subsection{Summary} In summary, BCs appear to be distinct from both TDGs and fireballs, being too low-mass to be the former, too high-mass to be the latter, and too isolated for either. The isolation of some BCs is the property that is hardest to explain, and would seem to necessitate the large velocities expected in ram pressure stripping events, but not for strong tidal interactions. We therefore favor ram pressure stripping as the most likely formation mechanism of BCs. 
If ram pressure stripping is confirmed to be the formation pathway then BCs can be thought of as ``ram pressure dwarfs'', analogous to tidal dwarfs, but unlikely to survive as bound structures on long timescales. Simulations provide a somewhat conflicting picture of how such objects might form via ram pressure stripping; however, this may be because BCs represent atypical objects that, unlike fireballs, are not formed in large numbers during stripping episodes. Regardless of their formation mechanism, the properties of BCs appear to be distinct from any other stellar systems of which we are aware. \section{Fate and production rate} \label{sec:fate} Based on their morphologies and stellar mass estimates, BCs are unlikely to be gravitationally bound. In the case of BC3 and SECCO~1, their H\,{\sc i} \ content might be sufficient for them to remain bound in the short term \citep[e.g.][]{Calura+2020}, but this neutral gas (the majority of their total mass) will eventually be lost to the ICM. It is challenging to accurately assess the stability of BCs due to their irregular morphologies and because their velocity dispersions are not well resolved by the MUSE observations. However, based on the stellar mass estimates in Table \ref{tab:Mstar} and their apparent sizes, we estimate that if they are extremely dynamically cold (e.g. $\sigma_v < 1$~\kms) then they may be bound, but for $\sigma_v > 2$~\kms \ they would certainly be unbound \citep[using Equation 8 of][]{Calura+2020}. The stability of BCs is considered further in \citetalias{Bellazzini+2022}; however, the most likely scenario is that each BC as a whole is unbound, but individual component clumps or star clusters may be bound, if sufficiently dynamically cold. Thus, in the long term BCs will likely disperse (either as individual stars or star clusters) into the intracluster light. However, even if BCs were to remain bound, without sustained SF they would quickly become essentially undetectable. 
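The boundedness argument above can be illustrated with a simple virial-style estimate. The radius and stellar mass below are placeholders of roughly the right order (not the values from Table \ref{tab:Mstar}), so the resulting critical dispersion is only indicative:

```python
import numpy as np

G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / Msun

# Placeholder values of approximately the right order (illustration only)
M_star = 5e4   # stellar mass, Msun
R = 0.3        # characteristic radius, kpc

# Dispersion above which stellar self-gravity alone cannot bind the
# system: sigma_crit ~ sqrt(G M / R)
sigma_crit = np.sqrt(G * M_star / R)   # km/s
```

For these inputs $\sigma_{\rm crit}$ is below 1~\kms, in line with the 1-2~\kms \ boundary quoted above.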
Currently they are only identifiable at all (in optical/UV) because of their young, blue stars. In \citet{Jones+2022} we argued that as BCs are only expected to be visible for a short period, they must be continually produced in the cluster. We estimated that an object such as BC3 might be detectable for at most 500~Myr, meaning that for five BCs to be visible today, they must be produced at a rate on the order of 1 per 100~Myr. However, given that all five of the known BCs appear to have ages of less than 200~Myr, a visibility window of $\sim$200~Myr might be a more reasonable estimate, making the production rate closer to 1 per 50~Myr. We speculate that this might be a common phenomenon with many newly infalling galaxies producing such objects. With this in mind we also note that, in hindsight, the metallicities of BCs are perhaps not surprising. As mentioned in \S\ref{sec:discuss}, the metallicities of BCs correspond to a stellar mass range $8.3 \lesssim \log M_\ast/\mathrm{M_\odot} \lesssim 10.1$. However, this wide range likely encompasses most galaxies that could possibly form BCs (suggesting it is a common occurrence). Galaxies significantly less massive than $\log M_\ast/\mathrm{M_\odot} = 8.3$ would have H\,{\sc i} \ reservoirs scarcely larger than those of SECCO~1 and BC3, and are thus probably too small to form a BC themselves, whereas galaxies significantly more massive than $\log M_\ast/\mathrm{M_\odot} = 10.1$ are increasingly uncommon and increasingly likely to be gas-poor. \section{Future directions} \label{sec:future} Although the faintness and peculiar properties of BCs make them challenging objects to study, we suggest a few directions where progress could likely be made. We experienced significant difficulties in attempting to identify the point of origin of most of the BCs, likely because it has been several hundred Myr since some of them were first stripped from their parent galaxy. 
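The production-rate arithmetic in this argument is simple enough to state explicitly (numbers as given in the text):

```python
n_visible = 5                        # known BCs
window_max = 500.0                   # Myr, maximum detectability window
rate_max = n_visible / window_max    # 0.01 per Myr -> 1 per 100 Myr

window_age = 200.0                   # Myr, bound from the observed stellar ages
rate_age = n_visible / window_age    # 0.025 per Myr -> 1 per 40 Myr
```

The shorter window gives one BC per 40~Myr, i.e. the "closer to 1 per 50~Myr" figure quoted above.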
However, if ram pressure stripping is the formation pathway of these objects then it is possible that extremely faint H$\alpha$ trails still connect the BCs to their parent objects \citep[e.g.][]{Lee+2022}. A deep H$\alpha$ search around the BCs (or others identified in the future) is a promising approach to robustly identifying their parent objects, which in turn will allow for more detailed study of their formation mechanism. X-ray emission is also frequently found to accompany H$\alpha$ tails of ram pressure stripped galaxies \citep[e.g.][]{Sun+2007,Sun+2021}, and may represent another means to characterize the properties of the stripping events that formed the BCs in cases where the parent object can be identified. CO observations with the Atacama Large Millimeter/submillimeter Array (ALMA) have successfully detected individual clumps of molecular gas in the wakes of ram pressure stripped galaxies \citep[e.g.][]{Jachym+2019} as well as individual giant molecular clouds in TDGs \citep{Querejeta+2021}. If BCs still contain significant quantities of molecular gas (as would be expected based on their recent SF) then they should be readily detectable with ALMA, especially as their high metallicity measurements imply a favorable CO-to-H$_2$ conversion factor in comparison to other low-mass objects \citep{Bolatto+2013}. \citet{Lee+2022} find that ram pressure stripped gas clouds may travel for over a hundred Myr before SF occurs in them. Although we are limited by a very small sample size, all of the BCs with slightly older stellar population estimates ($\gtrsim100$~Myr, rather than $\sim$50~Myr) are undetected in H\,{\sc i}. The long gas consumption (by SF) timescales in Table \ref{tab:SFRs} indicate that the gas in these systems is not (for the most part) consumed by SF. 
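To give a sense of scale for such an ALMA measurement, a hypothetical CO line luminosity can be converted to a molecular gas mass with the standard Galactic conversion factor of \citet{Bolatto+2013}; the line luminosity below is an arbitrary placeholder, not a prediction for any BC:

```python
alpha_CO = 4.3    # Msun / (K km/s pc^2), Galactic value incl. helium
L_CO = 1e4        # K km/s pc^2 -- placeholder line luminosity
M_mol = alpha_CO * L_CO   # molecular gas mass in Msun
```

At lower metallicity $\alpha_{\rm CO}$ rises steeply, which is why the near-solar metallicities of BCs make the conversion comparatively favorable.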
BCs 1, 4, and 5 must have contained significant cold gas reservoirs within the past 200~Myr to permit the formation of their observed stellar populations, and they likely still contain some molecular gas as they have all formed new stars in the past 10~Myr, yet today we find no evidence of any H\,{\sc i} \ content. If BCs have traveled through the ICM for significantly longer than the current age of their stellar populations in order to reach their current state of isolation, perhaps this indicates that it is the SF episode itself that triggers the evaporation of the neutral gas. For example, it seems plausible that an object like SECCO~1 is essentially an earlier stage of an object like BC4 or BC5. Both are broken into two main components, but SECCO~1 has yet to lose its gas, and BC4 appears as though it may be disintegrating (Figure \ref{fig:BC4_HST_GLX}). If this were the case then SECCO~1 would likely be on the verge of losing its H\,{\sc i} \ gas. This disagrees somewhat with the findings of \citet{Calura+2020}. However, those authors note that the details of the SF episode, particularly when it began, are quite uncertain, and we suggest that this possibility might warrant further investigation. Along similar lines, with the HST observations it has only been possible to characterize the stellar populations of BCs from the stars formed in the past $\sim$50~Myr. While some BCs may genuinely contain no stars that are older than this, others might. Detecting or ruling out this older stellar population, and therefore constraining the full SF histories of BCs, is possible with James Webb Space Telescope (JWST) observations. With a moderate investment of observing time ($\sim$10~h) JWST is capable of detecting stars several magnitudes below the TRGB, should an RGB exist, at the distance of the Virgo cluster. 
Such observations would not only determine the age of the oldest stellar component of BCs, but (if RGB stars exist in BCs) would also be capable of conclusively demonstrating Virgo membership via TRGB distance measurements. Finally, we note that if our hypothesis is correct and BCs are commonly produced when new member galaxies fall into a cluster, then they should exist in other clusters as well as Virgo. Unfortunately, due to how faint BCs are, they would be undetectable in any galaxy clusters significantly farther away than Virgo. We therefore suggest that the Fornax cluster could be a suitable location to extend the search and would represent an independent environment where our findings could be cross-checked. The distance modulus for Fornax is only $\sim$0.5~mag greater than for Virgo, thus the brightest BCs (Table \ref{tab:Mstar}) would likely still be detectable and slightly longer HST observations could provide similar quality CMDs. The ongoing MeerKAT Fornax survey \citep{Serra+2016,Kleiner+2021} aims to map 12~deg$^2$ of the cluster in H\,{\sc i}. These observations will be approximately 5 times deeper than our pointed VLA observations of BCs, and will have $\sim$3 times better angular resolution. Thus, this survey will be ideal for identifying ``dark'', dense H\,{\sc i} \ clouds analogous to those in \citet{Adams+2013} and \citet{Cannon+2015} that led to the discovery of BCs. Furthermore, the improved column density sensitivity will exceed that of ALFALFA and will be readily capable of detecting residual H\,{\sc i} \ streams that might still connect young BCs to their parent objects, as is the case for BC3 \citep{Jones+2022}. 
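The distance-modulus comparison can be made explicit; the Fornax distance adopted below ($\sim$20~Mpc) is a commonly used value, not one derived in this work:

```python
import numpy as np

def dist_mod(d_mpc):
    # mu = 5 log10(d / 10 pc)
    return 5 * np.log10(d_mpc * 1e6 / 10)

mu_virgo = dist_mod(16.5)    # ~31.1 mag for the Virgo distance used here
mu_fornax = dist_mod(20.0)   # assumed Fornax distance of ~20 Mpc
delta_mu = mu_fornax - mu_virgo   # ~0.4 mag
```

That is, BCs in Fornax would appear only $\sim$0.4-0.5~mag fainter than their Virgo counterparts.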
The Fornax cluster is also the target of both the Next Generation Fornax Survey \citep{Munoz+2015}, $ugi$ imaging with the Dark Energy Camera on the Blanco telescope, and the Fornax Deep Survey with the Very Large Survey Telescope \citep{Peletier+2020}, which is imaging the Fornax cluster in $ugri$ at comparable depth to the NGVS in Virgo. Together these surveys will provide the means to identify BCs both through their young blue stellar populations and, where it exists, their H\,{\sc i} \ gas. \section{Conclusions} \label{sec:conclude} We have presented follow-up HST, VLT/MUSE, VLA (and GBT) observations of five candidate young, blue, faint stellar systems in the direction of the Virgo cluster that are analogous to SECCO~1. With the exception of one spurious object, we find that these are all comparable to SECCO~1 in terms of their extremely low stellar masses, blue stellar populations, and high metallicities, leading us to conclude that they must have formed from gas stripped from more massive galaxies. However, only one is detected in H\,{\sc i}, suggesting that the others have likely survived sufficiently long to lose much of their initial gas content. Some of these objects are also surprisingly isolated, residing several hundred kpc from the nearest potential source of gas, which poses a challenge for robustly identifying their points of origin. We considered both tidal and ram pressure stripping scenarios as the potential formation mechanism of these stellar systems. Although we cannot confidently exclude either of these mechanisms, and indeed there may not be one single mechanism responsible for all BCs, ram pressure stripping is most consistent with the observed properties. In particular, the isolation of some BCs is difficult to explain with the low velocities ($\leq$300~\kms) expected for ejected TDGs, but can more naturally be explained by ram pressure stripping proceeding at $>$1000~\kms. In addition, BCs are likely too low mass to be long-lived TDGs. 
However, gas clumps formed in ram pressure stripping simulations are typically much lower mass than BCs (based on the H\,{\sc i} \ masses of BC3 and SECCO~1), and we suggest that these objects may be atypical and form from the first loosely bound gas to be stripped, or as a result of stripping in a clumpy ICM. These massive clumps ($\sim$10$^7$~\Msol) of stripped gas moving at high speed can likely survive sufficiently long in the ICM to form the stellar populations observed and to become relatively isolated. However, they will ultimately lose their gas content (the majority of their total mass) and likely become unbound. BCs therefore represent a new class of stellar system that form from large ($\sim$10$^7$~\Msol) clumps of pre-enriched, stripped gas, are (assumed to be) dark matter free, and are capable of surviving sufficiently long in the hostile ICM to become isolated ($>$100~kpc away) from their parent galaxies. A further census of this class of object in the Virgo cluster, and potentially the Fornax cluster, will allow for improved constraints on their lifetimes and how frequently they are produced. However, robust identification of their parent objects will remain challenging, owing to their isolation. Deep, wide-field H$\alpha$ imaging, to identify diffuse emission, is a potential approach for systems where the majority of the neutral gas has already been evaporated. \begin{acknowledgments} The authors thank Kyle Artkop for assistance in identifying blue candidates in Virgo. We also thank Toby Brown and co-authors for providing their X-ray mosaic from archival ROSAT observations of the Virgo cluster. This work is based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. These observations are associated with program \# HST-GO-15183. 
Support for program \# HST-GO-15183 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. It is also based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 0101.B-0376A. This work used both previously unpublished and archival data from the Karl G. Jansky Very Large Array. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The data were observed as part of programs 13A-028 (PI: J.~Cannon) and 18A-185 (PI: K.~Spekkens). The work used images from the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID 2014B-0404; PIs: David Schlegel and Arjun Dey). Full acknowledgment at \url{https://www.legacysurvey.org/acknowledgment/}. DJS acknowledges support from NSF grants AST-1821967 and 1813708. MB acknowledges the financial support for this research from the INAF Main Stream Grant 1.05.01.86.28 assigned to the program {\em The Smallest Scale of the Hierarchy (SSH)}. KS acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC). BMP is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001663. EAKA is supported by the WISE research programme, which is financed by the Dutch Research Council (NWO). GB acknowledges financial support through the grant (AEI/FEDER, UE) AYA2017-89076-P, as well as by the Ministerio de Ciencia, Innovación y Universidades, through the State Budget and by the Consejería de Economía, Industria, Comercio y Conocimiento of the Canary Islands Autonomous Community, through the Regional Budget. JS acknowledges support from the Packard Foundation. MPH acknowledges support from NSF/AST-1714828 and grants from the Brinson Foundation. 
JMC, JF, and JLI are supported by NSF/AST 2009894. R.~R.~M. gratefully acknowledges support by the ANID BASAL project FB210003. Research by DC is supported by NSF grant AST-1814208. AK acknowledges financial support from the State Agency for Research of the Spanish Ministry of Science, Innovation and Universities through the "Center of Excellence Severo Ochoa" awarded to the Instituto de Astrof\'{i}sica de Andaluc\'{i}a (SEV-2017-0709) and through the grant POSTDOC\_21\_00845 financed from the budgetary program 54a Scientific Research and Innovation of the Economic Transformation, Industry, Knowledge and Universities Council of the Regional Government of Andalusia. \end{acknowledgments} \facilities{Blanco, GALEX, GBT, HST (ACS), ROSAT, VLA, VLT:Yepun (MUSE)} \software{\href{http://americano.dolphinsim.com/dolphot/}{\texttt{DOLPHOT}} \citep{Dolphin2000}, \href{https://casa.nrao.edu/}{\texttt{CASA}} \citep{CASA}, \href{https://www.astropy.org/index.html}{\texttt{astropy}} \citep{astropy2013,astropy2018}, \href{https://aplpy.github.io/}{\texttt{APLpy}} \citep{aplpy2012,aplpy2019}, \href{https://photutils.readthedocs.io/en/stable/}{\texttt{Photutils}} \citep{photutils}, \href{https://reproject.readthedocs.io/en/stable/}{\texttt{reproject}} \citep{reproject}, \href{https://acstools.readthedocs.io/en/latest/}{\texttt{acstools}} \citep{acstools}, \href{https://sites.google.com/cfa.harvard.edu/saoimageds9}{\texttt{DS9}} \citep{DS9}, \href{https://dustmaps.readthedocs.io/en/latest/}{\texttt{dustmaps}} \citep{Green2018}, \href{https://astroalign.readthedocs.io/en/latest/}{\texttt{Astroalign}} \citep{astroalign}, \href{https://www.astromatic.net/software/sextractor/}{\texttt{SExtractor}} \citep{Bertin+1996}, \href{https://aladin.u-strasbg.fr/}{\texttt{Aladin}} \citep{Aladin2000,Aladin2014}}
\section{INTRODUCTION} After the JILA experiments on two hyperfine states of $^{87}$Rb \cite{Hall1998}, the coupled Bose-Einstein condensates (BEC) became a highly useful artificial model for studying a wide variety of real condensed matter systems. The study of the miscibility-immiscibility phase transition \cite{Jain2011} of the two-component BEC and its tunable interaction through magnetic or optical Feshbach resonances \cite{Chin2010} provides rich insight into the many-body quantum physics of the system and the origin of such phenomena. Examples of these quantum phenomena are the Kibble-Zurek mechanism \cite{Nicklas2015}, the production of dipolar molecules \cite{Molony2014}, vortex-antivortex molecules \cite{Geurts2008}, phase separation \cite{McCarron2011, Wacker2015, Wang2016, Papp2008}, pattern formation \cite{Sabbatini2011, Hoefer2011, Hamner2011, De2014}, symmetry breaking transitions \cite{Lee2009}, skyrmions \cite{Kawakami2012, Orlova2016}, exotic vortex lattices \cite{Kuopanportti2012}, solitary multiquantum vortices \cite{Kuopanportti2015}, collective modes \cite{Barbut2014}, nonlinear dynamical excitations \cite{Mertes2007, Eto2016}, quantum turbulence \cite{Takeuchi2010}, vortex bright solitons \cite{Law2010}, and vortex dynamics in coherently coupled BEC \cite{Calderaro2017}. Diverse investigations in the above mentioned arenas of research have been carried out on two-component BEC using two different alkali-metal atoms \cite{Lercher2011, Pasquiou2013, Roy2015, Lee2016, Bandyopadhyay2017}, different isotopes of the same atom \cite{Sugawa2011, Inouye1998, Tojo2010}, or the same isotope with different hyperfine states \cite{Stenger1998, Sadler2006}. Our recent work \cite{Anal2018} on the analysis of the structure of a two-component BEC with a paraxial Laguerre-Gaussian (LG) beam has motivated us to study further the matter-vortices induced in the BEC mixture by a non-paraxial LG beam. 
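Since tunable miscibility is central to what follows, it is worth noting that the mean-field miscibility criterion $U_{11}U_{22} > U_{12}^2$ is almost exactly marginal for this pair of $^{87}$Rb hyperfine states. The short check below uses commonly quoted literature values for the scattering lengths (assumptions inserted only for illustration, not results of this work):

```python
import numpy as np

hbar = 1.054571817e-34    # J s
a0 = 5.29177e-11          # Bohr radius, m
m = 86.909 * 1.66054e-27  # 87Rb atomic mass, kg

# Commonly quoted scattering lengths for |1,-1> and |2,+1> (illustrative)
a11, a22, a12 = 100.4 * a0, 95.0 * a0, 97.66 * a0

U11 = 4 * np.pi * hbar**2 * a11 / m
U22 = 4 * np.pi * hbar**2 * a22 / m
U12 = 2 * np.pi * hbar**2 * a12 * (m + m) / m**2  # equal masses

# Miscible when U11*U22 > U12**2; for 87Rb the ratio is ~1
ratio = U12**2 / (U11 * U22)
```

The ratio sits within a fraction of a percent of unity, which is why this mixture is so sensitive to small (e.g. Feshbach) tuning of the interactions.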
Although the effect of the orbital angular momentum (OAM) of an LG beam on the center-of-mass (CM) motion of atoms in a BEC was experimentally demonstrated \cite{Andersen2006,Wright2008} more than a decade ago, it is the theoretical derivation of Mondal \textit{et al.} \cite{Mondal2014, Mondal2015} that provides a detailed picture of the transfer mechanism of both the orbital and spin angular momentum (SAM) from a paraxial LG beam to the internal and external motions of atoms below their recoil limit. However, the transfer mechanism of angular momenta from a non-paraxial LG beam to ultra-cold atoms is quite different compared to that of the paraxial LG beam. Unlike the latter case, the OAM and the SAM are no longer conserved separately in the former case, in interaction with an ultra-cold atom or molecule, but the total angular momentum (OAM+SAM) is conserved \cite{Marrucci2006,Zhao2007}. Our recent study \cite{Bhowmik2016} shows that the OAM of a focused LG beam can be transferred to the electronic motion of an ultracold atom even at the dipole transition level. That paper demonstrates the generation of three possible transition channels of light-matter interaction, distributing the total angular momentum of the focused light to the internal electronic and external CM motions of atoms \cite{Bhowmik2016}. This extra degree of freedom provides control over the interaction as well as over the choice of the channels. Among those three channels, two are comparatively weak; let us call them `side-band' transitions. These side-band transition channels also correspond to the transfer of the field polarization to the external motion of the atoms. In spite of their weakness, we will show the importance of these channels and the enhancement of their strength under particular physical conditions. We further study the effects of the focusing angle of the LG beam interacting with the two-component BEC using the proper choice of the inter- and intra-component interaction strengths. 
The nonparaxial vortex beams have important applications in different fields of science, such as trapping of atoms \cite{Bhowmik2018, Chu1986} or microparticles \cite{Ashkin1986}, optical transitions in semiconductors \cite{Quinteiro2010}, quantum information processing \cite{Beugnon2007}, and cell biology \cite{Mehta1999}. The main aims of this paper are to study the effect of the non-paraxial nature of the LG beam on the two-component BEC and its application to analyze the structure of the density of the BEC depending on the inter-component coupling strength. Two hyperfine states of $^{87}$Rb are considered as the two components of the BEC here. To realize the effect quantitatively, we study the variation of the Rabi frequencies of the two-photon stimulated Raman transitions for different focusing angles of the LG beam, which interacts with the diverse ground state structures of the two-component BEC produced due to the different inter- and intra-component scattering lengths. We find that the effect of the non-paraxial LG beam is significant on the two-component BEC for certain values of inter-component interaction at fixed intra-component interaction strengths. \section{THEORY} In the mean field approximation, the stationary ground-state of a dilute mixture of two-component BEC trapped in a harmonic potential at $T=0$ K is governed by the coupled Gross-Pitaevskii (GP) equations \cite{Ho1996, Jezek2001, Pu1997} \begin{center} \begin{equation} \hspace{-2.3cm}\left[-\frac{\hbar^2\nabla^2}{2m_i}+V_i(\textbf{R})+ \sum_{j=1}^{2}U_{ij}|\Psi_j(\textbf{R})|^2\right]\Psi_i(\textbf{R})=\mu_i \Psi_i(\textbf{R}), \end{equation} \end{center} where $i=1$ and $2$ are the indices of the components of the BEC with the normalization condition $\int |\Psi_i(\textbf{R})|^2 d\textbf{R}=N_i$. Here $N_i$, $m_i$, and $\mu_i$ denote the number of atoms, the atomic mass, and the chemical potential of the $i$-th component of the BEC. $\Psi_i$ is the CM wavefunction of the corresponding component of the BEC. 
The asymmetrical harmonic potential is $V_i(\textbf{R})=\frac{1}{2}m_i(\omega^2_{\bot}R^2+\omega^2_{Z}Z^2)$, where $\omega_{\bot}$ and $\omega_Z$ are the trapping frequencies in the $X-Y$ plane and along the $Z$ axis, respectively. $U_{ii}= {4\pi a_{ii} \hbar^2}/{m_i}$ and $U_{ij}={2\pi a_{ij} \hbar^2}(m_i+m_j)/{m_im_j}$ are the intra-component and the inter-component coupling strengths, respectively. We consider an atomic valence electron of mass $m_e$ moving in the mean field of the core electrons and the nucleus, with total charge $+e$ and mass $m_n$. The CM coordinate with respect to the laboratory frame is $\textbf{R}=(m_e \textbf{r}_e + m_n \textbf{r}_n)/m_t $, where $m_t=m_e+m_n$ is the total mass. Here $\textbf{r}_e$ and $\textbf{r}_n$ are the coordinates of the valence electron and the center of the atom, respectively. Therefore, the relative (internal) coordinate can be expressed as $\textbf{r}=\textbf{r}_e -\textbf{r}_n$. As the BEC components are coupled to each other, any perturbation to one of the components leads to a change in the CM wavefunction of the other component. Here, the perturbation comes from the interaction with the non-paraxial LG beam, which is produced from a circularly polarized paraxial pulse with OAM by passing it through a lens with high numerical aperture (NA). The spot size of the paraxial LG beam overfills the entrance aperture of the objective to take full advantage of the high numerical aperture. Due to the diffraction from the edges of the objective and the focusing from the NA, the SAM and OAM of the light get coupled and form a superposition of plane waves having an infinite number of spatial harmonics \cite{Richards1959, Boivin1965}. 
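A minimal numerical illustration of finding the ground state of the coupled GP equations (1) is given below, using imaginary-time split-step propagation in one dimension. Harmonic-oscillator units ($\hbar=m=\omega=1$) and order-unity couplings are assumptions made purely for illustration; the calculations presented in this work are not restricted to this toy setting.

```python
import numpy as np

# 1-D grid and trap in oscillator units (hbar = m = omega = 1)
N, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2
g11, g22, g12 = 1.0, 1.0, 0.5   # illustrative intra-/inter-component couplings
dt = 1e-3
expK = np.exp(-0.25 * dt * k**2)   # half kinetic step, exp(-dt k^2 / 4)

def normalize(p):
    return p / np.sqrt(np.sum(np.abs(p)**2) * dx)

psi1 = normalize(np.exp(-x**2 / 2).astype(complex))
psi2 = normalize(np.exp(-(x - 0.5)**2 / 2).astype(complex))

for _ in range(5000):   # imaginary-time evolution toward the ground state
    psi1 = np.fft.ifft(expK * np.fft.fft(psi1))
    psi2 = np.fft.ifft(expK * np.fft.fft(psi2))
    n1, n2 = np.abs(psi1)**2, np.abs(psi2)**2
    psi1 = psi1 * np.exp(-dt * (V + g11 * n1 + g12 * n2))
    psi2 = psi2 * np.exp(-dt * (V + g22 * n2 + g12 * n1))
    psi1 = np.fft.ifft(expK * np.fft.fft(psi1))
    psi2 = np.fft.ifft(expK * np.fft.fft(psi2))
    psi1, psi2 = normalize(psi1), normalize(psi2)

# For g12 < sqrt(g11*g22) the components remain miscible (overlapping)
overlap = np.sum(np.abs(psi1) * np.abs(psi2)) * dx
```

With these couplings the two density profiles converge to nearly identical trap-centered distributions, the miscible regime; raising $g_{12}$ above $\sqrt{g_{11}g_{22}}$ would instead drive phase separation.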
For the non-paraxial circularly polarized LG beam, the $x$-, $y$-, and $z$-polarized components of the electric field \cite{Zhao2007, Bhowmik2018, Monteiro2009, Iketaki2007} in the laboratory coordinate system can be expressed as \begin{equation} {E_x}(r^\prime,\phi^{\prime},z^\prime)=(-i)^{l+1}E_0(e^{il\phi ^\prime}I_0^{(l)}+e^{i(l+2\beta)\phi ^\prime}I_{2\beta}^{(l)}), \end{equation} \begin{equation} {E_y}(r^\prime,\phi^{\prime},z^\prime)=\beta(-i)^{l}E_0(e^{il\phi ^\prime}I_0^{(l)}-e^{i(l+2\beta)\phi ^\prime}I_{2\beta}^{(l)}), \end{equation} \begin{equation} {E_z}(r^\prime,\phi^{\prime},z^\prime)=-2\beta(-i)^{l}E_0e^{i(l+\beta)\phi ^\prime}I_{\beta}^{(l)}, \end{equation} where $\beta$ is the polarization of the light incident on the lens. Here, we consider that the light is circularly polarized with $\beta = \pm 1$. The amplitude of the focused electric field is $E_0=\frac{\pi f}{\lambda} T_{o} E_{inc}$, where $T_{o}$ is the objective transmission amplitude, $E_{inc}$ is the amplitude of the electric field incident on the high-NA lens, and $f$ is the focal length, related to $r^\prime$ by $r^\prime=f \sin\theta$ (Abbe sine condition). The coefficients $I_m^{(l)}$, where $m$ takes the values 0, $\pm1$, $\pm2$ in the above expressions, depend on the focusing angle ($\theta_{max}$) through \cite{Zhao2007} \begin{center} \begin{equation} \hspace{-2.1cm}I_m^{(l)}(r _\bot ^\prime ,z ^\prime)=\int_0^{\theta_{max}}d\theta\left({\frac{\sqrt{2}r_\bot^\prime }{w_0 \sin\theta}}\right)^{| l |}{(\sin\theta)}^{| l | +1} \sqrt{\cos\theta} g_{| m |}(\theta) J_{l+m}(kr_\bot^\prime \sin\theta)e^{ikz^\prime \cos\theta}, \end{equation} \end{center} where $r_\bot^\prime$ is the projection of \textbf{r$^\prime$} on the $xy$ plane, $w_0$ is the waist of the beam at the position of the objective entrance port, and $J_{l+m}(kr_\bot^\prime \sin\theta)$ is the cylindrical Bessel function. The angular functions are $g_0 (\theta)=1+\cos\theta$, $g_1 (\theta)=\sin\theta$, and $g_2 (\theta)=1-\cos\theta$. 
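The behavior of these integrals can be explored numerically; a rough quadrature sketch is shown below. The wavelength, waist, and observation radius are arbitrary assumed values, and all overall amplitude constants are dropped, so only ratios of the $I_m^{(l)}$ are meaningful:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

lam = 780e-9            # wavelength (assumed)
k = 2 * np.pi / lam
w0 = 1e-3               # beam waist at the objective entrance (assumed)
l = 1                   # OAM of the incident beam
r = 0.5e-6              # observation radius near the focus; z' = 0

g = {0: lambda t: 1 + np.cos(t),
     1: np.sin,
     2: lambda t: 1 - np.cos(t)}

def I(m, theta_max):
    """I_m^(l) of Eq. (5) at z' = 0, overall constants dropped."""
    def integrand(t):
        return ((np.sqrt(2) * r / (w0 * np.sin(t)))**abs(l)
                * np.sin(t)**(abs(l) + 1) * np.sqrt(np.cos(t))
                * g[abs(m)](t) * jv(l + m, k * r * np.sin(t)))
    return quad(integrand, 0, theta_max)[0]

# The m = 2 "side band" is negligible for weak focusing but grows
# substantially as the focusing angle increases
ratio_small = I(2, 0.1) / I(0, 0.1)   # paraxial-like, theta_max = 0.1 rad
ratio_large = I(2, 1.2) / I(0, 1.2)   # tight focusing, theta_max ~ 70 deg
```

The suppression of $I_{2}^{(l)}$ at small $\theta_{max}$ reflects the $g_2(\theta)=1-\cos\theta$ factor, which is the mathematical origin of the weakness of the side-band transitions in the paraxial limit.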
Let $ \psi_i$ and $\Psi_i $ be the internal electronic and the external CM wavefunctions, respectively, of the $i$-th component of the BEC. The total wavefunction of the two-component BEC can be written as $\Upsilon(\textbf{R}_1, \textbf{R}_2, \textbf{r}_1, \textbf{r}_2)=\Psi_1({\textbf{R}_1}) \Psi_2({\textbf{R}_2})\psi_1({\textbf{r}_1}) \psi_2({\textbf{r}_2})$. The atom-radiation interaction Hamiltonian, $H_{int}$, is derived from the Power-Zienau-Woolley (PZW) scheme \cite{Babiker2002}, which goes beyond the dipole approximation, \begin{equation} H_{int}=-\int d\textbf{r}^\prime P(\textbf{r}^\prime)\cdot \textbf{E}(\textbf{r}^\prime, t) +h.c., \end{equation} where $\textbf{E}(\textbf{r}^\prime, t)$ is the local electric field of the LG beam experienced by the atom. $P(\textbf{r}^\prime)$ is the electric polarization given by $ P(\textbf{r}^\prime)=-e\frac{m_n}{m_t}\textbf{r}\int_0^1 d\lambda \delta \Big(\textbf{r}^\prime-\textbf{R}-\lambda\frac{m_n}{m_t}\textbf{r}\Big).$ If the LG beam (with OAM=$+l$ and SAM=$\pm1$) interacts with one of the components of the BEC (say, the $n$-th), then the dipole transition matrix element will be (for the interaction with a single-component BEC, see Ref.~\cite{Bhowmik2016}) \begin{eqnarray} \hspace{-2.2cm} M_{i \rightarrow f}^d & \hspace{-1.2cm}=& \langle \Upsilon _f | H_{int} | \Upsilon _i \rangle \nonumber \\ &\hspace{-1.2cm}=& e\frac{m_n}{m_t} \sqrt{\frac{8\pi}{3}}\Bigl[-\epsilon_{\pm 1}\langle \Psi _{nf}({\textbf{R}_n}) | I_0^{(l)}({\textbf{R}_n})e^{il\Phi} | \Psi _{ni}({\textbf{R}_n}) \rangle \langle \psi _{nf}({\textbf{r}_n}) | r Y_1^{\pm 1}(\boldsymbol{\hat{\textbf{r}}})| \psi _{ni}({\textbf{r}_n}) \rangle \nonumber \\ &\hspace{-1.2cm}-& \epsilon_{\mp 1} \langle \Psi _{nf}({\textbf{R}_n}) | I_{\pm 2}^{(l)}({\textbf{R}_n})e^{i(l\pm 2)\Phi}| \Psi _{ni}({\textbf{R}_n}) \rangle \langle \psi _{nf}({\textbf{r}_n})| r Y_1^{\mp 1}(\boldsymbol{\hat{\textbf{r}}})| \psi _{ni}({\textbf{r}_n}) \rangle \nonumber\\ &\hspace{-1.2cm}\pm & 
\sqrt{2} i \epsilon_{0} \langle \Psi _{nf}({\textbf{R}_n}) | I_{\pm 1}^{(l)}({\textbf{R}_n})e^{i(l\pm 1)\Phi} | \Psi _{ni}({\textbf{R}_n}) \rangle \langle \psi _{nf}({\textbf{r}_n}) | r Y_1^{0}(\boldsymbol{\hat{\textbf{r}}})| \psi _{ni}({\textbf{r}_n}) \rangle \Bigr]\nonumber \\ &\hspace{-1.2cm}\times & \prod_{p\neq n}\langle \Psi _{pf}({\textbf{R}_p}) | \Psi _{pi}({\textbf{R}_p}) \rangle \langle \psi _{pf}({\textbf{r}_p}) | \psi _{pi}({\textbf{r}_p}) \rangle, \end{eqnarray} where $\epsilon_\pm= (E_x \pm iE_y)/\sqrt{2}$ and $\epsilon_0=E_z$. Eq. (7) shows three possible hyperfine sub-levels of electronic transitions; this part of the transition matrix element is calculated using the well-known relativistic coupled-cluster theory \cite{Bhowmik2017a, Bhowmik2017b, Das2018, Biswas2018}. If the interaction occurred with a paraxial LG beam, only one of the electronic transitions, corresponding to the first term in the square bracket, would be obtained, depending on the choice of SAM of the paraxial LG beam. When a circularly polarized LG beam is focused, it creates LG photons with three different local polarizations, generating three different electronic transitions. To conserve the total angular momentum of each photon, the three different OAMs ($l$, $l\pm2\beta$, $l\pm\beta$) of the field are transferred to the CM of the atoms of the interacting component of the BEC. Since the motions of the two components are coupled, the generation of three different vorticities in one of the components of the BEC modifies the CM wavefunction of the other component in three different ways. Further, the interaction Hamiltonian also depends on the focusing angle of the LG beam. Therefore, tuning the focusing angle of the LG beam directly affects the strength of its interaction with the BEC.
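The decomposition $\epsilon_\pm=(E_x\pm iE_y)/\sqrt{2}$, $\epsilon_0=E_z$ makes explicit why a paraxial circularly polarized beam drives only one term of Eq. (7): a purely transverse circular field has a single non-vanishing spherical component. A minimal sketch (our illustration, with an arbitrary unit field amplitude):

```python
import math

def spherical_components(Ex, Ey, Ez):
    # epsilon_pm = (E_x +/- i E_y)/sqrt(2), epsilon_0 = E_z, as defined after Eq. (7)
    eps_plus = (Ex + 1j * Ey) / math.sqrt(2)
    eps_minus = (Ex - 1j * Ey) / math.sqrt(2)
    return eps_plus, eps_minus, Ez

# A transverse circularly polarized field E ~ (x_hat - i y_hat):
# with this convention only one spherical component survives.
ep, em, e0 = spherical_components(1.0, -1j, 0.0)
```

Once the beam is tightly focused, $E_z$ no longer vanishes (Eq. (3)), so $\epsilon_0$ and the second transverse component both switch on, which is the origin of the two extra transition channels in Eq. (7).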
Since the coupling between the OAM and SAM of the focused LG beam creates a special kind of intensity distribution due to the non-vanishing contribution of the $z$-component \cite{Monteiro2009}, we expect longitudinal variation of the Rabi frequencies during the interaction among the components of the BEC. \begin{figure}[!h] \centering \includegraphics[trim={0.5cm 0.5cm 1cm 1cm},width=13cm]{Drawing_1.eps} \caption{Energy level scheme of the two-photon transitions in the two-component BEC. Focused LG1 (OAM$=-1$ and SAM$=+1$) and LG2 (OAM$=+1$ and SAM$=-1$) beams are co-propagating and interact with the 1st and 2nd components of the BEC, respectively. The ground states of the 1st and 2nd components of the $^{87}$Rb BEC are $| 5s_{\frac{1}{2}} F=1, m_f =-1 \rangle$ and $| 5s_{\frac{1}{2}} F=2, m_f =+1 \rangle$, respectively. $\Delta=-1.5$ GHz represents the two-photon detuning. T-1, T-2, T-3, t-1, t-2, and t-3 label the two-photon transition channels.} \end{figure} \section{NUMERICAL RESULTS AND INTERPRETATION} We consider two co-propagating sets of beams, say LG1 and LG2. Each set contains one LG beam and one Gaussian beam. The individual components of the coupled $ ^{87} $Rb BEC are considered to be non-rotating. The BEC is prepared in a harmonic potential using the two hyperfine states $\psi_1=| 5S_{\frac{1}{2}}, F=1, m_f =-1 \rangle$ and $\psi_2=| 5S_{\frac{1}{2}}, F=2, m_f =1 \rangle$ (see FIG. 1). They are designated, henceforth, as BEC-1 and BEC-2, respectively. The interaction of the focused LG beam with the individual components of the BEC produces three angular momentum channels \cite{Bhowmik2016}. According to Eq. (7), these three angular momentum channels generate $+1$, $-1$, and $0$ units of topological charge at the CM of the atoms of the BEC. A proper choice of the polarizations of the Gaussian beams can Raman-excite the atoms to different Stokes or anti-Stokes electronic states.
Let us name the channels T-1, T-2, and T-3 for BEC-1, and t-1, t-2, and t-3 for BEC-2. These three angular momentum channels correspond to different Raman electronic transitions through different intermediate states. For BEC-1, the channels have three intermediate electronic hyperfine states, $| 5p_{\frac{3}{2}}, F=2, m_f =0 \rangle$, $| 5p_{\frac{3}{2}}, F=2, m_f =-2 \rangle$, and $| 5p_{\frac{3}{2}}, F=2, m_f =-1 \rangle$, respectively. In the case of BEC-2, the intermediate electronic hyperfine states are $| 5p_{\frac{3}{2}}, F=2, m_f =+2 \rangle$, $| 5p_{\frac{3}{2}}, F=2, m_f =0 \rangle$, and $| 5p_{\frac{3}{2}}, F=2, m_f =+1 \rangle$, respectively. Depending on the requirement of the problem, we can choose a particular Gaussian beam for the channel of our interest. Atoms excited by the other channels will be lost from the trap due to the linear momentum transferred from the focused LG beam. A brief discussion of the interaction of the two-component BEC with the paraxial LG beams will be useful before considering the non-paraxial beam. The detailed structure of the above BEC mixture for different inter-component interactions and numbers of atoms, along with the formalism of the interaction with the paraxial LG beam, is available in our recent paper \cite{Anal2018}. The asymmetry parameter of the harmonic trap is $\lambda _{tr} =\omega _Z /\omega _\bot =2$ with the axial frequency $\omega _Z /2\pi =40$ Hz. The characteristic length is $a _\bot =4.673$ $\mu$m. The intensity of the paraxial LG beam is taken to be $I =10^2 $ W cm$^{-2}$, while the intensity of the non-paraxial LG beam before focusing is assumed to be 10 mW m$^{-2}$ with waist $w _0 =10 ^{-4}$ m.
The intra-component $s$-wave scattering lengths are $a_{11}=1.03\times 5.5$ nm and $a_{22}=0.97\times 5.5$ nm \cite{Hall1998}, and the inter-component $s$-wave scattering length is $a_{12}=a_{21}=$ $g\times 5.5$ nm, where $g$ is a parameter which can be tuned using a Feshbach resonance \cite{Chin2010, Inouye1998}. For simplicity, we consider that both hyperfine states are populated by an equal number of atoms. \subsection{Density structure of non-vortex two-component BEC revisited} FIG. 2 presents the initial non-vortex density distribution of the two-component BEC in the $z=0$ plane for $N=10^6$. FIGs. 2(a) to 2(i) represent the distribution with increasing inter-BEC coupling strength, $g$. Since BEC-1 has a relatively stronger intra-BEC interaction than BEC-2, BEC-1 is radially more expanded than BEC-2 at $g=0$. However, this yields a relatively lower central density of BEC-1 compared to BEC-2 (see FIG. 2(a)). As the mutual interaction between the components of the BEC is increased, the components start departing from each other (FIG. 2(b) and 2(c)). Eventually, beyond a certain value of $g$, a part of BEC-2 breaks off and grows at the outer region of BEC-1 (FIG. 2(d), 2(e), 2(f), and 2(g)). Further increase of the inter-component interaction breaks parts of BEC-1 as well, which appear surrounding BEC-2 (FIG. 2(h) and 2(i)). Therefore, multi-ring-shaped density profiles are obtained in the $xy$ plane with increasing $g$.
\begin{figure*}[!h] \begin{center} \subfloat[]{\includegraphics[trim = 13cm 1.5cm 7cm 2cm,scale=.16]{a_000.eps}} \subfloat[]{\includegraphics[trim = 9cm 1.5cm 7cm 0cm,scale=.16]{a_050.eps}} \subfloat[]{\includegraphics[trim = 7cm 1.5cm 7cm 2cm,scale=.16]{a_080.eps}}\\ \subfloat[]{\includegraphics[trim = 13cm 1.5cm 7cm 2cm,scale=.16]{a_100.eps}} \subfloat[]{\includegraphics[trim = 9cm 1.5cm 7cm 2cm,scale=.16]{a_110.eps}} \subfloat[]{\includegraphics[trim = 7cm 1.5cm 7cm 0cm,scale=.16]{a_130.eps}}\\ \subfloat[]{\includegraphics[trim = 13cm 1.5cm 7cm 2cm,scale=.16]{a_160.eps}} \subfloat[]{\includegraphics[trim = 9cm 1.5cm 7cm 0cm,scale=.16]{a_170.eps}} \subfloat[]{\includegraphics[trim = 7cm 1.5cm 7cm 0cm,scale=.16]{a_180.eps}} \caption{Plots of the density (in units of $a_\perp^{-3}$) of non-vortex BEC-1 (solid lines, n1) and BEC-2 (dotted lines, n2) with respect to the distance from the trap axis for $N=10^6$: (a) $g=0$, (b) $g=0.50$, (c) $g=0.80$, (d) $g=1.00$, (e) $ g=1.10$, (f) $ g=1.30$, (g) $g=1.60$, (h) $g=1.70$, and (i) $ g=1.80$. $r$ is in units of the harmonic oscillator length scale $(a_\perp)$.} \end{center} \end{figure*} \subsection{Interaction with the paraxial LG beams} Let us consider that the paraxial LG1 (with OAM$=-1$ and SAM$=+1$) and LG2 (with OAM$=+1$ and SAM$=-1$) beams impinge on BEC-1 and BEC-2, respectively, as shown in FIG. 1. Therefore, for both components of the BEC, two-photon Raman transitions are performed with co-propagating LG and Gaussian (G) beams based on dipole transitions. Due to the particular selection of OAM and SAM of the paraxial LG beams, only the T-1 and t-1 channels will be available with one kind of Gaussian beam. The channels transfer $-1$ and $+1$ units of OAM to the atoms, respectively, to a particular electronic state, say, $\psi_2=| 5S_{\frac{1}{2}}, F=2, m_f =1 \rangle$.
Therefore, a superposition of the vortex and antivortex states is created at the center of mass of the condensate component corresponding to that electronic state, i.e., BEC-2. To examine the effect of the inter-component coupling on the T-1 and t-1 transition channels, we have studied and analyzed the corresponding two-photon Rabi frequencies in FIG. 3(a). The above non-vortex density profiles are considered as the initial wavefunctions of the two-component BEC. In this dipole transition, the OAM of light does not contribute to the internal motion of the atoms. Therefore, the Rabi frequencies are calculated from the products of the center-of-mass matrix elements and the electronic matrix elements, involving OAM and SAM, respectively. This has been discussed in detail in our earlier paper \cite{Anal2018} with many distinct physical features. \begin{figure*}[!h] \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 0.1cm,scale=.30]{pic_3.eps}} \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 0.1cm, scale =.30]{overlap_matrix.eps}} \caption{(a) Variations of the dipole Rabi frequency (in sec$^{-1}$) for the T-1 and t-1 channels of FIG. 1 (for the paraxial laser) are plotted with respect to the inter-component interaction strength $(g)$ on a semi-log scale. The initial states of both components of the BEC are considered non-vortex. (b) Variation of the overlap matrix $(\Gamma)$ between the non-vortex condensed components is plotted with respect to the $g$-value.} \end{figure*} The Rabi frequency profiles of the T-1 and t-1 transition channels are presented in FIG. 3(a), considering both LG1 and LG2 as paraxial. The figure indicates that BEC-1 and BEC-2 have almost the same initial density structures for $g$-values between 0 and 0.4. Around $g=0.6$, the local peak of the Rabi frequency variation for BEC-1, in contrast to the steadily descending profile of BEC-2, can be explained clearly from their initial density structures around that coupling region.
The initial density profile at that $g$-value shows that BEC-2 is compressed around the center of the trap while BEC-1 has moved away from the center. A coincidence is observed near $g=0.9$: there the Rabi frequency becomes locally minimal for BEC-1, but locally maximal for BEC-2. It is also the $g$-value at which the initial BEC components become totally immiscible, as seen from FIG. 3(b), which plots the variation of the overlap between the components. The overlap parameter is calculated using Eq. (8) of Ref. \cite{Jain2011}. \begin{equation} \Gamma=\frac{[\int n1(r)n2(r)dr]^2}{[\int n1(r)^2 dr][\int n2(r)^2 dr]}. \end{equation} Complete overlap, i.e. $ \Gamma=1 $, indicates total mixing between the components, whereas for complete phase separation we have $ \Gamma=0 $. The crossing points A and B of the Rabi frequency distributions in FIG. 3(a) indicate that the populations of the vortex and antivortex states in BEC-2 will be the same. Therefore, these inter-component coupling strengths are ideal for a maximally coherent interference fringe pattern. When the components are immiscible, we may get interesting vortex-dipole dynamics \cite{Sang2017}, which is beyond the scope of this paper. However, the interaction of the non-paraxial or focused LG beam with the two-component BEC will not only provide enhanced Rabi frequencies due to the increased intensity, but will also generate different channels of transitions along with their external control mechanism, as discussed in the following subsection.
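The discretized form of the overlap parameter is straightforward. The sketch below is our illustration with hypothetical Gaussian density profiles (not the actual Gross-Pitaevskii densities); it confirms the limiting values $\Gamma=1$ for identical profiles and $\Gamma\to 0$ for fully phase-separated ones:

```python
import math

def overlap(n1, n2, dr):
    # Gamma = [int n1 n2 dr]^2 / (int n1^2 dr * int n2^2 dr), uniform grid spacing dr
    num = sum(a * b for a, b in zip(n1, n2)) * dr
    d1 = sum(a * a for a in n1) * dr
    d2 = sum(b * b for b in n2) * dr
    return num ** 2 / (d1 * d2)

dr = 0.01
r = [i * dr for i in range(2000)]  # grid covering 0 <= r < 20 (arbitrary units)
gauss = lambda c, w: [math.exp(-((x - c) / w) ** 2) for x in r]

mixed = overlap(gauss(5, 1), gauss(5, 1), dr)          # identical profiles
separated = overlap(gauss(3, 0.5), gauss(12, 0.5), dr)  # disjoint profiles
```

By the Cauchy-Schwarz inequality $\Gamma\le 1$ always holds, with equality only when the two density profiles are proportional, which is what makes $\Gamma$ a convenient miscibility measure.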
\begin{figure*}[!h] \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{plot_40.eps}} \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{plot_50.eps}}\\ \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 0.1cm,scale=.30]{plot_60.eps}} \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 0.1cm, scale=.30]{plot_70.eps}} \caption{Variations of the dipole Rabi frequency (in sec$^{-1}$) of the T-1 ("1st") and t-1 ("2nd") channels are plotted with the inter-component interaction strength $(g)$ on a semi-log scale for different focusing angles of the LG beams.} \end{figure*} \subsection{Interaction with the Non-Paraxial LG beams} The interaction of the two-component BEC with the non-paraxial LG beams is the main theme of this work. The interactions open up different channels of transitions with variable strengths depending on the parameters of the LG beams. Let us first choose one of the dominant options, where, before focusing, the OAM and SAM of LG1 (OAM$=-1$ and SAM$=+1$) and LG2 (OAM$=+1$ and SAM$=-1$) are such that the T-1 and t-1 transitions are the strongest within each set of transitions (see FIG. 1). In fact, these are the transitions which were involved in the paraxial case above. The difference is expected to be observed in the strength of the Rabi frequencies and in the effect of the other transition channels, through which atoms are lost from the trap. FIG. 4 shows the variation of the Rabi frequencies, calculated using Eq. (7), with the inter-component interaction strength $g$ at different focusing angles of the LG beam. The focusing angles of LG1 are taken to be 40$^{\circ}$, 50$^{\circ}$, 60$^{\circ}$, and 70$^{\circ}$ in FIG. 4(a), 4(b), 4(c), and 4(d), respectively. An overall increase in the Rabi frequencies is found with increasing focusing angle compared to the paraxial beam (compare with FIG. 3(a)).
This is understandable, as more photons are available for interaction with the atoms trapped in the harmonic potential, which has a smaller cross-section than the paraxial beam size. In each of the plots of FIG. 4, the focusing angle of LG2 varies from 40$^{\circ}$ to 70$^{\circ}$ in order to analyze the mutual variations of the component-wise interactions. Unlike the paraxial case, the crossing points A and B of the Rabi frequency profiles for both components can be tuned by changing the focusing angle of the light beam. This means that the maximally coherent interference pattern can be achieved even at $g$-values away from the A and B points obtained in the paraxial case. In other words, we will be able to estimate the focusing angles of LG1 and LG2 by observing the perfect interference pattern while tuning the inter-component coupling strength of the BEC mixture. For certain combinations of focusing angles, particularly for a large difference in focusing angles between LG1 and LG2, the crossing points are not available due to the comparatively large enhancement of the average Rabi frequency for the more strongly focused beam (see the black solid line and red dashed line in FIG. 4(d)). \begin{figure}[!h] \centering \includegraphics[trim={0.5cm 0.5cm 1cm 1cm},width=13cm]{Drawing_2.eps} \caption{Energy level scheme of the two-photon transitions. Focused LG1 (OAM$=+1$ and SAM$=-1$) and LG2 (OAM$=+1$ and SAM$=-1$) beams are co-propagating and interact with the 1st and 2nd components of the BEC, respectively. The ground states of the 1st and 2nd components of the $^{87}$Rb BEC are $| 5s_{\frac{1}{2}} F=1, m_f =-1 \rangle$ and $| 5s_{\frac{1}{2}} F=2, m_f =+1 \rangle$, respectively. $\Delta=-1.5$ GHz represents the two-photon detuning.
T-1, T-2, T-3, t-1, t-2, and t-3 are the two-photon transition channels.} \end{figure} \begin{figure*}[!h] \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{Ratio_3.eps}} \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{Ratio_4.eps}}\\ \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{Ratio_1.eps}} \subfloat[]{\includegraphics[trim = 1cm 0.5cm 0.1cm 1.5cm, scale=.30]{Ratio_2.eps}} \caption{Variation of the ratio of the Rabi frequencies of the two-photon transitions through the T-2 and t-1 channels is plotted with respect to the inter-component interaction strength $(g)$ for $N=10^5$ and $10^6$.} \end{figure*} \subsection{Estimation of non-paraxial effect of LG beam through the Rabi Frequencies} To estimate the effects of the non-paraxial signature of the LG beam in its interaction with the two-component BEC, one of the best approaches is to compare the strength of a side-band transition (say, T-2) with the prime transition (say, t-1). Let us now consider that LG1 and LG2 are generated by focusing paraxial beams with (OAM, SAM) = $(+1,-1)$ and $(+1,-1)$, respectively. FIG. 5 represents the schematic diagram of the transitions among the energy levels. Unlike the previous case (see FIG. 1), the T-2 transition channel couples here with the t-1 transition channel via $| 5p_{\frac{3}{2}} F'=2, m_f =0 \rangle$ as the intermediate state. Since, of T-2 and t-1, only the T-2 transition channel arises through the spin-orbit coupling of the focused LG1 beam, it is expected that the Rabi frequency profile of this channel is strongly affected by the focusing angle of the beam. Also, we consider that the Gaussian beam is used to take the atoms back to BEC-2 only when the intermediate state is $| 5p_{\frac{3}{2}} F'=2, m_f =0 \rangle$. Therefore, atoms which are excited through the other channels, T-1, T-3, t-2, and t-3, will be lost from the trap.
Since the above selection of OAM and SAM of the LG1 beam transfers $-1$ unit of OAM to the atoms of BEC-1 through the T-2 channel, a vortex-antivortex superposed state is created at the electronic state of BEC-2, i.e., $|5s_{\frac{1}{2}} F=2, m_f =+1 \rangle$, with the help of the two-photon Raman transition. Interestingly, the density distributions of this vortex and anti-vortex depend on the initial non-vortex structures of BEC-2 and BEC-1, respectively. The effect of the non-paraxial nature of the LG beam on this interaction for different numbers of atoms is evident from the distribution of the ratio of the Rabi frequencies of the Raman transitions through the T-2 and t-1 channels in FIG. 6. For example, we consider two different populations of the BEC, with $N= 10^5$ and $10^6$. In FIGs. 6(a,c) and 6(b,d), we consider that the LG1 beam is focused at angles 40$^\circ$ and 70$^\circ$, respectively. In the figures, the focusing angle of LG2 is varied from 40$^\circ$ to 70$^\circ$. The structures of the distributions clearly depend on the inter-component coupling through the initial density distributions of the BEC components. It is clear from the figures that the ratio attains a maximum value at around $g=0.64$ for $N=10^6$ and $g=1.25$ for $N=10^5$, in units of 5.5 nm. Therefore, it is possible to enhance the side-band transition significantly over the primary transition even with a comparatively weakly focused LG beam, if we choose the inter-component coupling strength properly. This phenomenon has a large impact on any experiment where a non-paraxial vortex beam is or can be used \cite{Zhang2018}. It is evident from the figure that at $g=0$ (when the components of the BEC are independent of each other), strong focusing is the only possibility to increase the non-paraxial effects of the LG beam. Here, in the case of the coupled BEC, the effects can also be controlled by specifying the intra- and inter-component interactions of the BECs.
In FIG. 6(b,d), one can see that T-2 (which arises due to the spin-orbit coupling of light) even attains values that are large compared to the prime transition t-1, which is not possible in the case of a one-component BEC. It is possible to carry out an experimental study of the above effect of the non-paraxial nature in the interaction with the two-component BEC. The above scheme of creating the vortex-antivortex superposition from the side-band and primary transitions at the energy level $|5s_{\frac{1}{2}} F=2, m_f =+1 \rangle$ generates an interference pattern \cite{Bhowmik2016}. Let us consider that the LG1 and LG2 beams are focused at angles 70$^\circ$ and 40$^\circ$, respectively, for $N=10^6$ atoms. We choose this particular combination of focusing angles of the LG beams as they significantly affect the side-band transitions (see FIG. 6). The interference pattern displayed in FIG. 7 for $g=0$ and $g=0.64$ is in the $z=0$ plane. At $g=0$, the population of the anti-vortex state through the T-2 transition channel is much smaller than the population of the vortex state through the t-1 transition channel. For $g=0.64$, the situation is opposite, and the populations from the T-2 and t-1 transitions are also close to each other. Therefore, a nearly maximally coherent interference pattern is produced, as shown in FIG. 7(b). After fixing the focusing angles, if we tune the inter-component coupling strength of the BEC mixture, we will be able to estimate the $g$-value at which the effect of the non-paraxial nature of the vortex beam is maximal. \begin{figure*}[!h] \subfloat[]{\includegraphics[trim = 2cm 1.0cm 0.1cm 1.5cm, scale=.15]{super_1.eps}} \subfloat[]{\includegraphics[trim = 1cm 1.0cm 0.1cm 5.5cm, scale=.15]{super_2.eps}} \caption{Images of the superposition of vortex and antivortex states for $N=10^6$ atoms in the $z=0$ plane. LG1 and LG2 are considered to be focused at angles 70$^\circ$ and 40$^\circ$, respectively. Panel (a) is for $g=0$ and (b) is for $g=0.64$.
} \end{figure*} \section{CONCLUSION} We have formulated a theory of the interaction of a two-component BEC with a Laguerre-Gaussian (LG) beam beyond the paraxial limit. Due to the coupling of the orbital and spin angular momentum of the light, the interaction of the focused LG beam with each component of the BEC takes place through three different angular momentum channels. Using two-photon Raman transitions, we calculate the Rabi frequencies of these angular momentum channels and show the variation of the Rabi frequency with the inter-BEC interaction strength and the focusing angle of the beam. We demonstrate a procedure for estimating the phase separation between the initial structures of the components of the BEC from the profiles of the Rabi frequencies. We have seen that the strengths of the side-band transitions attain a maximum value for a particular value of the inter-BEC interaction strength, which can even exceed the strength of the primary transition for larger focusing angles of the beam. An experimental scheme is proposed to estimate the inter-component coupling strength of the binary BEC by observing the coherence of the interference pattern based on the vortex-antivortex superposition from the side-band and primary transitions. Considering different orbital angular momenta of the incident beam and using the angular momentum channels, one can create multiply quantized vortices in the two-component BEC \cite{Kuopanportti2012, Kuopanportti2015}. This novel phenomenon of multiply quantized vortices is also observed in multicomponent superconductivity \cite{Milosevic2015} and in rotating two-band Fermi gases \cite{Klimin2018}. The vortex-antivortex superpositions appear as counter-rotating persistent currents in superconducting circuits \cite{Nakamura1999, Friedman2000}, which are promising candidates for qubits in quantum-information processing and quantum communication networks \cite{Spedalieri2006}.
Also, vortices and multiple vortices have been the subject of intensive experimental research in trapped superfluid Fermi gases \cite{Zwierlein2005, Zwierlein2006a, Zwierlein2006b} and even in real condensed matter systems \cite{Chmiel2018}. We believe this is one of the best approaches to study the effect of the non-paraxial nature of the vortex beam on ultra-cold atoms. These non-paraxial effects of the angular momentum channels could be experimentally verified by measuring the orbital angular momentum in the components of the BEC using surface wave spectroscopy \cite{Chevy2000, Haljan2001}. \section*{ACKNOWLEDGMENTS} We thank Rohit Kishan Ray, IIT Kharagpur, for useful comments on the manuscript. \clearpage
\section{Introduction} Astronomy has moved into a new golden era with the historic measurements of gravitational waves (GW) from the binary coalescence of black holes \cite{PhysRevLett.116.061102,PhysRevLett.116.241103,PhysRevLett.118.221101,2041-8205-851-2-L35,PhysRevLett.119.141101} and neutron stars \cite{2017PhRvL.119p1101A}. The detection of GW170817 further pushed our understanding with the first multimessenger detection of GWs and electromagnetic signals \cite{2041-8205-848-2-L13,2041-8205-848-2-L12}. These signals have renewed the interest in the search for signals from exotic compact objects (ECO; see e.g. \cite{Alcubierre:2003sx,2016JCAP...10..001G,Cardoso:2016oxy,Hui:2016ltb,Palenzuela:2006wp,Brito:2015yfh,Hanna:2016uhs}), strongly gravitating objects made of exotic matter. Self-gravitating scalar field solitons are known to have highly compact cores \cite{LEE1992251,PhysRevLett.57.2485,PhysRevLett.66.1659} and provide a family of ECO candidates including Wheeler's ``geons'' \cite{PhysRev.97.511,RevModPhys.29.480}, boson stars \cite{PhysRev.172.1331}, and oscillatons \cite{1968PhRv..172.1331K,1969PhRv..187.1767R,Liddle:1993ha,1994PhRvL..72.2516S,1991PhRvL..66.1659S}. These are closely related to a family of objects known as axion stars \cite{Berezhiani:1989fu,Berezhiani:1989fp,Sakharov1994id,Berezhiani:1992rk,pecceiquinn1977,weinberg1978,wilczek1978,2014JHEP...06..037D,2010ARNPS..60..405J,2006JHEP...06..051S,axiverse,2006JHEP...05..078C,Marsh:2015xka,Cicoli:2012sz,2017JCAP...03..055H,Widdicombe:2018oeo}. Recent work on head-on mergers of scalar compact objects \cite{Cardoso:2016oxy,Palenzuela:2006wp,Helfer:2018vtq,Choptuik:2009ww,Helfer:2018vtq} as well as mixed mergers \cite{Clough:2018exo,Dietrich:2018jov,Dietrich:2018bvi} indicates distinctions in the gravitational wave signal with respect to black holes.
If these distinctions also exist in binary coalescence (see \cite{Sennett:2017etc,Palenzuela:2017kcg,Bezares:2017mzk} for boson star inspirals), a single GW event could be a smoking gun for the existence of ECOs. In this paper, we study the relativistic head-on collisions of a class of real relativistic scalar field solitons called oscillatons (OS) \cite{PhysRevLett.66.1659} using full (3+1) dimensional numerical relativity simulations with \textsc{GRChombo} \cite{Clough:2015sqa}. OS are stable on cosmological time scales \cite{Page:2003rd} and could be realised as an axion star where the leading order $\phi^4$ interaction is negligible due to having a high axion decay constant, $f_a$. Formation of such objects has been studied in both non-relativistic \cite{2017MNRAS.465..941D,Amin:2019ums} and relativistic cases \cite{Widdicombe:2018oeo}. One of the key features of an OS is that its scalar field configuration is not static. Instead, it oscillates with the characteristic frequency $\omega\sim m$, where $m$ is the effective mass of the field, which is inversely related to the axion decay constant, $m\propto 1/f_a$. Thus the interactions of any pair of OS will depend not only on their respective masses and the geometry of the interactions, but also on their \emph{relative oscillation phase} $\Delta \theta$. In the case of relativistic OS where gravity is strong, the OS can exhibit very high compactness on the order of tens of percent of the Schwarzschild radius. In this regime, gravity back-reacts strongly on the configuration of the scalar field and sufficiently compact OS can interact to form black holes. In \cite{Helfer:2018vtq}, we showed that the head-on collisions of unboosted OS in this regime can produce gravitational wave signals which are distinct and, at high compactness, more energetic than equivalent equal mass black hole mergers. In this paper, we extend our work in two different directions.
First, we consider the collisions of OS with different phases, in particular collisions in which their relative phase is maximal, $\Delta \theta = \pi$, dubbed ``anti-phase'' OS collisions. We will show that anti-phase OS collisions experience a mutual repulsive force, confirming the results previously obtained in perturbative gravity \cite{2016PDU....12...50P,PhysRevD.94.043513, Amin:2019ums}. Secondly, we consider the collisions of \emph{boosted} OS, with relativistic initial center-of-mass frame velocities, for both equal phase and anti-phase pairs of OS. While at high initial velocities black holes form as expected from the hoop conjecture argument \cite{Choptuik:2009ww,East:2012mb,Rezzolla:2012nr}, we show that, surprisingly and counter-intuitively, at low velocities collisions are \emph{less likely to form black holes} when compared to the equivalent configuration with zero initial velocity. This effect is seen in both the equal phase and anti-phase cases, indicating the possible existence of a ``critical point'' (see \kfig{fig:money}). \begin{figure*}[ht!] \centering \includegraphics[width=2.08\columnwidth]{Code_Repo/Moneyplot.png} \caption{Final states of equal mass head-on OS-OS mergers as a function of compactness $\mathcal{C}$ and boost velocity $v$, for equal phase (left) and anti-phase cases (right). Dots indicate numerical simulations which end either in black hole formation (black) or dispersal/bounce (orange). Shown are approximate regions indicating the final states of the collisions for the given initial conditions. The black line is the reduced hoop conjecture line \keq{eqn:hoop_2}, while the red (equal phase) and orange (anti-phase) lines are numerically determined estimates above which black holes do not form. In both cases, there exists a ``stability band'' between the black line and the red/orange lines, in which the OS either disperse (equal phase) or bounce (anti-phase) post-collision.
Comparing the free fall time and interaction times of the collision yields the blue line ($v \approx \mathcal{C}^{1/2}$), which converges with the reduced hoop conjecture line of $v\approx \sqrt{1-144\mathcal{C}^2}$ at $\mathcal{C}\approx 0.07$. } \label{fig:money} \end{figure*} \section{Oscillatons and Initial set-up} \label{sect:oscillatons} We use units $\hbar=c=1$ and $M_{pl}=1/\sqrt{8\pi G_N}$ which is the reduced Planck mass. Consider the action of a massive scalar field minimally coupled to gravity \begin{equation} S = \int d^4 x \sqrt{-g} \left [ \frac{R}{16 \pi G} - \frac{1}{2} \partial_\mu \phi \partial^\mu \phi - \frac{1}{2} m^2 \phi^2 \right ] \, \end{equation} where $g$ is the determinant of the metric, $R$ is the Ricci scalar and $m$ is the mass of the real scalar field $\phi$. Such a potential supports self-gravitating quasi-stable equilibrium OS \cite{PhysRevLett.66.1659}, and it has been shown in \cite{Alcubierre:2003sx} that unexcited spherically symmetric solutions span a one-parameter family most conveniently represented by its compactness, $\mathcal{C}$, defined as \begin{equation} \mathcal{C} \equiv \frac{G M_*}{R} \, \label{eqn:compactness} \end{equation} where $M_*$ is the total mass and $R$ is the radius. Note that for a given $\mathcal{C}$ the radius $R(M_*)$ of unexcited OS is completely determined by its mass $M_*$. It has also been shown in \cite{Alcubierre:2003sx} that low compactness OS with $\mathcal{C} < 0.14$ are stable and typically migrate to other stable OS with $\mathcal{C}<0.14$ when strongly radially perturbed. On the other hand, high compactness OS with $\mathcal{C}>0.14$ are unstable, and under radial perturbations may either migrate to a stable lower mass OS with $\mathcal{C}<0.14$ via scalar radiation or collapse into a black hole (\kfig{fig:compactness}).
\begin{figure} \centering \includegraphics[width=1\columnwidth]{Code_Repo/compactness.png} \caption{Spherically symmetric unperturbed OS solutions are spanned by a single parameter, here chosen to be the compactness $\mathcal{C}=GM_*/R$, as found in \cite{Alcubierre:2003sx}. OS with $\mathcal{C}>0.14$ are unstable to perturbations, either dissipating towards a final state with $\mathcal{C}<0.14$ or collapsing into a black hole. } \label{fig:compactness} \end{figure} A key property of an OS is that it oscillates with a characteristic frequency $\omega \sim m$, and thus interactions between OS depend on their relative phase difference $\Delta \theta$. In particular, the field configuration $\phi(x,t)$ of a head-on collision of equal phase $\Delta \theta=0$ (anti-phase $\Delta \theta=\pi$) OS is \emph{symmetric} (\emph{anti-symmetric}) at the plane of collision perpendicular to the axis of motion. In between these two limits $0<\Delta \theta<\pi$, the collisions are said to be ``off-phase''. \kfig{fig:relative_phase} illustrates this further. The special case of initially static, equal phase $\Delta \theta=0$ head-on collisions of OS was investigated in \cite{Helfer:2018vtq}. There, we showed that the end state of any such collision depends on the compactness $\mathcal{C}$. For $\mathcal{C}<0.035$ \emph{subcritical} collisions, the collision results in an excited, more massive oscillaton, while for $0.035<\mathcal{C}< \mathcal{C}_*$ \emph{critical} collisions, the collision results in the formation of a black hole. For $\mathcal{C}>\mathcal{C}_*$ \emph{degenerate} collisions, since the OS are in the unstable branch (\kfig{fig:compactness}), mutual perturbations cause the OS to collapse into individual black holes before merging as in a standard head-on black hole collision. In this paper, we will study both equal phase and anti-phase boosted head-on OS collisions.
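The symmetry argument can be made concrete with a toy linear superposition of two oscillating Gaussian profiles along the collision axis (a schematic of ours, not a solution of the full Einstein--Klein--Gordon system; the widths, separation and frequency are arbitrary):

```python
import math

def phi(x, t, d=10.0, sigma=2.0, omega=1.0, dtheta=0.0):
    """Toy field profile: two Gaussians centred at x = -d and x = +d,
    oscillating with frequency omega and relative phase dtheta."""
    left = math.cos(omega * t) * math.exp(-((x + d) / sigma) ** 2)
    right = math.cos(omega * t + dtheta) * math.exp(-((x - d) / sigma) ** 2)
    return left + right

# Anti-phase (dtheta = pi): phi vanishes on the collision plane x = 0
# at all times.  Equal phase (dtheta = 0): the profile is symmetric,
# so its spatial derivative vanishes at x = 0 instead.
```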
\begin{figure} \includegraphics[width=1\columnwidth]{Code_Repo/gaussian.png} \caption{One dimensional plot of the $\phi$ profile along the axis of collision of two OS for two different phases, shown at fixed $t$ when the amplitude of $\phi$ for the left OS is maximised, with $x=0$ being the point of collision. The symmetry of an equal phase pair of OS ($\Delta \theta=0$) and the anti-symmetry of an anti-phase pair of OS ($\Delta \theta=\pi$) are constants of motion. } \label{fig:relative_phase} \end{figure} \section{Boosted OS Collisions} \label{sect:boostedOS} According to the hoop conjecture \cite{1972mwm..book..231T}, a quantity of matter/energy $E$ compressed into a spherical region such that a hoop of proper circumference $2\pi R$ completely encloses the matter in all directions, will form a black hole if the corresponding Schwarzschild radius $R_s = 2 G E$ is greater than $R$. The collision of two solitons with individual rest mass $M_*$ boosted to Lorentz factor $\gamma=(1-v^2)^{-1/2}$ will result in a system with an effective mass of $E=2\gamma M_*$ in the center of mass frame. Applying the conjecture, if $R_s > R_0$, where $R_0$ is the rest frame radius of the soliton, then a black hole will form. Using \keq{eqn:compactness}, we obtain the following condition for black hole formation \begin{equation} \label{eqn:hoop} \gamma \geq \frac{1}{4\mathcal{C}}~. \end{equation} Such relativistic collisions of scalar solitons have been studied numerically before in the context of ``boson stars\footnote{Boson stars are configurations of a complex scalar field with a $U(1)$-symmetric potential. In contrast with the real scalar field OS, which are stabilized by field oscillations, boson stars are stabilized by their conserved charge. For a review please see \cite{Liebling:2012fv}. }'' of $\mathcal{C}=0.025$ \cite{Choptuik:2009ww} and fluid packets of $\mathcal{C}=0.0125$ \cite{East:2012mb}.
In both cases, it was found that black hole formation occurs at the ``reduced'' hoop conjecture condition \begin{equation} \label{eqn:hoop_2} \gamma \geq \gamma_h \equiv \frac{1}{12\mathcal{C}}~, \end{equation} which is roughly $1/3$ of the Lorentz factor predicted by the hoop conjecture. As we will see, this is consistent with our simulations of relativistic OS collisions. We simulated the collisions of two equal mass, and hence equal $\mathcal{C}$, OS in numerical general relativity, using \textsc{GRChombo} \cite{Clough:2015sqa} for both equal phase and anti-phase cases. Their initial separation is set at $d=60m^{-1}$. We vary the initial velocities of the OS from $v=0$ to $v=0.8$ relative to the rest frame, with corresponding Lorentz factors $\gamma=1$ to $\gamma\approx 1.7$ (see \kap{subsect:constructing_initial_data} for the details of the construction of initial data). In all cases except for $v=0$, the initial velocities are sufficiently high that the OS are not initially gravitationally bound. We track the OS positions following \cite{Widdicombe:2018oeo} by locating the value and location of the maximum density $\rho_{\mathrm{max}}$, which we identify as the center of each OS. While the OS start out spherical, during the collision process each OS becomes an ellipsoid due to the gravitational attraction along the axis of collision. The major and minor axes of the ellipsoid are then identified by the distance from the center to the point where the density is $5\%$ of $\rho_{\mathrm{max}}$. Black hole formation is identified with a horizon finder. The results of our simulations are presented in Fig. \ref{fig:money}. \subsection{Equal phase $\Delta \theta=0$ Collisions} \begin{figure*}[ht!]
\centering \includegraphics[width=2.0\columnwidth]{Figures/inphase/INPHASE.png} \caption{{\bf In-phase $\Delta \theta = 0$ collisions :} Three different slices of energy density $\rho$ with $\mathcal{C} = 0.065$ with \href{https://youtu.be/mOPzPxIaDVg}{$v = 0.3~$}, \href{https://youtu.be/ZyYhJlYN3d8}{$~0.5~$},\href{https://youtu.be/66uwXSIY8tI}{$~ 0.7$} from top to bottom. The slices show (i) the infall, (ii) the merger and (iii) the post-merger stages. Black holes form in the \href{https://youtu.be/mOPzPxIaDVg}{$v=0.3$} (top) and \href{https://youtu.be/66uwXSIY8tI}{$v=0.7$} (bottom) cases, with black lines indicating curvature contours at $\chi=0.2$ and $\chi=0.4$. In the \href{https://youtu.be/ZyYhJlYN3d8}{$v=0.5$} (middle) case, the OS ``pass through'' each other and then dissipate. \href{https://www.youtube.com/playlist?list=PLSkfizpQDrcZJRY_vYHmp82OIfLwscNx8}{\color{blue}Link to movies} \cite{movieO3in,movieO5in,movieO7in}.} \label{fig:inphase} \end{figure*} For the equal phase $\Delta \theta =0$ case, at $v=0$ we recover the result of \cite{Helfer:2018vtq}, whereby black hole formation occurred when $\mathcal{C}\geq 0.035$. At sufficiently high $v$, black holes form due to the additional energy imparted by the boost, as expected. We found that they roughly obey the ``reduced'' hoop conjecture argument \keq{eqn:hoop_2} (as opposed to \keq{eqn:hoop}), providing another data point to add to those of \cite{Choptuik:2009ww,East:2012mb, Rezzolla:2012nr}. However, at low $v$, intriguingly, black hole formation occurs only at \emph{higher} compactness. For example, for $\mathcal{C}=0.04$, black holes will form at $v=0$ but will \emph{not} form at $v>0.2$ (until $v$ reaches the reduced hoop conjecture line).
In other words, \emph{initial non-zero velocities hinder the formation of black holes.} The velocity required to prevent black hole formation increases with increasing $\mathcal{C}$, with the curve of transition sloping upwards until it meets the line defined by the ``reduced'' hoop conjecture argument \keq{eqn:hoop_2}, at the ``critical'' point $\mathcal{C}\approx 0.068$ and $v\approx 0.55$. Beyond this point $\mathcal{C}>0.068$, black holes form regardless of velocity. In Fig. \ref{fig:inphase}, we show the black hole formation process of $\mathcal{C}=0.065$ OS collisions for the $v=0.7,~0.5,~0.3$ cases. The existence of this ``stability band'' for non-black hole end states can be explained by the fact that higher collisional velocities imply a shorter collision timescale. Since the boosted OS are not energetic enough to form black holes from the hoop conjecture alone, they must interact during the collision to form a sufficiently deep gravitational potential well to generate infall for a collapse into a black hole -- this defines an interaction/collapse timescale. However, in a sufficiently relativistic collision, the collision timescale may be shorter than the interaction/collapse timescale, resulting in the two OS ``passing through'' (or bouncing off) each other, albeit with large perturbations to their initial configurations and at a slower velocity due to the inelastic nature of the collisions. This collision timescale \emph{vs} interaction timescale behaviour has been seen in non-linear dynamics without gravity in the studies of relativistic collisions of non-linear solitons \cite{Giblin:2010bd,Amin:2013eqa,Amin:2013dqa}, where the relative coherence of the solitons post-collision can be explained by the fact that the collision timescale is much shorter than the interaction timescale. We will discuss this in greater detail in Section \ref{sec:discussion}.
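For reference, the velocity thresholds implied by the hoop and reduced hoop conditions, $\gamma \geq 1/(4\mathcal{C})$ and $\gamma \geq 1/(12\mathcal{C})$, can be evaluated directly (a minimal sketch of ours; the factor $k$ parameterizes the two criteria):

```python
import math

def v_threshold(C, k):
    """Velocity above which gamma = (1 - v^2)^(-1/2) exceeds 1/(k*C).
    Returns 0.0 when k*C >= 1, i.e. the criterion is met already at rest."""
    if k * C >= 1.0:
        return 0.0
    # gamma >= 1/(k*C)  <=>  v >= sqrt(1 - (k*C)**2)
    return math.sqrt(1.0 - (k * C) ** 2)

def v_hoop(C):          # hoop conjecture: gamma >= 1/(4C)
    return v_threshold(C, 4.0)

def v_reduced_hoop(C):  # reduced hoop conjecture: gamma >= 1/(12C)
    return v_threshold(C, 12.0)
```

For $\mathcal{C}=0.065$ this gives $v_{\rm reduced}\approx 0.63$: the $v=0.7$ run lies above the reduced hoop line while $v=0.5$ lies below it (the $v=0.3$ collapse belongs to the low-velocity branch discussed above).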
We find that the \emph{initial} formation of black holes is more efficient for the $v=0.3$ case when compared to the $v=0.7$ case -- the black hole mass grows more rapidly for the $v=0.3$ case during the collision. This could be due to the fact that the collision is ``messier'' when it is more energetic, and hence it takes longer for the excited debris to fall back into the nascent black hole. Unfortunately, our initial conditions are not sufficiently precise to enable long-term tracking of the apparent horizon, leading to instabilities first seen in \cite{Okawa:2014nda}. \subsection{Anti-phase $\Delta \theta=\pi$ Collisions} \begin{figure*}[ht!] \centering \includegraphics[width=2.0\columnwidth]{Figures/antiphase/OFFPHASE.png} \caption{{\bf Anti-phase $\Delta \theta = \pi$ collisions :} Three different slices of energy density $\rho$ with $\mathcal{C} = 0.065$ with \href{https://youtu.be/NyaB3zjtaQ4}{$v = 0.3~$},\href{https://youtu.be/uACT89NESHw}{$~0.5~$},\href{https://youtu.be/bdYYbXSgUcY}{$~ 0.7$} from top to bottom. The slices show (i) the infall, (ii) the merger and (iii) the post-merger stages. Black holes form in the \href{https://youtu.be/NyaB3zjtaQ4}{$v=0.3$} (top) and \href{https://youtu.be/bdYYbXSgUcY}{$v=0.7$} (bottom) cases, with black lines indicating curvature contours at $\chi=0.2$ and $\chi=0.4$. In the \href{https://youtu.be/uACT89NESHw}{$v=0.5$} (middle) case, the OS ``bounce back'' post-collision (with black arrows indicating the direction of travel). Notice that in both cases where black holes form, the OS collapse into black holes before merging.
\href{https://www.youtube.com/watch?v=mOPzPxIaDVg&list=PLSkfizpQDrcZJRY_vYHmp82OIfLwscNx8}{\color{blue}Link to movies} \cite{movieO3off,movieO5off,movieO7off}.} \label{fig:offphase} \end{figure*} At high $v$, black hole formation again occurs beyond the reduced hoop conjecture line \keq{eqn:hoop_2} -- reinforcing the point that in this regime ``matter does not matter'' and it is the gravitational dynamics that dominate \cite{Choptuik:2009ww}. Similar to the equal phase case above, at low $v$ black hole formation is impeded, although the transition line does not coincide with the equal phase one, being shifted slightly to the right (towards higher compactness). This line meets the reduced hoop conjecture line at the ``critical point'' $\mathcal{C}=0.071$ and $v=0.5$, indicating that there is an additional ``repulsion'' between the two OS when compared to the equal phase case. This repulsion is particularly notable in the $v=0$ case, where the transition from no black hole formation to black hole formation occurs at $\mathcal{C}\approx 0.05$ (compared to $\mathcal{C}\approx 0.035$ for equal phase collisions). This repulsion can be explained as follows. Crucially, for anti-phase collisions, the anti-symmetry of the $\phi$ configuration is a constant of motion, and hence $\phi(x_*,t)=0$ at all times, where $x_*$ is the plane of anti-symmetry. This is in contrast with the equal phase pair, where $\phi(x_*,t)$ is free to evolve as the two OS approach each other -- the symmetry of that case imposes the condition $\partial_x \phi(x_*,t)=0$ instead. In particular, in \cite{2016PDU....12...50P,PhysRevD.94.043513, Amin:2019ums}, it was shown that in the weak gravity and non-relativistic limit, OS will ``bounce back'' instead of merging for $\Delta \phi\geq 7\pi/8$ \cite{PhysRevD.94.043513}.
In this limit, \cite{PhysRevD.94.043513} argues that since the oscillaton equation of motion is linear, in equal phase (anti-phase) collisions, the OS tend to constructively (destructively) interfere, at least at the collision plane $x_*$. In strong gravity, gravitational back-reaction is non-linear, muddling this picture somewhat. Nevertheless, the anti-symmetry of the field configuration is still conserved, so $\phi(x_*,t)$ and its time derivative $\dot{\phi}(x_*,t)$ both remain at zero for all $t$. This means that the time averaged (over a period of oscillation) kinetic energy density of the field configuration $\langle E_K\rangle\sim (1/2)\dot{\phi}^2$ must vanish as $x\rightarrow x_*$. As the OS approach each other, energy conservation forces the time averaged gradient energy $\langle E_G\rangle \sim (1/2)(\nabla \phi)^2$ to absorb this energy, resulting in a rapid increase in the gradient energy and thus a spiking of the scalar field spatial configuration\footnote{While it is natural to describe this repulsion as a force, its behaviour is not described by a $1/r$ potential nor is it conservative. The anti-symmetric origin of the repulsion is reminiscent of the degeneracy pressure of the anti-symmetric wavefunctions of fermions.}. Note that the metric and stress tensor remain symmetric in the diagonal components and anti-symmetric in the off-diagonal components throughout for both equal phase and anti-phase cases, which means that gravitational energy can still dominate near $x_*$. \begin{figure} \includegraphics[width=1\columnwidth]{Code_Repo/rho_pos.png} \caption{The central location of an OS/BH vs time for an anti-phase OS collision with $\mathcal{C}=0.068$ and $v=0.4$. The repulsion between the anti-phase OS rapidly slows them from the initial velocity to a full stop; they rebound slightly at $t\sim 80 m^{-1}$ before collapsing into a BH.
The location of the center of the OS is taken to be the point of maximum density.} \label{fig:rho_pos} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Figures/money.png} \caption{ The time evolution of the profile of the energy density $\rho$ measured along the axis of collision for both equal phase (dotted line) and anti-phase (continuous line) collisions of OS with $\mathcal{C}= 0.053$. The time evolution is indicated by colour, chronologically increasing from deep red to blue. Note that anti-phase collisions experience a repulsion due to the anti-symmetry of the field configuration, and the centers (i.e.\ maximum density points) of the OS remain distinct. As a result, the OS experience a compression which may lead either to the individual collapse of the OS into black holes before the final merger, or to the OS ``bouncing back''.} \label{fig:compression} \end{figure} To check this phase dependence, we ran a series of collisions with $\mathcal{C}= 0.028$ with zero boost for both OS, and an initial separation of $d=40m^{-1}$. For this compactness, we have previously shown in \cite{Helfer:2018vtq} that mergers lead to a highly excited OS in the limit of $\Delta \phi=0$, and hence we do not expect any black hole formation. Since these are initially bound states, we expect that, due to energy loss to scalar and gravitational wave radiation, the final state of such collisions will be a merged oscillaton. The key question is whether this merger occurs in the first collision, as in the equal phase case, or whether the off-phase repulsion generates pre-merger ``bounces''. We scan through $\Delta \phi = [0,\pi/8, \pi/4, 3\pi/8, \pi/2,5\pi/8, 3\pi/4, 7\pi/8, 15\pi/16, \pi]$, and find that only for $\Delta \phi \geq 7\pi/8$ do the OS bounce once before merging -- in agreement with \cite{PhysRevD.94.043513} that this repulsion is dominant only when the phase difference is near maximal.
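The outcome of this phase scan can be summarized in a few lines (the bounce threshold $\Delta\phi \geq 7\pi/8$ is the empirical finding quoted above, encoded by hand rather than derived):

```python
import math

# Phases scanned in the zero-boost, C = 0.028 runs.
scanned = [0, math.pi/8, math.pi/4, 3*math.pi/8, math.pi/2,
           5*math.pi/8, 3*math.pi/4, 7*math.pi/8, 15*math.pi/16, math.pi]

def outcome(dphi):
    """Empirical classification: near-maximal phase differences bounce
    once before merging; all others merge directly."""
    return "bounce then merge" if dphi >= 7 * math.pi / 8 else "direct merger"

bouncing = [p for p in scanned if outcome(p) == "bounce then merge"]
```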
The close agreement with the weak gravity results suggests that this repulsion effect is dominated by scalar dynamics. \kfig{fig:compression} illustrates the comparison of the energy densities of equal phase and anti-phase collisions. At large distances, the two cases evolve similarly as they do not yet interact strongly. Their evolutions begin to deviate around $d\sim 15m^{-1}$, as the OS begin to overlap and interact with each other. In the equal phase case, the OS merge and form a large central density spike at $d=0$. On the other hand, in the anti-phase case, the OS repel each other -- note the drop in energy density at $d=0$ -- ``compressing'' to smaller sizes but higher energy densities before bouncing back. This repulsion and subsequent compression leads to a dramatically different black hole formation process when compared to the equal phase case. Instead of a BH forming from the collapse of scalar matter after a merger, the repulsion stops the motion of the OS and prevents a direct merger from occurring. The accompanying compression of both OS leads to a subsequent \emph{individual} collapse of the OS into separate black holes. These distinct black holes, shorn of the repulsive scalar field, then gravitate towards each other and finally merge into a single black hole. This general mechanism is seen in both the high velocity (i.e. above the reduced hoop conjecture line) and low velocity BH formation processes (see Figs. \ref{fig:rho_pos} and \ref{fig:compression}). In between these two velocity limits, again as in the equal phase case, the collision does not yield a final black hole. Instead, it results in the two OS bouncing back and then dispersing. While the OS experience compression during the bounce, the compression is not sufficient to push the OS into an unstable regime that leads to collapse -- instead it leads to the dispersal of the OS into scalar waves.
While oscillatons have been shown to be stable under large spherically symmetric (and shell-like) perturbations \cite{Alcubierre:2003sx}, the perturbations that the OS experience here post-bounce are both highly asymmetric and non-shell-like. Thus our results strongly suggest that there exist unstable \emph{non-radial} perturbation modes of OS even at low compactness, although a more detailed study is needed to confirm this conjecture. \section{Discussion} \label{sec:discussion} The most striking result of our simulations is the existence of a ``stability band'' of velocities whereby collisions of OS do not form black holes. We can gain a qualitative understanding as follows. The free-fall timescale is given by $\tau_{\mathrm{ff}} \sim 1/\sqrt{G\rho}$, and using $\rho \sim M/R^3$ combined with \keq{eqn:compactness} gives \begin{equation} \tau_{\mathrm{ff}} \sim \frac{GM}{\mathcal{C}^{3/2}}~. \label{eqn:freefall} \end{equation} Meanwhile the interaction timescale can be estimated by the time the two OS overlap, since the scalar field configuration of an OS drops off exponentially beyond its characteristic size $R$. If we assume that OS ``pass through'' (or bounce back after contact), then roughly the interaction timescale is \begin{equation} \tau_{\mathrm{int}} \sim \frac{2R}{\gamma v} = \frac{2GM}{\gamma v\mathcal{C}}~. \label{eqn:interaction} \end{equation} This is a conservative (i.e.\ \emph{lower}) bound on $\tau_{\mathrm{int}}$, since interactions do slow down the collision -- as we saw, especially in the anti-phase case, the repulsion slows the collision down significantly, saturating only in the high $v$ limit. To prevent black hole formation, as we argued in Section \ref{sect:boostedOS}, the interaction timescale has to be shorter than the free-fall timescale, $\tau_{\mathrm{int}} < \tau_{\mathrm{ff}}$. At low $v$, $\gamma \sim 1$, we obtain the following bound \begin{equation} v > 2\mathcal{C}^{1/2}~.
\label{eqn:vlower} \end{equation} Since $\tau_{\mathrm{int}}$ is an underestimate, we expect \keq{eqn:vlower} to be a lower bound on $v$. Combining this with the reduced hoop conjecture limit at high $\gamma$ \keq{eqn:hoop_2}, we obtain the following bound for when BHs will not form \begin{equation} 2\mathcal{C}^{1/2}< v < \sqrt{1-144\mathcal{C}^2}~. \label{eqn:bound} \end{equation} The two lines intersect at $\mathcal{C}\sim 0.07$ and $v\sim 0.5$, which is what we found numerically (see Fig. \ref{fig:money}). On the other hand, the lower bound does not track the numerical results accurately -- this is not surprising since such timescale arguments do not capture the full range of physics involved. An interesting question is whether this point is a ``critical point'', in the sense that the two different regimes $v<2{\cal C}^{1/2}$ and $v>\sqrt{1-144\mathcal{C}^2}$ constitute different phases and this point is where they meet as they transition into the final black hole phase. Since the two regimes exhibit different post-collision behavior, it is interesting to ask whether their respective end states are the same or different. In other words, is there a transition in the end states between the high $v$ BH formation and low $v$ BH formation in the black hole phase when $\mathcal{C}\gtrsim 0.07$? The natural end states for these collisions are spherical, non-rotating black holes; hence the no-hair theorem implies that they are fully characterized by their final BH masses. Obtaining these values requires running the simulations over sufficiently long timescales to reach these final states, in addition to removing the unwanted reflection of scalar and tensor waves from the boundary of the simulation domain. We are currently exploring absorptive boundary conditions to overcome this problem. We will leave this, and the computation of the gravitational wave signal from such collisions, to a future publication.
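The band of Eq.~\eqref{eqn:bound} and the point where it closes can be evaluated directly (a sketch of the estimates above; the intersection solves $144\mathcal{C}^2+4\mathcal{C}-1=0$):

```python
import math

def v_lower(C):
    """Timescale bound: BHs form at low velocity unless v > 2*sqrt(C)."""
    return 2.0 * math.sqrt(C)

def v_upper(C):
    """Reduced hoop conjecture line: BHs form for v > sqrt(1 - 144*C^2)."""
    return math.sqrt(max(0.0, 1.0 - 144.0 * C * C))

def critical_point():
    """Intersection of the two bounds, i.e. root of 144*C^2 + 4*C - 1 = 0."""
    C = (-4.0 + math.sqrt(16.0 + 4.0 * 144.0)) / (2.0 * 144.0)
    return C, v_lower(C)
```

This returns $(\mathcal{C},v)\approx(0.071,0.53)$, in line with the numerically found critical points, and the band $v_{\rm lower}<v<v_{\rm upper}$ is non-empty only for $\mathcal{C}$ below this value.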
\acknowledgments We would like to thank Mustafa Amin, Marcos Garcia, Helvi Witek and Lam Hui for very useful discussions and Ricardo Becerril for the use of his initial condition code for oscillatons. We would also like to thank the members of the \textsc{GRChombo} Collaboration (http://www.grchombo.org/) and the COSMOS team at DAMTP, Cambridge University for their ongoing technical support. EL is supported by STFC AGP grant ST/P000606/1, and JW is supported by an STFC PhD studentship. TH is supported by NSF Grant No. PHY-1912550, NSF Grant No. AST-1841358, NSF-XSEDE Grant No. PHY-090003, and NASA ATP Grant No. 17-ATP17-0225. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 690904. The authors would like to acknowledge networking support by the GWverse COST Action CA16104, ``Black holes, gravitational waves and fundamental physics.'' Numerical simulations were performed on the COSMOS supercomputer, the Cambridge CSD3 Peta4 and the Leicester DiAL (Data Intensive Cluster), all funded by DIRAC/BIS, on BSC Marenostrum IV via PRACE grant Tier-0 PPF-PWG, and on the Leibniz Supercomputing Center SuperMUC-NG under PRACE grant Tier-0 Proposal 2018194669. The simulation results were analyzed using the visualization toolkit YT \kcite{2011ApJS..192....9T} and Numpy \kcite{scipy}. Matplotlib \kcite{Hunter:2007} was used to generate the plots seen throughout the paper. \bibliographystyle{h-physrev3.bst}
\section*{Introduction} An $n$-dimensional L\'evy process is a stochastic process $X=(X_t)_{t\geq 0}$ with values in ${{\mathds R}^n}$, with independent and stationary increments, and with c\`adl\`ag (right-continuous, finite left limits) sample paths. It is well known that the transition probability $p_t$ of a L\'evy process can be characterized by the inverse Fourier transform: \begin{equation*} \mathcal{F}^{-1}p_t(\xi) = \mathds E\, e^{i X_t\cdot \xi} =e^{-t\psi(\xi)}, \quad t>0, \quad \xi \in{{\mathds R}^n}. \end{equation*} The function $\psi:{{\mathds R}^n}\to\mathds C$ is called the \emph{characteristic exponent} and it is determined by its \emph{L\'evy-Khintchine representation} \begin{equation}\label{psi} \psi(\xi) =i \ell\cdot\xi + \frac{1}{2} \,\xi\cdot Q\xi + \int_{{{\mathds R}^n}\setminus \{0\}} \left( 1-e^{iy\cdot\xi}+\frac{iy\cdot \xi}{1+|y|^2}\right)\nu(dy); \end{equation} here $\ell=(\ell^1,\ldots,\ell^n)\in{{\mathds R}^n}$, $Q = (q^{jk})\in\mathds R^{n\times n}$ is a positive semi-definite matrix and $\nu$ is the L\'evy measure, i.e.\ a measure on ${{\mathds R}^n}\setminus\{0\}$ such that $\int_{{{\mathds R}^n}\setminus \{0\}} (1\wedge |y|^2)\,\nu(dy)<\infty$. If $\ell=0$ and $Q=0$, we will call the corresponding L\'evy process a \emph{pure jump L\'evy process}. Many papers are devoted to distributional properties of L\'evy processes and to the existence of (necessary and) sufficient conditions under which the transition probability $p_t(dx)$ of a L\'evy process is absolutely continuous with respect to Lebesgue measure. The classic paper \cite{HW42} by Hartman and Wintner gives sufficient conditions in terms of the characteristic exponent $\psi$ under which there exists a transition density $p_t(x)$ of $X_t$; these conditions guarantee that $p_t \in C_\infty({{\mathds R}^n})$, where $C_\infty({{\mathds R}^n})$ denotes the set of all continuous functions which vanish at infinity.
More precisely, if \begin{equation}\label{hw} \lim_{|\xi|\to\infty} \frac{\Re \psi(\xi) }{\ln (1+|\xi|)} =\infty, \tag{$\textup{HW}_\infty$} \end{equation} then $p_t(dx) = p_t(x)\,dx$ for all $t>0$, and $p_t\in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$. Also, if \begin{equation}\label{hw1} \liminf_{|\xi|\to\infty} \frac{\Re \psi(\xi) }{\ln (1+|\xi|)}>\frac{n}{t}, \tag{$\textup{HW}_{1/t}$} \end{equation} then $p_s(dx)=p_s(x)\,dx$ for all $s\geq t$ and $p_s\in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$. Note that the important issue is the speed at which the function $\psi$ tends to infinity. Since $\mathcal{F}^{-1} p_t = e^{-t\psi}$, the Riemann-Lebesgue lemma entails that \begin{equation}\label{rie-leb} p_t(dx) = p_t(x)\,dx\text{\ \ for some $t>0$} \implies \lim_{|\xi|\to\infty} \Re\psi(\xi) = \infty, \end{equation} but this necessary condition does not tell anything about the rate of growth of $\psi$. Hartman and Wintner remark that \emph{the difficulties of the gap between these two conditions---\textup{i.e.\ \eqref{hw} and \eqref{rie-leb}}---[...] are rather obscure} \cite[p.\ 287]{HW42}. Tucker \cite{Tu65} provides (complete but rather technical) necessary and sufficient criteria for the existence of a transition density. He mentions that Fourier analytic techniques \emph{are too crude for such a problem} (\cite[p.\ 317]{Tu65}). These two quotations are also the main motivation of our paper: to explain for which densities \eqref{hw} is indeed necessary and sufficient and how far we can get with Fourier analytic techniques. Let us briefly review some of the other known criteria. Hawkes \cite{H79} shows that a L\'evy process has the strong Feller property---i.e.\ $x\mapsto \mathds E_x f(X_t)$ is continuous for all bounded measurable functions $f$---if, and only if, the transition probabilities are absolutely continuous; in this case, the densities $p_t(x)$ are lower semicontinuous. 
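The growth condition \eqref{hw} is easy to probe numerically. As an illustration (ours, not from \cite{HW42}): the rotationally invariant $\alpha$-stable exponent $\Re\psi(\xi)=|\xi|^\alpha$ satisfies \eqref{hw}, while an exponent of logarithmic growth such as $\Re\psi(\xi)=\ln(1+\xi^2)$ (see Example~\ref{exa1} below) has the finite limit ratio $2$ and hence fails it:

```python
import math

def hw_ratio(re_psi, xi):
    """The ratio Re(psi(xi)) / ln(1 + |xi|) from the Hartman-Wintner
    condition; (HW_inf) requires it to diverge as |xi| -> infinity."""
    return re_psi(xi) / math.log(1.0 + abs(xi))

stable = lambda xi, alpha=0.5: abs(xi) ** alpha   # alpha-stable exponent
log_exp = lambda xi: math.log(1.0 + xi * xi)      # symmetrized Gamma exponent

# hw_ratio(stable, .) grows without bound; hw_ratio(log_exp, .) -> 2,
# so only (HW_{1/t}) with 2 > n/t can hold for the latter.
```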
Zabczyk \cite{Z70} shows that for an isotropic L\'evy process $X$ in ${{\mathds R}^n}$, $n\geq 2$, (see below for the precise definition) the following dichotomy holds: either $X$ is a compound Poisson process, or it is absolutely continuous for all $t>0$ with lower semi-continuous transition density. Further sufficient conditions in dimension one were found by Kallenberg \cite[Section 5]{K81}: if \begin{equation}\label{kal} \lim_{\varepsilon \to 0} \frac{ \int_{-\varepsilon}^\varepsilon y^2 \nu(dy)}{\varepsilon^2\, |\ln \varepsilon|}=\infty, \tag{$\textup{K}_\infty$} \end{equation} then $p_t(dx) = p_t(x)\,dx$ for all $t>0$ and $p_t\in C_b^\infty (\mathds R)\cap C_\infty(\mathds R)$; and if \begin{equation}\label{kal1} \liminf_{\varepsilon \to 0} \frac{ \int_{-\varepsilon}^\varepsilon y^2 \nu(dy)}{\varepsilon^2\, |\ln \varepsilon|}> \frac 1t, \tag{$\textup{K}_{1/t}$} \end{equation} then $p_s(dx)=p_s(x)\,dx$ for all $s\geq t>0$ and $p_s\in C_b^\infty (\mathds R)\cap C_\infty(\mathds R)$. For an $n$-dimensional analogue of \eqref{kal} and \eqref{kal1} we refer to \cite{BK08}. See also Orey \cite{O68} for yet another sufficient condition, as well as the monograph of Sato \cite{S99} for more references and results. \medskip The main result of this note is to show for which class of L\'evy processes the Hartman-Wintner condition \eqref{hw} is a necessary and sufficient condition for the existence of a (smooth) transition density $p_t(x)$. For isotropic processes we can express \eqref{hw} in terms of the L\'evy measure $\nu$. Finally we show that we can, under some mild conditions, express the behaviour of the transition density at zero $p_t(0)$ in terms of the measure of a ball with radius $t^{-1/2}$ in the metric given by the characteristic exponent $\psi$. 
As an application we show that our result gives an easy way to estimate the transition probability density at zero for anisotropic stable processes and tempered stable processes, see \cite{St10}, \cite{R07}, \cite{St08}. In some sense, Theorem \ref{tmain} sharpens \cite[Lemma 3.1]{St10}, where an upper bound is obtained for the gradient of the transition density $p_t$ of an $\alpha$-stable L\'evy process whose L\'evy measure $\nu$ is a $\gamma$-measure. Although Theorem~\ref{tmain} does not provide a gradient estimate in terms of the particular structure of the L\'evy measure, it shows that the gradients of all orders are in $L_1$. \medskip\noindent \textbf{Notation:} We denote by $\mathcal{F} u(\xi) = (2\pi)^{-n}\int_{{{\mathds R}^n}} u(x)\,e^{-ix\xi}\,dx$ the Fourier transform and by $\mathcal{F}^{-1} w(x) = \int_{{{\mathds R}^n}} w(\xi)\,e^{i\xi x}\,d\xi$ the inverse Fourier transform or characteristic function. By $J_\nu$ and $K_\nu$ we denote the (modified) Bessel functions of the first and third kind, cf.\ \cite{gra-ryz}. We write $C_\infty({{\mathds R}^n})$ for the continuous functions vanishing at infinity. Throughout this paper we will use the same letter $p_t$ to denote the transition probability $p_t(dx)$ and its density $p_t(x)$ w.r.t.\ Lebesgue measure. For functions $f(x)$ and $g(x)$ we write $f\asymp g$ if there are constants $c,C>0$ such that $cf(x)\leq g(x)\leq Cf(x)$, and we write $f\sim g$ (as $x\to a$) if $\lim_{x\to a} f(x)/g(x) = 1$. All other notation should be standard or self-explanatory. \section{Main Results} An $n$-dimensional (L\'evy) process $X$ is called \emph{isotropic}, if for any isometry $L: {{\mathds R}^n}\to{{\mathds R}^n}$, $L(0)=0$, and all Borel sets $B\in\mathcal B({{\mathds R}^n})$ \begin{equation*} \mathds P_0(X_t\in B)=\mathds P_0(X_t\in LB), \quad t\geq 0. \end{equation*} In this case the L\'evy exponent is of the form $\psi(\xi)=g(|\xi|^2)$ for some continuous $g:[0,\infty)\to[0,\infty)$.
For $n=1$ the notions of isotropy and symmetry coincide. For an isotropic process we define $G(r):=-\omega_{n-1} \nu(B(0,r)^c)$, $n\geq 1$, where $\omega_{n-1}=2\pi^{n/2}/\Gamma\big(\frac n2\big)$ is the surface volume of the unit sphere $S^{n-1}\subset{{\mathds R}^n}$ and $\Gamma$ is Euler's Gamma function. For isotropic processes in ${{\mathds R}^n}$, $n\geq 1$, we will need the $n$-dimensional analogue of \eqref{kal}: \begin{equation}\label{kaln} \lim_{\varepsilon\to 0} \frac{\int_0^\varepsilon r^2 dG(r)}{\varepsilon^2\, |\ln \varepsilon|}=\infty. \tag{$\textup{K}'_\infty$} \end{equation} Note that \eqref{kaln} fails if, and only if, $\liminf_{\varepsilon\to 0} \int_0^\varepsilon r^2 dG(r)\big/\big(\varepsilon^2\, |\ln \varepsilon|\big) <\infty$. We can now state our main results. \begin{theorem}\label{tmain} Let $X$ be an $n$-dimensional L\'evy process, $n\geq 1$. The following conditions are equivalent: \begin{enumerate} \item[(a)] \eqref{hw}; \item[(b)] for all $t>0$ the transition density exists, $p_t\in C^\infty({{\mathds R}^n})$ and $\nabla^\alpha p_t\in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$ for all $\alpha\in\mathds N_0^n$; \item[(c)] for all $t>0$ the transition density exists and $p_t, \nabla p_t\in L_1({{\mathds R}^n})$. \end{enumerate} If $X$ is isotropic such that $\psi(\xi) = g(|\xi|^2)$ with an increasing function $g$, then the above conditions are equivalent to \begin{enumerate} \item[(d)] for all $t>0$ the transition density exists and $p_t\in C_\infty({{\mathds R}^n})$; \item[(e)] for all $t>0$ the transition density exists and $p_t\in L_\infty({{\mathds R}^n})$; \item[(f)] $e^{-t\psi}\in L_1({{\mathds R}^n})$ for all $t>0$. \end{enumerate} \end{theorem} \begin{theorem}\label{tmain2} Let $X$ be an isotropic L\'evy process in ${{\mathds R}^n}$, $n\geq 2$, with characteristic exponent $\psi(\xi)=g(|\xi|^2)$.
If \eqref{kaln} fails, then \eqref{hw} is equivalent to \begin{equation}\label{tmain2-eqn} \lim_{\varepsilon\to 0} \frac{\nu(B(0,\varepsilon)^c)}{|\ln \varepsilon|}=\infty. \end{equation} \end{theorem} Before we proceed with the proofs of Theorems~\ref{tmain} and \ref{tmain2} we add a few remarks and give some examples. \begin{exa}\label{exa1} Since the existence and smoothness of a transition density are time-dependent properties, cf.\ \cite{S99}, the specification `\emph{for all $t>0$}' is essential in Theorem \ref{tmain}. A simple counterexample in dimension $n=1$ is the Gamma process, that is the L\'evy process with transition density $$ p_t(x) = \frac{x^{t-1}}{\Gamma(t)}\,e^{-x}, \quad t>0,\; x>0. $$ It is not hard to see that its characteristic exponent is $$ \psi(\xi) = \ln (1-i\xi) = \frac 12\ln (1+\xi^2) - i\arctan(\xi). $$ The two-sided (i.e.\ symmetrized) Gamma process, whose transition density is $q_t := p_t*\tilde p_t$, $\tilde p_t(x) = p_t(-x)$, has \begin{align*} \psi(\xi) = \ln(1+\xi^2) &= \int_{\mathds R\setminus\{0\}} \big(1-\cos(x\xi)\big) \left(\int_0^\infty \frac{1}{\sqrt{4\pi s}}\,e^{-s}\,e^{-|x|^2/(4s)}\,\frac{ds}s\right)\,dx\\ &= \sqrt{\frac{2}{\pi}}\int_{\mathds R\setminus\{0\}} \big(1-\cos(x\xi)\big) \frac{K_{1/2}(|x|)}{\sqrt{|x|}}\,dx \end{align*} as characteristic exponent; this follows from a combination of \cite[9.23.4, 10.3]{ber-for} and \cite[8.437, p.\ 959]{gra-ryz}. Note that for $t>1$ the density $p_t$ is of class $C_\infty$; for $t=1$ it is bounded and Borel measurable; and for $t\in (1/q,1)$ it has a pole at $x=0$ but is still contained in $L_p$, where $p$ and $q$ are conjugate: $p^{-1} + q^{-1} = 1$. A similar picture holds for the density of the symmetrized process $q_t(x)$ which is given by $\Gamma(t)^{-1} \pi^{-1/2} (|x|/2)^{t-1/2} K_{t-1/2}(|x|)$ for $t>1/2$, cf.\ \cite[17.344, p.\ 1151]{gra-ryz}.
In $n$ dimensions we get that for $t>n/2$ the density $q_t(x)$ is given by $$ q_t(x) = \frac{2^{1-n}}{\pi^{n/2}\,\Gamma(t)} \left(\frac{|x|}2\right)^{t-n/2} K_{t-n/2}(|x|), \quad n\in\mathds N, \; x\in{{\mathds R}^n},\; t>\frac n2, $$ cf.\ \cite[Theorem 6.13, p.\ 76]{wen} (but mind the different norming of the Fourier transform). \end{exa} \begin{exa}\label{exa2} Let $n=1$. \eqref{kal} implies \eqref{hw}, but \eqref{hw} does not imply \eqref{kal}. Assume that $\ell=0, Q=0$ and $\nu(dy)= \frac 1{|y|}\, \ln\frac 1{|y|}\, \mathds 1_{B(0,1)}(y)dy$ in \eqref{psi}. After some straightforward calculations we obtain that the related characteristic exponent behaves like $\psi(\xi)\sim \ln^2 |\xi|$ as $|\xi|\to\infty$ which implies \eqref{hw}. But \begin{equation*} \lim_{\varepsilon\to 0} \frac{ 2\int_0^\varepsilon y\ln \frac{1}{y}\, dy}{\varepsilon^2\, |\ln \varepsilon|} =\lim_{\varepsilon\to 0} \frac{-2 \varepsilon \ln\varepsilon}{-2\varepsilon \ln \varepsilon -\varepsilon} =1, \end{equation*} i.e.\ we have only \eqref{kal1} for some $t>0$, but not \eqref{kal}. \end{exa} \begin{exa}\label{exa3} The condition $n\geq 2$ in \cite{Z70} is essential, since an isotropic L\'evy process in $\mathds R$ is just a L\'evy process with symmetric L\'evy measure, and such a process can even be continuous singular. Consider, for example, for $a\geq 2$, $a_j:=a^j$, the L\'evy measure \begin{equation}\label{lm} \nu(dx)=\sum_{j=-\infty}^\infty \frac{b_j}2\, \big(\delta_{a_j}(dx)+\delta_{-a_j}(dx)\big), \end{equation} where $b_j\geq 0$, and $\sup_j b_j <\infty$. The corresponding pure jump L\'evy process is continuous singular for any $t>0$, see \cite{W98} or \cite[Theorem 27.19]{S99}. \end{exa} \begin{exa}\label{exa4} Let $(a_j)_{j\geq 0}$ be a sequence such that $\lim_{j\to\infty}a_j = 0$ and let $(b_j)_{j\geq 0}$ be a decreasing sequence such that $b_j\geq 0$, $\lim_{j\to\infty}b_j=0$ and $\sum_{j\geq 1} b_j=\infty$.
If $\nu$ is of the form \eqref{lm}, the corresponding pure jump characteristic exponent is given by \begin{equation*} \psi(\xi)=\sum_{j=1}^\infty \big[1-\cos (a_j \xi)\big]\,b_j. \end{equation*} For $a_j :=2^{-j}$ we get \begin{align*} \psi(2^m \cdot 2\pi) &=\sum_{j=1}^\infty \big[1-\cos (2^{m-j} \cdot 2\pi)\big]\,b_j\\ &=\sum_{k=1}^\infty \big[1-\cos(2^{-k}\cdot 2\pi)\big]\,b_{k+m}\\ &\leq \frac{(2\pi)^2}{2} \sum_{k=1}^\infty b_{k+m} 2^{-2k}\\ &\leq \frac{2\pi^2 b_m}{3}\xrightarrow{m\to\infty} 0. \end{align*} Since $\psi$ is symmetric and $\psi(2^m\cdot 2\pi)\to 0$, \begin{equation*} \liminf_{|\xi|\to\infty} \psi(\xi)=0. \end{equation*} On the other hand, $\nu(\mathds R)=\sum_{j\geq 1} b_j =\infty$, hence \begin{equation*} \limsup_{|\xi|\to\infty} \psi(\xi)=\infty. \end{equation*} The corresponding transition probability $p_t(dx)$ is not absolutely continuous, otherwise we would have by \cite{HW42} $\lim_{|\xi|\to \infty}\psi(\xi)=\infty$. Hence by \cite{HW42}, see also \cite[Theorem 27.16]{S99}, $p_t(dx)$ is continuous singular. \end{exa} We will see in Lemma \ref{t1} and Example \ref{exa5} below that \eqref{hw} is indeed not necessary for the existence of a $C_\infty$-transition density. \begin{lem}\label{t1} Let $X$ be an isotropic L\'evy process in ${{\mathds R}^n}$, $n\geq 1$. Then \begin{equation}\label{hw11} p_{t/2}(x) \text{\ \ exists and\ \ } p_{t/2} \in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n}) \implies \limsup_{|\xi|\to\infty} \frac{ \psi(\xi) }{\ln (1+|\xi|)}>\frac{n}{t}. \end{equation} \end{lem} \begin{proof} Note that the left-hand side of \eqref{hw11} implies $\|e^{-t\psi}\|_{L_1}<\infty$. Indeed, if $p_{t/2} \in C_\infty({{\mathds R}^n})\cap L_1({{\mathds R}^n})$, then $p_{t/2} \in L_\infty ({{\mathds R}^n})\cap L_1({{\mathds R}^n})\subset L_2({{\mathds R}^n})$, hence $\mathcal{F}^{-1} p_{t/2} \in L_2({{\mathds R}^n})$. This shows that $|e^{-t\psi}|=|e^{-\frac t2 \psi}|^2 \in L_1({{\mathds R}^n})$.
Since $X$ is isotropic, $\psi(\xi)=g(|\xi|^2)$ for some continuous function $g:[0,\infty)\to[0,\infty)$, and we have \begin{equation}\label{pr1} \begin{split} \|e^{-t\psi}\|_{L_1} &=\int_0^\infty m\left\{ \xi\in{{\mathds R}^n}:\,\, \psi(\xi)\leq -\frac{\ln s}{t}\right\} ds\\ &=t\int_0^\infty m\left\{\xi\in{{\mathds R}^n}:\,\, g(|\xi|^2)\leq x\right\} e^{-tx}\, dx\\ &\geq tc_n\int_0^\infty (g^{-1}(x))^{n/2}\, e^{-tx}\, dx, \end{split} \end{equation} where $g^{-1}(x):=\inf\{u :\,\,g(u)\geq x\}$ and $m$ denotes Lebesgue measure in ${{\mathds R}^n}$. The left-hand side of \eqref{pr1} is finite and we conclude that $\liminf_{x\to\infty} g^{-1}(x)e^{-2tx/n} = 0$. By the very definition of the generalized inverse we get $g^{-1}(g(x))\leq x \leq g(g^{-1}(x))$. Therefore, \begin{equation}\label{eq11} g^{-1}(x) e^{-2tx/n} \geq g^{-1}(x) e^{-2t g(g^{-1}(x))/n}\geq 0. \end{equation} Since the transition density exists, it is an easy consequence of the Riemann-Lebesgue lemma that $\lim_{|\xi|\to\infty}\psi(\xi)=\infty$, cf.\ \eqref{rie-leb}. Therefore, $g$ and $g^{-1}$ are onto. If we combine this fact with the inequality \eqref{eq11}, we see that $\liminf_{x\to\infty} g^{-1}(x)e^{-2tx/n} = 0$ entails $\liminf_{u\to\infty} u e^{-2tg(u)/n}= 0$. Consequently, \begin{equation*} \liminf_{u\to\infty} \left( 1-\frac{2tg(u)}{n\ln u}\right) \ln u <0, \end{equation*} implying $\limsup_{u\to \infty} g(u^2) /\ln u>n/t$. \end{proof} We will now construct a characteristic exponent $\psi$ with $\liminf_{|\xi|\to\infty} \psi(\xi)/\ln |\xi|=0$ and $\limsup_{|\xi|\to\infty} \psi(\xi)/\ln |\xi|=\infty$. \begin{exa}\label{exa5} Let $\nu$ be a L\'evy measure of the form \eqref{lm} with $a_j = 2^{-j}$ and \begin{equation*} b_j := \begin{cases} \ln j, & j=2k,\\ j^2, & j=2k+1. \end{cases} \end{equation*} In Example~\ref{exa4} we proved the upper bound \begin{equation*} \psi(2^m \cdot 2\pi)\leq \frac{2\pi^2}{3} \, b_m.
\end{equation*} Similarly, since $\sin^2 x \geq c_1 x^4 $ for small $x$, we have for some $c_2>0$ \begin{align*} \psi(2^m \cdot 2\pi) &=\sum_{k=1}^\infty \big(1-\cos(2^{-k}\cdot 2\pi)\big)\,b_{k+m}\\ &=2\sum_{k=1}^\infty \sin^2\left(\frac{\pi}{2^k}\right) b_{k+m}\\ &\geq 2c_1\sum_{k=1}^\infty \left(\frac{\pi}{2^k}\right)^4 b_{k+m}\\ &=c_2\,b_m. \end{align*} Therefore \begin{equation*} \limsup_{|\xi|\to\infty} \frac{\psi(\xi)}{\ln |\xi|} \geq\lim_{m\to\infty} \frac{\psi(2^{2m+1} \cdot 2\pi)}{\ln (2^{2m+1}\cdot 2\pi)} \geq c_3 \lim_{m\to\infty} \frac{b_{2m+1}}{2m+1} =c_3\lim_{m\to\infty} \frac{(2m+1)^2}{2m+1} =\infty, \end{equation*} while \begin{equation*} \liminf_{|\xi|\to\infty} \frac{\psi(\xi)}{\ln |\xi|} \leq\lim_{m\to\infty} \frac{\psi(2^{2m} \cdot 2\pi)}{\ln (2^{2m} \cdot 2\pi)} \leq c_4 \lim_{m\to\infty} \frac{b_{2m}}{2m} =c_4\lim_{m\to\infty} \frac{\ln 2m}{2m} =0. \end{equation*} \end{exa} \section{Proofs} Let us now turn to the proof of Theorem~\ref{tmain}. In a first step, Lemma \ref{l1} below, we show that \eqref{hw} is also a sufficient condition for the existence of a $C^\infty_b \cap C_\infty$ density. \begin{lem}\label{l1} Suppose that \eqref{hw} holds true. Then $p_t\in C^\infty({{\mathds R}^n})$, and $\nabla^\alpha p_t\in L_2({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$ for all $\alpha\in\mathds N_0^n$. \end{lem} \begin{proof} Observe that \eqref{hw} implies for large $|\xi|$ \begin{align*} |\xi|^k \exp\big(-t\psi(\xi)\big) =\exp\left(-\ln |\xi| \left[t\,\frac{\psi(\xi)}{\ln |\xi|} - k\right]\right) \leq \exp\left(-c\ln |\xi|\right) \end{align*} for some constant $c> n$. This shows $$ |\xi|^k e^{-t\psi(\xi)}\in L_2({{\mathds R}^n}) \quad\text{for all\ } k\geq 1, $$ which means that \begin{equation*} \nabla^\alpha p_t \in \bigcap_{k\geq 1} H^k({{\mathds R}^n})\hookrightarrow C_\infty({{\mathds R}^n})\quad\text{for all\ } \alpha\in\mathds N_0^n, \end{equation*} where $H^k({{\mathds R}^n})$ is the $L_2$-Sobolev space of order $k$.
\end{proof} \begin{proof}[Proof of Theorem~\ref{tmain}] (a)$\Rightarrow $(b): Without loss of generality we may assume that $\psi$ is real-valued. We decompose the characteristic exponent into two parts \begin{align*} \psi(\xi) &= \bigg(\frac 12\,\xi\cdot Q\xi +\int_{0<|y|\leq 1} \big(1-\cos(y\cdot\xi)\big)\nu(dy)\bigg) + \int_{|y|\geq 1} \big(1-\cos(y\cdot\xi)\big)\nu(dy)\\ &=: \psi_1(\xi) + \widetilde\psi_1(\xi). \end{align*} By construction, $\psi_1$ and $\widetilde\psi_1$ are characteristic exponents of two independent symmetric L\'evy processes. Denote their transition probabilities by $p_{1,t}$ and $\widetilde p_{1,t}$, respectively. Because of independence, $p_t = p_{1,t}*\widetilde p_{1,t}$. Moreover, $\widetilde\psi_1$ is bounded and $\psi_1$ is infinitely often differentiable. Indeed, for any multiindex $\alpha\in\mathds N_0^n$ \begin{equation}\label{psi-derivative} \nabla^\alpha \psi_1(\xi) = \begin{cases} \displaystyle \sum_{k=1}^n q^{\ell k}\xi_k + \int_{0<|y|\leq 1} \sin(y\cdot\xi)\, y_\ell\,\nu(dy), & \alpha = e_\ell, \\[\medskipamount] \displaystyle q^{\ell m} + \int_{0<|y|\leq 1} \cos(y\cdot\xi)\, y_\ell\, y_m\,\nu(dy), & \alpha = e_\ell + e_m,\\[\medskipamount] \displaystyle \int_{0<|y|\leq 1} \cos^{(|\alpha|)} (y\cdot\xi)\, y^\alpha\,\nu(dy), & |\alpha| > 2, \end{cases} \end{equation} which shows, in particular, that all derivatives of $\psi_1$ are polynomially bounded. From $$ \psi_1(\xi) \leq \psi(\xi) \leq \psi_1(\xi) + \|\widetilde\psi_1\|_\infty $$ we see that \eqref{hw} holds for $\psi$ if, and only if, \eqref{hw} holds for $\psi_1$. Since $\psi_1$ is smooth and has polynomially bounded derivatives, \eqref{hw} implies that $\xi\mapsto \exp(-t\psi_1(\xi))$ and all of its derivatives decay faster than any power of $|\xi|$. Therefore, $\exp(-t\psi_1)\in S({{\mathds R}^n})$ where $S({{\mathds R}^n})$ denotes the Schwartz space of rapidly decreasing functions.
Thus, $$ p_{1,t}(x) = \mathcal{F} e^{-t\psi_1}(x), $$ i.e.\ $p_{1,t}(dx)=p_{1,t}(x)\,dx$ with a density from $S({{\mathds R}^n})$. The identity $p_t = p_{1,t}*\widetilde p_{1,t}$ shows that $p_t(dx)=p_t(x)\,dx$ with a $C^\infty$-density which satisfies $$ \nabla^\alpha p_t(x) = (\nabla^\alpha p_{1,t})*\widetilde p_{1,t}(x) = \int_{{\mathds R}^n} \nabla^\alpha p_{1,t}(x-y)\,\widetilde p_{1,t}(dy). $$ Using Fubini's theorem and the fact that $\widetilde p_{1,t}$ is a probability measure, we get $$ \|\nabla^\alpha p_t\|_{L_1} \leq \iint |\nabla^\alpha p_{1,t}(x-y)|\,dx\,\widetilde p_{1,t}(dy) = \|\nabla^\alpha p_{1,t}\|_{L_1}. $$ That $p_t$ and $\nabla^\alpha p_t$ are in $C_\infty$ follows from Lemma \ref{l1}. \medskip\noindent (b)$\Rightarrow$(c): This is obvious. \medskip\noindent (c)$\Rightarrow$(a): Since $\nabla p_t\in L_1({{\mathds R}^n})$, the Riemann-Lebesgue lemma shows that $$ |\xi| e^{-t\psi(\xi)} = \exp\left(-\ln |\xi| \left[\frac{\psi(\xi)}{\frac 1t\,\ln |\xi|} - 1\right]\right)\in C_\infty({{\mathds R}^n}) $$ for all $t>0$. Letting $t\to 0$ implies \eqref{hw}. \medskip From now on we assume that $X$ is isotropic with $\psi(\xi)=g(|\xi|^2)$ and with an increasing function $g:[0,\infty)\to[0,\infty)$. \medskip\noindent (a)$\Rightarrow$(d)$\Rightarrow$(e): This follows from the above statements. \medskip\noindent (e)$\Rightarrow$(f): By assumption, $p_t\in L_\infty({{\mathds R}^n})\cap L_1({{\mathds R}^n})$ for all $t>0$. In particular $p_t\in L_2({{\mathds R}^n})$ and, by Plancherel's theorem, $\mathcal{F}^{-1} p_t = e^{-t\psi} \in L_2({{\mathds R}^n})$, i.e.\ $e^{-2t\psi}\in L_1({{\mathds R}^n})$. Since this holds for all $t>0$, we see that $e^{-s\psi}\in L_1({{\mathds R}^n})$ for all $s>0$. \medskip\noindent (f)$\Rightarrow$(a): Since $e^{-t\psi}\in L_1$, we get $(2\pi)^n\,p_t(0) = \int_{{{\mathds R}^n}} e^{-t\psi(\xi)}\,d\xi<\infty$.
Introducing polar coordinates and integrating by parts yields \begin{align*} \int_{{{\mathds R}^n}} e^{-t\psi(\xi)}\,d\xi &= \int_{{{\mathds R}^n}} e^{-t g(|\xi|^2)}\,d\xi\\ &= \omega_{n-1} \int_0^\infty e^{-t g(r^2)}\,r^{n-1}\,dr\\ &\geq \omega_{n-1} \int_1^\infty e^{-t g(r^2)}\,r^{n-1}\,dr\\ &= \frac{\omega_{n-1}}n\left( \lim_{s\to\infty} e^{-t g(s^2)}\,s^n - e^{-t g(1)}- \int_1^\infty r^n\,d_re^{-t g(r^2)}\right). \end{align*} Since $r\mapsto e^{-tg(r^2)}$ is decreasing, the integral appearing in the last line is negative and the calculation shows that $\lim_{s\to\infty} e^{-t g(s^2)}\,s^n$ is finite. Therefore, $e^{-tg(r^2)} \leq c_t\,r^{-n}$ for all $r>1$ and with some suitable constant $c_t<\infty$. Then $$ \frac{\psi(\xi)}{\ln |\xi|} = \frac{g(|\xi|^2)}{\ln |\xi|} \geq \frac{n}{t}-\frac{\ln c_t}{t \ln|\xi|},\quad |\xi| > 1, $$ implying $$ \liminf_{|\xi|\to \infty} \frac{\psi(\xi)}{\ln|\xi|}\geq \frac{n}{t}. $$ Letting $t\to 0$, we get \eqref{hw}. \end{proof} Let us now turn to the proof of Theorem \ref{tmain2}. Recall that the Bessel function of the first kind, $J_\nu(z)$, is defined by \begin{equation}\label{jnu} J_\nu(z) =\sum_{k=0}^\infty \frac{(-1)^k}{\Gamma(k+\nu+1)k!} \left(\frac{z}{2}\right)^{2k+\nu}, \quad \nu,\, z\in\mathds R. \end{equation} \begin{lem} Let $\psi(\xi) = g(|\xi|^2)$ be the characteristic exponent of a pure jump isotropic $n$-dimensional L\'evy process. Then \begin{equation*} g(u^2)= \frac{1}{n} \int_0^\infty |G(r u^{-1})|\cdot r\cdot H_{n/2}(r)\,dr \end{equation*} where $G(r) = -\omega_{n-1}\nu(B(0,r)^c)$, $\omega_{n-1} = \frac{2\pi^{n/2}}{\Gamma(\frac n2)}$, and $H_\nu(r) = 2^\nu\,\Gamma(\nu+1) r^{-\nu}J_\nu(r)$. \end{lem} \begin{proof} Switching to polar coordinates in the L\'evy-Khintchine formula \eqref{psi}, we get \begin{equation}\label{Gr2} g(u^2)=\int_0^\infty \big(1-H_{\frac{n-2}{2}} (ur)\big) \, dG(r),\quad n\geq 1, \end{equation} cf.\ \cite[p.\ 99]{B55}. Note that $H_\nu(0)=1$.
Moreover, using \begin{gather*} J_\nu(z)\sim \frac{1}{\Gamma(\nu+1)}\Big(\frac{z}{2}\Big)^\nu, \quad z\to 0,\\ J_\nu(z)\sim \sqrt{\frac{2}{\pi z}} \cos\Big(z-\frac{\pi \nu}{2}-\frac{\pi}{4}\Big), \quad z\to \infty,\\ \frac{d}{dz} (z^{-\nu} J_\nu(z))=-z^{-\nu} J_{\nu+1}(z), \end{gather*} where $z\in \mathds R$, see \cite[pp.\ 359, 368]{WW58}, we get \begin{gather} zH_\nu(z)\sim z, \quad z\to 0, \label{h1}\\ H_\nu(z)\sim 2^\nu\,\Gamma(\nu+1)\sqrt{\frac{2}{\pi}}\frac{1}{z^{\nu+\frac 12}} \cos\Big(z-\frac{\pi \nu}{2}-\frac{\pi}{4}\Big), \quad z\to \infty, \label{h2}\\ \frac{d}{dz} H_{\nu}(z)=-\frac{z}{2(\nu+1)}\, H_{\nu+1}(z). \label{h3} \end{gather} From the series representation \eqref{jnu} for $J_\nu$, we see $1-H_\nu(z)\sim \frac{z^2}{4(\nu+1)}$ as $z\to 0$. Since $\lim_{r\to 0} r^2 G(r)=0$, see \cite[Theorem 2.1]{BG61}, we find $$ \infty > \int_0^1 r^2\,dG(r) = r^2 G(r)\Big|_0^1 - \int_0^1 2 r G(r)\,dr = G(1)-2\int_0^1 rG(r)\, dr, $$ hence $\int_0^1 r|G(r)|\,dr<\infty$. Because of \eqref{h3} we can use integration by parts in \eqref{Gr2} to get (with $\nu=\frac{n-2}{2}$) $$ g(u^2)= \frac{1}{2(\nu+1)} \int_0^\infty |G(r u^{-1})|\, r H_{\nu+1}(r)\,dr; $$ the properties \eqref{h1}-\eqref{h2} show that the integral on the right hand side is convergent, and the claim follows. \end{proof} \begin{proof}[Proof of Theorem \ref{tmain2}] For $\varepsilon>0$ and $\gamma>1$ (which will be determined later) we split the integral expression appearing in \eqref{Gr2} and get \begin{align*} \frac{g(\varepsilon^{-2})}{|\ln \varepsilon|} =\frac{1}{|\ln \varepsilon|}\left( \int_0^{\gamma\varepsilon} +\int_{\gamma\varepsilon}^\infty \right) \big(1-H_{\frac{n-2}{2}}(r \varepsilon^{-1})\big)\,dG(r) =: I_1(\varepsilon)+I_2(\varepsilon). \end{align*} Note that $I_1$ and $I_2$ are nonnegative.
By \eqref{jnu} there exist $c_1, c_2>0$ such that \begin{equation*} c_1 r^2 \leq 1-H_{\frac{n-2}{2}}(r)\leq c_2 r^2 \quad\text{for all\ \ } 0\leq r\leq 1, \end{equation*} which gives \begin{equation*} c_1 \int_0^\varepsilon \frac{r^2}{\varepsilon^2\, |\ln \varepsilon |} \,dG(r) \leq I_1(\varepsilon) \leq c_2 \int_0^\varepsilon \frac{r^2}{\varepsilon^2\, |\ln \varepsilon |}\,dG(r). \end{equation*} This means that \eqref{kaln} is equivalent to \begin{equation*} \lim_{\varepsilon\to 0} I_1(\varepsilon)=\infty. \end{equation*} \noindent Consider the second term. By \eqref{h2} there exist $C_1, C_2>0$ such that \begin{equation}\label{H-1} 1-\frac{C_1}{r^{\frac{n-1}{2}}} \leq 1-H_{\frac{n-2}{2}}(r) \leq 1+ \frac{C_2}{r^{\frac{n-1}{2}}} \quad\text{for all\ \ } r\geq 1. \end{equation} If we replace in \eqref{H-1} $r$ by $r\varepsilon^{-1}$ we get \begin{equation*} 1-C_1 \left(\frac{\varepsilon}{r}\right)^{\frac{n-1}{2}} \leq 1-H_{\frac{n-2}{2}}(r \varepsilon^{-1}) \leq 1+ C_2 \left(\frac{\varepsilon}{r}\right)^{\frac{n-1}{2}}; \end{equation*} for $r>\gamma\varepsilon$ with a sufficiently large constant $\gamma$ (depending only on $C_1, C_2$ and $n$), we get new constants $0<C_3, C_4<\infty$ such that $$ 0 < C_3 \leq 1-H_{\frac{n-2}{2}}(r\varepsilon^{-1}) \leq C_4\quad\text{for all\ \ }r\geq \gamma\varepsilon. $$ Integrating this expression over $[\gamma\varepsilon,\infty)$ w.r.t.\ $dG(r)$ reveals that $\frac{|G(\gamma\varepsilon)|}{|\ln \varepsilon|} \asymp I_2(\varepsilon)$. This shows \begin{gather*} \lim_{\varepsilon\to 0} I_2(\varepsilon)=\infty \quad\text{if, and only if,}\quad \lim_{\varepsilon\to 0} \frac{|G(\varepsilon)|}{|\ln \varepsilon|}=\infty. \qedhere \end{gather*} \end{proof} \section{Extensions} Let us give two generalizations of Theorem \ref{tmain}. For this we need to recall the notions of decreasing and increasing rearrangements of a function. Our standard reference is the monograph \cite{ben-sha}.
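As a toy illustration of the rearrangement calculus used in this section (the example is ours and not taken from the literature): for $u(\xi)=e^{-|\xi|}$ on $\mathds R$ the distribution function is $\mu(t)=2\ln(1/t)$ for $0<t<1$ and $\mu(t)=0$ for $t\geq 1$, so the decreasing rearrangement is $u^*(s)=e^{-s/2}$; in particular $u$ and $u^*$ have the same integral. A few lines of Python confirm this numerically:

```python
import math

def trapezoid(f, a, b, n=200000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

u = lambda xi: math.exp(-abs(xi))        # u(xi) = e^{-|xi|} on R
u_star = lambda s: math.exp(-s / 2.0)    # decreasing rearrangement u*(s) = e^{-s/2}

I_u = trapezoid(u, -40.0, 40.0)          # integral of u over R
I_star = trapezoid(u_star, 0.0, 80.0)    # integral of u* over (0, infinity)

print(I_u, I_star)
```

Both quadratures return a value close to $\int_{\mathds R} e^{-|\xi|}\,d\xi = 2$, as predicted by the equimeasurability of $u$ and $u^*$.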
Let $u$ be a real-valued measurable function defined on a measurable subset $B\subset{{\mathds R}^n}$. As usual, $\mu(t):=m\{\xi\in B\::\: |u(\xi)|>t\}$ ($m$ is Lebesgue measure) is the \emph{distribution function} of $u$ and \begin{equation*} u^*(s):=\inf\{t\geq 0\::\: \mu(t)\leq s\}=\sup\{t\geq 0\::\: \mu(t) > s\} \end{equation*} is called the \emph{decreasing rearrangement} of $u$. Note that $u^* : [0,\infty)\to [0,\infty]$ is decreasing and that $u^*$ is the generalized right-continuous inverse of $\mu$. Analogously one can define an increasing rearrangement $u_*$ of a function $u$. An important property of decreasing rearrangements is that $u$ and $u^*$ have the same distribution function, and therefore \begin{equation}\label{unorm} \int_B f(u(\xi))\, d\xi = \int_0^\infty f(u^*(s))\,ds\quad\text{for all measurable $f:\mathds R\to\mathds R_+$.} \end{equation} Let $X$ be a L\'evy process with characteristic exponent $\psi$. Set $u(\xi):=e^{-t\Re\psi(\xi)}$ and define \begin{equation}\label{f} \nu_{\Re\psi}(s) := m\{\xi\::\: \Re\psi(\xi) \leq s\}. \end{equation} Then we find for the decreasing rearrangement of the function $u$ that \begin{equation}\label{ustar} \begin{split} u^*(s) &= \inf\big\{\tau \geq 0\::\: m\{\xi:\,\, u(\xi)>\tau \}<s \big\}\\ &=\inf\left\{\tau\geq 0\::\: \nu_{\Re \psi}\left(-t^{-1}\ln\tau\right)<s\right\}\\ &= \exp\left(-t\nu_{\Re \psi}^{-1}(s)\right), \quad s>0. \end{split} \end{equation} Here $\nu_{\Re \psi}^{-1}$ is the generalized right-continuous inverse of $\nu_{\Re \psi}$, i.e.\ the increasing rearrangement $(\Re\psi)_*$ of $\Re\psi$. In particular, $(\Re\psi)_*$ is increasing. This allows us to apply the same arguments as in the proof of Theorem~\ref{tmain} to find a necessary and sufficient criterion for $e^{-t\psi}\in L_1({{\mathds R}^n})$. \begin{prop}\label{pmain} Let $X$ be an $n$-dimensional L\'evy process with characteristic exponent $\psi$.
Then the following conditions are equivalent: \begin{enumerate} \item[(a)] $e^{-t\psi}\in L_1({{\mathds R}^n})$ for all $t>0$; \item[(b)] $p_t(dx)=p_t(x)\,dx$ for all $t>0$, and $p_t\in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$; \item[(c)] $p_t(dx)=p_t(x)\,dx$ for all $t>0$, and $p_t\in L_\infty({{\mathds R}^n})$; \item[(d)] the increasing rearrangement $(\Re \psi)_*$ satisfies the following Hartman-Wint\-ner-type condition: \begin{equation}\label{hw-type} \lim_{|\xi|\to\infty} \frac{(\Re \psi)_*(\xi) }{\ln (1+|\xi|)} =\infty. \tag{$\textup{HW}^*_\infty$} \end{equation} \end{enumerate} \end{prop} \begin{proof} (a)$\Rightarrow$(b): By the Riemann-Lebesgue Lemma we see that $p_t = \mathcal{F} e^{-t\psi} \in C_\infty({{\mathds R}^n})$; since $p_t$ is the density of a probability measure, $p_t\in L_1({{\mathds R}^n})$ is automatically satisfied. \medskip\noindent (b)$\Rightarrow$(c): This is obvious. \medskip\noindent (c)$\Rightarrow$(a): Since $p_t$ is a transition density, $p_{t/2}\in L_\infty ({{\mathds R}^n}) \cap L_1({{\mathds R}^n})\subset L_2({{\mathds R}^n})$, which implies \begin{gather*} e^{-(t/2)\psi} = \mathcal{F}^{-1} p_{t/2} \in L_2({{\mathds R}^n}) \implies |e^{-t\psi}| \in L_1({{\mathds R}^n}) \implies e^{-t\psi}\in L_1({{\mathds R}^n}). \end{gather*} \medskip\noindent (d)$\Rightarrow$(a): This follows from \eqref{unorm} and \eqref{ustar}. \medskip\noindent (a)$\Rightarrow$(d): This follows from \eqref{unorm}, \eqref{ustar} and the proof of (f)$\Rightarrow$(a) in Theorem~\ref{tmain}. \end{proof} The conditions \eqref{hw} and \eqref{hw1} can be seen as comparison conditions: indeed we compare the growth rates (as $|\xi|\to\infty$) of the characteristic exponent $\psi$ of the L\'evy process $X$ with the logarithm of the characteristic exponent of the symmetric Cauchy process. It is, therefore, a natural question how one can generalize Theorem~\ref{tmain}.
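The distribution-function identity behind Proposition~\ref{pmain} is easy to test numerically. Take, as a hypothetical example, the anisotropic exponent $\psi(\xi_1,\xi_2)=|\xi_1|+\xi_2^2$ on $\mathds R^2$ (an independent sum of a Cauchy and a Gaussian component); a short computation gives $\nu_{\Re\psi}(x)=\frac 83\,x^{3/2}$, and both $\|e^{-t\psi}\|_{L_1}$ and the layer-cake integral $t\int_0^\infty \nu_{\Re\psi}(x)\,e^{-tx}\,dx$ evaluate to $2\sqrt{\pi}\,t^{-3/2}$. A Python sketch (all constants rederived here, not taken from the text):

```python
import math

t = 1.3

def trapezoid(f, a, b, n=20000):
    # composite trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# psi(xi1, xi2) = |xi1| + xi2^2 factorizes, so the L1-norm of e^{-t psi}
# is a product of two one-dimensional integrals
lhs = trapezoid(lambda x: math.exp(-t * abs(x)), -60.0, 60.0) \
    * trapezoid(lambda x: math.exp(-t * x * x), -15.0, 15.0)

# layer-cake formula: t * int_0^infty nu(x) e^{-t x} dx with nu(x) = (8/3) x^{3/2}
nu = lambda x: (8.0 / 3.0) * x ** 1.5
rhs = t * trapezoid(lambda x: nu(x) * math.exp(-t * x), 0.0, 60.0)

exact = 2.0 * math.sqrt(math.pi) * t ** (-1.5)
print(lhs, rhs, exact)
```

Both quadratures agree with the closed-form value $2\sqrt{\pi}\,t^{-3/2}$, illustrating \eqref{unorm} and \eqref{f} for a non-isotropic exponent.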
Consider the following Hartman-Wintner-type condition \begin{equation}\label{HW-new}\tag{$\textup{HW}^\phi_\infty$} \lim_{|\xi|\to\infty} \frac{\Re\psi(\xi)}{\ln(1+\phi(\xi))} = \infty \end{equation} where $\phi:{{\mathds R}^n}\to\mathds R$ is the characteristic exponent of a further L\'evy process. For the Cauchy process, i.e.\ for $\phi(\xi)=|\xi|$, the conditions \eqref{HW-new} and \eqref{hw} coincide. We are interested in the question which properties of $\phi$ imply that the L\'evy process $X$ with characteristic exponent $\psi$ admits a transition density $p_t$ such that $p_t\in C_\infty({{\mathds R}^n})\cap C^\infty({{\mathds R}^n})$. For this we need to introduce a class of function spaces. Let $\phi$ be a real-valued characteristic exponent of a L\'evy process. Then $1+\phi$ is a temperate weight function in the sense of \cite{hor}, and for $\kappa\in\mathds R$ it is possible to define the following class of \emph{anisotropic $L_2$-Bessel potential spaces} \begin{equation*} H_2^{\phi,\kappa}({{\mathds R}^n}) :=\left\{u\in S'({{\mathds R}^n})\::\: \mathcal{F}^{-1} [(1+\phi)^{\kappa/2} \mathcal{F} u]\in L_2({{\mathds R}^n})\right\}, \end{equation*} see \cite{FJS1,FJS2} for the general $L_p$-theory. For $p=2$ similar spaces were introduced by H\"{o}rmander \cite{hor} and, independently, by Volevich and Paneyah \cite{VP65} in order to study regularity properties of certain partial differential equations. For these spaces, the condition \begin{equation}\label{phi-int} \mathcal{F}^{-1}\left[(1+\phi)^{-\kappa/2}\right]\in L_2({{\mathds R}^n}) \quad\text{or, equivalently,}\quad (1+\phi)^{-\kappa/2} \in L_2({{\mathds R}^n}) \end{equation} is necessary and sufficient for the following Sobolev embedding \begin{equation}\label{embedding} H_2^{\phi,\kappa}({{\mathds R}^n}) \hookrightarrow C_\infty({{\mathds R}^n}), \end{equation} see e.g.\ \cite[Theorem 2.3.4]{FJS2}.
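For orientation, condition \eqref{phi-int} is a growth condition on $\phi$. For the (hypothetical) fractional-power symbol $\phi(\xi)=|\xi|^\alpha$ on $\mathds R$, $0<\alpha\leq 2$, it holds precisely for $\alpha\kappa>1$: the truncated integrals $\int_{-L}^{L}(1+|\xi|^\alpha)^{-\kappa}\,d\xi$ stabilize as $L\to\infty$ when $\alpha\kappa>1$ and keep growing when $\alpha\kappa\leq 1$. A quick numerical sketch:

```python
import math

def truncated_norm(alpha, kappa, L, n=400000):
    # int_{-L}^{L} (1+|xi|^alpha)^(-kappa) d xi, via symmetry and the trapezoid rule
    h = L / n
    s = 0.5 * (1.0 + (1.0 + L ** alpha) ** (-kappa))
    for i in range(1, n):
        x = i * h
        s += (1.0 + x ** alpha) ** (-kappa)
    return 2.0 * s * h

# convergent case: alpha = 1, kappa = 2  (alpha * kappa > 1, limit is 2)
conv_small = truncated_norm(1.0, 2.0, 1e2)
conv_large = truncated_norm(1.0, 2.0, 1e4)

# divergent case: alpha = 1, kappa = 1  (alpha * kappa = 1, integral ~ 2 ln L)
div_small = truncated_norm(1.0, 1.0, 1e2)
div_large = truncated_norm(1.0, 1.0, 1e4)

print(conv_small, conv_large, div_small, div_large)
```

In the convergent case the value is essentially independent of the cut-off $L$, while in the borderline case $\alpha\kappa=1$ it grows logarithmically in $L$, so \eqref{phi-int} fails.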
In order to formulate the next theorem we need the notion of a Fourier multiplier or pseudodifferential operator: Let $\Psi:{{\mathds R}^n}\to\mathds R$ be polynomially bounded. Then $$ \Psi(D) u(x) := \mathcal{F}^{-1}(\Psi\cdot \mathcal{F} u)(x) = \int_{{\mathds R}^n} e^{ix\xi}\,\Psi(\xi)\cdot\mathcal{F} u(\xi)\,d\xi,\quad u\in C_c^\infty({{\mathds R}^n}), $$ defines a linear operator with \emph{symbol} $\Psi$. If $\Psi(\xi)$ is a polynomial, $\Psi(D)$ is a differential operator, e.g.\ the symbol $\Psi(\xi)=\xi$ corresponds to the operator $D = -i\nabla$. \begin{theorem}\label{tmain3} Let $X$ be an $n$-dimensional L\'evy process, $n\geq 1$, with characteristic exponent $\psi$ and let $\phi$ be the characteristic exponent of a symmetric L\'evy process. If \eqref{phi-int} holds for some $\kappa>0$, then the following conditions are equivalent: \begin{enumerate} \item[(a)] \eqref{HW-new}; \item[(b)] for all $t>0$ the transition density exists, $p_t\in C_\infty({{\mathds R}^n})$ and $\phi(D)^m p_t\in L_1({{\mathds R}^n})\cap C_\infty({{\mathds R}^n})$ for all $m\in\mathds N$; \item[(c)] for all $t>0$ the transition density exists and $p_t, \phi(D)p_t \in L_1({{\mathds R}^n})$. \end{enumerate} \end{theorem} \begin{proof} Denote by $\nu$ and $\mu$ the L\'evy measures of $\psi$ and $\phi$, respectively. As in the proof of Theorem \ref{tmain} we write $\psi_1$ and $\phi_1$ for the characteristic exponents which we obtain from $\Re\psi$ and $\phi$ by restricting the respective L\'evy measures to the unit ball: $\nu\rightsquigarrow \mathds 1_{\{0<|y|\leq 1\}}\,\nu(dy)$ and $\mu\rightsquigarrow \mathds 1_{\{0<|y|\leq 1\}}\,\mu(dy)$. Then we see that $\psi_1$ and $\phi_1$ are infinitely often differentiable and $$ \psi_1(\xi) \leq \Re \psi(\xi) \leq c(\psi_1(\xi) + 1) \quad\text{and}\quad \phi_1(\xi) \leq \phi(\xi) \leq c(\phi_1(\xi) + 1) $$ for all $\xi\in{{\mathds R}^n}$ and some constant $c$.
Thus, \eqref{HW-new} holds for $\psi$ if, and only if, $(\textup{HW}^{\phi_1}_\infty)$ holds for $\psi_1$. Without loss of generality we may therefore assume that $\phi,\psi\in C^\infty({{\mathds R}^n})$. \medskip\noindent (a)$\Rightarrow$(b): Without loss of generality we may assume that $\psi$ is real-valued. By the chain rule $$ \nabla^\alpha e^{-t\psi} = (-t)^{|\alpha|}\,P(\psi,\ldots,\nabla^\alpha\psi) e^{-t\psi} $$ where $P(\psi, \ldots, \nabla^\alpha\psi)$ is a polynomial of degree at most $|\alpha|$ depending on the derivatives of $\psi$ of order $\beta\leq\alpha$. Using \eqref{psi-derivative} it is not hard to see that $$ |\nabla^\alpha \psi_1(\xi)| \leq c_{|\alpha|} \begin{cases} \displaystyle \psi(\xi), & |\alpha|=0, \\[\medskipamount] \displaystyle \sqrt{\psi(\xi)}, & |\alpha| = 1,\\[\medskipamount] \displaystyle 1, & |\alpha| \geq 2. \end{cases} $$ This observation is due to Hoh \cite{hoh-osaka}. The same is true for the exponent $\phi$. Therefore, $\xi\mapsto\nabla^\alpha e^{-t\psi(\xi)}$ is bounded for all $t>0$ and $\alpha\in\mathds N_0^n$. Combining this with \eqref{HW-new}, we find suitable polynomials $Q,R$ of degree at most $|\alpha|+m$ such that for all $\kappa>0$ and $\alpha\in\mathds N_0^n$ \begin{align*} \left|(1+\phi)^\kappa \nabla_\xi^\alpha \big[(1+\phi)^m e^{-t\psi}\big](\xi)\right| &= \left|(1+\phi)^\kappa \sum_{\beta\leq\alpha} c_\beta \, \nabla^\beta (1+\phi)^m \nabla^{\alpha-\beta} e^{-t\psi}\right|\\ &\leq c_t Q(\phi,\ldots,\nabla^\alpha\phi) R(\psi,\ldots,\nabla^\alpha\psi) e^{-t\psi} \end{align*} is bounded. Because of \eqref{phi-int}, $$ \nabla_\xi^\alpha \big[(1+\phi)^m e^{-t\psi}\big](\xi) \in L_1({{\mathds R}^n})\cap L_\infty({{\mathds R}^n}). $$ From the Riemann-Lebesgue Lemma we get $$ |x|^k (1+\phi(D))^m p_t(x) \in C_\infty({{\mathds R}^n}) $$ for all $k,m\geq 0$. Choosing $k>n$ we can divide by $(1+|x|^k)$ and find, in particular, $(1+\phi(D))^m p_t(x)\in L_1({{\mathds R}^n})$.
\medskip\noindent (b)$\Rightarrow$(c): This is obvious. \medskip\noindent (c)$\Rightarrow$(a): This follows as in the proof of Theorem \ref{tmain}. \end{proof} \begin{rem} \textbf{(a)} If we choose $\phi=\Re\psi$ in Theorem \ref{tmain3}, \eqref{HW-new} holds if, and only if, $\lim_{|\xi|\to\infty}\Re\psi(\xi)=\infty$. In this case the statement \begin{enumerate} \item[(i)] \eqref{phi-int} holds for $\phi=\Re\psi$ and some $\kappa>0$, i.e.\ $(1+\Re\psi)^{-\kappa/2}\in L_2({{\mathds R}^n})$ \end{enumerate} always implies \begin{enumerate} \item[(ii)] for all $t>0$ we have $p_t(dx)=p_t(x)\,dx$ with $p_t\in \bigcap_{r>0} H^{\Re\psi,r}_2({{\mathds R}^n})$ and $\Re \psi(D)^m p_t\in C_\infty({{\mathds R}^n})$ for all $m\geq 0$. \end{enumerate} This follows immediately from Theorem \ref{tmain3}, \eqref{phi-int} and \eqref{embedding}. We cannot expect the converse to be true. Even if \eqref{HW-new} holds true, the condition $\phi^m(D)p_t\in C_\infty({{\mathds R}^n})$ does not imply \eqref{phi-int}. To see this, consider the function $\psi$ from Example~\ref{exa2}. In this example $\psi\in C^\infty(\mathds R)$ and $\psi(\xi)\sim \ln^2|\xi|$ as $|\xi|\to\infty$. Let $\phi(\xi):=\ln (1+\psi(\xi))$. Since $\psi$ and $\phi$ are infinitely often differentiable and, by \eqref{hw}, the function $\nabla^\alpha\big[\phi^m(\xi)e^{-t\psi(\xi)}\big]$ decays faster than any power of $|\xi|$ for any $m,\alpha\geq 1$, we have $\phi^m\cdot e^{-t\psi}\in S(\mathds R)$, implying $$ \phi^m (D)p_t\in L_1(\mathds R)\cap C_\infty(\mathds R). $$ But since $\phi(\xi)\sim 2\ln\ln |\xi|$ as $|\xi|\to\infty$, the condition \eqref{phi-int} does not hold for any $\kappa$. \bigskip\noindent \textbf{(b)} Theorem \ref{tmain2} also has an obvious generalization: if we replace in the statement \eqref{hw} by \eqref{HW-new}, then we have to change $|\ln\varepsilon|$ to $\ln \phi(1/\varepsilon)$ in \eqref{tmain2-eqn}. The proof should be fairly obvious.
\end{rem} The proof of Theorem \ref{tmain3} shows that the embedding condition \eqref{phi-int} is needed to ensure that $p_t$ and $\phi(D)^m p_t$ are continuous functions. Let us briefly discuss the meaning of \eqref{phi-int}. \begin{lem}\label{lem-phi-int} Let $\phi$ be a real-valued characteristic exponent of some $n$-dimensional L\'evy process such that $\lim_{|\xi|\to\infty}\phi(\xi)=\infty$. Then \eqref{phi-int} is equivalent to \begin{gather}\tag{$\ref{phi-int}'$}\label{phi-int*} \nu_\phi(x) := m\{\xi\in{{\mathds R}^n} \::\: \phi(\xi) \leq x\} \leq c\,x^\lambda \quad\text{for all $x\geq 1$} \end{gather} where $c,\lambda>0$ are suitable constants. The condition \eqref{phi-int*}, in turn, is implied by the following volume-doubling condition \begin{equation}\label{voodoo} \frac{\nu_\phi(2x)}{\nu_\phi(x)} \leq C \quad\text{for some constant $C>0$ and all $x\geq 1$}. \end{equation} \end{lem} \begin{proof} Recall that $\phi_*(x) = \nu_\phi^{-1}(x)$ is the increasing rearrangement of the function $\phi$. Therefore, \begin{equation}\label{new2} \int_{{\mathds R}^n} \frac {d\xi}{(1+\phi(\xi))^\kappa} = \int_0^\infty \frac {dx}{(1+\phi_*(x))^\kappa} = \kappa \int_0^\infty \frac{1}{(1+x)^{\kappa+1}}\,\nu_\phi(x)\,dx; \end{equation} the last equality follows from integration by parts. If \eqref{phi-int} holds, the above integrals are finite. Since $\phi_*$ is increasing, the usual Abelian trick (cf.\ the proof of (f)$\Rightarrow$(a) in Theorem \ref{tmain}) guarantees that $$ \frac{1}{(1+\phi_*(x))^\kappa}\leq \frac c{1+x} \implies \frac{1}{(1+y)^\kappa}\leq \frac c{1+\nu_\phi(y)} $$ which gives \eqref{phi-int*}. Conversely, if \eqref{phi-int*} holds, the second equality in \eqref{new2} shows that the integrals are finite as soon as $\kappa>\lambda$. This gives \eqref{phi-int}. Assume finally that \eqref{voodoo} holds. Fix $x\geq 1$ and set $k:= [\ln x / \ln 2]$.
Since $\nu_{\phi}$ is increasing, we find $$ \nu_{\phi}(x) \leq \nu_{\phi}(2^{k+1}) \leq C^{k+1}\nu_{\phi}(1) \leq C\,x^{\lambda}\nu_{\phi}(1) $$ with $\lambda = \ln C/\ln 2$. Without loss of generality we can assume that $\lambda > 0$, and \eqref{phi-int*} follows. \end{proof} \section{Applications} The integral representation \eqref{pr1} allows us to determine in some cases the asymptotic behaviour of $p_t(0)$ in terms of the L\'evy exponent $\psi$. In what follows we do not assume isotropy. Write $\nu_{\Re\psi}(x) := m\left\{\xi \::\: \Re\psi(\xi)\leq x\right\}$ for the distribution function \eqref{f} of $\Re\psi$. Then \begin{equation*} (2\pi)^n\, p_t(0) =\|e^{-t\psi}\|_{L_1} =\|e^{-t\Re\psi}\|_{L_1} =t\int_0^\infty \nu_{\Re\psi}(x)\,e^{-tx}\, dx. \end{equation*} The following proposition is essentially Theorem 4 from \cite[Chapter XIII.5]{Fel}. \begin{prop}\label{pro1} Suppose that \eqref{hw} holds true and $\nu_{\Re\psi}(x)\sim x^{\rho-1}L(x)$, $0<\rho<\infty$, as $x\to\infty$ \textup{[}resp.\ as $x\to 0$\textup{]} where $L(x)$ is a function which is slowly varying at infinity \textup{[}resp.\ at zero\textup{]}. Then \begin{equation*} p_t(0)\sim \frac{\Gamma(\rho)}{t^{\rho-1}}L(t^{-1}) \quad\text{as\ \ }t\to 0\qquad [\text{resp.\ as\ \ }t\to\infty]. \end{equation*} \end{prop} The above statement can be generalized to the case when $\nu_{\Re\psi}$ is of bounded increase. \begin{prop}\label{p2} Suppose that \eqref{hw} holds true and the function $\nu_{\Re\psi}(x)$ satisfies the following volume-doubling property: \begin{equation}\label{vodo} \frac{\nu_{\Re\psi}(2 x)}{\nu_{\Re\psi}(x)}\leq C \quad\text{as\ \ }x\to \infty \qquad [\text{resp.\ as\ \ }x\to 0] \end{equation} for some constant $C<\infty$. Then \begin{equation}\label{asym1} c_1 \nu_{\Re\psi}\left(t^{-1}\right) \leq p_t(0) \leq c_2 \nu_{\Re\psi}\left(t^{-1}\right) \quad\text{for\ \ }t\to 0\qquad [\text{resp.\ for\ \ }t\to\infty].
\end{equation} \end{prop} \begin{proof} Fix $\lambda\geq 1$ and set $k:= [\ln\lambda / \ln 2]$. Since $\nu_{\Re\psi}$ is increasing, we find $$ \nu_{\Re\psi}(\lambda x) \leq \nu_{\Re\psi}(2^{k+1} x) \leq C^{k+1}\nu_{\Re\psi}(x) \leq C\lambda^{\alpha}\nu_{\Re\psi}(x) $$ with $\alpha = \ln C/\ln 2$. Without loss of generality we can assume that $\alpha\geq 0$. Therefore, \eqref{vodo} is equivalent to saying that \begin{equation}\label{nup} \frac{\nu_{\Re\psi}(\lambda x)}{\nu_{\Re\psi}(x)}\leq C(1+o(1)) \lambda^\alpha \quad\text{as\ \ }x\to \infty \qquad [\text{resp.\ as\ \ }x\to 0]. \end{equation} It is enough to consider the case where $x\to\infty$, the arguments for $x\to 0$ are similar. Since $(2\pi)^n\,p_t(0)=\int_0^\infty \nu_{\Re\psi}(\frac{y}{t}) e^{-y}dy$ we have by monotonicity $$ (2\pi)^n\,p_t(0) \geq \int_1^\infty \nu_{\Re\psi}(yt^{-1}) e^{-y}\,dy \geq \nu_{\Re\psi}(t^{-1}) \int_1^\infty e^{-y}\,dy = c_1\, \nu_{\Re\psi}(t^{-1}). $$ Because of \eqref{nup}, \begin{align*} (2\pi)^n p_t(0) &=\left(\int_0^1+\int_1^\infty \right)\nu_{\Re\psi}(yt^{-1}) e^{-y}\,dy \\ &\leq \nu_{\Re\psi}(t^{-1}) \int_0^1e^{-y}dy + C\, \nu_{\Re\psi}(t^{-1}) \int_1^\infty y^\alpha e^{-y}\,dy\\ &= c_2 \, \nu_{\Re\psi}(t^{-1}) \qedhere \end{align*} \end{proof} \begin{rem} The volume doubling property \eqref{vodo} for $\nu_{\Re\psi}$ is important: for example, if $\psi(\xi)\sim \ln^2|\xi|$ as $|\xi|\to\infty$, one can show by the Laplace method, see \cite{Co65}, that $c_1\, e^{c_2/t}\leq p_t(0)\leq c_3 \, e^{c_4/t}$ for $t\in (0,1]$. We refer to \cite{KK10} for more results on transition density estimates in small time. \end{rem} \begin{rem} Let $\psi$ be a characteristic exponent of a L\'evy process. It is known that $\psi$ induces via $\rho(x,y):=\sqrt{\Re\psi(x-y)}$ a metric on ${{\mathds R}^n}$, see \cite[Lemma~3.6.21]{J01}. Define by $B(x,r;\rho):=\left\{y\in{{\mathds R}^n}\::\: \sqrt{\Re\psi(x-y)}\leq r\right\}$ a ball of radius $r$ centred at $x$ in the metric $\rho$.
Then $\nu_{\Re\psi}(r)= m(B(x,\sqrt{r};\rho))$. This allows the following interpretation of Proposition \ref{p2}: if the measure of a ball in the metric $\rho$ is regular enough, its behaviour at infinity controls the behaviour of the transition density at zero. \end{rem} \begin{exa}\label{exa6} Consider the isotropic case, i.e.\ $\psi(\xi)=g(|\xi|^2)$ for some continuous $g$, which we assume in addition to be monotone. Under the conditions of Proposition~\ref{p2} we get for some $c_1, c_2>0$ \begin{equation*} c_1 \big(g^{-1}(1/t)\big)^{\frac{n}{2}} \leq p_t(0) \leq c_2 \big(g^{-1}(1/t)\big)^{\frac{n}{2}}, \end{equation*} as $t\to 0$ (resp., $t\to \infty$). As an application of Proposition \ref{p2} we note that \eqref{asym1} immediately tells us that the asymptotic behaviour of the transition density of the tempered $\alpha$-stable or truncated $\alpha$-stable process is $$ c_1 t^{-\frac{n}{\alpha}} \leq p_t(0)\leq c_2 t^{-\frac{n}{\alpha}} \quad\text{as $t\to 0$}. $$ Indeed, since the real part of the characteristic exponent of a tempered $\alpha$-stable process behaves like $\Re\psi(\xi)\sim c |\xi|^\alpha$, see \cite[Theorem 2.9]{R07}, we can apply Proposition~\ref{pro1} to get the asymptotic behaviour of $p_t(0)$. For the truncated $\alpha$-stable process the asymptotic behaviour of $p_t(0)$ as $t\to 0$ follows from Proposition~\ref{p2} and the observation that the characteristic exponent $\psi_R$ behaves like $\psi_R(\xi)\sim |\xi|^\alpha$ if $\psi(\xi)\sim |\xi|^\alpha$ as $|\xi|\to\infty$. \end{exa} Let us finally prove a straightforward ratio-limit theorem for L\'evy processes. We begin with an approximation result. \begin{lem}\label{l-approx-unity} Let $\psi:{{\mathds R}^n}\to\mathds C$ be the characteristic exponent of a L\'evy process given by \eqref{psi}. Assume that $e^{-t\psi}\in L^1({{\mathds R}^n})$ for all $t\geq t_0$.
Then the normalized function $\chi_t := e^{-t\psi}/\| e^{-t\psi}\|_{L^1}$ satisfies for all $\delta>0$ \begin{equation}\label{approx-unity} \lim_{t\to\infty} \int_{|\xi| > \delta} |\chi_t(\xi)|\,d\xi = 0. \end{equation} \end{lem} \begin{proof} Let $t>t_0$. Then $p_t(dx)=p_t(x)\,dx$ with $p_t\in L^1({{\mathds R}^n})$ and, by the Riemann-Lebesgue Lemma, $\mathcal{F}^{-1} p_t = e^{-t\psi}\in C_\infty({{\mathds R}^n})$. In particular, $\lim_{|\xi|\to\infty} \Re\psi(\xi)=\infty$ which means that the following infimum $$ m_\delta := \inf_{|\xi|>\delta} \Re\psi(\xi) > 0,\quad\delta>0, $$ is attained and strictly positive. Otherwise there would be some $\xi_0$ with $\Re\psi(\xi_0)=0$ and $\psi$ would be periodic; in this case we would find for all $\epsilon>0$ some $h=h_\epsilon$ such that for all $k\in\mathds N$ we have $e^{-t\Re\psi}\big|_{B_h(k\xi_0)} > 1-\epsilon$. This, however, would contradict the assumption that $e^{-t\psi}\in L^1({{\mathds R}^n})$. Moreover, since $\psi$ is unbounded, $\nu(B_\delta(0))=\infty$ for any $\delta>0$. Therefore, for all $t>t_0$ \begin{align*} \int_{|\xi|>\delta} |e^{-t\psi(\xi)}|\,d\xi &= \int_{|\xi|>\delta} e^{-t\Re\psi(\xi)}\,d\xi\\ &= \int_{|\xi|>\delta} e^{-(t-t_0)\Re \psi (\xi)}e^{-t_0\Re\psi(\xi)}\,d\xi\\ &\leq e^{-(t-t_0)m_\delta}\int_{|\xi|>\delta} e^{-t_0\Re\psi(\xi)}\,d\xi. \end{align*} From the L\'evy-Khinchine formula \eqref{psi} we get for every $R>0$ $$ \Re\psi(\xi) \leq c^\psi_R |\xi|^2 + d^\psi_R, \quad\xi\in{{\mathds R}^n} $$ where $c^\psi_R \asymp \|Q\| + \int_{|y|\leq R}|y|^2\,\nu(dy)$ and $d^\psi_R\asymp \nu(B_R^c(0))$.
Thus, \begin{align*} \int_{|\xi|>\delta} |\chi_t(\xi)|\,d\xi &\leq \frac{e^{-tm_\delta}\, e^{t_0 m_\delta} \int_{|\xi|>\delta} e^{-t_0\Re\psi(\xi)}\,d\xi}{\int_{{\mathds R}^n} e^{-t\Re \psi (\xi)}\,d\xi}\\ &\leq \frac{e^{-t m_\delta}\, e^{t_0 m_\delta} \int_{|\xi|>\delta} e^{-t_0\Re\psi(\xi)}\,d\xi}{\int_{{\mathds R}^n} e^{-t c^\psi_R |\xi|^2}\,d\xi\, e^{-t d^\psi_R}}\\ &= (\sqrt t)^n\, e^{-t(m_\delta - d^\psi_R)}\,\frac{e^{t_0 m_\delta} \int_{|\xi|>\delta} e^{-t_0\Re\psi(\xi)}\,d\xi}{\int_{{\mathds R}^n} e^{- c^\psi_R |\xi|^2}\,d\xi}. \end{align*} Now we choose $R$ so large that $m_\delta > d^\psi_R$ and let $t\to\infty$. This proves \eqref{approx-unity}. \end{proof} The following result is, in one dimension, due to W.\ Schenk \cite{schenk}. \begin{theorem}\label{ratio-limit} Let $X_t$ be a L\'evy process in ${{\mathds R}^n}$ with characteristic exponent $\psi$ and with transition semigroup $T_t$. If $e^{-t\psi}\in L^1({{\mathds R}^n})$ for all $t\geq t_0$, then the following limits exist locally uniformly for all $x\in{{\mathds R}^n}$, resp.\ $x,y\in{{\mathds R}^n}$ \begin{align} \label{ratio-1} \lim_{t\to\infty} \frac{T_t f(x)}{\|e^{-t\psi}\|_{L^1}} &= \frac 1{(2\pi)^n}\int_{{\mathds R}^n} f(z)\,dz\qquad\text{for all\ \ } f\in L^1({{\mathds R}^n}).\\ \lim_{t\to\infty} \frac{T_t f(x)}{T_{s+t}g(y)} \label{ratio-2} &= \frac{\int_{{\mathds R}^n} f(z)\,dz}{\int_{{\mathds R}^n} g(z)\,dz}\qquad\text{for all\ \ } f,g\in L^1({{\mathds R}^n}),\; s\geq 0;\\ \label{ratio-3} \lim_{t\to\infty} \frac{p_t(x)}{p_t(0)} &= 1. \end{align} \end{theorem} \begin{proof} It is clearly enough to prove \eqref{ratio-1}. For $u\in C_\infty({{\mathds R}^n})$ we get from Lemma \ref{l-approx-unity} with $\chi_t(\xi) = e^{-t\psi(\xi)}/\|e^{-t\psi}\|_{L^1}$ $$ \lim_{t\to\infty} \int_{{\mathds R}^n} \chi_t(\xi)\,u(\xi)\,d\xi = u(0). 
$$ Indeed, $\int_{{\mathds R}^n} \chi_t(\xi)\,d\xi = 1$ and \begin{align*} \left|\int_{{\mathds R}^n} \chi_t(\xi)(u(\xi)-u(0))\,d\xi\right| &\leq \int_{|\xi|\leq\delta} |\chi_t(\xi)| |u(\xi)-u(0)|\,d\xi + 2\int_{|\xi|>\delta} |\chi_t(\xi)|\,d\xi \|u\|_\infty\\ &\leq \sup_{|\xi|\leq\delta} |u(\xi)-u(0)|\cdot \|\chi_t\|_{L^1} + 2\int_{|\xi|>\delta} |\chi_t(\xi)|\,d\xi \|u\|_\infty. \end{align*} Because of \eqref{approx-unity} the second term vanishes as $t\to\infty$. Letting $\delta\to 0$ makes the first term tend to zero since $u$ is uniformly continuous. For $f\in L^1({{\mathds R}^n})$ and $t>t_0$ we have $$ \frac{T_t f(x)}{\|e^{-t\psi}\|_{L^1}} = \frac 1{\|e^{-t\psi}\|_{L^1}}\,\mathcal{F}^{-1}\left[e^{-t\psi}\mathcal{F} f\right](x) = \int_{{\mathds R}^n} e^{ix\xi} \chi_t(\xi) \mathcal{F} f(\xi)\,d\xi. $$ The above calculation shows for $u(\xi) := e^{ix\xi}\,\mathcal{F} f(\xi)$ and uniformly for $x$ from compact sets that \begin{gather*} \frac{T_t f(x)}{\|e^{-t\psi}\|_{L^1}} \xrightarrow{t\to\infty} \mathcal{F} f(0) = (2\pi)^{-n} \int_{{\mathds R}^n} f(z)\,dz. \qedhere \end{gather*} \end{proof} If we combine \eqref{ratio-1} of Theorem \ref{ratio-limit} with Propositions \ref{pro1}, \ref{p2} or with Example \ref{exa6}, it is possible to get estimates for the speed of convergence in \eqref{ratio-1}.
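Theorem \ref{ratio-limit} can be made concrete in a case where everything is explicit (an editorial sanity check, not part of the original text): for the one-dimensional Cauchy process one has $\psi(\xi)=|\xi|$, $p_t(x)=t/(\pi(t^2+x^2))$ and $\|e^{-t\psi}\|_{L^1}=2/t$, so both \eqref{ratio-1} and \eqref{ratio-3} can be verified numerically; the time $t$ and the grids below are arbitrary choices.

```python
import numpy as np

# Sanity check of the ratio-limit theorem for the 1d Cauchy process,
# where psi(xi) = |xi| and p_t(x) = t / (pi (t^2 + x^2)) are explicit.
# The value of t and the integration grids are illustrative choices.

t = 200.0  # "large" time

# (ratio-3): p_t(x)/p_t(0) = t^2/(t^2+x^2) -> 1 locally uniformly in x
x = np.linspace(-3.0, 3.0, 601)
p_t = t / (np.pi * (t**2 + x**2))
p_t0 = 1.0 / (np.pi * t)
assert np.max(np.abs(p_t / p_t0 - 1.0)) < 1e-3

# (ratio-1): T_t f(0) / ||e^{-t psi}||_{L^1} -> (2 pi)^{-1} \int f,
# with ||e^{-t|xi|}||_{L^1} = 2/t and f(y) = exp(-y^2), \int f = sqrt(pi)
y = np.linspace(-10.0, 10.0, 40001)
dy = y[1] - y[0]
f = np.exp(-y**2)
Ttf0 = np.sum(t / (np.pi * (t**2 + y**2)) * f) * dy   # (T_t f)(0) by Riemann sum
ratio = Ttf0 / (2.0 / t)
assert abs(ratio - np.sqrt(np.pi) / (2.0 * np.pi)) < 1e-3
```

For larger $t$ the deviation in \eqref{ratio-3} shrinks like $t^{-2}$ on compact sets, in line with the locally uniform convergence claimed in the theorem.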
\section{Introduction} Strongly-correlated systems show fascinating physical properties, like the coexistence of magnetic and charge correlations\cite{High-Tc1}, high-temperature superconductivity\cite{High-Tc1,High-Tc2,High-Tc}, Hund metal behavior\cite{Hund}, etc. Local correlations, which appear due to the (non-)local interactions in strongly-correlated systems, are well described by the (extended) dynamical mean-field theory ((E)DMFT) \cite{DMFT,DMFT2,EDMFT,EDMFT_Si}. At the same time, this theory is not sufficient to describe the non-local correlations, which play a crucial role in many phenomena in strongly-correlated systems, in particular in quantum and classical phase transitions, superconductivity, etc. The non-local extensions of dynamical mean-field theory, such as the dynamic cluster approximation and cellular dynamical mean-field theory (see for a review Ref. \cite{RevCluster}), meet difficulties in treating low temperatures and large cluster sizes. Recent progress in diagrammatic extensions of (E)DMFT \cite{Review}, namely the ladder \cite{DGA1a,DGA1b,DGA1c,DGA1d,DGA2,abinitioDGA} and parquet \cite{ParquetDGA} dynamic vertex approximation (D$\Gamma$A), the dual fermion (DF) approach \cite{DF1,DF2,DF3,DF4}, the dual boson (DB) approach \cite{DB1,DB2,DB3,DB4}, TRILEX \cite{TRILEX}, the DMF$^{2}$RG approach\cite{DMF2RG,DMF2RG3} and the (E)DMFT+2PI-fRG method \cite{MyEDMFT2PI}, has allowed one to treat non-local correlations on a non-perturbative basis. A key ingredient of many of these methods is the relation between the given two-particle irreducible vertices (which are often assumed to be local) and the two-particle reducible vertices, expressed by the corresponding Bethe-Salpeter equations, as well as the calculation of the fermion-boson vertices \cite{DGA1c,DB4,TRILEX,MyEDMFT2PI,FermionBoson}.
Due to the use of a finite frequency box, the corresponding treatment is, however, often approximate, and to get reasonable results a large frequency box is required, which makes the numerical calculation of vertices within this frequency box difficult. Recently it was proposed \cite{Kunes,Toschi} to split the frequency domain into a ``small'' box, where the numerically exact vertices are used, and a larger one, where vertex asymptotics are used. The proposed approach requires, however, a numerical treatment of vertices in the large frequency box (although with their asymptotic values) and/or knowledge of the fermion-boson vertices, which makes it not very convenient for applications. In the present paper we propose a way of treating vertex asymptotics analytically, such that only numerical calculations within the small frequency box are required. The plan of the paper is the following. In Sect. II we introduce the model. In Sect. III we consider the procedure of calculating fermion-boson vertices and susceptibilities using the interaction vertex obtained in a given frequency box. In Sect. IV we discuss the solution of the Bethe-Salpeter equation. In Sect. V we present a numerical example of the application of the obtained formulae to the standard Hubbard model. In Sect. VI we present our conclusions.
\section{The model and asymptotics of vertices} We consider an extended Hubbard model described by an action \begin{equation} \mathcal{S}=-\sum\limits_{k,\sigma}c_{k\sigma}^{+}G_{0k}^{-1}c_{k\sigma}+U\sum\limits_{q}n_{q\uparrow}n_{-q,\downarrow}+\frac{1}{2}\sum\limits_{q}V_{q}^{c}n_{q}n_{-q},\label{S_L} \end{equation} where $G_{0k}$ and $V_{q}^{c}$ are some (arbitrary) single-particle Green function and two-particle interaction vertex, respectively, $c_{k\sigma}^{+},c_{k\sigma}$ are Grassmann variables, $\sigma=\uparrow,\downarrow,$ $n_{q}=\sum\nolimits_{\sigma}n_{q\sigma}=\sum\nolimits_{k,\sigma}c_{k\sigma}^{+}c_{k+q,\sigma},$ and we use the momentum-frequency variables $k=(\mathbf{k},i\nu_{n}),$ $q=(\mathbf{q},i\omega_{n}),$ where $i\nu_{n}$ and $i\omega_{n}$ are fermionic and bosonic Matsubara frequencies. The action (\ref{S_L}) can describe both the (E)DMFT solution of the Hubbard model (in which case $G_{0k}$ and $V_{q}^{c}$ are only frequency dependent) and the more general case of a non-local theory, for which $G_{0k}$ and/or $V_{q}^{c}$ acquire some momentum dependence. Let us denote the full two-particle vertex in the charge (c) and spin (s) channels (which we consider below for definiteness), corresponding to the action (\ref{S_L}), by $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$, where $\nu,\nu^{\prime}$ are the incoming and outgoing fermionic Matsubara frequencies, and $q$ is the momentum-frequency transfer. We assume for simplicity that the vertex depends only on one of the momenta (i.e. the momentum transfer $\mathbf{q}$), as it happens in the ladder versions of the D$\Gamma$A \cite{DGA1a,DGA1b,DGA1c,DGA1d,abinitioDGA}, DF \cite{DF3,DF4}, DB \cite{DB2,DB3}, and (E)DMFT+2PI-fRG \cite{MyEDMFT2PI} approaches; the more general case can be treated in a similar way.
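For orientation, the basic building block of the equations below, the particle-hole bubble $\chi^0_{\nu q}=-\sum_{\mathbf k}G_kG_{k+q}$, can be set up directly on a discretized Brillouin zone. The following minimal numpy sketch is an editorial illustration (lattice size, temperature and hopping are arbitrary choices, and the bare Green function $G_{0k}=1/(i\nu_n-\epsilon_{\mathbf k})$ is used instead of a self-consistent one); it checks two elementary properties: at half filling the bubble at $\mathbf q=(\pi,\pi)$, $\omega=0$ is real and positive, and $\chi^0_{\nu q}$ decays as $1/\nu^2$ at large $\nu$.

```python
import numpy as np

# Minimal sketch of the bubble chi^0_{nu q} = -sum_k G_k G_{k+q} with the
# bare square-lattice Green function G = 1/(i nu_n - eps_k).  Editorial
# illustration: lattice size, temperature and hopping are arbitrary, and
# the momentum sum is a normalized average over the Brillouin-zone grid.

N, T, t = 64, 0.5, 1.0
k = 2.0 * np.pi * np.arange(N) / N
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2.0 * t * (np.cos(kx) + np.cos(ky))     # half filling, t' = 0

def chi0(n, qshift=(0, 0), m=0):
    """chi^0 for fermionic index n, bosonic index m; q given as a grid shift."""
    nu = (2 * n + 1) * np.pi * T
    nu_w = (2 * (n + m) + 1) * np.pi * T
    eps_q = np.roll(np.roll(eps, qshift[0], axis=0), qshift[1], axis=1)
    return -np.mean(1.0 / (1j * nu - eps) / (1j * nu_w - eps_q))

# q = (pi,pi), omega = 0: eps_{k+Q} = -eps_k, so chi^0 = <1/(nu^2+eps^2)> is real, > 0
c = chi0(0, qshift=(N // 2, N // 2))
assert abs(c.imag) < 1e-12 and c.real > 0.0

# large-|nu| tail: chi^0_{nu q} ~ 1/nu^2
n_big = 200
nu_big = (2 * n_big + 1) * np.pi * T
assert abs(chi0(n_big) * nu_big**2 - 1.0) < 0.05
```

The $1/\nu^2$ tail of $\chi^0$ is what makes the out-of-box sums below convergent and analytically tractable.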
The vertex $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$ is related to the two-particle irreducible vertex $\Phi_{\nu\nu^{\prime}q}^{c(s)}$ by the Bethe-Salpeter equation \begin{equation} \mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}=\left[ (\Phi_{\nu\nu^{\prime}q}^{c(s)})^{-1}-\delta_{\nu\nu^{\prime}}\chi_{\nu q}^{0}\right] _{\nu\nu^{\prime}}^{-1},\label{BS1} \end{equation} where $\chi_{\nu q}^{0}=-\sum\nolimits_{\mathbf{k}}G_{k}G_{k+q}$, $G_{k}^{-1}=G_{0k}^{-1}-\Sigma_{k}$ is the full Green function and $\Sigma_{k}$ is the electronic self-energy (for DMFT and ladder D$\Gamma$A the latter depends on the fermionic frequency $\nu$ only). The vertex $\Phi_{\nu\nu^{\prime}q}^{c(s)}$ for the considered cases of (E)DMFT and its non-local ladder extensions has, at $\nu\rightarrow\infty$ or $\nu^{\prime}\rightarrow\infty$, the asymptotic form \cite{Toschi} \begin{equation} \Phi_{\nu\nu^{\prime}q}^{c(s)}\rightarrow U_{q}^{c(s)}+\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)},\label{Vertex_ir_as} \end{equation} where $U_{q}^{c}=-(U+2V_{q}^{c}),$ $U_{q}^{s}=U,$ and $\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}$ is given by \begin{align} \overline{\Phi}_{\nu\nu^{\prime}\omega}^{s} & =-U^{2}[\chi_{c}(\nu-\nu^{\prime})-\chi_{s}(\nu-\nu^{\prime})]/2-U^{2}\chi_{pp}(\nu+\nu^{\prime}+\omega)+v^{c}(\nu-\nu^{\prime}),\label{Phi_bar}\\ \overline{\Phi}_{\nu\nu^{\prime}\omega}^{c} & =-U^{2}[\chi_{c}(\nu-\nu^{\prime})+3\chi_{s}(\nu-\nu^{\prime})]/2+U^{2}\chi_{pp}(\nu+\nu^{\prime}+\omega)+v^{c}(\nu-\nu^{\prime}),\nonumber \end{align} $\chi_{c,s,pp}(\omega)$ are the charge, spin, and particle-particle susceptibilities, accounting for the contribution of the respective bubbles in the transverse channel (these contributions are assumed to be local in the ladder approximation considered here), $v^{c}(\omega)$ is the local retarded Coulomb interaction, corresponding to the original non-local interaction $V_q^c$, and obtained, e.g., in EDMFT \cite{EDMFT,EDMFT_Si} (non-local corrections to this
interaction w.r.t. $\nu$ and $\nu'$ are neglected in the considered case of EDMFT and its non-local ladder extensions). Note that $\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}$ can be calculated for arbitrarily large $\nu,\nu^{\prime}$ since $\chi_{c,s,pp}(\omega)$ and $v^{c}(\omega)$ decay as $1/\omega^2$ (outside the bosonic frequency box they can therefore be approximated by zero or replaced by the respective asymptotic behavior). The corresponding asymptotics of the reducible vertices $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$ at large frequency $\nu$ or $\nu^{\prime}$ fulfill \begin{equation} \mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\simeq\left\{ \begin{array}[c]{cc} U_{q}^{c(s)}\Gamma_{\nu^{\prime}q}^{c(s)}+\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}+\sum\limits_{\nu^{\prime\prime}}\overline{\Phi}_{\nu\nu^{\prime\prime}\omega}^{c(s)}\chi_{\nu^{\prime\prime}q}^{0}\mathcal{F}_{\nu^{\prime\prime}\nu^{\prime}q}^{c(s)}, & \nu\rightarrow\infty,\\ U_{q}^{c(s)}\Gamma_{\nu q}^{c(s)}+\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}+\sum\limits_{\nu^{\prime\prime}}\mathcal{F}_{\nu\nu^{\prime\prime}q}^{c(s)}\chi_{\nu^{\prime\prime}q}^{0}\overline{\Phi}_{\nu^{\prime\prime}\nu^{\prime}\omega}^{c(s)}, & \ \nu^{\prime}\rightarrow\infty, \end{array} \right. \ \label{Vertex_as} \end{equation} where the three-leg (fermion-boson) vertex $\Gamma_{\nu q}^{c(s)}$ is defined by \begin{equation} \Gamma_{\nu q}^{c(s)}=1+\sum\limits_{\nu^{\prime}}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0},\label{Lambda} \end{equation} here and below we assume a factor of temperature $T$ for every frequency summation. Note that for completeness we account for the last terms in the right-hand sides of Eqs. (\ref{Vertex_as}), which were omitted in Ref. \cite{Toschi}. We note also that our definition of the vertices $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$ and ${\Phi}_{\nu\nu^{\prime}q}^{c(s)}$ has the opposite sign in comparison to that used in Ref.
\cite{Toschi}, and the vertex $\Gamma_{\nu q}^{c(s)}$ corresponds to the vertices $1\pm\lambda_{\nu q}^{c(s)}$ of that paper. \section{Three-leg vertices and susceptibilities} Our first task is to obtain a closed expression for $\Gamma_{\nu q}^{c(s)}$ containing only summations of vertices (except their asymptotic parts $\overline{\Phi}^{c(s)}_{\nu\nu'\omega}$) within a given frequency box $\nu^{\prime}\in B.$ For that we split the summation in Eq. (\ref{Lambda}) into $\nu^{\prime}\in B$ and $\nu^{\prime}\notin B$ and use the asymptotic form of Eq. (\ref{Vertex_as}). Since $\overline{\Phi}_{\nu^{\prime\prime}\nu^{\prime}\omega}^{c(s)}\propto 1/\nu^2$ for large $\nu=||\nu^{\prime\prime}|-|\nu^{\prime}||$ (see Eqs. (\ref{Phi_bar})), it is sufficient to approximate $\mathcal{F}_{\nu\nu^{\prime\prime}q}^{c(s)}\simeq U_{q}^{c(s)}\Gamma_{\nu q}^{c(s)}$ in the right-hand side of Eq. (\ref{Vertex_as}) to the accuracy $O(1/\nu_{\rm max}^3)$, where $\nu_{\rm max}$ is the size of the frequency box. Substituting this into Eq. (\ref{Lambda}) and splitting also the summation over $\nu''$ in Eq. (\ref{Vertex_as}) into one inside and outside the frequency box, we obtain \begin{equation} \Gamma_{\nu q}^{c(s)}=Z_{\nu q}^{c(s)}+\sum\limits_{\nu^{\prime}\in B}Z_{\nu' q}^{c(s)}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0}+U_{q}^{c(s)}\Gamma_{\nu q}^{c(s)}X_{q}^{c(s)}, \end{equation} where $X_{q}^{c(s)}=\sum\nolimits_{\nu^{\prime}\notin B}\chi_{\nu^{\prime}q}^{0}+\sum\nolimits_{\nu^{\prime},\nu^{\prime\prime}\notin B}\chi_{\nu^{\prime\prime}q}^{0}\overline{\Phi}_{\nu^{\prime\prime}\nu^{\prime}\omega}^{c(s)}\chi_{\nu^\prime q}^{0},$ $Z_{\nu q}^{c(s)}=1+\sum\nolimits_{\nu^{\prime}\notin B}\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}\chi_{\nu^{\prime}q}^{0}$.
From this equation we find \begin{equation} \Gamma_{\nu q}^{c(s)}=\frac{Z_{\nu q}^{c(s)}+\sum\limits_{\nu^{\prime}\in B}Z_{\nu' q}^{c(s)}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0}}{1-U_{q}^{c(s)}X_{q}^{c(s)}}.\label{Lambda_box} \end{equation} The expression (\ref{Lambda_box}) makes it possible to calculate $\Gamma_{\nu q}^{c(s)}$ using summations of vertices (except their asymptotic parts) within the selected frequency box only. Using that $\chi^0_{\nu q}\propto 1/\nu^2$ for large $\nu$ and $\overline{\Phi}_{\nu\nu^{\prime}\omega}^{c(s)}\propto (|\nu|-|\nu^{\prime}|)^{-2}$ for large $||\nu|-|\nu^{\prime}||$, we find that the first and second term in $X_{q}^{c(s)}$ are of the order $1/\nu_{\rm max}$ and $1/\nu_{\rm max}^2$, respectively, and the difference $Z_{\nu q}^{c(s)}-1\propto 1/\nu_{\rm max}^3$, such that the second term in $X_q^{c(s)}$ and the difference $Z_{\nu q}^{c(s)}-1$ are expected to give only a very small contribution for large $\nu_{\rm max}$ (the smallness of these contributions is also verified to hold numerically for the DMFT solution of the single-band Hubbard model with some exemplary parameters, e.g. fillings close to half filling, in Sec. V). Using the obtained fermion-boson vertex, we can similarly find the non-local susceptibilities (which in general should not be confused with the local susceptibilities $\chi^{c(s)}(\omega)$ entering Eq. (\ref{Phi_bar})) by splitting again the summation inside and outside the frequency box \begin{equation} \chi_{q}^{c(s)}=\sum\limits_{\nu}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}=\sum\limits_{\nu\in B}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}+\sum\limits_{\nu\notin B}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}.\label{chi} \end{equation} Performing a similar decomposition for $\Gamma_{\nu\notin B,q}^{c(s)}$ in Eq. (\ref{Lambda}) and using again Eq.
(\ref{Vertex_as}) we find \begin{align} \Gamma_{\nu\notin B,q}^{c(s)} & \simeq Z_{\nu q}^{c(s)} \left[ 1+U_{q}^{c(s)}\chi_{q}^{c(s)}\right] +\sum\limits_{\nu' \in B} \overline{\Phi}_{\nu\nu'\omega}^{c(s)}\chi^0_{\nu' q} \Gamma_{\nu' q}^{c(s)} \nonumber\\ & \overset{\nu\rightarrow\infty}{\rightarrow}1+U_{q}^{c(s)}\chi_{q}^{c(s)}.\label{GammaAs1} \end{align} Combining Eq. (\ref{chi}) with the first line of Eq. (\ref{GammaAs1}) yields \begin{equation} \chi_{q}^{c(s)}=\frac{\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}+X_{q}^{c(s)}}{1-U_{q}^{c(s)}X_{q}^{c(s)}},\label{chi1} \end{equation} which again uses the summation only in a given frequency box. From Eq. (\ref{chi1}) we find \begin{equation} 1+U_{q}^{c(s)}\chi_{q}^{c(s)}=\frac{1+U_{q}^{c(s)}\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}}{1-U_{q}^{c(s)}X_{q}^{c(s)}}.\label{chi2} \end{equation} We have verified that the result in the first line of Eq. (\ref{GammaAs1}), together with Eq. (\ref{chi2}), is identical to the large-$\nu$ limit of Eq. (\ref{Lambda_box}). Let us also consider the \textquotedblleft reduced\textquotedblright\ fermion-boson vertex \cite{EdwHertz} \begin{equation} \gamma_{\nu q}^{c(s)}=\frac{\Gamma_{\nu q}^{c(s)}}{1+U_{q}^{c(s)}\chi_{q}^{c(s)}}, \label{gamma0} \end{equation} which contains the sum of contributions from 2PI vertices with the $U_{q}^{c(s)}$ interaction excluded. This vertex is often used in the D$\Gamma$A \cite{DGA1c}, TRILEX \cite{TRILEX}, some versions of the DB approach \cite{DB4}, the (E)DMFT+2PI-fRG method \cite{MyEDMFT2PI}, etc.
For this vertex we obtain \begin{align} \gamma_{\nu q}^{c(s)} & =\frac{Z_{\nu q}^{c(s)}+\sum\limits_{\nu^{\prime}\in B}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0}Z_{\nu' q}^{c(s)}}{1+U_{q}^{c(s)}\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}}\nonumber\\ & =\frac{Z_{\nu q}^{c(s)}+\sum\limits_{\nu^{\prime}\in B}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0}Z_{\nu' q}^{c(s)}}{1+\widetilde{U}_{q}^{c(s)}\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\left\{ Z_{\nu q}^{c(s)}+\sum\limits_{\nu^{\prime}\in B}\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}\chi_{\nu^{\prime}q}^{0}Z_{\nu' q}^{c(s)}\right\} \chi_{\nu q}^{0}},\label{gamma} \end{align} where $\widetilde{U}_{q}^{c(s)}=U_{q}^{c(s)}/(1-U_{q}^{c(s)}X_{q}^{c(s)}).$ According to Eq. (\ref{GammaAs1}), \begin{equation} \gamma_{\nu\notin B,q}^{c(s)}\simeq Z_{\nu q}^{c(s)}+\sum\limits_{\nu' \in B} \overline{\Phi}_{\nu\nu'\omega}^{c(s)}\chi^0_{\nu' q} \gamma^{c(s)}_{\nu' q}\overset{\nu\rightarrow \infty}{\rightarrow}1. \end{equation} For the irreducible susceptibility $\phi_{q}^{c(s)},$ which is related to the non-local susceptibility $\chi_{q}^{c(s)}$ by \begin{equation} \chi_{q}^{c(s)}=\frac{\phi_{q}^{c(s)}}{1-U_{q}^{c(s)}\phi_{q}^{c(s)}}, \label{locvsir} \end{equation} we find \begin{align} \phi_{q}^{c(s)} & =\frac{\chi_{q}^{c(s)}}{1+U_{q}^{c(s)}\chi_{q}^{c(s)}}\label{phi0}\\ & =\frac{\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}+X_{q}^{c(s)}}{1+U_{q}^{c(s)}\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\Gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}}.\label{phi} \end{align} It can be verified by direct algebraic transformations that the obtained quantities fulfill the result for the irreducible susceptibility, which follows from Eqs. (\ref{chi}), (\ref{gamma0}), and (\ref{phi0}), cf. Ref.
\cite{EdwHertz}, \begin{equation} \sum\limits_{\nu}\gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}=\sum\limits_{\nu\in B}Z_{\nu q}^{c(s)}\gamma_{\nu q}^{c(s)}\chi_{\nu q}^{0}+X_{q}^{c(s)}=\phi_{q}^{c(s)}.\label{phi1} \end{equation} For the following it is convenient to represent the vertex $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$ via a Bethe-Salpeter equation, similar to (\ref{BS1}), \begin{equation} \mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}=\left\{ \left[ \Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}\right] ^{-1}-\delta_{\nu\nu^{\prime}}\chi_{\nu q}^{0}\right\} _{\nu\nu^{\prime}}^{-1},\label{BS2} \end{equation} but with the inversion performed for $\nu,\nu^{\prime}\in B$ only (which provides the difference between $\Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}$ and $\Phi_{\nu\nu^{\prime}q}^{c(s)}$, see Sect. IV). Using this equation and performing algebraic manipulations, similar to those described in Appendix C of Ref. \cite{MyEDMFT2PI}, the result (\ref{gamma}) can be represented in a simpler form \begin{equation} \gamma_{\nu q}^{c(s)}=\sum\limits_{\nu^{\prime}\in B}\left[ 1-\left( \Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}-Z_{\nu q}^{c(s)}\widetilde{U}_{q}^{c(s)}Z_{\nu' q}^{c(s)}\right) \chi_{\nu^{\prime}q}^{0}\right]_{\nu\nu'}^{-1}Z_{\nu' q}^{c(s)},\label{gamma1} \end{equation} where again the inversion is performed for $\nu,\nu^{\prime}\in B$. This result allows us to obtain the fermion-boson vertices $\gamma_{\nu q}^{c(s)}$ by performing summations over frequencies within the chosen frequency box. The size of the frequency box should be such that the asymptotic form (\ref{Vertex_as}) is reached close to the boundary of the frequency box. We also note that a different way of efficiently calculating fermion-boson vertices and irreducible susceptibilities of a non-local theory from the known local ones was suggested in Ref. \cite{Krien}.
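The structure of Eqs. (\ref{Lambda_box}), (\ref{chi1}) and (\ref{gamma}) can be exercised in a deliberately degenerate toy model (an editorial sketch, not taken from the paper): with the simplifying assumptions $\overline{\Phi}=0$ (hence $Z_{\nu q}=1$ and $X_q$ reduces to the out-of-box bubble sum), a constant irreducible vertex $\Phi_{\nu\nu'}=U$, and a fabricated bubble $\chi^0_\nu\propto 1/\nu^2$ with temperature factors absorbed into $\chi^0$, the box formulas reproduce the full-grid ladder results exactly, while a plain truncation of the frequency sums does not.

```python
import numpy as np

# Editorial toy check of Eqs. (Lambda_box), (chi1) and (gamma).
# Assumptions (not from the paper): Phi_bar = 0 (so Z = 1 and X is the
# out-of-box bubble sum), constant irreducible vertex Phi = U, fabricated
# bubble chi0_nu = a/nu^2; temperature factors absorbed in chi0.  The
# large grid stands in for "all" Matsubara frequencies.

Nbig, M = 400, 40
U, a = 0.5, 0.3
nu = 2 * np.arange(-Nbig, Nbig) + 1            # odd integers ~ fermionic frequencies
chi0 = a / nu.astype(float) ** 2
box = np.abs(nu) <= 2 * M                      # small frequency box B

# full-grid ladder: F = Phi + Phi chi0 F  =>  (1 - Phi D) F = Phi
Phi = U * np.ones((nu.size, nu.size))
F = np.linalg.solve(np.eye(nu.size) - Phi @ np.diag(chi0), Phi)

Gamma_exact = 1.0 + F @ chi0                   # Eq. (Lambda) on the full grid
chi_exact = Gamma_exact @ chi0                 # Eq. (chi) on the full grid

X = chi0[~box].sum()                           # out-of-box part of X_q
FB, chi0B = F[np.ix_(box, box)], chi0[box]

Gamma_naive = 1.0 + FB @ chi0B                 # plain truncation of Eq. (Lambda)
Gamma_box = (1.0 + FB @ chi0B) / (1.0 - U * X)        # Eq. (Lambda_box), Z = 1
chi_box = (Gamma_box @ chi0B + X) / (1.0 - U * X)     # Eq. (chi1), Z = 1

Utilde = U / (1.0 - U * X)
num = 1.0 + FB @ chi0B
gamma = num / (1.0 + Utilde * (num @ chi0B))   # Eq. (gamma), Z = 1

assert np.allclose(Gamma_box, Gamma_exact[box], atol=1e-10)
assert abs(chi_box - chi_exact) < 1e-10
assert np.max(np.abs(Gamma_naive - Gamma_exact[box])) > 1e-3
assert np.allclose(gamma, Gamma_exact[box] / (1.0 + U * chi_exact), atol=1e-10)
```

Because this toy is exactly solvable ($F_{\nu\nu'}$ is constant), the box formulas are exact here, while the naive truncation error is of order $U X\sim 1/\nu_{\rm max}$, in line with the error estimates discussed in the text.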
\section{Bethe-Salpeter equation} Now we consider the solution of the Bethe-Salpeter equation (\ref{BS1}) which we write in the form \begin{equation} \mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}=\Phi_{\nu\nu^{\prime}q}^{c(s)}+\sum\limits_{\nu^{\prime\prime}}\Phi_{\nu\nu^{\prime\prime}q}^{c(s)}\chi_{\nu^{\prime\prime}q}^{0}\mathcal{F}_{\nu^{\prime\prime}\nu^{\prime}q}^{c(s)}.\label{BS} \end{equation} Splitting again the summation into the part restricted to the frequency box and the part outside the box and using the asymptotic forms (\ref{Vertex_ir_as}) and (\ref{Vertex_as}), we find \begin{align} \mathcal{F}_{\nu\nu^{\prime}q}^{c(s)} & =\Phi_{\nu\nu^{\prime}q}^{c(s)}+\sum\limits_{\nu^{\prime\prime}\in B}\Phi_{\nu\nu^{\prime\prime}q}^{c(s)}\chi_{\nu^{\prime\prime}q}^{0}\mathcal{F}_{\nu^{\prime\prime}\nu^{\prime}q}^{c(s)}\nonumber\\ & +\sum\limits_{\nu^{\prime\prime}\notin B}\left[ U_{q}^{c(s)}+\overline{\Phi}_{\nu\nu^{\prime\prime}\omega}^{c(s)}\right] \chi_{\nu^{\prime\prime}q}^{0}\left[ U_{q}^{c(s)}\Gamma_{\nu^{\prime}q}^{c(s)}+\overline{\Phi}_{\nu^{\prime\prime}\nu^{\prime}\omega}^{c(s)}+\overline{\Phi}_{\nu^{\prime\prime}\widetilde{\nu}^{\prime\prime}\omega}^{c(s)}\chi_{\widetilde{\nu}^{\prime\prime}q}^{0}\mathcal{F}_{\widetilde{\nu}^{\prime\prime}\nu^{\prime}q}^{c(s)}\right] . \end{align} From this equation we can express $\Phi_{\nu\nu^{\prime}q}^{c(s)}:$ \begin{align} \Phi_{\nu\nu^{\prime}q}^{c(s)} & =\sum\limits_{\nu^{\prime\prime}\in B}\left\{ \mathcal{F}_{\nu\nu^{\prime\prime}q}^{c(s)}-\sum\limits_{\widetilde{\nu}^{\prime\prime}\notin B}\left[ U_{q}^{c(s)}+\overline{\Phi}_{\nu\widetilde{\nu}^{\prime\prime}\omega}^{c(s)}\right] \chi_{\widetilde{\nu}^{\prime\prime}q}^{0}\right. \nonumber\\ & \left.
\times\left[ U_{q}^{c(s)}\Gamma_{\nu^{\prime\prime}q}^{c(s)}+\overline{\Phi}_{\widetilde{\nu}^{\prime\prime}\widetilde{\nu}^{\prime\prime\prime}\omega}^{c(s)}\left( \delta_{\widetilde{\nu}^{\prime\prime\prime}\nu^{\prime\prime}}+\chi_{\widetilde{\nu}^{\prime\prime\prime}q}^{0}\mathcal{F}_{\widetilde{\nu}^{\prime\prime\prime}\nu^{\prime\prime}q}^{c(s)}\right) \right] \right\} \left[ \delta_{\nu^{\prime\prime}\nu^{\prime}}+\chi_{\nu^{\prime\prime}q}^{0}\mathcal{F}_{\nu^{\prime\prime}\nu^{\prime}q}^{c(s)}\right] _{\nu^{\prime\prime}\nu^{\prime}}^{-1}\nonumber\\ & =\sum\limits_{\nu^{\prime\prime}\in B}\mathcal{F}_{\nu\nu^{\prime\prime}q}^{c(s)}\left[ \delta_{\nu^{\prime\prime}\nu^{\prime}}+\chi_{\nu^{\prime\prime}q}^{0}\mathcal{F}_{\nu^{\prime\prime}\nu^{\prime}q}^{c(s)}\right] _{\nu^{\prime\prime}\nu^{\prime}}^{-1}\nonumber\\ & -\sum\limits_{\nu^{\prime\prime}\notin B}\left[ U_{q}^{c(s)}+\overline{\Phi}_{\nu\nu^{\prime\prime}\omega}^{c(s)}\right] \chi_{\nu^{\prime\prime}q}^{0}\left\{ Z_{\nu^{\prime\prime}q}^{c(s)}\widetilde{U}_{q}^{c(s)}Z_{\nu^{\prime}q}^{c(s)}+\overline{\Phi}_{\nu^{\prime\prime}\nu^{\prime}\omega}^{c(s)}\right\} , \end{align} where we have used the result for the fermion-boson vertex (\ref{Lambda_box}) and neglected the terms of higher order than $1/\nu_{\rm max}^3$. Finally, using again the Bethe-Salpeter equation (\ref{BS2}) and performing algebraic transformations, we obtain \begin{equation} \Phi_{\nu\nu^{\prime}q}^{c(s)}=\Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}+U_{q}^{c(s)}-Z_{\nu q}^{c(s)}\widetilde{U}_{q}^{c(s)}Z_{\nu' q}^{c(s)},\label{Phi} \end{equation} where we have again neglected the terms of the order $o(1/\nu_{\rm max}^3)$. The result (\ref{Phi}) can also be derived from \textquotedblleft Method 2\textquotedblright\ of Ref. \cite{Toschi}, which uses the asymptotics of $\mathcal{F}$ (Eq. (19) of that paper), by applying Eqs. (\ref{Vertex_ir_as}), (\ref{Vertex_as}), and (\ref{Lambda_box}) above.
The relation (\ref{Phi}) allows one to find the ``physical'' 2PI vertex from a given vertex $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}$ which is known inside the frequency box ($\nu,\nu^{\prime}\in B$) by exploiting the equation (\ref{BS2}) for the vertex $\Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}$. On the other hand, knowing the vertex $\Phi_{\nu\nu^{\prime}q}^{c(s)}$ and proceeding in the reverse way one can find the corresponding vertex $\mathcal{F}_{\nu\nu^{\prime}q}^{c(s)}.$ In the ladder approximation the vertex $\Phi_{\nu\nu^{\prime}q}^{c(s)}$ is assumed to be local and the same for the local and non-local problems. This allows one to find the relation between the respective vertices $\Phi_{\nu\nu^{\prime}\omega}^{c(s),\mathrm{box}}$ and $\Phi_{\nu\nu^{\prime}q}^{c(s),\mathrm{box}}$ of the local and non-local problem, which are different because of the slight difference of $X_q^{c(s)}$ and $Z_{\nu q}^{c(s)}$. The equation (\ref{Phi}) also provides an explanation of the result (\ref{gamma1}) for the fermion-boson vertex: since $\Phi^{c(s),\mathrm{box}}_{\nu\nu^{\prime}q}-Z_{\nu q}^{c(s)}\widetilde{U}^{c(s)}_{q}Z_{\nu' q}^{c(s)}=\Phi^{c(s)}_{\nu\nu^{\prime}q}-U^{c(s)}_{q}$, the obtained vertex $\gamma_{\nu q}$ in terms of the physical vertex $\Phi^{c(s)}$ has a rather standard form (cf. Ref. \cite{DGA1b}), which is due to the smallness of the difference $\Phi^{c(s)}_{\nu\nu^{\prime}q}-U^{c(s)}_{q}$ in the limit $\nu\rightarrow\infty$ or $\nu^{\prime}\rightarrow\infty$; the factors $Z_{\nu q}^{c(s)}$ play the role of additional vertex corrections, accounting for the finite size of the frequency box. In the approximation $Z_{\nu q}^{c(s)}\approx 1$ (which implies neglecting contributions to the vertex of the order $1/\nu_{\rm max}^3$) the result of Eq. (\ref{Phi}) implies that the physical 2PI vertex and the 2PI vertex obtained in the frequency box via Eq. (\ref{BS2}) differ by a $q$-dependent shift only.
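The constant-shift structure of Eq. (\ref{Phi}) can be made explicit in the same degenerate toy model (an editorial sketch with the assumptions $\overline{\Phi}=0$, $Z=1$, a constant physical vertex $\Phi_{\nu\nu'}=U$ and a fabricated bubble $\chi^0_\nu\propto 1/\nu^2$, temperature factors absorbed into $\chi^0$): the vertex $\Phi^{\mathrm{box}}$ obtained by inverting the Bethe-Salpeter equation inside the box comes out equal to $\widetilde U$, and adding the shift $U-\widetilde U$ recovers the physical vertex $U$.

```python
import numpy as np

# Editorial toy check of Eq. (Phi): Phi = Phi^box + U - Utilde (Z = 1).
# Assumptions (not from the paper): Phi_bar = 0, constant physical vertex
# Phi = U, fabricated bubble chi0_nu = a/nu^2, temperature factors absorbed
# in chi0; the large grid stands in for the full frequency space.

Nbig, M = 400, 40
U, a = 0.5, 0.3
nu = 2 * np.arange(-Nbig, Nbig) + 1
chi0 = a / nu.astype(float) ** 2
box = np.abs(nu) <= 2 * M

# full-grid ladder F = Phi + Phi chi0 F
Phi = U * np.ones((nu.size, nu.size))
F = np.linalg.solve(np.eye(nu.size) - Phi @ np.diag(chi0), Phi)

# Phi^box from Eq. (BS2) restricted to the box: F_B = Phi^box (1 + chi0_B F_B)
FB = F[np.ix_(box, box)]
DB = np.diag(chi0[box])
Phi_box = FB @ np.linalg.inv(np.eye(FB.shape[0]) + DB @ FB)

X = chi0[~box].sum()               # leading (out-of-box bubble) part of X_q
Utilde = U / (1.0 - U * X)

assert np.allclose(Phi_box, Utilde, atol=1e-9)          # box vertex is shifted ...
assert np.allclose(Phi_box + U - Utilde, U, atol=1e-9)  # ... and Eq. (Phi) recovers Phi = U
```

In this degenerate limit the shift is exactly constant; for a nonzero $\overline{\Phi}$ the relation holds up to the $o(1/\nu_{\rm max}^3)$ terms neglected in the derivation.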
The quality of this approximation is verified for the Hubbard model near half filling in Sect. V. The obtained results allow us to compare the accuracy of the vertex calculation with and without the derived corrections for the finite size of the frequency box. Without these corrections, the main neglected contribution to the considered vertices arises from the terms containing $X_{q}^{c(s)}\sim 1/\nu_{\mathrm{max}}$; in this case the error in the two-particle irreducible and fermion-boson vertices also scales as $1/\nu_{\mathrm{max}}$. Accounting for the corrections, the main source of error in the considered vertices is instead the deviation of the irreducible vertices from the asymptotic behavior (\ref{Vertex_ir_as}), which is expected to scale as $1/\max(|\nu|,|\nu'|)^3$ and yields $O(1/\nu_{\mathrm{max}}^4)$ corrections to the vertex. Considering that the contribution of the terms of the order $1/\nu_{\rm max}^2$ (second term in $X_q^{c(s)}$) and $1/\nu_{\rm max}^3$ (i.e. $Z_{\nu q}^{c(s)}-1$) to the vertices is small (see also the numerical verification in Sect. IV), higher-order terms are also expected to provide a small contribution; the suggested method therefore converges fast with increasing size of the box, as verified numerically in the next Section. \section{Numerical example} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{FigGamma.eps} \caption{Real part of the fermion-boson vertex $\gamma^{s}_{\nu,0}$ in DMFT for the two-dimensional Hubbard model ($t^{\prime}=0.15t$, $U=10t$) for $T=0.2t$, $n=1$ (left) and $T=0.08t$, $n=0.96$ (right). Dashed lines correspond to the calculation performed only within the chosen frequency box, while solid lines show the result according to Eq. (\ref{gamma}).
The left plot is shown only for $\nu>0$ (the considered real part of the vertex is an even function of the frequency); the inset shows a zoom of the asymptotic behavior at large frequencies. \label{FigGamma}} \end{figure} As an example of the application of the developed approach, we calculate the spin vertex $\gamma^{s}_{\nu,0}$ within DMFT for the two-dimensional Hubbard model with the dispersion $\epsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})+4t^{\prime}\cos k_{x} \cos k_{y}$. We choose the parameters $t^{\prime}=0.15t$ and $U=10t$, which were suggested previously to describe physical properties of the high-$T_{c}$ compound La$_{2-x}$Sr$_{x}$CuO$_{4}$. For the numerical implementation of DMFT we use the hybridization-expansion continuous-time QMC method within the iQIST package of Refs. \cite{iQIST,iQIST1}, choosing $N_{f}=120$ fermionic Matsubara frequencies for the frequency box. In the left part of Fig. \ref{FigGamma} we show the fermion-boson vertex for a not too low temperature $T=0.2t$ and $n=1$. In this case the chosen frequency box is sufficiently large (the maximal fermionic frequency is $\nu_{\mathrm{max}}\sim75t$) and the results calculated with and without account of finite frequency box effects (we put $X_{q}^{s}=Z_{\nu q}^{s}-1=\overline{\Phi}_{\nu\nu^{\prime}\omega}^{s}=0$ in the latter case) are close to each other, with slightly better agreement with the required asymptotic value for the result calculated with account of the finite frequency box. With decreasing temperature to $T=0.08t$, the maximal fermionic frequency becomes $\nu_{\mathrm{max}}\sim30t$ and we observe a stronger difference between the fermion-boson vertices calculated with and without account of the finite frequency box effect (right part of Fig. \ref{FigGamma}; in this case we also change the filling to $n=0.96$). The vertex evaluated with account of finite frequency box effects approaches the correct limiting value (equal to one).
In both cases we find that the terms related to $\overline{\Phi}_{\nu\nu^{\prime}\omega}^{s}$ (i.e. the second term in $X_{q}^{s}$ and the difference $Z_{\nu q}^{s}-1$) provide a very small contribution ($<2\cdot 10^{-6}$ for $T=0.2t$ and $<10^{-4}$ for $T=0.08t$). We have also verified that the obtained vertices $\gamma_{\nu q}^s$ yield the irreducible local susceptibility $\phi_{\omega}^s$ of Eq. (\ref{phi1}) and the respective local susceptibility $\chi^s_\omega$ of Eq. (\ref{locvsir}), which agree with those obtained directly from the CT-QMC solver (for the static local susceptibility at $T=0.2t$ we find $\chi^s_0=2.2074$ vs.~the QMC result $2.2071$, while for $T=0.08t$ we find $\chi^s_0=3.77$ vs.~$3.78$, respectively). \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{FigPhi.eps} \caption{The 2PI vertex $\Phi^{s}_{\nu_{1}\nu 0}$ (a, $\nu_{1}=\pi T$) and $\Phi^{s}_{\nu\nu 0}$ (b) in DMFT for the two-dimensional Hubbard model ($t^{\prime}=0.15t$, $U=10t$) at $T=0.08t$, $n=0.96$. The short-dashed line corresponds to the calculation performed only within the chosen frequency box, while solid lines show the result according to Eq. (\ref{Phi}). The long-dashed line in (b) shows the limiting value $U+\overline{\Phi}_{\nu\nu 0}^s$, expected according to Eq. (\ref{Vertex_ir_as}); insets show a zoom of the asymptotic behavior. Plots (c) and (d) show the contribution $\Delta \Phi_{\nu \nu' q}=\widetilde U_q^s (1-Z_{\nu q}^{s} Z_{\nu' q}^{s})$ to the third term in the right-hand side of Eq. (\ref{Phi}). \label{FigPhi}} \end{figure} In Figs. \ref{FigPhi}a,b we show the frequency dependence of the 2PI vertex $\Phi_{\nu^{\prime}\nu 0}^{s}$ at a fixed frequency $\nu'=\nu_{1}=\pi T$ (left part) and at two equal frequencies $\nu=\nu^{\prime}$ (right part). For $\nu'=\nu_{1}$ one can see that the obtained correction improves the high-frequency behavior, which is close to $U$ in that case (the contribution $\overline{\Phi}_{\nu\nu^{\prime}0}^s$ is small).
At the same time, for $\nu=\nu^{\prime}$ the obtained correction due to the finite frequency box effect is rather small, and both vertices, with and without account of the finite frequency box effect, approach the expected asymptotic value. The effect of the deviation of the factors $Z_{\nu q}^{s}$ from unity in the third term in the right-hand side of Eq. (\ref{Phi}) remains rather small, as can be seen from Figs. \ref{FigPhi}(c,d). The magnitude of the second vs. the first term in $X_q^{s}$ is analyzed numerically below. \begin{figure}[h!] \centering \includegraphics[width=0.7\linewidth]{FigAs.eps} \caption{The dependence on $\nu_{\rm max}$ of (a,c) the triangular vertex $\gamma^s_{\nu_{5},0}$, (b) the first (solid line) and second (multiplied by 5, dashed line) term in $X_0^{s}$, as well as $Z_{\nu,0}^{s}-1$ (multiplied by 5) with $\nu=\nu_5$ and $\nu=\nu_{30}$ (dot-dashed lines), and (d) the 2PI vertex $\Phi^{s}_{\nu_{1}\nu 0}$ in DMFT for the two-dimensional Hubbard model ($t^{\prime}=0.15t$, $U=10t$, $T=0.08t$, $n=0.96$, $\nu_{n}=(2n-1) \pi T$). Dashed lines in (a,c,d) correspond to the calculation performed only within the chosen frequency box, while solid lines show the results according to Eqs. (\ref{gamma}) and (\ref{Phi}). Dotted lines show the extrapolation of the results calculated at sufficiently large $\nu_{\rm max}$ without account of finite frequency box effects by a quadratic polynomial (with respect to $1/\nu_{\rm max}$), and of the results with account of frequency box effects by $a+b/\nu_{\rm max}^4+c/\nu_{\rm max}^5$. \label{FigAs}} \end{figure} Finally, we verify the scaling of the obtained results with the size of the frequency box $\nu_{\rm max}$. To this end, we vary the number of fermionic frequencies inside the frequency box between $18$ and $120$ and evaluate the respective vertices at fixed frequency $\nu$.
The dependence of both terms of $X_0^{s}$, the factors $Z_{\nu_{5},0}^s$ and $Z_{\nu_{30},0}^s$, the triangular vertex $\gamma^s_{\nu_{5},0}$, and the two-particle irreducible vertex $\Phi^{s}_{\nu_{1}\nu_{5}0}$ ($\nu_n=(2n-1)\pi T$) on the size of the frequency box is shown in Fig. \ref{FigAs}. One can see that, as discussed at the end of the previous Section, the first term in $X_0^{s}$ scales as $1/\nu_{\rm max}$ at large $\nu_{\rm max}$. At the same time, the second term in $X_0^{s}$ scales as $1/\nu_{\rm max}^2$ and therefore becomes negligibly small at sufficiently large $\nu_{\rm max}$. Although $Z_{\nu,0}^s-1\propto 1/\nu_{\rm max}^3$ decays faster than the second term in $X_0^s$, at intermediate $\nu_{\rm max}$ and $\nu\sim\nu_{\rm max}$ it becomes somewhat enhanced (cf. also Figs. \ref{FigPhi}c,d). As also discussed at the end of the previous Section, the deviation of the vertices calculated without account of finite frequency box effects from their values extrapolated to $\nu_{\rm max}\rightarrow \infty$ (obtained using a quadratic polynomial in $1/\nu_{\rm max}$) scales as $1/\nu_{\rm max}$. At the same time, the vertices calculated using the obtained formulae change very weakly with $1/\nu_{\rm max}$ (we have verified that this holds for all $|\nu|<\nu_{\rm max}$). Using $a+b/\nu_{\rm max}^4+c/\nu_{\rm max}^5$ fits for the vertices obtained with account of finite frequency box effects, we find the extrapolation results consistent with those for the vertices obtained without account of finite frequency box effects, which demonstrates the applicability of the obtained formulae.
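The extrapolation procedure described above can be sketched as follows; the box cutoffs and the vertex values are synthetic (constructed with an assumed $1/\nu_{\rm max}$ leading error), not data from the calculations in this paper:

```python
import numpy as np

# Fit vertex values computed at several box sizes by a quadratic
# polynomial in 1/nu_max and read off the nu_max -> infinity limit
# (the constant term of the fit).
nu_max = np.array([15.0, 30.0, 60.0, 120.0])        # assumed box cutoffs
gamma_true = 1.0                                    # assumed exact value
gamma = gamma_true + 0.8 / nu_max + 0.3 / nu_max**2 # synthetic vertices

x = 1.0 / nu_max
coeffs = np.polyfit(x, gamma, deg=2)                # c2*x^2 + c1*x + c0
gamma_extrapolated = coeffs[-1]                     # value at x = 0
```

Since the synthetic data follow the fitted model exactly, the constant term recovers the assumed exact value; for real vertices, at least three sufficiently large box sizes are needed to determine the three coefficients.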
From these results it follows that in practical calculations without account of finite frequency box effects, because of the strong dependence of the vertices on $1/\nu_{\rm max}$, at least three different sufficiently large frequency box sizes have to be considered to determine the coefficients of the quadratic polynomial and, therefore, the extrapolated values of the vertices. At the same time, since the results obtained with account of finite frequency box effects change very weakly with the frequency box size, a single such calculation is sufficient for reasonable accuracy. \section{Conclusion} In conclusion, we have derived explicit formulae for the full (Eq. (\ref{Lambda_box})) and reduced (Eqs. (\ref{gamma}), (\ref{gamma1})) fermion-boson vertices, the full (Eq. (\ref{chi1})) and irreducible (Eq. (\ref{phi})) susceptibilities, and the 2PI vertex (\ref{Phi}), which contain summations only within a given frequency box. These formulae account for the contribution of the frequencies outside the frequency box via the terms containing $X_{q}^{c(s)}$ and $Z_{\nu q}^{c(s)}$. In contrast to the approach that neglects the corrections due to the finiteness of the frequency box, whose error scales as $1/\nu_{\rm max}$, the considered approach is expected to show a $1/\nu_{\rm max}^4$ scaling of the error and therefore requires rather small frequency box sizes. We have numerically verified the applicability of the obtained results for the two-dimensional Hubbard model with next-nearest-neighbor hopping and strong Coulomb repulsion. The obtained results can be used in a broad range of applications of diagrammatic extensions of dynamical mean-field theory. \textit{Acknowledgements. } The work is partly supported by the theme ``Quant" AAAA-A18-118020190095-4 of Minobrnauki, Russian Federation. The calculations are performed on the ``Uran" cluster of UB RAS.
\section{Introduction} Accurately predicting the properties of molecules lies at the core of fundamental tasks in the chemical and pharmaceutical communities. In light of deep learning, several supervised models have been investigated to learn molecular representations through predicting molecular properties~\cite{DBLP:conf/icml/GilmerSRVD17,DBLP:journals/jcisd/YangSJCEGGHKMPS19,DBLP:conf/ijcai/SongZNFLY20}. While effective, these methods face the challenge of limited labeled data, since annotating data through laboratory experiments is expensive and time-consuming. Moreover, due to the enormous diversity of chemical molecules, these works could barely generalize to unseen cases~\cite{DBLP:conf/iclr/HuLGZLPL20,DBLP:conf/nips/RongBXX0HH20}, which greatly hinders practical applicability. One line of work to alleviate these issues is to design pretext tasks to learn node or graph representations without labels. Several attempts have been made to investigate different strategies for such tasks, including masked attribute prediction~\cite{DBLP:conf/iclr/HuLGZLPL20}, graph-level motif prediction~\cite{DBLP:conf/nips/RongBXX0HH20}, and graph context prediction~\cite{DBLP:conf/nips/LiuDL19}. The other line follows a contrastive learning framework from the computer vision domain~\cite{DBLP:conf/cvpr/WuXYL18,DBLP:conf/icml/ChenK0H20}, which aims to construct similar and dissimilar view pairs via graph augmentations, including node dropping, edge perturbation, subgraph extraction, and attribute masking~\cite{DBLP:conf/nips/YouCSCWS20}. Due to the smaller number of parameters and simpler predefined tasks, we adopt contrastive learning in our work. However, unlike images, contrastive learning on graphs has its unique challenges. First, the structural information and semantics of the graphs vary significantly across domains, which makes it difficult to design a universal augmentation scheme for graphs.
Especially for molecular graphs, removing or adding a chemical bond or a functional group will drastically change their identities and properties~\cite{DBLP:conf/nips/YouCSCWS20}. More importantly, existing graph contrastive learning models mainly focus on graph structures, without incorporating fundamental domain knowledge into graph semantics. Another neglected defect is that they model the atoms in molecular graphs as individuals that can only interact when there exists an edge (i.e., a chemical bond), failing to consider the correlations between atoms (e.g., commonalities between atoms with the same attributes). To overcome these challenges, we enrich molecular graph contrastive learning by incorporating domain knowledge. Since chemical domain knowledge is a crucial prior, we hypothesize that the attributes of elements (an atom is an instance of an element) can affect molecular properties. To obtain the domain knowledge and build microscopic correlations between atoms, we first construct a Chemical Element Knowledge Graph (KG) based on the Periodic Table of Elements~\footnote{\url{https://ptable.com}}. The Chemical Element KG describes the relations between elements (denoted in green in Figure~\ref{intro}) and their basic chemical attributes (e.g., periodicity and metallicity, denoted in red in Figure~\ref{intro}). Then we augment the original molecular graph with the guidance of the Chemical Element KG, as shown in Figure~\ref{intro}, which helps to establish the associations between atoms that have common attributes but are not directly connected by bonds. In this way, the augmented molecular graph contains not only structural topologies but also the fundamental domain knowledge of elements.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{intro} \caption{Chemical Element KG builds associations between atoms that are not directly connected by bonds but related in fundamental chemical attributes, as denoted by red arrows.} \label{intro} \end{figure} On top of that, we propose a novel \textbf{K}nowledge-enhanced \textbf{C}ontrastive \textbf{L}earning (KCL) framework to improve the molecular representation with three modules. {(1)} The \textit{knowledge-guided graph augmentation} module leverages Chemical Element KG to guide the graph augmentation process. While preserving the topology structure, the augmented molecular graph also builds associations that cannot be observed explicitly. {(2)} The \textit{knowledge-aware graph representation} module learns molecular representations. We adopt a commonly used graph encoder for the original molecular graphs, while designing a Knowledge-aware Message Passing Neural Network (KMPNN) encoder to provide heterogeneous attentive message passing for different types of knowledge in the augmented molecular graph. {(3)} The \textit{contrastive objective} module trains the encoders to maximize the agreement between positives and the discrepancy between hard negatives. To the best of our knowledge, this is the first work to construct a KG based on fundamental knowledge of chemical elements and use it to guide molecular contrastive learning. Our contributions can be summarized as follows: \begin{itemize} \item We construct a Chemical Element KG, which describes the relations between elements and their chemical attributes. It can assist various molecular learning tasks beyond the ones in this paper. \item We develop a new contrastive learning framework (KCL) with three modules: knowledge-guided graph augmentation, knowledge-aware graph representation, and contrastive objective.
\item We evaluate KCL on eight molecular datasets under both fine-tuning and linear protocols and demonstrate its superiority over state-of-the-art methods. \end{itemize} \section{Related Works} \paragraph{Molecular Representation Learning} In light of deep learning,~\citeauthor{DBLP:conf/nips/DuvenaudMABHAA15} first applied convolutional networks to map molecules into neural fingerprints. Subsequent works fed SMILES (a line notation for describing the structure of chemical species using short ASCII strings) into recurrent network-based models to produce molecular representations~\cite{DBLP:journals/corr/JastrzebskiLC16,DBLP:conf/bcb/0005WZH17}. To utilize the topology information in the molecular graph, MPNN~\cite{DBLP:conf/icml/GilmerSRVD17} and its variants DMPNN~\cite{DBLP:journals/jcisd/YangSJCEGGHKMPS19}, CMPNN~\cite{DBLP:conf/ijcai/SongZNFLY20}, and CoMPT~\cite{DBLP:journals/corr/abs-2107-08773} leverage the node and edge attributes during message passing. However, all the above-mentioned works are supervised models, require expensive annotations, and could barely generalize to unseen molecules, which greatly hinders their feasibility in practice. \paragraph{Self-Supervised Learning on Graphs} Self-supervised learning addresses such a limitation by pre-training on molecular graphs. ~\citeauthor{DBLP:conf/nips/LiuDL19} exploited the idea of N-grams in NLP and conducted vertex embedding by predicting the vertex attributes. ~\citeauthor{DBLP:conf/iclr/HuLGZLPL20} designed two pre-training tasks, i.e., predicting neighborhood context and node attributes, to learn meaningful node representations, then used graph-level multi-task pre-training to refine graph representations. Alternatively, GROVER~\cite{DBLP:conf/nips/RongBXX0HH20} incorporated a Transformer-style architecture and learned node embeddings by predicting contextual properties and motif information.
Other works~\cite{DBLP:conf/ijcai/ShangMXS19,DBLP:conf/aaai/SunLZ20,DBLP:conf/icml/YasunagaL20} utilized similar strategies for either node- or graph-level pre-training. \paragraph{Contrastive Learning on Graphs} Contrastive learning is a widely used self-supervised learning algorithm. Its main idea is to make the representations of positive pairs agree with each other and those of negatives disagree as much as possible~\cite{DBLP:conf/nips/YouCSCWS20}. One key component is to generate informative and diverse views from each data instance. Previous graph augmentations generated views by randomly shuffling node features~\cite{DBLP:conf/iclr/VelickovicFHLBH19,DBLP:conf/icml/HassaniA20}, removing edges, or masking nodes~\cite{DBLP:conf/nips/YouCSCWS20}. However, these perturbations may hurt the domain knowledge inside graphs, especially for chemical compounds. MoCL~\cite{DBLP:journals/corr/abs-2106-04509} proposed a substructure substitution and incorporated two-level knowledge to learn richer representations, while CKGNN~\cite{DBLP:journals/corr/abs-2103-13047} selected positive pairs via fingerprint similarity. But they ignore the fundamental domain knowledge. \section{Methodology} \begin{figure*}[!ht] \centering \includegraphics[width=2\columnwidth]{overview} \caption{An illustrative example for KCL. We ignore edge directions in four molecular graphs due to space limitation (the direction of an edge between an attribute and an atom is from the former to the latter, while an edge between atoms is bidirectional). Module 1: Knowledge-guided graph augmentation converts the original molecular graph $\mathcal{G}$ into the augmented molecular graph $\mathcal{G}'$ based on Chemical Element KG. Module 2: Knowledge-aware graph representation captures representations from two graph views separately.
Module 3: Contrastive objective trains the encoders and the projection head to maximize agreement between positives and disagreement between hard negatives (e.g., $\mathcal{G}_j$ acts as the hard negative of $\mathcal{G}_i$) via a contrastive loss.} \label{overview} \vspace{-1em} \end{figure*} \subsection{Problem Formulation} A molecule can be represented as a graph $\mathcal{G}=\{\mathcal{V}, \mathcal{E}\}$, where $\mathcal{V}$ denotes the set of $n$ atoms (nodes) and $\mathcal{E}$ denotes the set of $m$ bonds (edges). Each edge is bidirectional. Let $\mathcal{N}_v$ denote the set of node $v$'s neighbors. We use $x_v$ to represent the initial features of node $v$, and $e_{uv}$ as the initial features of edge $(u,v)$. Let $\boldsymbol{h}(v)$ be the node hidden state and $\boldsymbol{h}(e_{uv})$ the edge hidden state. In the setting of self-supervised graph representation learning, our goal is to learn graph encoders $f: \mathcal{G} \mapsto \mathbb{R}^d$ which map an input graph to a vector representation without the presence of any labels. The learned encoders and representations can then be used for downstream tasks. \subsection{Overview} Figure~\ref{overview} shows the overview of our work. We propose a contrastive learning framework called KCL with three modules: (1) Knowledge-guided graph augmentation transforms any given molecular graph $\mathcal{G}$ into an augmented molecular graph $\mathcal{G}'$ with the guidance of Chemical Element KG. (2) Knowledge-aware graph representation aims to extract representations from $\mathcal{G}$ and $\mathcal{G}'$ respectively. (3) Contrastive objective aims to project representations to the space where the contrastive loss is applied and to train the encoders to maximize the agreement between positive pairs and the discrepancy between hard negatives. \subsection{Knowledge-guided Graph Augmentation} \paragraph{Chemical Element KG Construction.} The prerequisite of our work is to collect the fundamental chemical domain knowledge.
Previous attempts~\cite{delmas2021building,DBLP:conf/ijcai/LinQWMZ20} built KGs from public chemical databases and scientific literature to extract associations between chemicals and diseases or drug pairs, but none of them considered the fundamental information in chemical elements. In contrast, we crawl all the chemical elements and their attributes from the Periodic Table of Elements. Each element contains more than 15 attributes, including metallicity, periodicity, state, weight, electronegativity, electron affinity, melting point, boiling point, ionization, radius, hardness, modulus, density, conductivity, heat, and abundance. After that, the extracted triples in the form of (Gas, isStateOf, Cl) are added to the KG, indicating that there are specified relations between elements and attributes. However, since each element has different continuous attributes, it is difficult for the KG to model their connections. To overcome this difficulty, we histogramize the continuous attributes and convert them into discrete labels (e.g., DensityGroup1, RadiusGroup2). The statistics of Chemical Element KG are summarized in Table~\ref{KGstat}. \begin{table}[!t] \small \centering \begin{tabular}{lcl} \hline \hline & \small{Chemical Element KG} \\ \hline Elements & 118\\ Attributes & 107\\ Entities & 225\\ Relation Types & 17\\ KG Triples & 1643\\ \hline \hline \end{tabular} \caption{The statistics of Chemical Element KG.} \label{KGstat} \vspace{-1em} \end{table} \paragraph{Graph Augmentation.} Since most existing augmentation approaches (e.g., node dropping and edge perturbation) violate the chemical semantics inside molecules and ignore the influence of fundamental knowledge on graph semantics, we address these issues by proposing a knowledge-guided graph augmentation module with the guidance of Chemical Element KG.
Specifically, as shown in Figure~\ref{overview}, we extract the 1-hop neighbor attributes (nodes in red) of atoms (nodes in green) in a molecule from Chemical Element KG and add the triples as edges (edges in red). For example, we add a node ``Gas'' and an edge from ``Gas'' to ``Cl'' to the original molecular graph based on the triple (Gas, isStateOf, Cl). Note that the direction of each edge between an attribute and an atom is from the former to the latter, while the edges between atoms are bidirectional. Then we obtain an augmented molecular graph, in which the original molecular structure is preserved and neighborhood topologies for atom-related attributes are introduced. While preserving the topology structure, the augmented molecular graph $\mathcal{G}'$ also incorporates the fundamental domain knowledge within elements, as well as the microscopic associations between atoms that have common attributes but are not directly connected by bonds. The augmented molecular graph thus contains richer and more complex information, and is treated as a positive sample in contrastive learning. \subsection{Knowledge-aware Graph Representation} \paragraph{Knowledge Feature Initialization.} Different from the random initialization of atoms and bonds, in order to obtain the initial features of attributes and relations in the augmented molecular graph, we adopt the commonly used KG embedding method RotatE~\cite{DBLP:conf/iclr/SunDNT19} to train Chemical Element KG. In this way, the initial features can capture the structural information of the triples. The necessity of this step is demonstrated in subsequent experiments. More details are given in Appendix~\ref{Feature Initialization}.
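The knowledge-guided augmentation described earlier in this subsection can be sketched on a toy example; the two-atom molecule, the KG triples, and the node indexing below are illustrative assumptions, not the paper's actual data structures:

```python
# Attach 1-hop attribute neighbors from the Chemical Element KG as new
# nodes, with directed edges (attribute -> atom), while keeping the
# bidirectional atom-atom bond edges.

# original molecule: a Cl-C bond (atoms indexed 0 and 1)
atoms = ["Cl", "C"]
bonds = [(0, 1), (1, 0)]                 # bidirectional bond edges

# toy slice of the Chemical Element KG: (attribute, relation, element)
kg_triples = [("Gas", "isStateOf", "Cl"),
              ("Solid", "isStateOf", "C")]

nodes = list(atoms)
edges = list(bonds)
for attr, rel, element in kg_triples:
    if attr not in nodes:                # add the attribute node once
        nodes.append(attr)
    for i, a in enumerate(atoms):
        if a == element:
            edges.append((nodes.index(attr), i))   # directed: attr -> atom
```

The resulting graph keeps the original bond topology and gains attribute nodes shared by all atoms of the same element, which is how atoms with common attributes become indirectly associated.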
\paragraph{KMPNN Encoder.} Although various architectures can be adopted, since the augmented molecular graphs are complex irregular-structured data that combine two types of information (i.e., the structure knowledge implied in molecular bonds and the domain knowledge extracted from Chemical Element KG), we design a KMPNN encoder as $f'(\cdot)$ to learn their graph-level representations. The key idea behind KMPNN is that we provide two types of message passing for different types of neighbors, and assign them different attention according to their importance. Algorithm~\ref{algorithm:KMPNN} describes the KMPNN encoding process. The input of the encoder is the augmented molecular graph $\mathcal{G}'=\{\mathcal{V}, \mathcal{E}\}$, including the initial features of all nodes $x_v$, $\forall v \in \mathcal{V}$, and the features of all edges $e_{uv}$, $\forall (u,v) \in \mathcal{E}$. $K$ rounds of message passing are then applied to all nodes. We enable heterogeneous message passing with two $\mathrm{M}\scriptstyle\mathrm{SG}$ functions, where $\mathrm{M}\scriptstyle\rm{SG_1\displaystyle(\cdot)}$ is applied to neighbors representing atoms, and $\mathrm{M}\scriptstyle\rm{SG_0\displaystyle(\cdot)}$ is applied to attributes in the neighborhood. The indicator function $\boldsymbol{1}_{[\cdot]}$ is used to index the selection of these functions: $\boldsymbol{1}_{[u=a]}=1$ if $u$ represents an atom and 0 otherwise. In this way, the nodes with the same type of knowledge share parameters during message passing. Apart from the above, we extend message passing by self-attention. We compute attention coefficients and normalize them with the softmax function to make the coefficients easily comparable across different nodes.
Following~\cite{DBLP:conf/iclr/VelickovicCCRLB18}, the coefficients can be expressed as: \begin{small} \begin{equation} \alpha_{uv}=\frac{\exp \left(\mathrm{LeakyReLU}\left(\boldsymbol{a}^T\left[\boldsymbol{W} \boldsymbol{h}_{u}|| \boldsymbol{W} \boldsymbol{h}_{v}\right]\right)\right)}{\sum_{k \in \mathcal{N}_{u}} \exp \left(\mathrm{LeakyReLU}\left(\boldsymbol{a}^T\left[\boldsymbol{W} \boldsymbol{h}_{u}|| \boldsymbol{W} \boldsymbol{h}_{k}\right]\right)\right)}, \end{equation} \end{small} where $\cdot ^T$ represents transposition and $||$ is the concatenation operation. The attention mechanism is implemented as a single-layer feedforward neural network, parametrized by a weight vector $\boldsymbol{a}$ and followed by a LeakyReLU activation. Once obtained, the normalized attention coefficients are used to compute a linear combination of the features corresponding to them: \begin{small} \begin{equation} \mathrm{M}\scriptstyle\mathrm{SG}_{0}\displaystyle= \alpha_{uv}\boldsymbol{W}_0\boldsymbol{h}^{k-1}(e_{uv}) \cdot \boldsymbol{h}^{k-1}(u), \end{equation} \end{small} where $\boldsymbol{W}_0$ denotes the weight matrix operating on incoming relations. This attentive message passing function allows for assigning different attention values to neighbor nodes, based on the intuition that different attributes have different importance to the atom. \begin{algorithm}[!t] \caption{KMPNN encoding algorithm.} \label{algorithm:KMPNN} \textbf{Input}: The augmented molecular graph $\mathcal{G}'=\{\mathcal{V}, \mathcal{E}\}$; message function $\mathrm{M}\scriptstyle\mathrm{SG\displaystyle(\cdot)}$; aggregate function $\mathrm{A}\scriptstyle\mathrm{GG}$; update function $\mathrm{U}$. \\ \textbf{Output}: Graph embedding $\boldsymbol{h}_{\mathcal{G}'}$.
\begin{algorithmic}[1] \STATE $\boldsymbol{h}^0(v)\leftarrow x_v$, $\forall v \in \mathcal{V}$; $\boldsymbol{h}^0(e_{uv})\leftarrow e_{uv}$, $\forall (u,v) \in \mathcal{E}$ \FOR{$k=1,\dots,K$} \FOR{$v \in \mathcal{V}$} \STATE $\boldsymbol{m}^k(v) \leftarrow \mathrm{A}\scriptstyle\mathrm{GG}\displaystyle(\{\mathrm{M}\scriptstyle\mathrm{SG}_{\boldsymbol{1}[u=a]}\displaystyle(\boldsymbol{h}^{k-1}(e_{uv}),\boldsymbol{h}^{k-1}(u)), $ $\forall u \in \mathcal{N}(v)\})$ \\ $\boldsymbol{h}^k(v) \leftarrow \mathrm{U}\displaystyle(\boldsymbol{h}^{k-1}(v), \boldsymbol{m}^k(v))$ \ENDFOR \ENDFOR \STATE $\boldsymbol{h}_{\mathcal{G}'} \leftarrow \mathrm{R}\scriptstyle\mathrm{EADOUT}\displaystyle (\{\boldsymbol{h}^{K}(v) ,\forall v \in \mathcal{V} \})$ \end{algorithmic} \end{algorithm} Since the messages delivered by different neighbor atoms to the central atom also have various importance, atoms in the neighborhood follow a common process with different parameters: \begin{small} \begin{equation} \mathrm{M}\scriptstyle\mathrm{SG}_{1}\displaystyle =\beta_{uv}\boldsymbol{W}_1\boldsymbol{h}^{k-1}(e_{uv}) \cdot \boldsymbol{h}^{k-1}(u), \end{equation} \end{small} where $\beta_{uv}$ denotes the attention coefficient between atoms and $\boldsymbol{W}_1$ is the weight matrix of incoming bonds. In the message diffusion module, we collect the messages from the neighboring edges in message aggregation, \begin{small} \begin{equation} \boldsymbol{m}^k(v)= \sum_{u \in \mathcal{N}(v)} \mathrm{M}\scriptstyle\mathrm{SG}\displaystyle(\boldsymbol{h}^{k-1}(e_{uv}) , \boldsymbol{h}^{k-1}(u)), \end{equation} \end{small} and apply a GRU as the update function, \begin{small} \begin{equation} \boldsymbol{h}^{k}(v) =\mathrm{GRU}(\boldsymbol{h}^{k-1}(v), \boldsymbol{m}^k(v)), \end{equation} \end{small} where GRU is the Gated Recurrent Unit introduced in~\cite{DBLP:conf/ssst/ChoMBB14}.
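A simplified sketch of one round of the heterogeneous message passing above; for brevity the attention coefficients and the GRU update are replaced by a plain sum and a linear-plus-tanh update, and all weights, node states, and the toy neighborhood are random or assumed placeholders:

```python
import numpy as np

# One round: neighbors that are atoms use weight matrix W1, attribute
# neighbors use W0 (the indicator function selects between them);
# messages are summed and the state is updated from [h_v, m_v].
rng = np.random.default_rng(0)
d = 4
h = rng.normal(size=(3, d))            # node states: 0,1 atoms; 2 attribute
is_atom = np.array([True, True, False])
neighbors = {0: [1, 2], 1: [0], 2: []} # attribute node 2 points into atom 0

W0 = rng.normal(size=(d, d))           # for attribute -> atom messages
W1 = rng.normal(size=(d, d))           # for atom -> atom messages
W_upd = rng.normal(size=(2 * d, d))    # stand-in for the GRU update

h_new = h.copy()
for v, nbrs in neighbors.items():
    if not nbrs:
        continue                       # nodes without neighbors keep state
    msgs = [(W1 if is_atom[u] else W0) @ h[u] for u in nbrs]
    m = np.sum(msgs, axis=0)           # aggregation over the neighborhood
    h_new[v] = np.tanh(np.concatenate([h[v], m]) @ W_upd)
```

Sharing `W0` across all attribute neighbors and `W1` across all atom neighbors mirrors the parameter sharing between nodes of the same knowledge type.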
After $K$ steps of iteration, a readout operator is applied to get a graph-level representation for the molecule: \begin{small} \begin{equation} \boldsymbol{h}_{\mathcal{G}'} = \mathrm{Set2set}(\boldsymbol{h}^{K}(v)), \end{equation} \end{small} where set2set~\cite{DBLP:journals/corr/VinyalsBK15} is specifically designed to operate on sets and has more expressive power than simply summing the final node states. \paragraph{GNN-based Encoder.} There is no constraint on the network architecture for $f(\cdot)$. We opt for simplicity and adopt the commonly used GCN~\cite{DBLP:conf/iclr/KipfW17} to obtain $\boldsymbol{h}_\mathcal{G}=f(\mathcal{G})$, which is the output after weighted-sum and max-pooling readout. \subsection{Contrastive Objective} \paragraph{Projection Head.} A non-linear transformation $g(\cdot)$ named projection head maps both the original and augmented representations to another latent space where the contrastive loss is calculated, as advocated in~\cite{DBLP:conf/icml/ChenK0H20}. In KCL, a two-layer perceptron (MLP) is applied to obtain $\boldsymbol{z}=g(\boldsymbol{h}_{\mathcal{G}})$ and $\boldsymbol{z'}=g(\boldsymbol{h}_{\mathcal{G}'})$. Note that after pre-training is completed, we throw the projection head away and only use the encoders for downstream tasks. \paragraph{Negative Mining.} Instead of randomly choosing graphs other than the anchor instance as negatives~\cite{DBLP:conf/nips/YouCSCWS20,DBLP:journals/corr/abs-2106-04509}, we consider an additional hard negative mining scheme by treating molecules similar to the anchor instance as negatives. Specifically, we represent each molecule by its Morgan fingerprint~\cite{DBLP:journals/jcisd/RogersH10}, which perceives the presence of specific circular substructures around each atom in a molecule and encodes it in a fixed-length binary vector.
Then we calculate the molecular similarity through their Tanimoto coefficient~\cite{DBLP:journals/jcheminf/BajuszRH15}: \begin{small} \begin{equation} s(\boldsymbol{e}_1,\boldsymbol{e}_2) = \frac{N_{12}}{N_1+N_2-N_{12}}, \end{equation} \end{small} where $\boldsymbol{e}_1,\boldsymbol{e}_2$ denote the fingerprints, $N_1,N_2$ denote the number of 1s in $\boldsymbol{e}_1,\boldsymbol{e}_2$ respectively, and $N_{12}$ denotes the number of 1s in the intersection of $\boldsymbol{e}_1,\boldsymbol{e}_2$. In order to ensure that all molecules have negative samples, instead of setting a fixed threshold, we sort samples by similarity and select a batch of the most similar molecules as the negative samples. \paragraph{Contrastive Loss.} We augment a minibatch of $N$ similar molecular graphs with knowledge-guided graph augmentation, resulting in a total of $2N$ graphs. Following~\cite{DBLP:conf/nips/YouCSCWS20,DBLP:conf/icml/ChenK0H20}, given a positive pair, we treat the other $2(N-1)$ graphs within the same minibatch as hard negative samples. We utilize NT-Xent as our objective function, as in~\cite{DBLP:conf/iclr/HjelmFLGBTB19,DBLP:conf/icml/ChenK0H20,DBLP:conf/nips/YouCSCWS20,DBLP:conf/isbi/CarseCM21}. The training objective for $(\mathcal{G}_i,\mathcal{G}_i')$ is defined as \begin{equation} \ell_{i}=-\log \frac{e^{ \rm{sim} \left(\boldsymbol{z}_i, \boldsymbol{z}_i'\right) / \tau}}{\sum_{j=1}^{N} \left( e^{ \rm{sim} \left(\boldsymbol{z}_i, \boldsymbol{z}_j'\right) / \tau} + e^{\rm{sim} \left(\boldsymbol{z}_i', \boldsymbol{z}_j\right) / \tau} \right)} , \end{equation} where $\tau$ denotes the temperature parameter and sim($\boldsymbol{z}_1, \boldsymbol{z}_2$) is the cosine similarity $\frac{\boldsymbol{z}_{1}^{\top} \boldsymbol{z}_{2}}{\left\|\boldsymbol{z}_{1}\right\| \cdot\left\|\boldsymbol{z}_{2}\right\|}$. The final loss is computed across all positive pairs in the minibatch.
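The similarity measure and the loss above can both be sketched in a few lines of NumPy. The fingerprints, batch size, and embedding dimension below are illustrative; the denominator follows the equation exactly (both sums include the $j=i$ terms):

```python
import numpy as np

def tanimoto(e1, e2):
    """Tanimoto coefficient s(e1, e2) = N12 / (N1 + N2 - N12) for binary fingerprints."""
    e1, e2 = np.asarray(e1, bool), np.asarray(e2, bool)
    n12 = np.sum(e1 & e2)
    return n12 / (e1.sum() + e2.sum() - n12)

def nt_xent(Z, Zp, tau=0.1):
    """NT-Xent over a batch: Z[i] and Zp[i] form the positive pair (z_i, z_i')."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)     # row-normalize so that
    Zp = Zp / np.linalg.norm(Zp, axis=1, keepdims=True)  # dot products are cosine sims
    E = np.exp(Z @ Zp.T / tau)                           # E[i, j] = exp(sim(z_i, z'_j)/tau)
    # Denominator: sum_j exp(sim(z_i, z'_j)/tau) + sum_j exp(sim(z'_i, z_j)/tau)
    denom = E.sum(axis=1) + E.sum(axis=0)
    return float(np.mean(-np.log(np.diag(E) / denom)))

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 16))
aligned = nt_xent(Z, Z)                        # positives perfectly aligned
shuffled = nt_xent(Z, np.roll(Z, 1, axis=0))   # positives mismatched
```

As expected, the loss is lower when each positive pair is well aligned in the latent space than when pairs are mismatched.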
\section{Experiments} In this section, we conduct extensive experiments to examine the proposed method by answering the following questions: \textbf{Q1}: How does KCL perform compared with state-of-the-art methods for molecular property prediction? \textbf{Q2}: Does the knowledge-guided graph augmentation in Module 1 learn better representations than general augmentations? \textbf{Q3}: How do knowledge feature initialization and graph encoders in Module 2 affect KCL? \textbf{Q4}: How useful are the self-supervised contrastive learning and hard negative strategy in Module 3? \textbf{Q5}: How can we interpret KCL(KMPNN) from a domain-specific perspective? \begin{table*}[!h] \small \centering \begin{tabular}{c|cccccc|cc} \hline \hline Task & \multicolumn{6}{c|}{Classification (ROC-AUC)} & \multicolumn{2}{c}{Regression (RMSE)} \\ \hline Dataset & B\small{BBP} & T\small{ox21} & T\small{oxCast} & S\small{IDER} & C\small{linTox} & B\small{ACE} & E\small{SOL} & F\small{reeSolv} \\ \#Molecules & 2039 & 7831 & 8575 & 1427 & 1478 & 1513 & 1128 & 642 \\ \#Tasks & 1 & 12 & 617 & 27 & 2 & 1 & 1 & 1 \\ \hline G\small{CN~\cite{DBLP:conf/iclr/KipfW17}} & 0.877 & 0.772 & 0.650 & 0.638 & 0.807 & 0.854 & 1.068 & 2.900 \\ W\small{eave~\cite{DBLP:journals/jcamd/KearnesMBPR16}} & 0.837 & 0.741 & 0.678 & 0.621 & 0.823 & 0.791 & 1.158 & 2.398 \\ M\small{PNN~\cite{DBLP:conf/icml/GilmerSRVD17}} & 0.913 & 0.808 & 0.691 & 0.641 & 0.879 & 0.815 & 1.167 & 2.185 \\ D\small{MPNN~\cite{DBLP:journals/jcisd/YangSJCEGGHKMPS19}} & 0.919 & 0.826 & 0.718 & 0.632 & 0.897 & 0.852 & 0.980 & 2.177 \\ C\small{MPNN~\cite{DBLP:conf/ijcai/SongZNFLY20}} & 0.927 & 0.806 & 0.738 & 0.636 & 0.902 & 0.869 & 0.798 & \underline{0.956} \\ C\small{oMPT~\cite{DBLP:journals/corr/abs-2107-08773}} & 0.938 & 0.809 & \underline{0.740} & 0.634 & 0.934 & 0.871 & \underline{0.774} & 1.855 \\ \hline N\small{-GRAM~\cite{DBLP:conf/nips/LiuDL19}} & 0.912 & 0.769 & - & 0.632 & 0.870 & 0.876 & 1.100 & 2.512 \\ Hu \small{et
al.~\cite{DBLP:conf/iclr/HuLGZLPL20}} & 0.915 & 0.811 & 0.714 & 0.614 & 0.762 & 0.851 & - & - \\ G\small{ROVER~\cite{DBLP:conf/nips/RongBXX0HH20}} & \underline{0.940} &\underline{0.831} & 0.737 & \underline{0.658} & \underline{0.944} & \underline{0.894} & 0.831 & 1.544 \\ \hline K\small{CL(GCN)} & 0.956 & 0.856 & \textbf{0.757} & 0.666 & 0.945 & \textbf{0.934} & \textbf{0.582} & 0.854 \\ K\small{CL(KMPNN)} & \textbf{0.961} & \textbf{0.859} & 0.740 & \textbf{0.671} & \textbf{0.958} & 0.924 & 0.732 & \textbf{0.795} \\ \hline \hline \end{tabular} \caption{The property prediction performance (lower is better for regression) of KCL under the fine-tune protocol, compared with supervised learning baselines (first group) and pre-training methods (second group) on 8 datasets.} \label{fine-tune} \end{table*} \subsection{Experimental Setup} \paragraph{Pre-training Data Collection.} We collect 250K unlabeled molecules sampled from the ZINC15 dataset~\cite{DBLP:journals/jcisd/SterlingI15} to pre-train KCL. \paragraph{Fine-tuning Tasks and Datasets.} We use 8 benchmark datasets from MoleculeNet~\cite{wu2018moleculenet} to perform the experiments, which cover a wide range of molecular tasks such as quantum mechanics, physical chemistry, biophysics, and physiology. For each dataset, as suggested by~\cite{wu2018moleculenet}, we perform three independent runs with random-seeded random splitting or scaffold splitting, using a train/validation/test ratio of 8:1:1. Details of the datasets and dataset splitting are deferred to Appendix~\ref{Dataset Description}. \paragraph{Baselines.} We adopt three types of baselines: \begin{itemize} \item \textit{Supervised learning methods}: GCN~\cite{DBLP:conf/iclr/KipfW17} and Weave~\cite{DBLP:journals/jcamd/KearnesMBPR16} are two types of graph convolutional methods.
MPNN~\cite{DBLP:conf/icml/GilmerSRVD17} and its variants DMPNN~\cite{DBLP:journals/jcisd/YangSJCEGGHKMPS19}, CMPNN~\cite{DBLP:conf/ijcai/SongZNFLY20}, CoMPT~\cite{DBLP:journals/corr/abs-2107-08773} consider the edge features and strengthen the message interactions between bonds and atoms during message passing. \item \textit{Pre-trained methods}: N-GRAM~\cite{DBLP:conf/nips/LiuDL19} learns node embeddings by predicting node attributes. Hu et al.~\cite{DBLP:conf/iclr/HuLGZLPL20} and GROVER~\cite{DBLP:conf/nips/RongBXX0HH20} are pre-trained models incorporating both node-level and graph-level pretext tasks. \item \textit{Graph contrastive learning baselines}: InfoGraph~\cite{DBLP:conf/iclr/SunHV020} maximizes the mutual information between nodes and graphs. MICRO-Graph~\cite{DBLP:conf/aaai/Subramonian21} is a motif-based contrastive method. GraphCL~\cite{DBLP:conf/nips/YouCSCWS20} constructs contrastive views of graph data via hand-picked ad-hoc augmentations. JOAO~\cite{DBLP:conf/icml/YouCSW21} automates the augmentation selection. MoCL~\cite{DBLP:journals/corr/abs-2106-04509} utilizes domain knowledge at two levels to assist representation learning. \end{itemize} \paragraph{Evaluation Protocol.} The evaluation process follows two steps. We first pre-train the model and then evaluate the learned model on downstream tasks under two protocols. \begin{itemize} \item \textit{Fine-tune protocol}: To achieve the full potential of our model, given graph embeddings output by the KCL encoder, we use an additional MLP to predict the property of the molecule. Parameters in both the encoders and the MLP are fine-tuned. \item \textit{Linear protocol}: To compare our model with contrastive learning baselines, we fix the graph embeddings from the pre-trained model and train a linear classifier on top. \end{itemize} \paragraph{Implementation details.} We use the Adam optimizer with an initial learning rate of 0.0001 and batch size of 256.
For pre-training, the number of epochs is fixed at 20. The temperature $\tau$ is set as 0.1. For downstream tasks, we use early stopping on the validation set. We apply random search to obtain the best hyper-parameters based on the validation set. Our model is implemented with PyTorch~\cite{NEURIPS2019_9015} and Deep Graph Library~\cite{wang2019dgl}. All code was developed on an Ubuntu server with 4 GPUs (NVIDIA GeForce 1080Ti). More experimental details are available in Appendix~\ref{Implementation and Pre-training Details} and~\ref{Downstream Details}. \subsection{Performance Comparison (Q1 \& Q2)} \paragraph{Performance under Fine-tune Protocol.} We first examine whether the proposed KCL performs better than SOTA methods. Table~\ref{fine-tune} displays the complete results of supervised learning baselines and pre-trained methods, where the underlined cells indicate the previous SOTAs, and the bold cells show the best results, achieved by KCL. Tox21, SIDER, and ClinTox are all multi-task learning datasets, comprising 42 classification tasks in total. We implemented two versions of our KCL model: one applying a GCN encoder to the original molecular graph, and one applying KMPNN as the encoder to the augmented molecular graph. Table~\ref{fine-tune} offers the following observations: {(1)} KCL consistently achieves the best performance on all datasets with large margins. The overall relative improvement is 7.1\% on all datasets (2.6\% on classification tasks and 20.4\% on regression tasks)\footnote{We use relative improvement to provide unified descriptions.}. This notable performance improvement suggests the effectiveness of KCL for molecular property prediction tasks. {(2)} On the small dataset FreeSolv with only 642 labeled molecules, KCL gains a 16.8\% improvement over SOTA baselines. This confirms the strength of KCL, since it can significantly help with tasks having very limited label information.
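As a concrete instance of the relative-improvement arithmetic, the FreeSolv figure follows directly from the reported RMSE values (previous best 0.956 from CMPNN vs.\ 0.795 from KCL(KMPNN); lower is better):

```python
# Relative improvement on FreeSolv (RMSE, lower is better), from the fine-tune
# results table: previous best (CMPNN) 0.956 vs. KCL(KMPNN) 0.795.
sota, kcl = 0.956, 0.795
rel_improvement = (sota - kcl) / sota   # ~0.168, i.e. the reported 16.8%
```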
\paragraph{Performance under Linear Protocol.} \begin{table} \small \centering \begin{tabular}{p{1.3cm}<{\centering}|p{0.6cm}<{\centering}p{0.58cm}<{\centering}p{0.75cm}<{\centering}p{0.6cm}<{\centering}p{0.65cm}<{\centering}p{0.65cm}<{\centering}} \hline \hline Dataset & B\small{BBP} & T\small{ox21} & T\small{oxCast} & S\small{IDER} & C\small{linTox} & B\small{ACE} \\ \hline Node & 0.843 & 0.728 & 0.633 & 0.577 & 0.635 & 0.746 \\ Edge & 0.833 & 0.715 & 0.619 & 0.605 & 0.630 & 0.657 \\ Subgraph & 0.815 & 0.727 & 0.625 & 0.583 & 0.603 & 0.629 \\ Attribute & 0.826 & 0.726 & 0.623 & 0.621 & 0.671 & 0.796 \\ \hline I\small{nfoGraph} & 0.611 & 0.615 & 0.562 & 0.502 & 0.458 & 0.594 \\ M\small{ICRO} & 0.830 & 0.718 & 0.595 & 0.573 & 0.735 & 0.708 \\ G\small{raphCL} & 0.697 & 0.739 & 0.624 & 0.605 & 0.760 & 0.755 \\ J\small{OAO} & 0.714 & 0.750 & 0.632 & 0.605 & \underline{0.813} & 0.773 \\ M\small{oCL} & \underline{0.905} & \underline{0.768} & \underline{0.653} & \underline{0.628} & 0.750 & \underline{0.845} \\ \hline K\small{CL(G)} &\textbf{0.929} & 0.821 & 0.696 & 0.620 &\textbf{0.909} & \textbf{0.902} \\ K\small{CL(K)}& 0.927 & \textbf{0.825} & \textbf{0.709} & \textbf{0.659} & 0.898 & 0.860 \\ \hline \hline \end{tabular} \caption{The performance of KCL under the linear protocol on 6 datasets, compared with contrastive learning baselines. The metric is ROC-AUC.} \label{linear} \end{table} We next study whether the knowledge-guided graph augmentation in Module 1 helps learn better molecular representations. Table~\ref{linear} shows the comparison results of different augmentations (node dropping, edge perturbation, subgraph extraction, and attribute masking) and contrastive learning methods. To be consistent with prior works and make the comparisons fair, we use the linear protocol, matching the evaluation used by the baselines, to evaluate the performance on classification datasets.
Results on regression tasks are deferred to Appendix~\ref{Regression Results under Linear Protocol}. Both versions of KCL produce better results compared to alternative graph augmentation methods (the first group in Table~\ref{linear}). This verifies our assumption that knowledge-guided graph augmentation does not violate the biological semantics of molecules and thus works better than other augmentations. Moreover, KCL gains a 7.0\% improvement over the previous best contrastive learning methods (the second group), which confirms that better representations of molecular graphs could be obtained by incorporating fundamental chemical domain knowledge and capturing microscopic associations between atoms. \subsection{Ablation Study (Q3 \& Q4)} \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{ablation} \caption{Performance of KCL with different settings under the fine-tune protocol (lower is better for regression).} \label{ablation} \end{figure} We then conducted ablation studies to investigate the components in Modules 1 and 2 that influence the performance of the proposed KCL framework. As shown in Figure~\ref{ablation}, KCL with knowledge feature initialization and the hard negative mining scheme (bar in yellow) shows the best performance among all architectures. Models with random initialization and random negative sampling, denoted by ``w/o ALL'', almost always perform the worst. Excluding either of these two components can easily result in a decrease in performance. This illustrates that both knowledge feature initialization and the hard negative mining strategy are necessary for KCL, because the former captures the structural triple information, while the latter guides the encoders to generate more discriminative representations.
\begin{table} \begin{small} \centering \begin{tabular}{ccc} \hline \hline Task & Classification & Regression \\ \hline GCN(No contrast) & 0.766 & 1.984\\ KMPNN(No contrast) & 0.806 & 1.531 \\ \hline KCL(GIN) & 0.849 & \underline{0.718} \\ KCL(GAT) & 0.850 & 0.724 \\ KCL(GCN) & \underline{0.852} & \underline{0.718} \\ \hline KCL(RGCN) & 0.831 & 1.008 \\ KCL(MPNN) & 0.833 & 0.927 \\ KCL(KMPNN) & \underline{0.852} & \underline{0.765} \\ \hline \hline \end{tabular} \caption{Ablation results under the fine-tune protocol. Each value represents the average result over the tasks, and the underline marks the best in the group.} \label{tab:ablation} \end{small} \end{table} Since our graph encoders are pluggable, we replaced both GCN and KMPNN with other architectures to explore the impact of graph encoders. The results in Table~\ref{tab:ablation} demonstrate that applying different encoders (e.g., GIN~\cite{DBLP:conf/iclr/XuHLJ19}, GAT~\cite{DBLP:conf/iclr/VelickovicCCRLB18}) to the original molecular graphs has no significant impact on performance. In addition, we ignore the different types of nodes and edges in the augmented graphs and replace KMPNN with a previous heterogeneous graph neural network (RGCN~\cite{DBLP:conf/esws/SchlichtkrullKB18}) and the general message passing framework (MPNN~\cite{DBLP:conf/icml/GilmerSRVD17}). The comparisons reveal that KMPNN has better expressive power by providing heterogeneous attentive message passing for the different types of knowledge on the augmented molecular graphs. The specific values are deferred to Appendix~\ref{Effect of Different Settings} and \ref{Effect of Different Encoders}. To investigate the contribution of the self-supervision strategy, we compare the performance of KCL with and without contrastive learning under the fine-tune protocol (the counterpart under the linear protocol is deferred to Appendix~\ref{Effect of Contrastive Learning}). We report the comparison results in Table~\ref{tab:ablation}.
The self-supervised contrastive learning leads to a performance boost with an average increase of 8.5\% on classification and 56.9\% on regression over the model without contrastive learning. This confirms that contrastive learning can learn better representations by narrowing the distance between the structural view and the knowledgeable view in the latent space, and enhance the prediction performance of downstream tasks. \subsection{Chemical Interpretability Analysis (Q5)} Finally, we explore the interpretability of our model by visualizing the attention of each edge in a molecule. Specifically, we extract and normalize each atom's attention weights over its neighbors from the last layer of KCL(KMPNN). \begin{figure}[!t] \centering \includegraphics[width=0.95\columnwidth]{attention} \caption{An attention visualization example of different types of neighbors (attributes and atoms) in the BBBP dataset. The attention weights assigned for bonds connected to the two C atoms are visualized on the right. The darker the color, the higher the attention.} \label{attention} \end{figure} Figure~\ref{attention} illustrates an example in the BBBP dataset~\cite{DBLP:journals/jcisd/MartinsTPF12}. BBBP records whether a compound has the permeability to penetrate the blood-brain barrier. As shown in the left part of the figure, atoms tend to assign more attention to their electron affinity, electronegativity, metallicity, and ionization. These attributes are closely related to atoms' ability to lose electrons. The strength of the atom's ability to gain or lose electrons will largely affect the polarity of the molecule, thereby affecting its permeability. In addition, more chemically active atomic neighbors attract more attention, as illustrated on the right side of Figure~\ref{attention}. The element Cl has relatively higher electronegativity, and hence a stronger ability to gain electrons.
Also, the hydroxyl group promotes hydrophilicity and thus is assigned higher attention. Another interesting observation is that fine-grained attributes (e.g., weight, radius) receive less attention than coarse-grained attributes (e.g., electron affinity, electronegativity, metallicity, and ionization). This is because coarse-grained attributes are more abstract and informative than fine-grained attributes, and therefore contain richer domain knowledge. This is in line with hierarchical machine learning, where coarse-grained features at higher levels can be seen as a summary of fine-grained features in terms of target prediction. More examples and discussions on other datasets are in Appendix~\ref{Chemical Interpretability Analysis}. \section{Conclusion and Future Work} This paper aims to incorporate fundamental domain knowledge into molecular graph representation learning. We construct Element KG to build microscopic connections between elements, and propose to utilize knowledge in the KCL framework to enhance molecular graph contrastive learning. We demonstrate the effectiveness of KCL under both fine-tune and linear protocols, and experiments show that KCL surpasses previous methods with better interpretation and representation capability. In the future, we intend to extend our work in several aspects. First, we will introduce different granularities of domain knowledge to enrich Chemical Element KG. Second, we will improve the current KG with more description logics defined in OWL2, such as more object properties and axioms. Third, we will open-source Chemical Element KG, continue to improve its quality, and expand its scale. \newpage \section*{Acknowledgement} We want to express gratitude to the anonymous reviewers for their hard work and kind comments. This work is funded by NSFC U19B2027/91846204 and the National Key Research Program 2018YFB1402800.
\section{Introduction} As the scale of parallel computers constantly grows, it becomes increasingly difficult for application developers to maintain strong scalability. For example, on the Summit supercomputer at Oak Ridge National Laboratory, on the order of 100 million operations need to be executed simultaneously in order to fully utilize all processing elements. As the number of processing elements is expected to steeply increase as we approach the exascale era \cite{alexander2020exascale}, it is paramount to develop novel strategies to maximize the amount of parallelism exposed by the applications. A now common programming model in scientific applications is task-based programming, where the execution of the application is factored into tasks of varying granularity that are then scheduled for execution using a runtime system \cite{Robson:2016:RCH:3018814.3018821,bauer2012legion}. This model has proved to be powerful in a range of contexts, and is now deployed in production scientific applications \cite{di2020htr,doi:10.1063/5.0014475,torres2019soleil,jain:isc2016}. A potentially promising generalization of task-based programming to further improve performance at very large scales is task-level speculative execution, akin to a distributed-memory version of Thread Level Speculation \cite{10.1145/2400682.2400698,10.1145/2821505}. In our approach, (coarse) computational tasks are made available for execution before it can be established that they will definitely be used as part of the calculation. If speculations can be made accurately so that the results of most executed tasks are eventually used, this strategy has the potential to enable higher concurrency, and hence to improve scalability.
This manuscript considers the problem of optimal resource allocation in a speculative task-execution setting where a task usefulness probability, i.e., a probability that the results of a speculatively executed task will be consumed as part of the calculation (hereafter referred to as the \textit{task probability}), can be explicitly computed or estimated. This is a rich problem as, in practice, task probabilities will often be conditioned on the current state of the application, and can therefore dynamically change as execution proceeds. Further, the run time of individual tasks is often much shorter than the run time of the application as a whole. As a result, as tasks complete, freeing up previously allocated resources, new tasks have to quickly be identified to take their place. Finally, the optimal allocation of resources to tasks (i.e., how many computing resources are dedicated to the execution of each given task) is not only dependent on the individual task probabilities, but is also tied to the distribution of task probabilities of all other tasks that are available for execution. In the following, we consider such a dynamic setting where tasks are assumed to be preemptable and restartable with a different resource allocation. It then becomes possible to periodically reevaluate the optimal allocation, pausing the execution of all running tasks, and reallocating resources as needed. This can either be done at fixed time intervals, when tasks complete, or when a change of context dictates. With each update, a newly derived optimal allocation can be executed, resuming paused tasks (with a potentially different resource allocation) as well as starting new ones if needed. This pausing and resuming of tasks allows the optimal allocation to adapt to the dynamic variability of the system, maintaining an optimal (expected) throughput at all times.
In the following, we present a generic analysis of this approach as well as a case study of a specific scientific application called Parallel Trajectory Splicing \cite{ParSplice}, which is adapted to a setting where task probabilities can be explicitly estimated. \section{Previous work} Modern parallel computing architectures have complex memory hierarchies as well as heterogeneous processors. In order to achieve high performance on such architectures, programming models such as Legion \cite{bauer2012legion} are organized into logical regions that express locality and independence of data and tasks. The instances of these logical regions can be assigned to specific memories and processors in the machine during run-time. Similar logical hierarchies are also introduced in OpenMP 5.0 \cite{OpenMP5}, Chapel \cite{chamberlain2007parallel}, Charm++ \cite{kale1993charm++}, etc. for task-based parallelism. These task-based systems are capable of dynamic load balancing for scheduling and mapping tasks for optimal performance on the underlying hardware. Locality-aware parallelism has been well studied in non-speculative systems, whereas only a select few speculative systems utilizing parallelism via thread-level speculation (TLS) \cite{steffan2000scalable} or hardware transactional memory (HTM) \cite{bobba2007performance} can scale beyond a few nodes. One such system is described in the work of Jeffrey et al.~\cite{jeffrey2016data}, where program knowledge is leveraged to provide \textit{spatial hints} to indicate the data that is likely to be accessed by a speculative task. In this work, we adapt and augment this idea and speculatively schedule tasks based on their usefulness in contributing to the overall computation in order to increase throughput. Many resource allocation strategies have been explored in the context of load balancing to efficiently use the existing hardware. A naïve way to allocate resources is to base it on peak utilization.
However, designing a resource allocation strategy based on worst-case needs is not a viable approach, as it results in excessive resource estimates. Many static and dynamic approaches \cite{almeida1995comparative,andonov1993dynamic,morales1995integral,algorithms1988gibbons} have been proposed to distribute the problem pieces optimally over different nodes with the objective of balancing the execution time. However, the issue with most of these optimization problems is the curse of dimensionality: the search space grows exponentially with the size of the problem, a difficulty compounded by the potential impact of emerging hardware such as smart interconnects \cite{rajamony2011percs,faanes2012cray} with advanced traffic-monitoring capabilities. Static approaches distributing the load at compile time have limitations, as the performance is not only dependent on problem size but also on many dynamic factors. Adaptive resource management techniques \cite{rosu1997adaptive} try to overcome these limitations by dynamically allocating resources to different processes. To provide software support, the MPI-2 standard also introduced dynamic process creation using the MPI\_Comm\_spawn function \cite{balaji2011mpi}. This function enables the creation of new processes at run-time. To mitigate poor resource allocation and load balancing in dynamic MPI spawning, fuzzy scheduling algorithms \cite{moussa2017intelligent} for dynamic processes have been explored. Control-theory-based techniques are also used for adaptive resource allocation, employing standard feedback controllers with an auto-regressive prediction model to predict resource needs \cite{xu2009grey}. Many resource monitoring, prediction, and allocation strategies have been explored in cloud computing environments \cite{minarolli2014distributed,wei2015towards,ma2016auto}. Solutions including genetic algorithms \cite{tseng2017dynamic}, neural networks, etc.
are explored for the prediction and allocation of resources in cloud data centers \cite{chen2015resource}. All of these approaches are based on learned behavior from heuristics and do not consider the inherent probabilities of individual tasks at an application level. In any resource allocation problem \cite{morales2000design}, limited resources are to be allocated to a set of tasks to maximize effectiveness. Dynamic programming has also been explored in this setting, and it can be shown \cite{elmaghraby1993resource,powell2002adaptive,denardo2012dynamic} that the problem can be solved using a simple sequential multi-stage dynamic programming algorithm in $\mathcal{O}(N^2M)$ time. Pipeline-based algorithms \cite{gonzalez2003towards,jahn2015runtime} that mimic instruction pipelines within processors have also been attempted; however, most of these approaches incur a high communication cost. \section{Methods} \subsection{Optimal Throughput} Consider the problem of allocating resources among a (potentially infinite) number of candidate tasks in a speculative task execution setting on a machine containing $N$ hardware slots on which tasks can be assigned (nodes, cores, GPUs, etc.). Tasks can be run in parallel over a certain number of slots $w_i$, in which case task completion requires an expected time $T(w_i)$. For simplicity, it is assumed that all tasks are computationally equivalent; however, a task-specific $T_\alpha(w_i)$ can be introduced in the derivation below without additional complication. Each of the candidate tasks is assigned a probability $p_i$ of ultimately being used as part of the overall execution of the calculation, which is abstractly conceived as a workflow that progresses by consuming completed tasks. This probability can either be a rigorously derived value or a heuristic estimate.
Note that each task can also be assigned a weight that reflects how much it would contribute to the calculation, scaling the corresponding $p_i$ accordingly to obtain an expected utility. In the following, however, the $p$'s will still be referred to as probabilities, for simplicity. To simplify and accelerate the allocation process (which is important in a context where probabilities are adjusted and resources re-allocated at a high rate), the resource assignment problem is solved in a continuous setting where the $w_i$ are real numbers instead of integers. This enables an extremely efficient solution scheme. These values can then be discretized after optimization is complete, yielding an approximate solution but at a greatly reduced computational cost. In what follows, it is assumed that the tasks are ordered by decreasing probability. The optimal allocation of resources consists of determining the number of tasks that should be executed, $M$ (i.e., the $M$ tasks with the highest probabilities are selected for execution), as well as the resources assigned to each task, $w_i$. The objective function to be optimized is the instantaneous expected throughput from the $M$ tasks that are selected for execution: \begin{center} $R(M,\{w_i\})=\sum_i^M{p_i/T(w_i)}$ \end{center} where $T(w_i)$ is the expected time to complete a task when provided $w_i$ resources. $R(M,\{w_i\})$ measures the expected rate at which useful results are generated for a given allocation $M,\{w_i\}$. The problem is constrained by requiring that the allocation fully utilizes available resources, so that $\sum_i^M w_i = N$. In pathological cases where there are more resources available than could possibly be used ($N>M\,w_\mathrm{max}$, where $w_\mathrm{max}$ is the maximum allocation which a single task can fully utilize, as will be defined below), $N$ is replaced with $M\,w_\mathrm{max}$. This constraint can be enforced by introducing a Lagrange multiplier $\lambda$ to the objective function.
\begin{center} $\mathcal{L}(M,\{w_i\},\lambda)=\sum_i^M{p_i/T(w_i)}+\lambda(\sum_i w_i-N)$ \end{center} Extremizing the Lagrangian with respect to $w_i$ and $\lambda$ yields \begin{center} $p_i F(w_i)+\lambda=0$ \\ $\sum_i w_i=N$ \\ \end{center} respectively, where the function $F$ is defined as $F(w_i) \coloneqq-T^{'}(w_i)/T^2(w_i)$. Therefore, given an explicit expression for $T(w_i)$, one can invert the function $F$ and obtain an explicit expression for $w_i$: $w_i=F^{-1}(-\lambda/p_i),$ which depends only on the Lagrange multiplier $\lambda$ and on the task probabilities $p_i$. At a given value of $M$, the allocation problem is reduced to solving a 1D root-finding problem in $\lambda$, $\sum_i w_i=N$, which yields the values of $\{w_i\}$ that maximize the throughput for this value of $M$. Note that this formulation can yield $w_i=0$, so that considering the first $M$ tasks in the optimization problem is not guaranteed to allocate resources to all of them. Finally, the optimal number of tasks to consider, $M^*$, is taken to be the value which maximizes the expected throughput over all values of $M$. The allocation problem is therefore reduced to solving two embedded 1D problems, which can be done very efficiently. In practice, an explicit expression for $T(w_i)$ is obtained by fitting to benchmark results. Benchmarks were carried out on dual-socket Intel Broadwell E5-2695V4 nodes. In Section \ref{ApplicationSection}, an application to parallelized materials simulation is considered. For this work, a benchmark analysis of the molecular dynamics code LAMMPS \cite{LAMMPS,LAMMPSCODE} was conducted as shown in Figure \ref{fig:benchmark}. The function $T(w_i)$ was obtained by running an identical task in parallel over a varying number of cores and recording the time to complete said task. Fractional core counts are obtained when oversubscribing the hardware slots.
The recorded times were then fit to the functional form $a+b/x+d\log(gx)+h/x^2$, which was loosely based on Amdahl's law \cite{amdahl1967validity}, adding a heuristic $\log$ term to account for the cost of synchronization and a $1/x^2$ term to provide an oversubscription penalty. The specific functional form is not crucial; other smooth approximations could have been used instead. \begin{figure} \centering \includegraphics[width=\linewidth]{fig1.png} \caption{Benchmark analysis of the molecular dynamics code LAMMPS \cite{LAMMPSCODE}. Run times in red were measured for an identical task executed over a varying number of cores. Fractional core counts were obtained via oversubscribing the hardware slots. In blue, a functional form $a+b/x+d\log(gx)+h/x^2$ was fit to the data to produce the invertible function $T(w)$ with coefficients $a=-2.38, b=481.42, d=2.32, g=21.76,$ and $h=7.10$.} \label{fig:benchmark} \end{figure} As shown in Figure \ref{fig:benchmark}, $T(w)$ possesses a minimum, after which the time to execute a task begins to increase with increasing resources due to communication and synchronization overheads. As no optimal resource allocation can include $w$'s in this regime (because a higher throughput could always be obtained with even fewer resources), this branch of the function $T(w)$ is ignored when obtaining $F^{-1}$. The minimum of $T(w)$ therefore defines the maximum allocation ($w_\mathrm{max}$, roughly $200$ cores in this case) which can be fully utilized by a task, and hence a corresponding minimum time in which a task can be completed; here $T(w_\mathrm{max})$ is roughly $19.5$ seconds. This quantity becomes important in conjunction with $T(1)$, the time to complete a task at the maximum parallel efficiency, as their ratio will be shown to correspond to an upper bound on achievable performance gains. 
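As a consistency check, these quantities can be recovered directly from the fit coefficients reported in the caption of Figure \ref{fig:benchmark}: setting $T'(w)=-b/w^2+d/w-2h/w^3=0$ and multiplying by $w^3$ gives the quadratic $d w^2 - b w - 2h = 0$, whose positive root locates the minimum.

```python
import math

# Fit coefficients reported in the caption of Figure 1.
a, b, d, g, h = -2.38, 481.42, 2.32, 21.76, 7.10

def T(w):
    return a + b / w + d * math.log(g * w) + h / w ** 2

# Positive root of d*w^2 - b*w - 2h = 0, i.e., the minimum of T(w).
w_max = (b + math.sqrt(b * b + 8 * d * h)) / (2 * d)
```

With these coefficients the minimum falls near $w_\mathrm{max}\approx 208$ cores, with $T(w_\mathrm{max})\approx 19.5$ seconds and $T(1)/T(w_\mathrm{max})\approx 25$, consistent with the values quoted in the text and with the maximum boost derived below.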
In addition to ignoring the $w>w_\mathrm{max}$ branch, the domain of $F$ is restricted to those values of $w$ where $F$ is monotonically decreasing, which is required for the solution to be a maximum of the throughput, in contrast to a minimum. $F$ is therefore invertible, so that $F^{-1}$ is well defined. \subsection{ParSplice} In the following, the potential benefits of optimal resource allocation in a speculative task execution setting are demonstrated by studying an existing scientific application called Parallel Trajectory Splicing, or ParSplice \cite{ParSplice,perez2017long}. ParSplice is a method in the family of Molecular Dynamics (MD) simulations. MD numerically integrates the classical equations of motion of atoms using interatomic forces derived from the gradient of a user-provided potential that describes the interactions between atoms. MD is broadly used in the computational sciences, with applications to materials science, biology, chemistry, etc. MD is extremely powerful, but also computationally intensive. While domain-decomposition approaches enable the use of massively-parallel computers to extend the simulation length-scales \cite{LAMMPS}, similar approaches do not allow for significant extension in timescales except for very large systems, due to communication and synchronization overhead. Extending timescales instead requires specialized techniques \cite{AKMC,hyperdynamics,TAD,Zamora2020,perez2015parallel}. ParSplice is one such technique where parallelization is carried out in the time domain, thereby avoiding synchronization costs inherent to domain decomposition. It however comes with a tradeoff: instead of generating a trajectory that is continuous in phase space, it produces a discrete state-to-state trajectory, where a state corresponds to a finite volume in the phase space of the problem. 
States are usually defined to correspond to long-lived metastable topologies of the system (such as the attraction region of deep local energy minima), and so state-to-state trajectories are sequences of transitions between such long-lived states. ParSplice works by concurrently and asynchronously generating many short “segments” of MD trajectory in such a way that they can later be spliced together to create a single state-to-state trajectory. Generating a “segment” involves creating an independent realization of the system’s trajectory (by solving a stochastic differential equation) that is initialized in some assigned starting state and evolved through MD until a physics-motivated stopping condition is achieved, after which the final state visited by the trajectory (which may or may not be the same as the starting state) is noted. So, in short, a segment is composed of an initial and a final state, separated by some MD time (see Figure \ref{fig:genSeg}). These segments are then returned to a database where they are stored until they can be spliced. Due to the specially-designed protocol by which segments are produced and stored \cite{aristoff2019generalizing}, any segment in the database can be spliced onto any other so long as it began in the same state in which the other finished (see Figure \ref{fig:spliceSeg}). This allows for a single state-to-state trajectory to be formed by extracting individual segments from the database and splicing them onto the end of the trajectory. For further details on how the independent generation and splicing of segments is guaranteed to produce statistically correct state-to-state trajectories, the reader is referred to the original manuscript \cite{ParSplice}. \begin{figure} \centering \includegraphics[width=\linewidth]{fig2.jpg} \caption{Conceptual illustration of segment generation: An MD trajectory is initialized in some assigned ``circle" state and then dynamically evolved forward in time through MD. 
After the stopping criterion is met, the final state of the MD trajectory is noted and used to produce a ParSplice ``segment".} \label{fig:genSeg} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{fig3.jpg} \caption{Conceptual illustration of segment splicing: Left panel, only segments which start in the state in which the previously spliced segment ended (here the ``diamond" state) can be spliced. Right panel, splicing a segment involves extracting it from the database and appending it to the state-to-state trajectory.} \label{fig:spliceSeg} \end{figure} Because the individual segments are independently produced in parallel, ParSplice can offer a potential wall-clock speedup proportional to the number of MD instances. Achieving this ideal level of parallel efficiency however requires that every segment generated is eventually spliced into the state-to-state trajectory. Therefore, while the accuracy of a trajectory is ensured solely by the independent generation and splicing of segments according to the ParSplice prescriptions, the efficiency of ParSplice is a function of its ability to forecast ahead of the trajectory and assign segments to be generated in those states where they are most likely to be needed. As such, ParSplice follows the speculative execution paradigm discussed above: at any point in time, only one segment is strictly guaranteed to be spliced into the trajectory (a segment that begins in the state where the trajectory currently ends), but one can identify a much larger number of segments that could potentially be spliced in the future. Towards this goal, ParSplice develops a discrete time Markov Model (MM) on-the-fly from the previously generated segments and uses this model to assign starting states for new segments to be generated. The MM encodes the estimated probability that a segment generated from state $i$ will end in state $j$. 
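The splicing loop itself reduces to a simple greedy procedure, sketched below in illustrative form (the actual protocol additionally enforces the statistical guarantees of \cite{aristoff2019generalizing}); here the database is assumed to map a start state to a list of (end state, MD time) segments.

```python
def splice(trajectory_end, database):
    """Greedily extend the state-to-state trajectory: repeatedly pop a
    segment that starts in the state where the trajectory currently ends
    and append it, until no matching segment remains."""
    spliced, state = [], trajectory_end
    while database.get(state):
        end_state, md_time = database[state].pop()
        spliced.append((state, end_state, md_time))
        state = end_state
    return spliced, state  # spliced segments and the new trajectory end
```

Segments whose start state is never reached simply remain in the database until (and unless) the trajectory end matches them.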
In actual simulations, the MM is usually empty at the beginning of the simulation and it is dynamically updated as more segments are generated. The original ParSplice method selects segments for execution through a procedure referred to as virtual end (VE) scheduling. VE accounts for completed but unspliced segments which are stored in the database, as well as those “pending” segments which have been assigned to some computing resources but have not yet been completed. The process by which VE assigns the state in which the next segment should be generated is outlined in Figure \ref{fig:VE}. 1) The MM is used to sample “virtual” endpoints for all of the pending segments, creating a prediction of what the database might look like once all of the pending tasks are completed. 2) It then “virtually” splices from this database-prediction onto the end of the state-to-state trajectory until it runs out of segments to splice. 3) It assigns the next segment to be generated starting in the state where the state-to-state trajectory “virtually” ended. This process is then repeated for the next segment state-assignment, and so on until every idle MD instance has been assigned a segment. The word ``virtual" is used to denote that this process is not actually manipulating segment endpoints or splicing onto the actual physical trajectory. This process is simply used as a means of forecasting where to assign the next segment, and only segments that were actually completed can be spliced into the physical trajectory. \begin{figure} \centering \includegraphics[width=\linewidth]{fig4.jpg} \caption{Virtual End (VE) scheduling of segments: Top panel, the statistical Markov Model (in green) is used to sample ``virtual" end states (also green) for all pending-segments, speculating on what the database might look like once these pending-segments are completed. 
Bottom-left panel, segments are then ``virtually" spliced from the speculative database, extending the ``virtual" trajectory as far as possible. Bottom-right panel, a new segment (outlined in yellow) is scheduled to begin in the state where the ``virtual" trajectory ended. We stress the word ``virtual" here to differentiate from anything actual. All segment manipulation is only carried out as a thought experiment for determining where to generate the next segment.} \label{fig:VE} \end{figure} In the present context, an important limitation of the VE procedure is that it samples from the ensemble of possible tasks according to their probabilities, but does not directly give access to the individual task probabilities themselves. In order to address this limitation, a new variant of ParSplice is proposed where instead the task probabilities are first explicitly estimated, and then the tasks with the largest probabilities are selected for execution. This new variant is referred to as MaxP (maximum probability) scheduling. The derivation of MaxP is based on the formalism of discrete time Markov Chains, and is detailed explicitly in Appendix A. The general concept involves calculating the probability that particular segments will be spliced into the state-to-state trajectory over some finite time horizon, as an average over all paths that the spliced trajectory could take. These probabilities can be computed analytically or approximated via a computationally cheaper Monte Carlo approach. See Appendix A for details. The MaxP formulation provides a natural estimate of the task probabilities for each segment that could be generated, i.e., each potential task. It is important to note that the probabilities derived from the MaxP formalism depend both on the instantaneous content of the database and on the current end point of the trajectory, as was the case for the VE variant. 
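As a rough illustration of the Monte Carlo variant, the sketch below estimates, for each state, the probability that the trajectory visits it within a finite horizon; this is a simplified stand-in for the full Appendix A calculation (which also accounts for the database content), with the transition model $P$ assumed here to be a dictionary of per-state transition probabilities.

```python
import random

def estimate_task_probs(P, start, horizon, n_samples=1000, rng=None):
    """Monte Carlo proxy for task probabilities: the fraction of sampled
    trajectories of length `horizon`, started from `start`, that visit
    each state.  P maps state -> {next_state: probability}."""
    rng = rng or random.Random(0)
    counts = {}
    for _ in range(n_samples):
        state, visited = start, {start}
        for _ in range(horizon):
            nxt, wts = zip(*P[state].items())
            state = rng.choices(nxt, weights=wts)[0]
            visited.add(state)
        for s in visited:
            counts[s] = counts.get(s, 0) + 1
    return {s: c / n_samples for s, c in counts.items()}
```

Candidate tasks (starting states for new segments) would then be ranked by these estimated probabilities and assigned resources accordingly.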
The probabilities therefore continually change as the simulation proceeds, which suggests that it might be advantageous to periodically re-adjust/recalculate the probabilities and re-assign resources to tasks so as to maintain an optimal expected throughput. Further, MD is inherently preemptible and restartable: the only information needed to checkpoint and restart a simulation is the list of the current positions and velocities of the atoms. Using this checkpoint, the simulation can be restarted with a different domain decomposition, and hence with a different $w$. The resource allocation approach discussed above is therefore directly applicable to ParSplice-MaxP. \section{Application} \label{ApplicationSection} To gain a better intuition of the solutions resulting from different task probability distributions, and of the potential performance improvements that can be expected by allocating resources based on task probabilities, we first discuss results on various synthetic distributions. More specifically, we focus on the characterization of the instantaneous throughput obtained by optimizing the resource allocation as a function of the characteristics of the task probability distribution. Each of the following distributions was created by drawing random $p_i$ samples from a probability density until a given total $\sum p_i=1000$ was reached. While this process resulted in each synthetic distribution containing a different number of potential tasks, the constrained value of $1000$ ensures that the maximum expected throughput given infinite resources is identical for each distribution, and hence comparisons can be made easily. The probability densities from which the synthetic distributions were sampled belonged to one of two generic classes. The first was a delta distribution, or a composition of two delta distributions, from which only particular values of $p_i$ could be sampled. 
Each composition of delta distributions contained a non-zero peak at $p=1$, corresponding to having a certain number of tasks which are known to be essential (i.e., $p=1$), and another non-zero peak at lower $p$, corresponding to a certain number of speculative tasks which are assigned a generic probability. As one would expect to generally have a large number of speculative opportunities, and thus of speculative tasks, the magnitudes of the peaks were weighted in favor of the lower probability by a 9:1 ratio. Sampling from these distributions yields a task probability distribution which exhibits a ``step" from the $p=1$ tasks to the speculative probability. In addition to a single delta distribution at $p=1$, which generated a trivial distribution containing only $p=1$ tasks, several composite distributions are analyzed with varying values for the lower-probability speculative tasks. The second class of probability densities were beta distributions, B$(\alpha,\beta)$. Adjusting the shape parameters $\alpha$ and $\beta$ allows for the creation of a wide range of different distributions, as illustrated in Figure \ref{fig:BetaP}. Sampling from the continuous probability densities produced nearly continuous task probability distributions capable of spanning the entire $[0,1]$ probability domain. This assortment of synthetic task probability distributions provides a reasonable collection for surveying the performance landscape of the proposed optimal resource allocation method. \begin{figure} \centering \includegraphics[width=\linewidth]{fig5.png} \caption{Synthetic task probability distributions sampled from different B$(\alpha,\beta)$ distributions as depicted in the legend.} \label{fig:BetaP} \end{figure} The most important question in practice is whether the effort of deriving and implementing a probability-aware optimal allocation scheme is worthwhile as compared to a naive approach which does not consider the probability of tasks. 
Such a naive scheme would assign resources in equal sized chunks corresponding to the maximal parallel efficiency, so as to maximize throughput in the non-speculative setting. It would only deviate from this chunk size if the resources available enabled all tasks to be run at maximum parallel efficiency and excess resources remained. In such a case, the constant chunk size allocated to each task would uniformly increase to fully utilize all available resources. Therefore, the naive allocation assigns $w_\mathrm{const}=\max(1,N/M)$ resources to each task. In the following, it is shown that the increase in throughput due to optimal allocation can in fact be quite substantial. \begin{figure} \centering \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig6a.png} \caption{} \label{fig:Deltaboost} \end{subfigure} \hfill \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig6b.png} \caption{} \label{fig:Betaboost} \end{subfigure} \caption{Boost in performance as a function of resources $N$, where Boost is defined as the ratio of the expected throughput under the optimal allocation to the expected throughput under the naive allocation. Results are shown for synthetic distributions sampled from both the delta distributions (a) and the beta distributions (b).} \end{figure} We first recognize from the blue curve in Figure \ref{fig:Deltaboost} that the trivial probability distribution where all tasks are of equal probability $(p_i=p)$, corresponding to task probabilities sampled from a single delta distribution, obtains unit boost in performance compared to naive scheduling throughout the entire range of $N$. This was expected given that, when all probabilities are equal, the throughput is maximized for a uniform allocation of resources. 
The natural extension to this trivial case of uniform task-probabilities is the case of binary probability values, where tasks are assigned one of two probabilities: $p_a$ and $p_b$ (where $p_a>p_b$). Such synthetic task probabilities are sampled from a composition of delta distributions, e.g., constructing a list containing certain ($p_a=1$) and speculative ($p_b<1$) tasks. An example of a step distribution arising in practice might be an application which identifies a certain number of tasks as provably necessary (and hence for which $p_a=1$), and a certain number of speculative tasks, which are assigned a generic probability $p_b$. Synthetic task probability distributions were sampled for four different values of the speculative task probability ($p_b = \{0.5, 0.1, 0.01, 10^{-10}\}$). It is seen in Figure \ref{fig:Deltaboost} that the boost obtained through the optimal allocation varies inversely with $p_b$ and saturates as $p_b$ approaches zero. Take for example the allocation of $N=10,000$ resources to a probability distribution characterized by $p_a=1$, $p_b=0.01$. The naive allocation distributes resources evenly across all possible tasks, producing an expected throughput of roughly 2.3. This is in stark contrast to the optimal allocation, which concentrates resources only on the $p_a$ tasks. As a result, the optimal allocation executes fewer tasks, but does so yielding a higher expected throughput of roughly 16.8. The optimal allocation therefore results in a substantial boost in performance, producing nearly 7.3 times the expected throughput of the naive allocation. We note that the boost is also confined to an intermediate range of values of $N$: in both the small and large $N$ limits, the optimal and naive allocations are identical. 
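The figures in this example can be approximately reproduced from the benchmark fit of Figure \ref{fig:benchmark}. The sketch below uses a deterministic 1:9 mixture with $\sum p_i\approx 1000$ in place of the actual sampled distribution, so the resulting numbers land close to, but not exactly on, the quoted values.

```python
import math

# Benchmark fit coefficients from Figure 1.
a, b, d, g, h = -2.38, 481.42, 2.32, 21.76, 7.10

def T(w):
    return a + b / w + d * math.log(g * w) + h / w ** 2

N = 10_000
n_a, p_a = 917, 1.0        # certain tasks (1 part; n_a chosen so sum p_i ~ 1000)
n_b, p_b = 9 * 917, 0.01   # speculative tasks (9 parts)

# Naive allocation: spread resources evenly over all n_a + n_b tasks.
M = n_a + n_b
naive = (n_a * p_a + n_b * p_b) / T(max(1.0, N / M))

# Optimal allocation (per the text): concentrate resources on the p = 1 tasks.
optimal = n_a * p_a / T(N / n_a)
```

This yields a naive throughput of about 2.2, an optimal throughput of about 16.8, and a boost of roughly 7.6, in line with the values quoted above for the sampled distribution.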
The value of $p_b$ relates to the potential boost obtained as it affects the sampled task probability distribution in two key ways: $1)$ Smaller values of $p_b$ present steeper decays in the probability distributions, as they cover a larger range in values. The naive allocation struggles to handle a large range in values as its allocation is uniform, whereas the optimal allocation is specifically tailored to the individual probabilities of each task. $2)$ Because the values ($p_a$, $p_b$) are sampled in a 1:9 ratio, a smaller value of $p_b$ implies that more tasks will be sampled before reaching the target $\sum p_i=1000$, and thus the distribution of tasks will have a longer tail. This, again, is not handled well by the naive allocation, as the high $p=p_a$ tasks will receive the constant $w_\mathrm{const}=1$ allocation unless $N$ is such that all tasks can be executed, at which point $w$ will start to increase uniformly. The longer the tail in the distribution, the more resources are needed before the naive allocation will increase the uniform chunk size, and hence the throughput. These results illustrate the intuitive idea that ignoring the task probabilities and invoking a uniform allocation often amounts to running lower probability tasks with resources that could be better spent increasing the allocation to higher probability tasks. These two key features (steep decay and long tail) are particularly detrimental to the performance of the naive allocation scheme. Considering the task probability distributions sampled from the Beta probability density, one can see that this rule of thumb is upheld. Figure \ref{fig:10kBeta} illustrates allocation solutions for $N=10,000$ given a task probability distribution sampled from B$(0.1,1)$. The sampled probability distribution consisted of 11,132 tasks and spanned a range of probabilities from $p\sim1$ to $p\sim10^{-32}$. The naive allocation distributed resources uniformly, executing $10,000$ tasks with $w=1$. 
This resulted in an expected throughput of just over 2. The optimal allocation provided resources in greater chunks to fewer tasks. It only executed 923 tasks, but did so yielding an expected throughput of nearly 12, providing a boost in expected throughput of nearly 6 times the naive allocation. \begin{figure} \centering \includegraphics[width=.9\linewidth]{fig7.png} \caption{Top: Task probability distribution sampled from the B$(0.1,1)$ distribution. Bottom: Allocation of $N=10,000$ resources among tasks.} \label{fig:10kBeta} \end{figure} We see here again how the long tail and steep decay of the task probability distribution meant that the naive solution would allocate resources to low probability tasks that contribute little to the expected throughput. It is instead optimal to allocate additional resources to those higher probability tasks, running fewer tasks but generating a higher expected throughput. The maximum attainable boost can be determined by the following analysis. Consider a task probability distribution consisting of $N_a$ tasks of probability $p_a=1$ and $N_b$ tasks of probability $p_b=\epsilon$. The optimal allocation would divide resources among those $N_a$ tasks, ignoring the $N_b$ tasks. This would yield an expected throughput of $N_a/T(N/N_a)$. The naive allocation would spread resources among all tasks, yielding an expected throughput of $N_a/T(1)+N_b \epsilon/T(1)$ if $N$ were sufficiently large such that $N=N_a+N_b > w_\mathrm{max} N_a$. In the case where $N_b \epsilon \ll 1$, this expression simplifies, and the boost in expected throughput reduces to the ratio $T(1)/T(N/N_a)$. This expression is maximal when $N/N_a=w_\mathrm{max}$. Thus, the maximum attainable boost is equal to $T(1)/T(w_\mathrm{max})$, which for our application was roughly 25. 
\subsection{Constant $w$} The assortment of synthetic distributions surveyed above shows a diverse range of optimal task allocations and the corresponding boost compared to the naive scheduling strategy. These task allocations are guaranteed to provide the greatest expected throughput for a given distribution, at the cost of increased code complexity. One may instead consider a simpler approximate solution where each executed task is provided the same allocation, but this allocation is allowed to differ from the naive strategy. Certainly this simplified scheme would be suboptimal, but it is unclear by how much. In the trivial case of constant probability, for example, the optimal allocation was a constant allocation. The same was true for the step distributions when resources are limited. In fact, it is often the case that a constant allocation can achieve a throughput close to that of the optimal allocation. Figures \ref{fig:FracoTPBeta} and \ref{fig:FracoTPDelta} show what fraction of the optimal expected throughput can be achieved when an optimal constant allocation is provided for each of the distributions surveyed above. For most values of $N$ there exists a constant value of $w$ which can provide upwards of 90\% of the expected throughput that the optimal allocation would yield. This is, however, not always the case. When the task probability distribution possesses a major discontinuity (as seen in the step distributions), there exists a range of $N$ values where even the best constant value is largely suboptimal. 
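Finding the best constant allocation requires only a one-dimensional scan over candidate chunk sizes. The sketch below illustrates this under an arbitrary toy time model $T(w)=1+10/w$; with a constant chunk $w$, at most $\lfloor N/w \rfloor$ of the highest-probability tasks can be executed.

```python
def best_constant_alloc(ps, N, T, w_grid):
    """Return (throughput, w) maximizing R(w) = sum_{i<M} p_i / T(w),
    where M = min(len(ps), floor(N/w)) tasks each receive a constant
    chunk w; ps must be sorted in decreasing order of probability."""
    best = (0.0, None)
    for w in w_grid:
        M = min(len(ps), int(N // w))
        if M == 0:
            continue
        R = sum(ps[:M]) / T(w)
        if R > best[0]:
            best = (R, w)
    return best
```

For a step distribution, the best constant $w$ jumps between running only the certain tasks with large chunks and running everything with small chunks, which is precisely the regime where a single constant value becomes largely suboptimal.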
\begin{figure} \centering \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig8a.png} \caption{} \label{fig:FracoTPDelta} \end{subfigure} \hfill \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig8b.png} \caption{} \label{fig:BestConDelta} \end{subfigure} \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig8c.png} \caption{} \label{fig:FracoTPBeta} \end{subfigure} \hfill \begin{subfigure}[b]{.49\textwidth} \centering \includegraphics[width=\linewidth]{fig8d.png} \caption{} \label{fig:BestConBeta} \end{subfigure} \caption{On the left, the fraction of the optimal throughput which can be achieved with a constant allocation for the Delta sampled distributions (\ref{fig:FracoTPDelta}) and Beta sampled distributions (\ref{fig:FracoTPBeta}). On the right, the corresponding value of $w$ required to obtain this fraction of the optimal throughput for the respective Delta sampled (\ref{fig:BestConDelta}) and Beta sampled (\ref{fig:BestConBeta}) distributions.} \label{fig:boost} \end{figure} Furthermore, as seen in Figures \ref{fig:BestConBeta} and \ref{fig:BestConDelta}, the value of the best constant $w$ can vary greatly depending on the resources available and on the specifics of the task probability distribution. Moreover, in a setting where the task probability distribution is dynamic and/or context dependent, the precise value of the best constant allocation will change in time. This presents a major difficulty in assigning a single constant value of $w$ to be allocated to each executed task; what may be a “good” value of $w$ at one point in time might be a poor value sometime later. To maintain an expected throughput that is near optimal for an evolving task probability distribution, one would have to consistently tune the constant value of $w$. 
Devoting this effort to maintaining a near-optimal solution is uneconomical, as one could ensure the truly optimal solution with similar effort. \subsection{ParSplice Simulator} In order to assess the potential performance gains accessible with this new approach without the considerable time investment required to rewrite the ParSplice production code, we instead chose to make use of a simulator, a strategy which has proved beneficial \cite{spectad,tadsim} in developing Accelerated Molecular Dynamics \cite{perez2009accelerated} methods, to which ParSplice belongs. The ParSplice Simulator (ParSpliceSIM) was designed to directly mirror the logic of the actual ParSplice code, with the exception that dynamics are generated from a user-specified Markov Chain that can be used to statistically sample segment endpoints, rather than using computationally expensive MD calculations \cite{garmon2020exploiting}. ParSpliceSIM is therefore computationally light, simple, and runs in serial. As the matter of interest is measuring the potential improvements in performance, rather than obtaining correct atomistic trajectories, ParSpliceSIM provides an ideal framework for testing the proposed ParSplice variant described in this work. ParSpliceSIM was used to compare the performance of the existing VE method to the newly developed MaxP method, which was then further enhanced to allow the periodic pausing of segments and reallocation of resources following the optimization procedure described above. The effects of these successive developments are shown on a range of different model systems to illustrate the expected gains in performance. So as to separate the effect of the current work from that of intrinsic model uncertainty, it was assumed that the MM created on-the-fly by ParSplice would provide a sufficiently accurate representation of the dynamics; it was instead replaced with the actual underlying Markov Chain within ParSpliceSIM. 
The metric of performance in evaluating the different methods is the amount of pseudo-MD spliced for a given wall clock time (WCT), which is a direct measure of the scientific value of a simulation. The following ParSpliceSIM results were generated using an assortment of Markov Chains with varying state connectivity. Each state within a particular Markov Chain was endowed with a probability $\rho_{ii}$ of not escaping the current state, and had an equal probability of transitioning to each of its $K$ neighbors $(\rho_{ik}=(1-\rho_{ii}) / K)$, with $K$ defined by the connectivity. Periodic boundaries were implemented to ensure each state within a particular Markov Chain had the same state connectivity. To mimic the environment of a true atomistic simulation, the number of states in a particular Markov Chain was set to 8000, such that the vast majority of state-space remained un-visited by the trajectory throughout the duration of the simulation. In order to evaluate the true potential of the developed methods, each simulation was provided a resource allotment $N$ that was many times greater than the expected number of segments needed to escape ($\left\langle n_\mathrm{escape} \right\rangle$) from a state. This is the regime of greatest interest, as it is where speculation significantly affects the efficiency of ParSplice. When the resources are not greater than $\left\langle n_\mathrm{escape} \right\rangle$, all resources can be allocated to the task(s) of building in the current state and will be amortized with high probability. Prior to the current work, ParSplice attempted to best utilize those additional resources (those which were not likely to be needed in escaping the current state) to speculate on where the trajectory was likely to go next. The main purpose of this section is to assess whether further efficiency gains are possible by dynamically assigning resources based on expected utility. 
In what follows, each simulation was carried out with $\rho_{ii}=0.99$, corresponding to $\left\langle n_\mathrm{escape} \right\rangle=100$, and a resource allotment of $N=5000$. With $N$ being 50 times greater than $\left\langle n_\mathrm{escape} \right\rangle$, ParSplice can trade off resources in order to obtain an escape from the current state faster. This tradeoff would be beneficial in cases where speculation is futile, but could be poor in cases where accurate speculation is possible. To illustrate this effect, results are shown for three different Markov Chain toy models of increasing connectivity: 1D, representing dynamics on a line; 3D, representing dynamics on a cubic lattice; and fully-connected, where each state is connected to every other state. The greater the connectivity, the more difficult it is to speculate on the trajectory's future, as the possible paths exhibit exponential branching. Conversely, when the connectivity is low, speculation can be quite accurate. While these models of state connectivity are much simpler than what would be observed in actual applications, they do however provide relevant information and general guidelines on the potential performance of the method in different scenarios. These three toy models present a good assortment for testing the new methods. The 1D model provides rather predictable dynamics for which speculation will be quite fruitful. The 3D model presents dynamics which are somewhat less predictable, and where effective forecasting will likely be limited to within a small neighborhood of the current state. Finally, speculation is futile for the fully-connected model, where the number of branching paths is immense. Performance results for each of the toy models are shown in Figure \ref{fig:SIM}, displaying the pseudo-MD spliced as a function of WCT. Each subfigure shows the results of five different methods for its particular model. 
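The toy models can be generated in a few lines; an illustrative sketch for the 1D periodic chain is given below (with a small chain in place of the 8000 states used in the simulations). Since a segment endpoint is a single draw from the chain, the number of segments needed to escape a state is geometric with mean $\langle n_\mathrm{escape}\rangle = 1/(1-\rho_{ii}) = 100$ for $\rho_{ii}=0.99$.

```python
import random

def chain_1d(n, rho_ii):
    """Periodic 1D chain (n >= 3): stay put with probability rho_ii,
    hop to either neighbor with probability (1 - rho_ii)/2."""
    hop = (1.0 - rho_ii) / 2.0
    return {i: {i: rho_ii, (i - 1) % n: hop, (i + 1) % n: hop}
            for i in range(n)}

def mean_segments_to_escape(P, state, n_trials=5000, rng=None):
    """Average number of segments (single draws from the chain) generated
    in `state` before one ends in a different state."""
    rng = rng or random.Random(1)
    total = 0
    for _ in range(n_trials):
        while True:
            total += 1
            nxt, wts = zip(*P[state].items())
            if rng.choices(nxt, weights=wts)[0] != state:
                break
    return total / n_trials
```

The 3D and fully-connected models follow the same pattern, with $K=6$ and $K=n-1$ neighbors per state, respectively.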
The different methods consist of the existing VE formalism, the newly introduced MaxP formalism, and MaxP with preemption and restarts. The last method is shown implementing three distinct allocation policies: 1) The (naive) maximum throughput allocation; distributing resources evenly $(w=w_\mathrm{const})$ to execute the most tasks at the highest throughput, thus producing the maximum number of segments for a given $N$. 2) The minimum time allocation; distributing resources evenly to execute tasks with the maximum allocation (as defined previously, $w=w_\mathrm{max}$) thus producing $N/w_\mathrm{max}$ segments as quickly as possible. 3) The (optimal) maximum expected throughput allocation; distributing resources according to how likely tasks are to be spliced onto the state-to-state trajectory, thereby balancing the tradeoff between time and throughput to produce the most spliced segments as quickly as possible. The first thing to note in Figure \ref{fig:SIM} is the small but appreciable increase in performance that results from transitioning from the VE to the MaxP formalism. While MaxP was introduced to allow for the implementation of our derived methods, it is worth noting that the transition does not come at the cost of performance, to the contrary. Further improvement resulting from the ability to pause and reschedule segments can be substantial depending on the topology of the state space. As was discussed previously, the 1D toy model presents very limited connectivity, therefore corresponding to a system which is highly susceptible to speculation. As a result, the distribution of task probabilities will decay slowly, and the balance between running a few tasks very quickly and maximizing the overall task-completion throughput will be more heavily skewed toward throughput. 
This is seen from the 1D results in Figure \ref{fig:SIM} as the maximum throughput allocation outperforms the minimum time allocation (which even under-performs the standard VE and MaxP) by a factor of three. However, even in a highly predictable system like the 1D toy model, the balance between time and throughput is not completely one-sided. This is seen as the optimal allocation (which aims to maximize the \textit{expected} throughput, or segments spliced) is able to further improve performance by a factor of two as it strikes the optimal balance. Overall, the implementation of our derived methods applied to the 1D model is able to more than double the pseudo-MD spliced over the same WCT as compared to the standard VE method. \begin{figure} \centering \includegraphics[width=\linewidth]{fig9.png} \caption{ParSpliceSIM results for the 1D, 3D, and fully-connected toy models showing pseudo-MD spliced as a function of WCT. Each panel displays performance of VE in blue, MaxP in red, MaxP($w_\mathrm{const}$) in green, MaxP($w_\mathrm{max}$) in maroon, and MaxP($w^*$) in yellow. The results shown represent an average of roughly 500 independent simulations conducted for each method on each model.} \label{fig:SIM} \end{figure} The 3D model presents a slightly different picture as long-time speculation is somewhat difficult due to the increased connectivity, yet short-time speculation can still be profitable; thus this model requires a more delicate balance between throughput and execution time. This is seen from the 3D results in Figure \ref{fig:SIM} as now the minimum time allocation outperforms the maximum throughput allocation by over 50\%. The optimal allocation adapts to the new model and achieves the best performance, more than doubling the efficiency of the minimum time allocation strategy and providing nearly a six-fold improvement as compared to the standard VE method. 
Lastly, the fully-connected model is considered, for which speculation is futile and escaping from the current state as quickly as possible is the only sound strategy. As expected, the fully-connected results in Figure \ref{fig:SIM} show how the minimum time allocation greatly outperforms the maximum throughput allocation by nearly an order of magnitude. However, the optimal allocation is able to further improve performance by nearly doubling the throughput achieved over the simulated times shown here. Although the minimum time allocation utilizes resources to generate segments as quickly as possible, it does not achieve the desired result of escaping from the current state as quickly as possible. This is because the number of segments it produces is likely insufficient to escape the state, i.e.\ less than $\left\langle n_\mathrm{escape} \right\rangle$. It is actually better to generate more segments (greater throughput) at a slightly slower rate (but higher efficiency) such that enough segments are generated to escape from the current state. Overall, our derived method applied to this toy model of greater connectivity enabled nearly twenty times as much pseudo-MD to be spliced over the same WCT as compared to the standard VE method. The resulting improvement of our derived methods, as compared to the existing VE scheduling method, showed a nearly 2.5x, 6x, and 20x boost in performance for the 1D, 3D, and fully-connected toy models, respectively. These ParSpliceSIM results can be better understood by analyzing the task probability distributions that are characteristic of each toy model. Figure \ref{fig:single} shows an example initial probability distribution for each of the toy model systems, as constructed by the MCMaxP procedure described in Appendix A. One can see how the task probabilities for the 1D model exhibit a very gradual decay over the first 5,000 tasks down to $p\sim0.8$. 
This reflects the limited state connectivity which makes speculation fruitful. The 3D task probabilities exhibit a very steep decay over the first $\sim$100 tasks (corresponding to an escape from the current state), followed by a more gradual decay over the following 600 tasks (corresponding to an escape from the 6 neighbors of the current state), followed by an even more gradual decay out to 5,000 tasks. Overall, task probabilities remain non-negligible out to 5,000 tasks, with a long tail around $p\sim0.2$. Considering the fully-connected model, one can see a sharp decay in probability corresponding to the first escape from the current state, after which the probability of tasks drops down to $p<0.1$. One can see how these performance results generally adhere to the inferences made while studying the synthetic distributions, i.e.\ performance gains are largest when the probability distribution exhibits steep decays and long tails. \begin{figure} \centering \includegraphics[width=.8\linewidth]{fig10.png} \caption{Initial task probability distributions taken from simulations on the 1D (blue), 3D (red), and fully-connected (orange) toy models. These task probability distributions were constructed at the start of the simulation, having no contribution from the then-empty database of stored segments, and are therefore a reflection of the state connectivity.} \label{fig:single} \end{figure} Figure \ref{fig:all} shows the evolution of the task probability distributions sampled throughout a simulation for each of the toy model systems. The distributions evolve for two reasons: 1) The effect of the database; the stored segments which have been generated but not yet consumed by the trajectory play a role in the MCMaxP sampled trajectory and therefore affect which segments are expected to be ``needed'' by a future trajectory. 
And 2) the time horizon over which the MCMaxP trajectory is sampled; a greater time horizon corresponds to a higher likelihood that a particular segment will eventually be spliced into the trajectory, thus resulting in a shallow decay of the probability distribution. \begin{figure} \centering \includegraphics[width=.8\linewidth]{fig11.png} \caption{All task probability distributions generated during a single simulation on the 1D (blue), 3D (red), and fully-connected (orange) toy models.} \label{fig:all} \end{figure} The dynamic nature of the probability distributions is quite relevant as it pertains to a changing allocation which must be maintained in order to achieve optimal performance. Recognizing this fact, one may consider the difficulties of maintaining the optimal allocation and note how the frequency at which segments are paused and resources are reallocated can have a large impact on performance. Constantly maintaining the optimal allocation involves pausing and reallocating resources whenever any new information is available, which is not always feasible. In practice, the user may choose to pause and update the allocation at some fixed interval, which can be tuned for the user’s particular system. In the present study, the allocation was updated whenever a segment was completed. This can be thought of as the most aggressive scenario that will produce the upper bound in performance. In the ParSpliceSIM results above this condition could be relaxed to update whenever a segment contained a transition as the true underlying Markov Chain was being used rather than developing a model from segment data, and therefore segments which did not yield escapes would have no effect on the current MCMaxP-constructed task probability distribution. 
In a true ParSplice simulation however, each segment would contribute to the development of the statistical Markov Model being produced on-the-fly and thereby (possibly) change the statistics of the MCMaxP sampled trajectories, hence affecting the task probability distribution. The question of when it is appropriate and necessary to pause and reallocate resources is not easy to quantify. In general, the user would want to do so whenever the task probability distribution substantially changes. In the example of ParSplice, this would certainly occur whenever the current state of the trajectory changes, as all tasks and probabilities are generated from the MCMaxP procedure and are thus conditional on starting in a particular state. The distribution could also change without the trajectory changing state, however, when unused segments which contain transitions are stored in the database and can contribute to the MCMaxP sampled trajectories. For this reason the ParSpliceSIM results were aggressively updated whenever a segment yielding a transition was returned to the database. The question becomes even fuzzier for a real ParSplice simulation which develops its statistical model on the fly. Any segment which changes the model in a substantial way will likely change the MCMaxP sampled trajectories and thus affect the sampled probability distribution. It is not easy to systematically catch changes of this type. A single segment will likely have a negligible effect on the model, but cannot be ignored outright as enough of these negligible changes can account for a significant effect. In addition, a segment which drastically changes the model but does so far from the current state of the trajectory will have little effect on the sampled task probability distribution and therefore does not warrant pausing and reallocating resources. 
In practice, it is pragmatic to periodically pause and update the allocation at some fixed interval, which is chosen so as to limit scheduling and pausing/restarting overhead. \section{Conclusion} The advent of exascale computing platforms will be accompanied by a need for specially designed software and algorithms that are capable of utilizing the large availability of resources simultaneously. As maintaining strong-scalability on such platforms will be quite difficult, the use of speculative task-based paradigms is promising, enabling higher concurrency and improved scaling. In this work, we derived the optimal allocation of resources for task execution in this speculative setting. The utility of this approach was then assessed on an assortment of synthetic task probability distributions, comparing the expected throughput of our derived optimal allocation of resources to more naive allocation policies. While a uniform allocation of resources can often be found to produce a nearly optimal expected throughput, it was shown that determining the particular value for the constant allocation size is in practice just as difficult as computing and employing the optimal allocation. A dynamic setting was then considered where task probabilities were influenced by some underlying variable (state, context, time, etc.) and were therefore changing throughout the run-time of the application. This setting was explored by examining the effect of our derived methods applied to a specific scientific application, ParSplice, which operates in this domain. In order to implement our methods, we first had to design a new application-specific technique for assessing the speculative probability that potential tasks would be useful. This technique not only allowed for our derived methods to be implemented, but was also shown to increase the performance of the scientific application. 
The potential gains in performance resulting from our derived methods were assessed through the use of a simulator. While the boost achieved varied with the physical system (ranging from 2.5x to 20x), it was found to be greatest when the system of study was most complex; resulting in lower speculative task probabilities and a greater ability to leverage the trade-off between throughput and time. By considering the speculative task probabilities, the optimal balance could be struck to produce the maximum rate of expected throughput. This novel optimization scheme stands to improve performance of speculative task-based applications, particularly when run at large computational scales. \section*{Acknowledgments} AG was supported by the US Department of Energy Office of Science Graduate Student Research (SCGSR) program. The SCGSR program is administered by the Oak Ridge Institute for Science and Education (ORISE) for the DOE. ORISE is managed by ORAU under contract number DE-SC0014664. All opinions expressed in this paper are the authors’ and do not necessarily reflect the policies and views of DOE, ORAU, or ORISE. VR was supported by the Advanced Simulation and Computing Program (ASC) and DP was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the US Department of Energy Office of Science and the National Nuclear Security Administration. Los Alamos National Laboratory is operated by Triad National Security LLC, for the National Nuclear Security Administration of the U.S. DOE under Contract No. 89233218CNA0000001. We gratefully acknowledge computing resources from the Los Alamos National Laboratory Institutional Computing (IC) program, and insightful discussions with David Aristoff and Mouad Ramil. 
\begin{appendices} \section{Maximum-Probability (MaxP)} In the following, the probability that a candidate task (e.g., the generation of a segment starting in a given state) will be consumed as part of the calculation is derived in the context of discrete time Markov Chains, which is the natural setting for a trajectory composed of segments generated following the ParSplice prescription. In other words, the problem at hand is to compute the probability that a trajectory of a given length, sampled from this Markov Chain, would contain the generated segment. As will be shown, these probabilities can be evaluated analytically from the Markov jump process or approximated through a Monte Carlo sampling procedure. To clearly distinguish from any variables defined in the main text, we have chosen to express this derivation using a double-struck font. This derivation utilizes a Markov model $\mathbbm{M}$: a stochastic matrix whose elements $\mathbbm{p}_{ij}$ represent the probability of moving from state $i$ to state $j$, thereby governing the discrete Markov process. In the example of our scientific application (ParSplice) these are the probabilities that a segment starting in state $i$ will end in state $j$. Note that these probabilities encode the potential outcome of the task, not the potential task's usefulness, and are therefore distinct from the task probabilities $p_i$ defined in the main text. The work detailed herein describes our method for extracting these latter probabilities $p_i$ from the former probabilities $\mathbbm{p}_{ij}$. We define $\mathbbm{f}_{ij}^{(n)}$ as the probability that the first passage from state $i$ to state $j$ takes exactly $n$ steps. 
This can be written recursively as \begin{align*} \mathbbm{f}_{ij}^{(1)} & = \mathbbm{p}_{ij}^{(1)}=\mathbbm{p}_{ij} \\ \mathbbm{f}_{ij}^{(2)} & = \sum_{k\neq j} \mathbbm{p}_{ik} \mathbbm{f}_{kj}^{(1)} \\ \mathbbm{f}_{ij}^{(n)} & = \sum_{k\neq j} \mathbbm{p}_{ik} \mathbbm{f}_{kj}^{(n-1)} \end{align*} We then also define $\mathbbm{f}_{ii}^{(n)}$ to be the probability that the first return to state $i$ upon leaving state $i$ takes exactly $n$ steps, which can similarly be written recursively as $$\mathbbm{f}_{ii}^{(n)}=[\mathbbm{M}^n]_{ii}-\sum_{k=1}^{n-1} \mathbbm{f}_{ii}^{(k)}[\mathbbm{M}^{n-k}]_{ii}$$ where $[\mathbbm{M}^n]_{ii}$ represents the $i,i$ element of the Markov Model $\mathbbm{M}$ raised to the power $n$. Then, summing over the index $n$ allows one to write the probability of a return to state $i$ sometime in the next $\mathbbm{N}$ steps: $$\mathbbm{F}_{ii}^{(\mathbbm{N})}=\sum_{n=1}^{\mathbbm{N}} \mathbbm{f}_{ii}^{(n)}$$ Lastly, let $\mathbbm{v}_{j}$ be the number of visits to state $j$, and $\mathbbm{P}_{i}(\mathbbm{X})$ denote the probability of $\mathbbm{X}$ conditional on starting in state $i$. 
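These recursions translate directly into a dynamic program over the horizon. The following sketch (helper names are ours) computes both the first-passage and first-return probabilities from a stochastic matrix:

```python
import numpy as np

def first_passage(M, n_max):
    """f[n, i, j] = probability the first passage from i to j takes
    exactly n steps: f[1] = M, f[n,:,j] sums over k != j."""
    d = M.shape[0]
    f = np.zeros((n_max + 1, d, d))
    f[1] = M
    for n in range(2, n_max + 1):
        for j in range(d):
            keep = [k for k in range(d) if k != j]  # paths must avoid j
            f[n, :, j] = M[:, keep] @ f[n - 1, keep, j]
    return f

def first_return(M, n_max):
    """r[n, i] = probability the first return to i takes exactly n steps,
    via r[n] = diag(M^n) - sum_k r[k] * diag(M^(n-k))."""
    d = M.shape[0]
    powers = [np.eye(d)]
    for _ in range(n_max):
        powers.append(powers[-1] @ M)
    r = np.zeros((n_max + 1, d))
    for n in range(1, n_max + 1):
        r[n] = np.diag(powers[n]).copy()
        for k in range(1, n):
            r[n] -= r[k] * np.diag(powers[n - k])
    return r
```

On the symmetric two-state chain with all entries $0.5$, the first passage from state $0$ to state $1$ in exactly $n$ steps has probability $0.5^n$, which makes a convenient sanity check; the diagonal of the first-passage array reproduces the first-return probabilities.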
The probability of making exactly $m$ visits to state $j$ over the next $\mathbbm{N}$ steps from the current state $i$ can then be expressed recursively as \begin{align*} \mathbbm{P}_{i}(\mathbbm{v}_{j}=1|\mathbbm{N}) & = \sum_{k=1}^\mathbbm{N} \mathbbm{f}_{ij}^{(k)}[1-\mathbbm{F}_{jj}^{(\mathbbm{N}-k)}]\\ \mathbbm{P}_{i}(\mathbbm{v}_{j}=2|\mathbbm{N}) & = \sum_{k=1}^\mathbbm{N} \mathbbm{f}_{ij}^{(k)} \mathbbm{P}_{j}(\mathbbm{v}_{j}=1|\mathbbm{N}-k) \\ \mathbbm{P}_{i}(\mathbbm{v}_{j}=m|\mathbbm{N}) & = \sum_{k=1}^\mathbbm{N} \mathbbm{f}_{ij}^{(k)} \mathbbm{P}_{j}(\mathbbm{v}_{j}=m-1|\mathbbm{N}-k) \end{align*} where $\mathbbm{N}-k$ is the horizon remaining after a first passage of length $k$. Accounting also for the probability of no visit at all, $\mathbbm{P}_{i}(\mathbbm{v}_{j}=0|\mathbbm{N})=1-\sum_{k=1}^{\mathbbm{N}}\mathbbm{f}_{ij}^{(k)}$, summing over the index $m$ and subtracting from $1$ yields the probability of having more than $S$ visits to a state over a horizon of $\mathbbm{N}$ steps: $$\mathbbm{P}_{i}(\mathbbm{v}_{j}>S|\mathbbm{N})=1-\sum_{m=0}^S \mathbbm{P}_{i}(\mathbbm{v}_{j}=m|\mathbbm{N})$$ Therefore, given a current state of the trajectory $i$ and the number of pending/unconsumed segments in state $j$, this prescription provides a means of extracting the probability that the next segment generated in state $j$ will be consumed into the trajectory over the finite time horizon $\mathbbm{N}$. Denoting the number of pending/unconsumed segments in state $k$ as $S_k$ allows the probability of each potential task to be written as $$p_k=\mathbbm{P}_{i}(\mathbbm{v}_{k}>S_{k}|\mathbbm{N}),\forall k$$ Having this derivation in mind, we propose a new ParSplice scheduling scheme referred to as MaxP (maximum probability). In MaxP, the probability that each task will be spliced into the trajectory over a given finite time horizon is calculated. While computing these probabilities can be done analytically, it is often far more practical and efficient to do so via a Monte Carlo procedure, especially when the number of states is large. 
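The visit-count recursion can be checked on a small chain. The sketch below follows the derivation above, using a remaining horizon of $\mathbbm{N}-k$ after a first passage of length $k$ (helper and function names are ours):

```python
import numpy as np

def first_passage(M, n_max):
    """f[n, i, j] = probability the first passage from i to j takes n steps."""
    d = M.shape[0]
    f = np.zeros((n_max + 1, d, d))
    f[1] = M
    for n in range(2, n_max + 1):
        for j in range(d):
            keep = [k for k in range(d) if k != j]
            f[n, :, j] = M[:, keep] @ f[n - 1, keep, j]
    return f

def visit_probs(M, i, j, m_max, N):
    """P[m] = probability of exactly m visits to j in the next N steps,
    starting from i. m_max should be >= N so that P[0] is exact."""
    f = first_passage(M, N)
    Fjj = np.cumsum(f[:, j, j])  # Fjj[t] = prob. of a return within t steps

    def exactly(start, m, horizon):
        # first passage in k steps, then m-1 further visits in horizon-k steps
        if horizon < m:
            return 0.0
        total = 0.0
        for k in range(1, horizon + 1):
            if m == 1:
                total += f[k, start, j] * (1.0 - Fjj[horizon - k])
            else:
                total += f[k, start, j] * exactly(j, m - 1, horizon - k)
        return total

    P = np.zeros(m_max + 1)
    for m in range(1, m_max + 1):
        P[m] = exactly(i, m, N)
    P[0] = 1.0 - P[1:].sum()
    return P
```

On the symmetric two-state chain with horizon $\mathbbm{N}=2$, the visit counts to state $1$ from state $0$ are $0.25$, $0.5$, and $0.25$ for $m=0,1,2$, and the task probability with no pending segments ($S=0$) is $1-P[0]=0.75$.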
This can be done in a similar fashion to VE where an ensemble of future state-to-state trajectories are sampled, accounting for the pending/unconsumed segments in the same way, but, instead of stopping when running out of segments, each trajectory continues until the preset time horizon, keeping track of how many segments would have to be generated in each state to reach said horizon. This ensemble is then used to calculate the probability that particular segments built in particular states are to be used by the state-to-state trajectory over the time horizon. ParSplice can then assign segments to be generated in decreasing order of probability, thus generating the segments which have the ``maximum probability" of being spliced. It can actually be shown\cite{MouadRamil} that the MaxP allocation scheme formally minimizes the expected number of database ``misses", i.e., the number of times splicing has to be interrupted because a segment that is required to move forward is not found in the database. One may note that MaxP is substantially more expensive than VE for assigning an initial state to a single segment. While this is true, the ensemble average required by MaxP can be used to make state-assignments for a large number of segments all at once. This is compared to VE which can make one state-assignment for each virtual-trajectory. Furthermore, the VE process for assigning states to several segments must each be done in serial, meanwhile the ensemble trajectories used by MaxP can be generated in parallel. These differences can become quite significant as simulations scale to larger and larger machines, employing a greater number of MD instances. \end{appendices} \bibliographystyle{unsrt}
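In the same spirit, the Monte Carlo estimate amounts to sampling an ensemble of trajectories out to the horizon, counting per-state visits beyond the pending segments, and taking the fraction of trajectories that would need a fresh segment in each state. This is a simplified sketch of our own (it ignores the database bookkeeping of the full MCMaxP procedure):

```python
import numpy as np

def mc_task_probs(M, start, pending, horizon, n_samples=20000, seed=0):
    """Estimate p_k = P_start(v_k > pending[k] | horizon) by sampling
    trajectories from the transition matrix M."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    hits = np.zeros(d)
    for _ in range(n_samples):
        visits = np.zeros(d, dtype=int)
        s = start
        for _ in range(horizon):
            s = rng.choice(d, p=M[s])
            visits[s] += 1
        hits += visits > pending  # this trajectory needs a new segment there
    return hits / n_samples
```

Sorting states by the resulting probabilities reproduces the MaxP assignment order, and the whole ensemble can be generated in parallel, which is the practical advantage over the serial VE virtual trajectories noted above.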
\section{Introduction} If $V = \C^d$ denotes the natural module for the complex general linear Lie algebra $\gl_d$, Schur--Weyl duality states that the natural actions of $\gl_d$ and the symmetric group $\fS_k$ on $V^{\otimes k}$ commute and generate each other's centralizers. This classic result can be extended to the definition of a full monoidal functor from the diagrammatic \emph{oriented Brauer category} $\OB_d$ of \cite{BCNR17} to the category of $\gl_d$-modules. This functor sends the two generating objects $\uparrow$ and $\downarrow$ of $\OB_d$ to $V$ and its dual $V^*$. Since $\End_{\OB_d}(\uparrow^{\otimes k}) \cong \C \fS_k$, we have an induced surjective algebra homomorphism \[ \C \fS_k \twoheadrightarrow \End_{\gl_d}(V^{\otimes k}), \] recovering one of the statements of Schur--Weyl duality. After passing to the additive Karoubi envelope $\Kar(\OB_d)$ of $\OB_d$, the above functor is also essentially surjective. It follows that $\gl_d$-mod is a quotient of $\Kar(\OB_d)$ by a tensor ideal. This observation allows one to use powerful and intuitive diagrammatic techniques in the study of the representation theory of the general linear Lie algebra. It also leads to the definition of Deligne's interpolating categories. Analogues of the above theory also exist in types $BCD$. Here the oriented Brauer category is replaced by the \emph{Brauer category} of \cite{LZ15}, whose endomorphism algebras are \emph{Brauer algebras}. However, analogous techniques in \emph{exceptional} type are not as well developed. For type $G_2$, the diagrammatic category has been described by Kuperberg \cite{Kup94,Kup96}. (In fact, Kuperberg treats the quantum case; see below.) Invariant tensors for classical and exceptional semisimple Lie algebras have been computed diagrammatically by Cvitanovi\'{c} \cite{Cvi08}, but the approach there is rather different, inspired by the language of Feynman diagrams in quantum field theory. 
This approach has been further investigated for exceptional Lie algebras in \cite{MT07,Wen03,Wes03}. The goal of the current paper is to develop a diagrammatic category for type $F_4$ analogous to the oriented and unoriented Brauer categories in classical type. Hints of the defining relations appear in the aforementioned papers. In particular, several of the equations deduced in the current paper can be found in \cite[Ch.~19]{Cvi08} and \cite{Thu04} in a different language. However, a complete treatment from the monoidal category point of view seems to be new. Given a field $\kk$ of characteristic zero, we define a strict monoidal $\kk$-linear category $\Fcat = \Fcat_{\alpha,\delta}$, depending on two parameters $\alpha,\delta \in \kk$. (In fact, up to isomorphism, the category is independent of $\alpha$; see \cref{whyalpha}.) We consider the strict $\kk$-linear monoidal category generated by a single object $\go$ and four morphisms \[ \mergemor \colon \go^{\otimes 2} \to \go,\qquad \crossmor \colon \go^{\otimes 2} \to \go^{\otimes 2},\qquad \cupmor \colon \one \to \go^{\otimes 2},\qquad \capmor \colon \go^{\otimes 2} \to \one, \] where $\one$ is the unit object. These morphisms are subject to certain relations, which we split into two families. We denote by $\Tcat = \Tcat_{\alpha,\delta}$ the category obtained by imposing the first family of relations; see \cref{Tdef}. These relations imply, in particular, that the category is symmetric monoidal and strict pivotal, the trivalent vertex is symmetric, and the generating object $\go$ is symmetrically self-dual of categorical dimension $\delta$. To obtain the category $\Fcat$, we then impose three additional relations; see \cref{Fdef}. 
The first of these is the relation \begin{equation} \label{CH} \Hmor + \Imor + \begin{tikzpicture}[centerzero] \draw (-0.2,-0.4) -- (-0.2,0.4); \draw (-0.2,-0.2) -- (0.4,0.4); \draw (0.4,-0.4) -- (-0.2,0.2); \end{tikzpicture} = \frac{2\alpha}{\delta+2} \left(\, \jail + \hourglass + \crossmor\, \right), \end{equation} while the other two express the square and pentagon in terms of acyclic diagrams. When $\kk=\C$, we define (\cref{magneto,baja}) a monoidal functor \[ \Phi \colon \Fcat_{7/3,26} \to \fg\md, \] where $\fg$ is the complex simple Lie algebra of type $F_4$. The generating object $\go$ is sent to the natural $\fg$-module $V$, which is $26$-dimensional. The compact Lie group $G$ corresponding to $\fg$ is the group of algebra automorphisms of the Albert algebra which, over the complex numbers, is the unique exceptional Jordan algebra. The natural module $V$ can be identified with the traceless part of the Albert algebra. Multiplication in the Albert algebra gives rise to a $\fg$-module homomorphism $V^{\otimes 2} \to V$, which is the image under $\Phi$ of the trivalent vertex $\mergemor$. The morphism $\crossmor$ corresponds to the symmetric braiding in $\fg$-mod, and $\capmor$ is sent to the natural invariant bilinear form on $V$ coming from the trace on the Albert algebra. The morphism $\cupmor$ is also determined by this bilinear form. In this way, the category $\Fcat$ can also be viewed as a diagrammatic calculus for the Albert algebra. The relation \cref{CH} corresponds to the Cayley--Hamilton theorem for $V$; see \cref{prestige}. The functor $\Phi$ is defined only when $\delta=26$, since that is the dimension of the natural $\fg$-module $V$. However, the diagrammatic category $\Fcat_{\alpha,\delta}$ is defined for any $\delta \ne -2$. (When $\delta=-2$, the preliminary category $\Tcat$ collapses to the Temperley--Lieb category; see \cref{TL}.) The importance of the case $\delta=26$ can be seen purely diagrammatically. 
It corresponds to the fact that $V$ is not a summand of the tensor square of the first fundamental representation of $\fg$. See \cref{sack}. Since the category $\fg$-mod is idempotent complete, the functor $\Phi$ induces a functor \[ \Kar(\Phi) \colon \Kar(\Fcat_{7/3,26}) \to \fg\md. \] Then $\Kar(\Phi)$ is full since $\Phi$ is. In addition, we show (\cref{splat}) that $\Kar(\Phi)$ is essentially surjective. Thus $\fg\md$ is equivalent to a quotient of the diagrammatic category $\Fcat$ by a tensor ideal. In fact, we conjecture that $\Kar(\Phi)$ is also faithful, and hence an equivalence of categories. We also give (\cref{pipe}) conjectural bases for the morphism spaces in $\Fcat$. One immediate consequence of the fullness of $\Phi$ (\cref{SW}) is that we have surjective algebra homomorphisms \[ \End_{\Fcat}(\go^{\otimes k}) \twoheadrightarrow \End_\fg(V^{\otimes k}),\quad k \in \N. \] In other words, the endomorphism algebras in $\Fcat$ play the role in type $F_4$ that the oriented and unoriented Brauer algebras play in the classical types. In classical types, quantum versions of the relevant diagrammatic categories exist. In type $A$, the quantum analogue of the oriented Brauer category is the framed HOMFLYPT skein category. In types $BCD$, the analogue of the unoriented Brauer category is the Kauffman skein category. As their names suggest, both categories are closely related to important knot invariants. In type $G_2$, the connection to the corresponding Reshetikhin--Turaev invariant is discussed in \cite{Kup94,Kup96}. There should also exist a quantum analogue of the category $\Fcat$, related to the Reshetikhin--Turaev invariant in type $F_4$. These quantum diagrammatics should also be related to a quantum version of the Albert algebra. The category $\Fcat$ is also a first step towards a category of webs for type $F_4$. 
The main goal in the theory of webs is to give a presentation, in terms of generators and relations, for the full monoidal subcategory of the category of modules for a quantized enveloping algebra, generated by the fundamental modules. Such presentations are typically in terms of diagrammatic categories known as \emph{web categories}. Web categories were first developed for rank two simple complex Lie algebras by Kuperberg \cite{Kup96}. Then, in more general type $A$, they were described by Cautis--Kamnitzer--Morrison \cite{CKM14}. More recently, the type $C$ case has been treated in \cite{BERT21}; see also \cite{Wes08}. The degenerate (that is, $q=1$) web category for type $F_4$ should be the full monoidal subcategory of $\Kar(\Fcat)$ generated by objects corresponding to the four fundamental modules. We explicitly identify three of these objects in \cref{sec:fundamental}. \subsection*{Acknowledgements} The research of R.G.\ was supported by an Ontario Graduate Scholarship and a Canada Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada (NSERC). The research of A.S.\ and K.Z.\ was supported by NSERC Discovery Grants RGPIN-2017-03854 and RGPIN-2015-04469, respectively. We thank Erhard Neher for helpful conversations and Bruce Westbury for useful remarks on an earlier version of this paper. \section{The diagrammatic category} We fix a field $\kk$ of characteristic zero. All categories are $\kk$-linear and all algebras and tensor products are over $\kk$ unless otherwise specified. We let $\one$ denote the unit object of a monoidal category. For objects $X$ and $Y$ in a category $\cC$, we denote by $\cC(X,Y)$ the vector space of morphisms from $X$ to $Y$. \begin{defin} \label{Tdef} Fix $\alpha,\delta \in \kk$. 
Let $\Tcat = \Tcat_{\alpha,\delta}$ be the strict monoidal category generated by the object $\go$ and generating morphisms \begin{equation} \label{lego} \mergemor \colon \go \otimes \go \to \go,\quad \crossmor \colon \go \otimes \go \to \go \otimes \go,\quad \cupmor \colon \one \to \go \otimes \go,\quad \capmor \colon \go \otimes \go \to \one, \end{equation} subject to the following relations: \begin{gather} \label{vortex} \begin{tikzpicture}[centerzero] \draw (-0.3,-0.4) -- (-0.3,0) arc(180:0:0.15) arc(180:360:0.15) -- (0.3,0.4); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,0.4); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.3,0.4) -- (-0.3,0) arc(180:360:0.15) arc(180:0:0.15) -- (0.3,-0.4); \end{tikzpicture} \ ,\quad \splitmor := \begin{tikzpicture}[anchorbase] \draw (-0.4,0.2) to[out=down,in=180] (-0.2,-0.2) to[out=0,in=225] (0,0); \draw (0,0) -- (0,0.2); \draw (0.3,-0.3) -- (0,0); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (0.4,0.2) to[out=down,in=0] (0.2,-0.2) to[out=180,in=-45] (0,0); \draw (0,0) -- (0,0.2); \draw (-0.3,-0.3) -- (0,0); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,-0.1) arc(180:0:0.2) -- (0.2,-0.3); \draw (-0.3,0.3) \braiddown (0,-0.3); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,-0.1) arc(180:0:0.2) -- (0.2,-0.3); \draw (0.3,0.3) \braiddown (0,-0.3); \end{tikzpicture} \ , \\ \label{venom} \begin{tikzpicture}[centerzero] \draw (0.2,-0.4) to[out=135,in=down] (-0.15,0) to[out=up,in=-135] (0.2,0.4); \draw (-0.2,-0.4) to[out=45,in=down] (0.15,0) to[out=up,in=-45] (-0.2,0.4); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.15,-0.4) -- (-0.15,0.4); \draw (0.15,-0.4) -- (0.15,0.4); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[centerzero] \draw (0.3,-0.4) -- (-0.3,0.4); \draw (0,-0.4) to[out=135,in=down] (-0.25,0) to[out=up,in=-135] (0,0.4); \draw (-0.3,-0.4) -- (0.3,0.4); \end{tikzpicture} = 
\begin{tikzpicture}[centerzero] \draw (0.3,-0.4) -- (-0.3,0.4); \draw (0,-0.4) to[out=45,in=down] (0.25,0) to[out=up,in=-45] (0,0.4); \draw (-0.3,-0.4) -- (0.3,0.4); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[anchorbase,scale=0.8] \draw (-0.4,-0.5) -- (0.2,0.3) -- (0.4,0.1) -- (0,-0.5); \draw (0.2,0.3) -- (0.2,0.6); \draw (-0.4,0.6) -- (-0.4,0.1) -- (0.4,-0.5); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.4) -- (-0.2,-0.2) -- (-0.2,0) -- (0.2,0.4); \draw (0,-0.4) -- (-0.2,-0.2); \draw (0.2,-0.4) -- (0.2,0) -- (-0.2,0.4); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[anchorbase,scale=0.8] \draw (-0.4,0.5) -- (0.2,-0.3) -- (0.4,-0.1) -- (0,0.5); \draw (0.2,-0.3) -- (0.2,-0.6); \draw (-0.4,-0.6) -- (-0.4,-0.1) -- (0.4,0.5); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-0.4,0.4) -- (-0.2,0.2) -- (-0.2,0) -- (0.2,-0.4); \draw (0,0.4) -- (-0.2,0.2); \draw (0.2,0.4) -- (0.2,0) -- (-0.2,-0.4); \end{tikzpicture} \ , \\ \label{chess} \begin{tikzpicture}[anchorbase] \draw (-0.15,-0.4) to[out=45,in=down] (0.15,0) arc(0:180:0.15) to[out=down,in=135] (0.15,-0.4); \end{tikzpicture} = \capmor \ ,\quad \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.5) to[out=45,in=down] (0.15,-0.2) to[out=up,in=-45] (0,0) -- (0,0.2); \draw (0.2,-0.5) to [out=135,in=down] (-0.15,-0.2) to[out=up,in=-135] (0,0); \end{tikzpicture} = \mergemor \ ,\quad \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,-0.2) to[out=45,in=down] (0.15,0) to[out=up,in=-45] (0,0.2) -- (0,0.4); \draw (0,-0.2) to[out=135,in=down] (-0.15,0) to[out=up,in=-135] (0,0.2); \end{tikzpicture} = \alpha\ \begin{tikzpicture}[centerzero] \draw(0,-0.4) -- (0,0.4); \end{tikzpicture} \ ,\quad \bubble = \delta 1_\one, \quad \lollydrop = 0. \end{gather} \end{defin} \begin{rem} \label{whyalpha} As long as $\alpha$ is invertible and has a square root in $\kk$, we can rescale $\mergemor$ by $\alpha^{-1/2}$ to see that $\Tcat_{\alpha,\delta}$ is isomorphic to $\Tcat_{1,\delta}$. 
However, it will be useful in the forthcoming applications to include the parameter $\alpha$ in our definition. In particular, we will be most interested in the case where $\alpha = \frac{7}{3}$ and $\delta=26$. \end{rem} \begin{rem} \label{meow} The relations in \cref{Tdef} all have conceptual category-theoretic meanings: \begin{itemize} \item The relations \cref{venom} and the last equality in \cref{vortex} correspond to the fact that the crossing endows $\Fcat$ with the structure of a symmetric monoidal category. \item The first two equalities in \cref{vortex}, together with the first equality in \cref{chess} assert that the generating object $\go$ is symmetrically self-dual. Then the fourth equality in \cref{vortex} implies that $\Fcat$ is strict pivotal. (See the discussion after \cref{windy}.) \item The second equality in \cref{chess} can be viewed as stating that $\mergemor$ corresponds to a commutative binary operation on $\go$. \item The fourth relation in \cref{chess} states that $\go$ has categorical dimension $\delta$. (See \cref{sec:fundamental} for further discussion of categorical dimension.) \item If we want $\go$ to be a simple object (more precisely, $\Fcat(\go,\go) = \kk 1_\go$), not isomorphic to $\one$, then the fifth equality corresponds to the fact that there are no nonzero morphisms $\one \to \go$, while the left-hand side of the third equality in \cref{chess} must be a scalar multiple (which we denote by $\alpha$) of the identity $1_\go$. \end{itemize} \end{rem} We define \begin{equation} \dotcross := \begin{tikzpicture}[centerzero] \draw (-0.2,-0.4) -- (-0.2,0.4); \draw (-0.2,-0.2) -- (0.4,0.4); \draw (0.4,-0.4) -- (-0.2,0.2); \end{tikzpicture} \ . 
\end{equation} \begin{prop} \label{windy} The following relations hold in $\Tcat$: \begin{gather} \label{topsy} \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.2) to[out=up,in=180] (-0.2,0.2) to[out=0,in=135] (0,0); \draw (0,0) -- (0,-0.2); \draw (0.3,0.3) -- (0,0); \end{tikzpicture} = \mergemor = \begin{tikzpicture}[anchorbase] \draw (0.4,-0.2) to[out=up,in=0] (0.2,0.2) to[out=180,in=45] (0,0); \draw (0,0) -- (0,-0.2); \draw (-0.3,0.3) -- (0,0); \end{tikzpicture} \ ,\quad \triform := \begin{tikzpicture}[centerzero] \draw (-0.2,-0.2) to (0,0); \draw (0.2,-0.2) to (0,0); \draw (0,0) arc(0:180:0.2) -- (-0.4,-0.2); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,-0.2) to (0,0); \draw (0.2,-0.2) to (0,0); \draw (0,0) arc(180:0:0.2) -- (0.4,-0.2); \end{tikzpicture} \ ,\quad \explode := \begin{tikzpicture}[centerzero] \draw (-0.2,0.2) to (0,0); \draw (0.2,0.2) to (0,0); \draw (0,0) arc(360:180:0.2) to (-0.4,0.2); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,0.2) to (0,0); \draw (0.2,0.2) to (0,0); \draw (0,0) arc(180:360:0.2) to (0.4,0.2); \end{tikzpicture} \ , \\ \label{turvy} \begin{tikzpicture}[centerzero] \draw (-0.2,0.3) -- (-0.2,0.1) arc(180:360:0.2) -- (0.2,0.3); \draw (-0.3,-0.3) to[out=up,in=down] (0,0.3); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,0.3) -- (-0.2,0.1) arc(180:360:0.2) -- (0.2,0.3); \draw (0.3,-0.3) to[out=up,in=down] (0,0.3); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[anchorbase] \draw (-0.2,0.2) -- (0.2,-0.2); \draw (-0.4,0.2) to[out=down,in=225,looseness=2] (0,0) to[out=45,in=up,looseness=2] (0.4,-0.2); \end{tikzpicture} = \crossmor = \begin{tikzpicture}[anchorbase] \draw (0.2,0.2) -- (-0.2,-0.2); \draw (0.4,0.2) to[out=down,in=-45,looseness=2] (0,0) to[out=135,in=up,looseness=2] (-0.4,-0.2); \end{tikzpicture} \ ,\quad \begin{tikzpicture}[anchorbase] \draw (-0.2,0.2) -- (0.2,-0.2); \draw (-0.4,0.2) to[out=down,in=225,looseness=2] (0,0) to[out=45,in=up,looseness=2] (0.4,-0.2); 
\opendot{0,0}; \end{tikzpicture} = \dotcross = \begin{tikzpicture}[anchorbase] \draw (0.2,0.2) -- (-0.2,-0.2); \draw (0.4,0.2) to[out=down,in=-45,looseness=2] (0,0) to[out=135,in=up,looseness=2] (-0.4,-0.2); \opendot{0,0}; \end{tikzpicture} \ . \end{gather} \end{prop} \begin{proof} The first and second equalities in \cref{topsy} follow immediately from the first four equalities in \cref{vortex}. Then, using the first and second equalities in \cref{topsy}, we have \[ \begin{tikzpicture}[centerzero] \draw (-0.2,-0.2) to (0,0); \draw (0.2,-0.2) to (0,0); \draw (0,0) arc(0:180:0.2) -- (-0.4,-0.2); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (0,-0.2) to (0,0) to[out=45,in=up,looseness=2] (0.3,-0.2); \draw (0,0) to[out=135,in=up,looseness=2] (-0.3,-0.2); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,-0.2) to (0,0); \draw (0.2,-0.2) to (0,0); \draw (0,0) arc(180:0:0.2) -- (0.4,-0.2); \end{tikzpicture} \ , \] proving the fourth equality in \cref{topsy}. The proof of the sixth equality in \cref{topsy} is analogous. To prove the first equality in \cref{turvy}, we use \cref{vortex} to compute \[ \begin{tikzpicture}[centerzero] \draw (-0.2,0.3) -- (-0.2,0.1) arc(180:360:0.2) -- (0.2,0.3); \draw (-0.3,-0.3) to[out=up,in=down] (0,0.3); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-1,0.5) -- (-1,0.2) arc(180:360:0.2) arc(180:0:0.2) arc(180:360:0.2) -- (0.2,0.5); \draw (-0.3,-0.3) \braidup (0,0.5); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-1,0.5) -- (-1,0.2) arc(180:360:0.2) arc(180:0:0.2) arc(180:360:0.2) -- (0.2,0.5); \draw (-0.5,-0.3) \braidup (-0.8,0.5); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,0.3) -- (-0.2,0.1) arc(180:360:0.2) -- (0.2,0.3); \draw (0.3,-0.3) to[out=up,in=down] (0,0.3); \end{tikzpicture} \ . \] The second and third equalities in \cref{turvy} now follow from sliding the crossing over the cup or cap, and then using the first two equalities in \cref{vortex}. 
It remains to prove the fourth equality in \cref{turvy}. Using the second and third relations in \cref{vortex} and the first two relations in \cref{topsy} to rotate trivalent vertices, we have \[ \begin{tikzpicture}[anchorbase] \draw (-0.2,0.2) -- (0.2,-0.2); \draw (-0.4,0.2) to[out=down,in=225,looseness=2] (0,0) to[out=45,in=up,looseness=2] (0.4,-0.2); \opendot{0,0}; \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-0.2,0.4) -- (-0.2,0.2) -- (0,0) -- (0.2,0.2) -- (0.2,0.4); \draw (-0.2,0.2) -- (-0.4,0) -- (0,-0.4); \draw (0,0) -- (-0.4,-0.4); \end{tikzpicture} \ . \] Now, composing the fourth equality in \cref{venom} on the bottom with $\crossmor$ and on the top with $ \begin{tikzpicture}[centerzero] \draw (-0.2,-0.2) -- (0,0.2); \draw (0,-0.2) -- (0.2,0.2); \draw (0.2,-0.2) -- (-0.2,0.2); \end{tikzpicture} $ , and then using the first equality in \cref{venom}, we have \[ \begin{tikzpicture}[anchorbase,scale=0.8] \draw (0.4,0.5) -- (-0.2,-0.3) -- (-0.4,-0.1) -- (0,0.5); \draw (-0.2,-0.3) -- (-0.2,-0.6); \draw (0.4,-0.6) -- (0.4,-0.1) -- (-0.4,0.5); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (0.4,0.4) -- (0.2,0.2) -- (0.2,0) -- (-0.2,-0.4); \draw (0,0.4) -- (0.2,0.2); \draw (-0.2,0.4) -- (-0.2,0) -- (0.2,-0.4); \end{tikzpicture} \ . \] Using this and the second equality in \cref{chess}, we have \[ \begin{tikzpicture}[anchorbase] \draw (-0.2,0.4) -- (-0.2,0.2) -- (0,0) -- (0.2,0.2) -- (0.2,0.4); \draw (-0.2,0.2) -- (-0.4,0) -- (0,-0.4); \draw (0,0) -- (-0.4,-0.4); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.2,-0.4) -- (-0.2,0.4); \draw (-0.2,-0.2) -- (0.4,0.4); \draw (0.4,-0.4) -- (-0.2,0.2); \end{tikzpicture} = \dotcross\ , \] completing the verification of the fourth equality in \cref{turvy}. \end{proof} It follows from \cref{vortex,topsy,turvy} that the cups and caps equip $\Fcat$ with the structure of a \emph{strict pivotal} category.
Intuitively, this means that morphisms are invariant under ambient isotopy fixing the boundary. Thus, for example, it makes sense to allow horizontal strands in diagrams: \begin{equation} \Hmor := \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.4) -- (-0.4,0) -- (-0.2,0.2) -- (0.2,-0.2) -- (0.4,0) -- (0.4,0.4); \draw (-0.2,0.2) -- (-0.2,0.4); \draw (0.2,-0.2) -- (0.2,-0.4); \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (0.4,-0.4) -- (0.4,0) -- (0.2,0.2) -- (-0.2,-0.2) -- (-0.4,0) -- (-0.4,0.4); \draw (0.2,0.2) -- (0.2,0.4); \draw (-0.2,-0.2) -- (-0.2,-0.4); \end{tikzpicture} \ . \end{equation} In addition, since the object $\go$ is self-dual, the cups and caps yield natural isomorphisms \begin{equation} \label{twirl} \Tcat(\go^{\otimes m}, \go^{\otimes n}) \cong \Tcat(\go^{\otimes (m+n)},\one),\qquad n,m \in \N. \end{equation} \begin{defin} \label{Fdef} Fix $\alpha, \delta \in \kk$, with $\delta \ne -2$. Let $\Fcat = \Fcat_{\alpha,\delta}$ be the strict monoidal category obtained from $\Tcat_{\alpha,\delta}$ by imposing the following three additional relations: \begin{gather} \label{magic} \Hmor + \Imor + \dotcross = \frac{2\alpha}{\delta+2} \left(\, \jail + \hourglass + \crossmor\, \right), \\ \label{sqburst} \sqmor = \frac{\alpha^2 (\delta + 14)}{2(\delta+2)^2} \left(\, \jail + \hourglass \, \right) + \frac{\alpha (\delta-6)}{2(\delta+2)} \left(\, \Hmor + \Imor\, \right) + \frac{3\alpha^2 (2-\delta)}{2(\delta+2)^2} \, \crossmor\ , \\ \label{pentburst} \begin{aligned} \pentmor &= \frac{\alpha(10-\delta)}{4(\delta+2)} \left( \begin{tikzpicture}[anchorbase] \draw (-0.2,0) -- (0,0.25) -- (0.2,0); \draw (0,0.25) -- (0,0.4); \draw (-0.2,-0.25) -- (-0.2,0) -- (-0.3,0.4); \draw (0.2,-0.25) -- (0.2,0) -- (0.3,0.4); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] 
\draw (0.3,0.3) -- (0,0) -- (-0.3,0.3); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0,-0.15) -- (0.15,-0.3); \draw (0,-0.15) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.3,-0.3) -- (0.3,0.3); \draw (-0.3,0.3) -- (-0.15,-0.15); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0.3,-0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (0.3,-0.3) -- (-0.3,0.3); \draw (0.3,0.3) -- (0.15,-0.15); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (-0.3,-0.3); \end{tikzpicture} \right) \\ &\qquad - \frac{\alpha^2 (\delta+30)}{8(\delta+2)^2} \left( \begin{tikzpicture}[centerzero] \draw (-0.15,-0.3) -- (-0.15,-0.23) arc(180:0:0.15) -- (0.15,-0.3); \draw (-0.3,0.3) -- (0,0.08) -- (0.3,0.3); \draw (0,0.3) -- (0,0.08); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,0.3); \draw (0,0.3) -- (0.15,0) -- (0.3,0.3); \draw (0.15,0) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.2,-0.3) -- (0.2,0.3); \draw (0,0.3) -- (-0.15,0) -- (-0.3,0.3); \draw (-0.15,0) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.3,0.23) arc(180:360:0.15) -- (0,0.3); \draw (0.3,0.3) -- (0.15,0) -- (-0.2,-0.3); \draw (0.2,-0.3) -- (0.15,0); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.3,0.23) arc(360:180:0.15) -- (0,0.3); \draw (-0.3,0.3) -- (-0.15,0) -- (0.2,-0.3); \draw (-0.2,-0.3) -- (-0.15,0); \end{tikzpicture} \right) \\ &\qquad + \frac{3 \alpha^2 (\delta-2)}{8(\delta+2)^2} \left( \begin{tikzpicture}[centerzero] \draw (0,0.3) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \draw (-0.2,0.3) -- (-0.2,0.25) arc(180:360:0.2) -- (0.2,0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=-45,in=70] (0.15,-0.3); \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=225,in=110] (-0.15,-0.3); \draw (0.3,0.3) -- (0,0) -- 
(-0.3,0.3); \draw (0,0) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.15,0.15) -- (0,0.3); \draw (-0.15,0.15) -- (0.15,-0.3); \draw (0.3,0.3) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.15,0.15) -- (0,0.3); \draw (0.15,0.15) -- (-0.15,-0.3); \draw (-0.3,0.3) -- (0.15,-0.3); \end{tikzpicture} \right). \end{aligned} \end{gather} \end{defin} \begin{rem} We will see in \cref{prestige} that \cref{magic} corresponds to the Cayley--Hamilton theorem for traceless $3 \times 3$ octonionic matrices (see \cref{boysenberry,mango}). \end{rem} Before proceeding further, let us motivate the assumption in \cref{Fdef} that $\delta \ne -2$. In fact, we could multiply both sides of \cref{magic,sqburst,pentburst} by an appropriate power of $\delta+2$ to clear this factor from the denominators. For instance, \cref{magic} then becomes \begin{equation} \label{jordan} (\delta+2) \left( \Hmor + \Imor + \dotcross \right) = 2 \alpha \left(\, \jail + \hourglass + \crossmor\, \right). \end{equation} Then it makes sense to allow $\delta = -2$. However, in this case, the category collapses considerably, as we now show. Recall that, for $\delta \in \kk$, the \emph{Temperley--Lieb category} $\TL_\delta$ is the strict monoidal category generated by the object $\go$ and morphisms $\cupmor$, $\capmor$, subject to the first two equalities in \cref{vortex} and the fourth equality in \cref{chess}. \begin{prop} \label{TL} The category obtained from $\Tcat_{\alpha,-2}$ by imposing the relation \cref{jordan}, with $\delta=-2$, is isomorphic to $\TL_{-2}$. 
\end{prop} \begin{proof} When $\delta = -2$, \cref{jordan} gives $ \crossmor \ = -\ \begin{tikzpicture}[centerzero] \draw (-0.15,-0.3) -- (-0.15,0.3); \draw (0.15,-0.3) -- (0.15,0.3); \end{tikzpicture} \ -\ \begin{tikzpicture}[centerzero] \draw (-0.15,-0.3) -- (-0.15,-0.25) arc(180:0:0.15) -- (0.15,-0.3); \draw (-0.15,0.3) -- (-0.15,0.25) arc(180:360:0.15) -- (0.15,0.3); \end{tikzpicture} \ . $ Composing on the top with $\mergemor$ and using the second and fifth equalities in \cref{chess} then gives $\mergemor=0$. It is then a straightforward exercise to see that all the relations in \cref{Tdef} are either trivial or follow from the first two equalities in \cref{vortex} and the fourth equality in \cref{chess}. \end{proof} \section{Dimension restrictions} In this section we show that, with some mild restrictions on $\delta$, any quotient of the category $\Tcat$ with certain conditions on the dimensions of the morphism spaces $\go^{\otimes 2} \to \go^{\otimes 2}$ and $\go^{\otimes 2} \to \go^{\otimes 3}$ satisfies the additional relations \cref{magic,sqburst,pentburst}, and hence is also a quotient of $\Fcat$. Later, in \cref{sec:functor}, we will use this result to show that the functor $\Tcat \to \fg\md$ defined in \cref{magneto} factors through $\Fcat$ (\cref{baja}). Recall that an \emph{ideal} in a $\kk$-linear category $\cC$ is a collection $\cI$ of vector subspaces $\cI(X,Y)$ of $\cC(X,Y)$ for all $X,Y \in \Ob(\cC)$ such that \[ \cC(Y,Z) \circ \cI(X,Y) \subseteq \cI(X,Z) \quad \text{and} \quad \cI(X,Y) \circ \cC(Z,X) \subseteq \cI(Z,Y) \] for all $X,Y,Z \in \Ob(\cC)$. If, in addition, $\cC$ is a monoidal category, then we say $\cI$ is a \emph{tensor ideal} if it is an ideal and \[ 1_Z \otimes \cI(X,Y) \subseteq \cI(Z \otimes X, Z \otimes Y) \quad \text{and} \quad \cI(X,Y) \otimes 1_Z \subseteq \cI(X \otimes Z, Y \otimes Z) \] for all $X,Y,Z \in \Ob(\cC)$.
If $\cI$ is a tensor ideal, it follows that $f \otimes g$ and $g \otimes f$ belong to $\cI$ for arbitrary morphisms $f$ in $\cI$ and $g$ in $\cC$. If $\cI$ is a tensor ideal of $\cC$, then the \emph{quotient category} $\cC/\cI$ is the category with \[ \Ob (\cC/\cI) = \Ob(\cC),\quad (\cC/\cI)(X,Y) = \cC(X,Y) / \cI(X,Y). \] The composition and tensor product in $\cC/\cI$ are induced by those in $\cC$. \begin{theo} \label{demayo} Assume $\delta \notin \{-2,2,6,10\}$. If $\cI$ is a tensor ideal of $\Tcat$ such that \begin{equation} \label{cinco} \dim \left( (\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2}) \right) = 5 \quad \text{and} \quad \dim \left( (\Tcat/\cI)(\go^{\otimes 3}, \go^{\otimes 2}) \right) \le 15 \end{equation} then relations \cref{magic,sqburst,pentburst} hold in $\Tcat/\cI$. \end{theo} The proof of \cref{demayo} will occupy the remainder of this section. We break the proof into a series of smaller results. \begin{center} \textit{We assume for the remainder of this section that $\delta \ne -2$.} \end{center} Consider the linear operators \begin{align*} \Rot \colon \Tcat(\go^{\otimes 2}, \go^{\otimes 2}) &\to \Tcat(\go^{\otimes 2}, \go^{\otimes 2}), & \Rot \left( \begin{tikzpicture}[anchorbase] \draw (-0.1,-0.4) -- (-0.1,0.4); \draw (0.1,-0.4) -- (0.1,0.4); \filldraw[fill=white,draw=black] (-0.25,0.2) rectangle (0.25,-0.2); \node at (0,0) {$\scriptstyle{f}$}; \end{tikzpicture} \right) &= \begin{tikzpicture}[anchorbase] \draw (-0.25,0.2) rectangle (0.25,-0.2); \node at (0,0) {$\scriptstyle{f}$}; \draw (-0.4,-0.4) -- (-0.4,0.2) arc (180:0:0.15); \draw (0.4,0.4) -- (0.4,-0.2) arc(360:180:0.15); \draw (-0.1,-0.4) -- (-0.1,-0.2); \draw (0.1,0.4) -- (0.1,0.2); \end{tikzpicture} \ , \\ \Switch \colon \Tcat(\go^{\otimes 2}, \go^{\otimes 2}) &\to \Tcat(\go^{\otimes 2}, \go^{\otimes 2}), & \Switch \left( \begin{tikzpicture}[anchorbase] \draw (-0.1,-0.4) -- (-0.1,0.4); \draw (0.1,-0.4) -- (0.1,0.4); \filldraw[fill=white,draw=black] (-0.25,0.2) rectangle (0.25,-0.2); 
\node at (0,0) {$\scriptstyle{f}$}; \end{tikzpicture} \right) &= \begin{tikzpicture}[anchorbase] \draw (-0.1,-0.5) -- (0.1,-0.2); \draw (0.1,-0.5) -- (-0.1,-0.2); \draw (-0.1,0.2) -- (-0.1,0.4); \draw (0.1,0.2) -- (0.1,0.4); \filldraw[fill=white,draw=black] (-0.25,0.2) rectangle (0.25,-0.2); \node at (0,0) {$\scriptstyle{f}$}; \end{tikzpicture} \ . \end{align*} In other words, \[ \Rot(f) = \left( \capmor \otimes 1_\go^{\otimes 2} \right) \circ (1_\go \otimes f \otimes 1_\go) \circ (1_\go^{\otimes 2} \otimes \cupmor),\quad \Switch(f) = f \circ \crossmor. \] Any tensor ideal of $\Tcat$ or $\Fcat$ is invariant under $\Rot$. We have \begin{equation} \label{rotary} \begin{aligned} \Rot \left(\, \jail\, \right) &= \hourglass\, ,& \Rot \left(\, \hourglass\, \right) &= \jail\, ,& \Rot \left(\, \crossmor\, \right) &= \crossmor\, , \\ \Rot \left(\, \Hmor\, \right) &= \Imor\, ,& \Rot \left(\, \Imor\, \right) &= \Hmor\, ,& \Rot \left(\, \dotcross\, \right) &= \dotcross\, . \end{aligned} \end{equation} and \begin{equation} \label{flick} \begin{aligned} \Switch \left(\, \jail\, \right) &= \crossmor\, ,& \Switch \left(\, \crossmor\, \right) &= \jail\, ,& \Switch \left(\, \hourglass\, \right) &= \hourglass\, , \\ \Switch \left(\, \Hmor\, \right) &= \dotcross\, ,& \Switch \left(\, \dotcross\, \right) &= \Hmor\, ,& \Switch \left(\, \Imor\, \right) &= \Imor\, . \end{aligned} \end{equation} \begin{prop} \label{SUP} If $\cI$ is a tensor ideal of $\Tcat$ such that \begin{equation} \label{funf} \dim \left( (\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2}) \right) = 5, \end{equation} then \cref{magic} holds in $\Tcat/\cI$, and the morphisms \begin{equation} \label{bigfive} \jail\, ,\quad \hourglass\, ,\quad \crossmor\, ,\quad \Hmor\, ,\quad \Imor \end{equation} give a basis for $(\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2})$. \end{prop} \begin{proof} Suppose $\cI$ is a tensor ideal of $\Tcat$ satisfying \cref{funf}, and let $\cC = \Tcat/\cI$. 
Then, in $\cC$, there must be a linear dependence relation involving the six morphisms \begin{equation} \label{bigsix} \jail\, ,\quad \hourglass\, ,\quad \crossmor\, ,\quad \Hmor\, ,\quad \Imor\, ,\quad \dotcross\, . \end{equation} Let us for a moment view the diagrams in \cref{bigsix} as graphs (with $\dotcross$ being a 4-valent vertex) embedded in the plane. The operators $\Rot$ and $\Switch$ act on this set as in \cref{rotary,flick}. It follows that $\Rot$ and $\Switch$ generate an action of the symmetric group $\fS_3$ on the $6$-dimensional space $U$ spanned by $\cref{bigsix}$ (viewed only as embedded planar graphs). The space $U$ decomposes as a direct sum \[ U = U_1 \oplus U_2 \oplus U_3 \oplus U_4, \] where \[ U_1 = \Span_\kk \left\{ \jail\, +\, \hourglass\, +\, \crossmor \right\} \quad \text{and} \quad U_2 = \Span_\kk \left\{ \Hmor\, +\, \Imor\, +\, \dotcross \right\} \] are copies of the trivial $\fS_3$-module, \[ U_3 = \Span_\kk \left\{ \jail\, +\, \hourglass\, -2\, \crossmor\, ,\ \jail\, -\, \hourglass \right\} \quad \text{and} \quad U_4 = \Span_\kk \left\{ \Hmor\, +\, \Imor\, -2\, \dotcross\, ,\ \Hmor\, -\, \Imor \right\} \] are copies of the unique simple $\fS_3$-module of dimension $2$ (that is, the Specht module corresponding to the partition $(2,1)$), and there is an isomorphism $U_3 \xrightarrow{\cong} U_4$ sending the given basis of $U_3$ to the given basis of $U_4$. If some linear combination $u$ of the elements \cref{bigsix} is zero in $\Tcat/\cI$, then all elements of the $\fS_3$-submodule generated by $u$ are also zero in $\Tcat/\cI$. First consider the case where $\cI(\go^{\otimes 2}, \go^{\otimes 2}) \cap (U_3 \oplus U_4) \ne \{0\}$. 
By the above discussion, $\cI$ then contains at least the span of the vectors \[ \lambda \left( \Hmor - \Imor \right) + \mu \left( \jail\, -\, \hourglass\, \right) \quad \text{and} \quad \lambda \left( \Hmor\, +\, \Imor\, -2\, \dotcross \right) + \mu \left(\, \jail\, +\, \hourglass\, -2\, \crossmor \right) \] for some $\lambda,\mu \in \kk$, not both zero. Now, if $\lambda = 0$, then we have $\jail\, =\, \hourglass$. Composing on the top with $\mergemor$ and using the fifth relation in \cref{chess}, we then get $\mergemor=0$. It is then clear that \cref{funf} cannot be satisfied. Thus, we may suppose $\lambda \ne 0$, in which case relations of the form \begin{equation} \label{croatia} \Hmor\, -\, \Imor\, =\, \mu \left(\, \jail\, -\, \hourglass\, \right) \quad \text{and} \quad \Hmor\, +\, \Imor\, -2\, \dotcross\, =\, \mu \left(\, \jail\, +\, \hourglass\, -2\, \crossmor\, \right) \end{equation} hold in $\Tcat/\cI$. From the first equation in \cref{croatia}, we have \[ \begin{tikzpicture}[centerzero] \draw (-0.3,-0.2) -- (-0.1,0); \draw (0.3,-0.2) -- (0.1,0); \draw[thick,densely dotted] (-0.3,0.2) -- (-0.1,0) -- (0.1,0) -- (0.3,0.2); \end{tikzpicture} \, =\, \Imor\, + \mu \left(\, \jail\, -\, \hourglass\, \right), \] which allows us to reduce the length of any cycle of length at least three, or break open the cycle. (The part of a cycle that would be replaced is indicated by dotted lines.) Thus, $(\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2})$ is spanned by acyclic diagrams. The second relation in \cref{croatia} then allows us to eliminate $\dotcross$. This implies that $\jail\,$, $\hourglass\,$, $\Hmor$, and $\Imor$ span $(\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2})$, since these are the only planar acyclic trivalent graphs connecting four endpoints. This contradicts our hypothesis \cref{funf}. We now know that $\cI(\go^{\otimes 2}, \go^{\otimes 2}) \subseteq U_1 \oplus U_2$.
So $\cI$ contains at least the span of the vectors \[ \lambda \left(\, \jail\, +\, \hourglass\, +\, \crossmor\, \right) + \mu \left( \Hmor + \Imor + \dotcross\, \right) \] for some $\lambda,\mu \in \kk$, not both zero. If $\mu = 0$, then we have \[ \jail\, +\, \hourglass\, +\, \crossmor\, = 0 \quad \text{in } \Tcat/\cI. \] Composing on the bottom with $\cupmor$ yields $(\delta+2)\, \cupmor = 0$. If $\cupmor=0$, then $1_\go = 0$ by the first relation in \cref{vortex}, and so $\Tcat/\cI$ is the trivial category, contradicting \cref{funf}. Hence $\delta=-2$ and $\Tcat/\cI$ is a quotient of the Temperley--Lieb category $\TL_{-2}$ by \cref{TL}. Since $\dim \big( \TL_{-2}(\go^{\otimes 2}, \go^{\otimes 2}) \big) = 2$, this contradicts our hypothesis \cref{funf}. We may thus assume $\mu \ne 0$. Hence a relation of the form \[ \Hmor + \Imor + \dotcross = \lambda \left(\, \jail + \hourglass + \crossmor\, \right) \] holds in $\Tcat/\cI$. Now, composing on the bottom with $\cupmor$ and using \cref{chess}, we have \[ 2 \alpha\, \cupmor\, = \lambda (\delta+2)\, \cupmor, \] which implies that $\lambda = \frac{2\alpha}{\delta+2}$. (As explained above, we cannot have $\cupmor\, =0$.) Therefore, \cref{magic} holds in $\Tcat/\cI$. It remains to prove that the morphisms \cref{bigfive} give a basis for $(\Tcat/\cI)(\go^{\otimes 2}, \go^{\otimes 2})$. In light of our assumption \cref{funf}, it suffices to show that the morphisms \cref{bigfive} are linearly independent. In fact, this already follows from the above discussion. As we saw above, no nonzero element of $U_3 \oplus U_4$ can be zero in $\Tcat/\cI$, and the space $U_1 \oplus U_2$ has dimension one in $\Tcat/\cI$. Therefore, in $\Tcat/\cI$, the span of the morphisms \cref{bigfive} is $5$-dimensional, as required. \end{proof} \begin{rem} \label{zagreb} Composing the relations in \cref{croatia} on the bottom with $\cupmor$ shows that $(\delta-1)\mu = \alpha$.
(We assume here that $\cupmor\, \ne 0$, since, as we saw in the proof of \cref{SUP}, $\cupmor=0$ would imply that $\Tcat/\cI$ is trivial.) Thus, there is a category $\mathcal{D}$ obtained from $\Tcat$ by imposing the additional relations \begin{equation} \label{bosnia} \Hmor\, -\, \Imor\, =\, \frac{\alpha}{\delta-1} \left(\, \jail\, -\, \hourglass\, \right) \quad \text{and} \quad \Hmor\, +\, \Imor\, -2\, \dotcross\, =\, \frac{\alpha}{\delta-1} \left(\, \jail\, +\, \hourglass\, -2\, \crossmor\, \right). \end{equation} (We have assumed here that $\delta \ne 1$, since $\delta=1$ leads to a rather uninteresting category where $\go^{\otimes 2} \cong \one$.) The above argument shows that, other than the Temperley--Lieb category $\TL_{-2}$ and the category with $\go^{\otimes 2} \cong \one$, there are precisely \emph{two} quotients of $\Tcat$ whose morphism spaces $\go^{\otimes 2} \to \go^{\otimes 2}$ have dimension at most $5$. While the goal of the current paper is to examine the category $\Fcat$ and relate it to the representation theory of the Lie algebra of type $F_4$, the authors do not know of any representation-theoretic significance of the category $\mathcal{D}$. We feel this category merits further investigation. \end{rem} \begin{lem} \label{spinner} If $\cI$ is a tensor ideal of $\Tcat$ such that \cref{magic} holds in $\Tcat/\cI$, then the relation \begin{equation} \label{triangle} \trimor = \frac{\alpha(2-\delta)}{2(\delta+2)}\, \mergemor \end{equation} holds in $\Tcat/\cI$. In particular, \cref{triangle} holds in $\Fcat$. \end{lem} \begin{proof} Relation \cref{triangle} follows by composing \cref{magic} on the top with $\mergemor$, then using \cref{chess,turvy}. \end{proof} The first two relations in \cref{venom} imply that we have an algebra homomorphism \begin{equation} \kk \fS_n \to \Tcat(\go^{\otimes n}, \go^{\otimes n}), \end{equation} sending the simple transposition $s_i \in \fS_n$ to the crossing of the $i$-th and $(i+1)$-st strands.
We will denote the image of the complete symmetrizers and antisymmetrizers under this homomorphism by white and black rectangles, respectively: \begin{equation} \label{boxes} \begin{tikzpicture}[centerzero] \draw (-0.5,-0.6) -- (-0.5,0.6); \draw (0.5,-0.6) -- (0.5,0.6); \node at (0,0.4) {$\cdots$}; \node at (0,-0.4) {$\cdots$}; \symbox{-0.7,-0.15}{0.7,0.15}; \end{tikzpicture} = \frac{1}{n!} \sum_{\sigma \in \fS_n} \begin{tikzpicture}[centerzero] \draw (-0.5,-0.6) -- (-0.5,0.6); \draw (0.5,-0.6) -- (0.5,0.6); \node at (0,0.4) {$\cdots$}; \node at (0,-0.4) {$\cdots$}; \filldraw[rounded corners, fill=white, draw=black] (-0.7,-0.15) rectangle (0.7,0.15); \node at (0,0) {$\sigma$}; \end{tikzpicture} \ ,\qquad \begin{tikzpicture}[centerzero] \draw (-0.5,-0.6) -- (-0.5,0.6); \draw (0.5,-0.6) -- (0.5,0.6); \node at (0,0.4) {$\cdots$}; \node at (0,-0.4) {$\cdots$}; \antbox{-0.7,-0.15}{0.7,0.15}; \end{tikzpicture} = \frac{1}{n!} \sum_{\sigma \in \fS_n} (-1)^{\ell(\sigma)} \begin{tikzpicture}[centerzero] \draw (-0.5,-0.6) -- (-0.5,0.6); \draw (0.5,-0.6) -- (0.5,0.6); \node at (0,0.4) {$\cdots$}; \node at (0,-0.4) {$\cdots$}; \filldraw[rounded corners, fill=white, draw=black] (-0.7,-0.15) rectangle (0.7,0.15); \node at (0,0) {$\sigma$}; \end{tikzpicture} \ , \end{equation} where the diagrams contain $n$ strings, $\fS_n$ is the symmetric group on $n$ letters, and $\ell(\sigma)$ is the length of the permutation $\sigma$. 
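The identities \cref{pomegranate} below rest on standard facts about these elements of $\kk \fS_n$: the symmetrizer and antisymmetrizer are orthogonal idempotents, and composing with a simple transposition fixes the former and negates the latter. Purely as an illustrative sanity check (independent of the graphical calculus, with all helper names ours), these facts can be verified in the group algebra with exact rational arithmetic; the following Python sketch does so for $n = 3$:

```python
from fractions import Fraction
from itertools import permutations

def compose(s, t):
    # Composition of permutations written as tuples: (s * t)(i) = s[t[i]].
    return tuple(s[i] for i in t)

def sign(s):
    # Sign of a permutation via its inversion count.
    inv = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inv % 2 else 1

def convolve(a, b):
    # Product in the group algebra kS_n; elements are dicts {permutation: coefficient}.
    out = {}
    for s, x in a.items():
        for t, y in b.items():
            st = compose(s, t)
            out[st] = out.get(st, Fraction(0)) + x * y
    return {s: c for s, c in out.items() if c != 0}

n = 3
group = list(permutations(range(n)))
sym = {s: Fraction(1, len(group)) for s in group}        # complete symmetrizer (white box)
ant = {s: Fraction(sign(s), len(group)) for s in group}  # complete antisymmetrizer (black box)
s1 = {tuple([1, 0] + list(range(2, n))): Fraction(1)}    # the simple transposition s_1

assert convolve(sym, sym) == sym                             # idempotent
assert convolve(ant, ant) == ant                             # idempotent
assert convolve(sym, s1) == sym                              # absorbs a crossing with sign +1
assert convolve(ant, s1) == {s: -c for s, c in ant.items()}  # absorbs a crossing with sign -1
assert convolve(sym, ant) == {}                              # orthogonality
```

Changing `n` checks the analogous identities in $\kk \fS_n$ for other small ranks.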
It then follows from \cref{venom,chess} that \begin{equation} \label{pomegranate} \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.2) arc(180:0:0.15) -- (0.15,-0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} = \capmor\, ,\quad \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.2) arc(180:0:0.15) -- (0.15,-0.35); \antbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} = 0,\quad \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0); \draw (0.15,-0.35) -- (0.15,0); \symbox{-0.25,-0.1}{0.25,0.1}; \draw (-0.15,0.1) -- (0.15,0.35); \draw (0.15,0.1) -- (-0.15,0.35); \end{tikzpicture} = \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \, ,\quad \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0); \draw (0.15,-0.35) -- (0.15,0); \antbox{-0.25,-0.1}{0.25,0.1}; \draw (-0.15,0.1) -- (0.15,0.35); \draw (0.15,0.1) -- (-0.15,0.35); \end{tikzpicture} = -\, \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \antbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \, ,\quad \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0); \draw (0.15,-0.35) -- (0.15,0); \symbox{-0.25,-0.1}{0.25,0.1}; \draw (-0.15,0.1) -- (0,0.25) -- (0,0.4); \draw (0.15,0.1) -- (0,0.25); \end{tikzpicture} = \mergemor\, ,\quad \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0); \draw (0.15,-0.35) -- (0.15,0); \antbox{-0.25,-0.1}{0.25,0.1}; \draw (-0.15,0.1) -- (0,0.25) -- (0,0.4); \draw (0.15,0.1) -- (0,0.25); \end{tikzpicture} = 0. 
\end{equation} It also follows from \cref{turvy} that \begin{equation} \label{ladderslip} \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \symbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-0.2,0.4) -- (-0.2,-0.6); \draw (0.2,0.4) -- (0.2,-0.6); \draw (-0.2,-0.35) -- (0.2,-0.35); \symbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} \ ,\qquad \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} = \begin{tikzpicture}[anchorbase] \draw (-0.2,0.4) -- (-0.2,-0.6); \draw (0.2,0.4) -- (0.2,-0.6); \draw (-0.2,-0.35) -- (0.2,-0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} \ . \end{equation} \begin{lem} \label{sqexplode} If $\cI$ is a tensor ideal of $\Tcat$ satisfying \cref{funf}, then the relation \cref{sqburst} holds in $\Tcat/\cI$. \end{lem} \begin{proof} By \cref{SUP,spinner}, the morphisms \cref{bigfive} give a basis for $(\Tcat/\cI)(\go^{\otimes 2},\go^{\otimes 2})$ and \cref{triangle} holds. Since $\sqmor$ is invariant under $\Rot$, we must have a relation in $\Tcat/\cI$ of the form \begin{equation} \label{sqbreak1} \sqmor = \beta_1 \left(\, \jail + \hourglass \, \right) + \beta_2 \left(\, \Hmor + \Imor\, \right) + \beta_3\, \crossmor\ . \end{equation} Attaching a symmetrizer to the bottom of the diagrams in \cref{magic} and using \cref{pomegranate} gives \begin{equation} \label{sqbreak2} 2\ \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \symbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} + \Imor = \frac{2 \alpha}{\delta+2} \left(\, \hourglass + 2\ \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture}\, \right). 
\end{equation} Attaching a $\Hmor$ to the top of the diagrams in \cref{sqbreak2} and using \cref{chess,triangle} gives \begin{equation} \label{sqbreak3} \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.2) -- (0.2,0.2); \draw (-0.2,0.4) -- (0.2,0.4); \symbox{-0.3,-0.2}{0.3,0}; \end{tikzpicture} = \frac{\alpha}{\delta+2} \left(\, \alpha\, \hourglass + 2\ \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \symbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture}\, \right) + \frac{\alpha(\delta-2)}{4(\delta+2)}\, \Imor \overset{\cref{sqbreak2}}{=} \frac{4 \alpha^2}{(\delta+2)^2}\, \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \, + \frac{(\delta+4) \alpha^2}{(\delta+2)^2}\, \hourglass\, + \frac{(\delta-6)\alpha}{4(\delta+2)}\, \Imor. \end{equation} On the other hand, attaching a symmetrizer to the bottom of \cref{sqbreak1} gives \begin{equation} \label{sqbreak4} \begin{aligned} \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.2) -- (0.2,0.2); \draw (-0.2,0.4) -- (0.2,0.4); \symbox{-0.3,-0.2}{0.3,0}; \end{tikzpicture} &= \beta_1 \left(\, \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture}\, + \hourglass \, \right) + \beta_2 \left(\, \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \symbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture}\, + \Imor \, \right) + \beta_3\, \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \\ &\overset{\mathclap{\cref{sqbreak2}}}{=}\ \left( \beta_1 + \beta_3 + \frac{2 \alpha 
\beta_2}{\delta+2} \right)\, \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \, + \left( \beta_1 + \frac{\alpha \beta_2}{\delta+2} \right) \hourglass \, + \frac{\beta_2}{2}\, \Imor. \end{aligned} \end{equation} Since the morphisms \cref{bigfive} are linearly independent, comparing \cref{sqbreak3,sqbreak4} gives \[ \frac{4 \alpha^2}{(\delta+2)^2} = \beta_1 + \beta_3 + \frac{2 \alpha}{\delta+2} \beta_2,\qquad \frac{(\delta+4)\alpha^2}{(\delta+2)^2} = \beta_1 + \frac{\alpha}{\delta+2} \beta_2,\qquad \frac{(\delta-6)\alpha}{4(\delta+2)} = \frac{1}{2}\beta_2. \] Solving this linear system for $\beta_1$, $\beta_2$, and $\beta_3$ gives the coefficients of relation \cref{sqburst}. \end{proof} \begin{lem} \label{pentexplode} If $\cI$ is a tensor ideal of $\Tcat$ satisfying \cref{cinco}, and $\delta \notin \{2,6,10\}$, then the relation \cref{pentburst} holds in $\Tcat/\cI$. \end{lem} \begin{proof} Since the pentagon is rotation invariant, the assumption \cref{cinco} implies that we must have a relation of the form \begin{equation} \label{pentbreak1} \begin{multlined} \gamma_0 \pentmor = \gamma_1 \left( \begin{tikzpicture}[anchorbase] \draw (-0.2,0) -- (0,0.25) -- (0.2,0); \draw (0,0.25) -- (0,0.4); \draw (-0.2,-0.25) -- (-0.2,0) -- (-0.3,0.4); \draw (0.2,-0.25) -- (0.2,0) -- (0.3,0.4); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (0.3,0.3) -- (0,0) -- (-0.3,0.3); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0,-0.15) -- (0.15,-0.3); \draw (0,-0.15) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.3,-0.3) -- (0.3,0.3); \draw (-0.3,0.3) -- (-0.15,-0.15); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0.3,-0.3); \end{tikzpicture} + 
\begin{tikzpicture}[anchorbase] \draw (0.3,-0.3) -- (-0.3,0.3); \draw (0.3,0.3) -- (0.15,-0.15); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (-0.3,-0.3); \end{tikzpicture} \right) + \gamma_2 \left( \begin{tikzpicture}[centerzero] \draw (-0.15,-0.3) -- (-0.15,-0.23) arc(180:0:0.15) -- (0.15,-0.3); \draw (-0.3,0.3) -- (0,0.08) -- (0.3,0.3); \draw (0,0.3) -- (0,0.08); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,0.3); \draw (0,0.3) -- (0.15,0) -- (0.3,0.3); \draw (0.15,0) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.2,-0.3) -- (0.2,0.3); \draw (0,0.3) -- (-0.15,0) -- (-0.3,0.3); \draw (-0.15,0) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.3,0.23) arc(180:360:0.15) -- (0,0.3); \draw (0.3,0.3) -- (0.15,0) -- (-0.2,-0.3); \draw (0.2,-0.3) -- (0.15,0); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.3,0.23) arc(360:180:0.15) -- (0,0.3); \draw (-0.3,0.3) -- (-0.15,0) -- (0.2,-0.3); \draw (-0.2,-0.3) -- (-0.15,0); \end{tikzpicture} \right) \\ + \gamma_3 \left( \begin{tikzpicture}[centerzero] \draw (0,0.3) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \draw (-0.2,0.3) -- (-0.2,0.25) arc(180:360:0.2) -- (0.2,0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=-45,in=70] (0.15,-0.3); \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=225,in=110] (-0.15,-0.3); \draw (0.3,0.3) -- (0,0) -- (-0.3,0.3); \draw (0,0) -- (0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.15,0.15) -- (0,0.3); \draw (-0.15,0.15) -- (0.15,-0.3); \draw (0.3,0.3) -- (-0.15,-0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.15,0.15) -- (0,0.3); \draw (0.15,0.15) -- (-0.15,-0.3); \draw (-0.3,0.3) -- (0.15,-0.3); \end{tikzpicture} \right). 
\end{multlined} \end{equation} (More precisely, given any relation involving the diagrams appearing in \cref{pentbreak1}, we can sum over its images under $\Rot^n$, $0 \le n \le 4$, to get a relation of the form \cref{pentbreak1}.) Let $\theta = \frac{\alpha(2-\delta)}{2(\delta+2)}$, so that $\trimor = \theta \mergemor$. Composing with $\mergemor$ on the rightmost two strings at the top of \cref{pentbreak1} gives \begin{multline*} \theta \gamma_0 \sqmor = \gamma_1 \left( (\theta + \alpha) \left( \Hmor + \Imor \right) + \sqmor \right) + \gamma_2 \left( \alpha \left(\, \jail + \hourglass\, \right) + \Hmor + \Imor \right) \\ + \gamma_3 \left( \Hmor + \Imor + 2\, \dotcross + \alpha \crossmor \right). \end{multline*} Thus, using \cref{magic} to eliminate $\dotcross$, we have \[ (\theta \gamma_0 - \gamma_1) \sqmor = \left( \alpha \gamma_2 + \frac{4\alpha}{\delta+2} \gamma_3 \right) \left(\, \jail + \hourglass \, \right) + \left( (\theta+\alpha) \gamma_1 + \gamma_2 - \gamma_3 \right) \left(\, \Hmor + \Imor\, \right) + \frac{\alpha(\delta+6)}{\delta+2} \gamma_3 \, \crossmor\ . \] Comparing to \cref{sqburst} and using the fact that the morphisms \cref{bigfive} are linearly independent, this gives \begin{align*} \frac{\alpha(\delta+14)}{2(\delta+2)^2} (\theta \gamma_0 - \gamma_1) &= \gamma_2 + \frac{4}{\delta+2} \gamma_3, \\ \frac{\alpha(\delta-6)}{2(\delta+2)} (\theta \gamma_0 - \gamma_1) &= (\theta+\alpha)\gamma_1 + \gamma_2 - \gamma_3, \\ \frac{3 \alpha (2-\delta)}{2(\delta+2)} (\theta \gamma_0 - \gamma_1) &= (\delta+6) \gamma_3. \end{align*} Assuming $\delta \ne 2$, this is equivalent to the linear system \begin{gather} \nonumber \gamma_2 + \frac{\delta+30}{3(\delta-2)} \gamma_3 = 0, \qquad \frac{\alpha(\delta+6)}{2(\delta+2)} \gamma_1 + \gamma_2 + \frac{\delta^2-3\delta-30}{3(\delta-2)} \gamma_3 = 0, \\ \label{pent1} \frac{3 \alpha (2-\delta)}{2(\delta+2)} \gamma_1 + (\delta+6) \gamma_3 = \frac{3 \alpha^2 (2-\delta)^2}{4(\delta+2)^2} \gamma_0.
\end{gather} Subtracting the first equation above from the second gives \[ \frac{\alpha(\delta+6)}{2(\delta+2)} \gamma_1 + \frac{(\delta+6)(\delta-10)}{3(\delta-2)} \gamma_3 = 0 \iff \gamma_3 = \frac{3 \alpha (2-\delta)}{2(\delta+2)(\delta-10)} \gamma_1, \] as long as $\delta \ne 6,10$. Combining with \cref{pent1} then gives \begin{multline*} \frac{3 \alpha (2-\delta)}{2(\delta+2)} \gamma_1 + \frac{3 \alpha (\delta+6)(2-\delta)}{2(\delta+2)(\delta-10)} \gamma_1 = \frac{3 \alpha^2 (2-\delta)^2}{4(\delta+2)^2} \gamma_0 \\ \implies -\frac{3\alpha(\delta-2)^2}{(\delta+2)(\delta-10)} \gamma_1 = \frac{3 \alpha^2 (2-\delta)^2}{4(\delta+2)^2} \gamma_0 \implies \gamma_1 = \frac{\alpha(10-\delta)}{4(\delta+2)} \gamma_0. \end{multline*} Thus \[ \gamma_3 = \frac{3 \alpha (2-\delta)}{2(\delta+2)(\delta-10)} \gamma_1 = \frac{3 \alpha^2 (\delta-2)}{8(\delta+2)^2} \gamma_0 \quad \text{and} \quad \gamma_2 = \frac{\delta+30}{3(2-\delta)} \gamma_3 = - \frac{\alpha^2 (\delta+30)}{8(\delta+2)^2} \gamma_0. \] We see that the relation \cref{pentbreak1} becomes trivial if $\gamma_0 = 0$, so we may assume $\gamma_0 \ne 0$. Then, dividing \cref{pentbreak1} by $\gamma_0$ gives the relation \cref{pentburst}. \details{ As a check, adding a $\capmor$ to the rightmost top two strings in \cref{pentburst} gives \[ \alpha \theta \mergemor = \gamma_1 (2\alpha + \theta) \mergemor + \gamma_2 (\delta + 2) \mergemor + 4 \gamma_3 \mergemor. \] So we should have \[ \alpha \theta = (2 \alpha + \theta)\gamma_1 +(\delta+2)\gamma_2 + 4 \gamma_3, \] that is, \[ \frac{\alpha^2(2-\delta)}{2(\delta+2)} = \frac{\alpha(3\delta+10)}{2(\delta+2)} \gamma_1 + (\delta+2) \gamma_2 + 4 \gamma_3.
\] Indeed, we have \begin{align*} \frac{\alpha(3\delta+10)}{2(\delta+2)} \gamma_1 + (\delta+2) \gamma_2 + 4 \gamma_3 &= \frac{\alpha(3\delta+10)}{2(\delta+2)} \frac{\alpha(10-\delta)}{4(\delta+2)} - (\delta+2) \frac{\alpha^2 (\delta+30)}{8(\delta+2)^2} + 4 \frac{3 \alpha^2 (\delta-2)}{8(\delta+2)^2} \\ &= \frac{\alpha^2 (16-4\delta^2)}{8(\delta+2)^2} = \frac{\alpha^2 (2-\delta)}{2(\delta+2)} \end{align*} as desired. } \end{proof} \begin{rem} When $\delta \in \{2,6,10\}$, there exist other solutions to the linear system appearing in the proof of \cref{pentexplode}. We are not sure of the role of these other categories in representation theory. Compare to \cref{zagreb}. \end{rem} \begin{proof}[Proof of \cref{demayo}] The theorem now follows immediately from \cref{SUP,sqexplode,pentexplode}. \end{proof} \begin{rem} When combined with \cref{meow}, \cref{demayo} implies that every pivotal symmetric monoidal category $\cC$ generated by a symmetric self-dual object $\go$ and a rotationally invariant symmetric morphism $\one \to \go^{\otimes 3}$, and with $\dim \cC(\one, \go^{\otimes n})$ equal to $1,0,1,1,5,15$ for $n=0,1,2,3,4,5$, respectively, is a quotient of $\Fcat$ for some value of $\delta$. Similar categories, with different conditions on the dimensions, were classified in \cite{MPS17}. The corresponding statement for the quantum $G_2$ link invariant is given in \cite[Th.~2.1]{Kup94}. \end{rem} \section{The Albert algebra and the Lie group of type $F_4$} In this section, we will develop some properties of the Albert algebra and the Lie group and Lie algebra of type $F_4$ that will be used in the sequel. For further details, we refer the reader to \cite[Ch.~16]{Ada96}. 
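The rational-function bookkeeping in the proofs of \cref{sqexplode,pentexplode} above is easy to get wrong, so a quick independent check is worthwhile before we proceed. The following sketch (ours, not part of the argument; it normalizes $\gamma_0 = 1$) verifies with exact rational arithmetic, at several sample values of $\alpha$ and $\delta$ avoiding the excluded cases, that the closed forms for $\gamma_1, \gamma_2, \gamma_3$ satisfy both the linear system and the consistency identity $\alpha\theta = (2\alpha+\theta)\gamma_1 + (\delta+2)\gamma_2 + 4\gamma_3$ from the preceding proof:

```python
# Exact rational check of the gamma coefficients from the proof of
# \cref{pentexplode}, normalizing gamma_0 = 1.  All formulas are the
# ones displayed in the proof; the variable names are ours.
from fractions import Fraction as F

def gammas(alpha, delta):
    g1 = alpha * (10 - delta) / (4 * (delta + 2))
    g3 = 3 * alpha**2 * (delta - 2) / (8 * (delta + 2) ** 2)
    g2 = -(alpha**2) * (delta + 30) / (8 * (delta + 2) ** 2)
    return g1, g2, g3

def residuals(alpha, delta):
    g1, g2, g3 = gammas(alpha, delta)
    theta = alpha * (2 - delta) / (2 * (delta + 2))
    # the three equations of the linear system
    r1 = g2 + (delta + 30) / (3 * (delta - 2)) * g3
    r2 = (alpha * (delta + 6) / (2 * (delta + 2)) * g1 + g2
          + (delta**2 - 3 * delta - 30) / (3 * (delta - 2)) * g3)
    r3 = (3 * alpha * (2 - delta) / (2 * (delta + 2)) * g1 + (delta + 6) * g3
          - 3 * alpha**2 * (2 - delta) ** 2 / (4 * (delta + 2) ** 2))
    # cap-consistency check from the details environment
    r4 = (2 * alpha + theta) * g1 + (delta + 2) * g2 + 4 * g3 - alpha * theta
    return r1, r2, r3, r4

all_zero = all(
    r == 0
    for alpha in (F(1), F(7, 3), F(-2))
    for delta in (F(3), F(26), F(-5))   # avoid delta in {2, 6, 10, -2}
    for r in residuals(alpha, delta)
)
assert all_zero
```

Since each residual is a fixed rational function of $\alpha$ and $\delta$, its vanishing at several generic sample points gives strong (though of course not conclusive) evidence for the identities.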
Let \[ A = \left\{ \begin{pmatrix} \lambda_1 & x_3 & \bar{x}_2 \\ \bar{x}_3 & \lambda_2 & x_1 \\ x_2 & \bar{x}_1 & \lambda_3 \end{pmatrix} : \lambda_i \in \R,\ x_i \in \OO \right\} \] denote the set of $3 \times 3$ self-adjoint matrices over the octonions $\OO$, equipped with the bilinear operation \[ a \circ b := \frac{1}{2}(ab+ba),\quad a,b \in A, \] where the juxtaposition $ab$ denotes usual matrix multiplication. Thus $A$ is one of the three real Albert algebras. Note that this algebra is commutative and unital, but \emph{not} associative. We have $\dim_\R(A) = 27$. Eventually, we will be interested in the complexification $\C \otimes_\R A$, which is the unique simple exceptional complex Jordan algebra, up to isomorphism. However, since many of our preliminary arguments are valid over $\R$, we state them in that setting. Let $\tr \colon A \to \R$ denote the trace map, so that \begin{equation} \label{Atrace} \tr \begin{pmatrix} \lambda_1 & x_3 & \bar{x}_2 \\ \bar{x}_3 & \lambda_2 & x_1 \\ x_2 & \bar{x}_1 & \lambda_3 \end{pmatrix} = \lambda_1 + \lambda_2 + \lambda_3. \end{equation} For $a \in A$, let \[ L_a \colon A \to A,\quad b \mapsto a \circ b, \] denote the $\R$-linear map given by left multiplication by $a$. \begin{lem} For $a \in A$, we have \begin{equation} \label{slime} \tr(a) = \tfrac{1}{9} \Tr(L_a), \end{equation} where $\Tr(L_a)$ denotes the usual trace of the linear operator $L_a$ on the $27$-dimensional real vector space $A$. \end{lem} \begin{proof} Let $E_{ij}$ denote the matrix with a $1$ in the $(i,j)$-entry and all other entries equal to zero. Since both sides of \cref{slime} are $\R$-linear in $a$, it suffices to consider the cases where $a = x E_{ij} + \bar{x} E_{ji}$ for $x \in \OO$, $1 \le i,j \le 3$. Consider the basis of $A$ given by the elements \[ y E_{kl} + \bar{y} E_{lk},\quad k \le l, \] where $y$ runs over a basis of $\OO$ if $k \ne l$, and $y=\frac{1}{2}$ if $k=l$.
Now \[ L_a(y E_{kl} + \bar{y} E_{lk}) = \frac{1}{2} \left( \delta_{jk} xy E_{il} + \delta_{jl} x\bar{y} E_{ik} + \delta_{ik} \bar{x}y E_{jl} + \delta_{il} \bar{x}\bar{y} E_{jk} \right). \] We see that the $yE_{kl} + \bar{y}E_{lk}$ component of this is zero unless $i=j=k$ or $i=j=l$. Thus $\Tr(L_a)$ is zero unless $a$ is a diagonal matrix. On the other hand, if $a = \lambda E_{ii}$, $\lambda \in \R$, then $L_a$ acts as multiplication by $\frac{\lambda}{2}(\delta_{ik} + \delta_{il})$ on the subspace $x E_{kl} + \bar{x} E_{lk}$, $x \in \OO$, $k \ne l$, and multiplication by $\lambda$ on the subspace $\R E_{ii}$. Thus $\Tr(L_a) = (1 + \frac{1}{2}(16)) \lambda = 9 \lambda$. \end{proof} Let $G$ denote the group of algebra automorphisms of $A$. Thus $G$ is the compact connected real Lie group of type $F_4$. \begin{lem} \label{versa} We have $\tr(ga) = \tr(a)$ for all $a \in A$ and $g \in G$. \end{lem} \begin{proof} For $a \in A$ and $g \in G$, we have \[ (g L_a g^{-1})(b) = g(a \circ (g^{-1}b)) = (ga) \circ b = L_{ga}(b). \] Thus $\tr(ga) = \frac{1}{9} \Tr(L_{ga}) = \frac{1}{9} \Tr (g L_a g^{-1}) = \frac{1}{9} \Tr(L_a) = \tr(a)$. \end{proof} \begin{cor} \label{squirrel} The symmetric bilinear form \begin{equation} \label{Bdef} B \colon A \otimes A \to \R,\quad B(a \otimes b) := \tr(a \circ b), \end{equation} is nondegenerate and $G$-invariant. \end{cor} We will sometimes write $B(a,b)$ for $B(a \otimes b)$. \begin{proof} Direct computation shows that, if $a$ is the matrix appearing in \cref{Atrace}, then \[ B(a, a) = \sum_{i=1}^3 \left( \lambda_i^2 + 2 \|x_i\|^2 \right), \] which is nonzero when $a \ne 0$. Hence $B$ is nondegenerate. Since \[ B(ga, gb) = \tr((ga) \circ (gb)) = \tr(g(a \circ b)) = \tr(a \circ b) = B(a \otimes b), \] we also see that $B$ is $G$-invariant. \end{proof} It follows from \cref{squirrel} that we have a decomposition of $G$-modules \begin{equation} \label{rabbit} A = \R 1_A \oplus V_\R,\quad V_\R := \ker(\tr).
\end{equation} Then $V_\R$ is the $26$-dimensional irreducible $G$-module (\cite[Cor.~16.2]{Ada96}). Let \begin{equation} \label{blueberry} \pi \colon A \to V_\R,\quad \pi(a) = a - \frac{1}{3} \tr(a) 1_A \end{equation} be the projection along the decomposition \cref{rabbit}. For $x \in \OO$, let $\RP(x)$ denote its real part. It is straightforward to verify that \begin{equation} \label{crayon} \RP(xy) = \RP(yx),\quad \RP((xy)z) = \RP(x(yz)),\qquad x,y,z \in \OO. \end{equation} For $X \in \Mat_{3 \times 3}(\OO)$, let $\tr_\R(X) = \RP(\tr(X))$. \begin{lem} For $X,Y,Z \in \Mat_{3 \times 3}(\OO)$, we have \begin{equation} \label{trip} \tr_\R(XY) = \tr_\R(YX) \quad \text{and} \quad \tr_\R((XY)Z) = \tr_\R(X(YZ)). \end{equation} \end{lem} \begin{proof} Since both sides of both equalities to be proved are $\R$-linear in $X$, $Y$, and $Z$, it suffices to consider the case where $X = x E_{ij}$, $Y = y E_{kl}$, and $Z = z E_{mn}$, for $x,y,z \in \OO$. For the first equality, we have \[ \tr_\R(XY) = \delta_{jk} \delta_{il} \RP(xy) \overset{\cref{crayon}}{=} \delta_{jk} \delta_{il} \RP(yx) = \tr_\R(YX). \] For the second equality, we have \[ \tr_\R((XY)Z) = \delta_{jk} \delta_{lm} \delta_{in} \RP((xy)z) \overset{\cref{crayon}}{=} \delta_{jk} \delta_{lm} \delta_{in} \RP(x(yz)) = \tr_\R(X(YZ)). \qedhere \] \end{proof} \begin{lem} We have \begin{equation} \label{cedar} \tr((a \circ b) \circ c) = \tr(a \circ (b \circ c)),\quad a,b,c \in A. \end{equation} \end{lem} \begin{proof} For $a,b,c \in A$, we have \begin{multline*} 4 \tr((a \circ b) \circ c) = 4 \tr_\R((a \circ b) \circ c) = \tr_\R((ab)c + (ba)c + c(ab) + c(ba)) \\ \overset{\cref{trip}}{=} \tr_\R(a(bc) + a(cb) + (bc)a + (cb)a) = 4 \tr_\R(a \circ (b \circ c)) = 4 \tr(a \circ (b \circ c)). \qedhere \end{multline*} \end{proof} \begin{lem} For $a \in V_\R$, we have \begin{equation} \label{boysenberry} \pi(\pi(a \circ a) \circ a) = \tfrac{1}{6} \tr(a \circ a) a.
\end{equation} \end{lem} \begin{proof} Since $\tr(a)=0$, \cite[Th.~16.6(iii)]{Ada96} (which is essentially the Cayley--Hamilton theorem for $A$) implies that \[ 0 = (a \circ a) \circ a - \tfrac{1}{2} \tr(a \circ a) a - \tfrac{1}{3} \tr((a \circ a) \circ a) 1_A \overset{\cref{blueberry}}{=} \pi((a \circ a) \circ a) - \tfrac{1}{2} \tr(a \circ a) a. \] Now, since $\pi(a)=a$, we have \[ \pi((a \circ a) \circ a) \overset{\cref{blueberry}}{=} \pi(\pi(a \circ a) \circ a) + \tfrac{1}{3} \tr(a \circ a) a. \] The identity \cref{boysenberry} now follows. \end{proof} \begin{cor} For $a,b,c \in V_\R$, we have \begin{equation} \label{mango} \pi(\pi(a \circ b) \circ c) + \pi(\pi(b \circ c) \circ a) + \pi(\pi(a \circ c) \circ b) = \tfrac{1}{6} \left( \tr(b \circ c) a + \tr(a \circ b) c + \tr(a \circ c) b \right). \end{equation} \end{cor} \begin{proof} This is the polarization of the identity \cref{boysenberry}. That is, we replace $a$ in \cref{boysenberry} by $\lambda_a a + \lambda_b b + \lambda_c c$, expand, and take the $\lambda_a \lambda_b \lambda_c$ terms. \end{proof} Fix a basis $\B_V$ of $V_\R$. By \cref{squirrel}, there exists a dual basis $\B_V^\vee = \{b^\vee : b \in \B_V\}$ defined by \[ \tr(a^\vee \circ b) = \delta_{a,b},\quad a,b \in \B_V. \] We can extend this to a basis $\B_A := \B_V \sqcup \{1_A\}$ of $A$, with dual basis $\B_V^\vee \sqcup \{\frac{1}{3} 1_A\}$, i.e.\ $1_A^\vee = \frac{1}{3} 1_A$. Note that the elements $\sum_{b \in \B_V} b \otimes b^\vee \in V_\R \otimes V_\R$ and $\sum_{b \in \B_A} b \otimes b^\vee \in A \otimes A$ are both independent of the choice of bases. \begin{lem} For $a \in A$, we have \begin{equation} \label{teleport} \sum_{b \in \B_A} a \circ b \otimes b^\vee = \sum_{b \in \B_A} b \otimes b^\vee \circ a.
\end{equation} \end{lem} \begin{proof} We have \[ \sum_{b \in \B_A} a \circ b \otimes b^\vee = \sum_{b,c \in \B_A} \tr(c^\vee \circ (a \circ b)) c \otimes b^\vee \overset{\cref{cedar}}{=} \sum_{b,c \in \B_A} c \otimes \tr((c^\vee \circ a) \circ b) b^\vee = \sum_{c \in \B_A} c \otimes c^\vee \circ a. \qedhere \] \end{proof} Let $\fg = \C \otimes_\R \fg_\R$ be the complexification of the Lie algebra $\fg_\R$ of $G$, and let $V = \C \otimes_\R V_\R$ be the corresponding natural $\fg$-module. Let $\fg$-mod denote the category of finite-dimensional $\fg$-modules. We continue to denote by $\tr$ and $B$ the complexification of the maps \cref{Atrace,Bdef}. Then $\B_V$ is also a $\C$-basis of $V$ with dual basis $\B_V^\vee$. We will continue to use the bar notation $\bar{\ }$ to denote the conjugation of the octonions, extended to their complexification by $\C$-linearity. We conclude this section by recalling some basic facts about the representation theory of $\fg$. Consider the following labeling of the nodes of the Dynkin diagram of type $F_4$: \[ \begin{tikzpicture}[centerzero] \draw (0,0) -- (1,0); \draw (2,0) -- (3,0); \draw[style=double,double distance=2pt] (1,0) -- (2,0); \draw[style=double,double distance=2pt,-{Classical TikZ Rightarrow[length=3mm,width=4mm]}] (1,0) -- (1.65,0); \filldraw (0,0) circle (2pt) node[anchor=south] {$1$}; \filldraw (1,0) circle (2pt) node[anchor=south] {$2$}; \filldraw (2,0) circle (2pt) node[anchor=south] {$3$}; \filldraw (3,0) circle (2pt) node[anchor=south] {$4$}; \end{tikzpicture} \] Let $\omega_1,\omega_2,\omega_3,\omega_4$ denote the corresponding fundamental weights. For a dominant integral weight $\lambda$, let $V_\lambda$ denote the simple $\fg$-module of highest weight $\lambda$. In particular $V = V_{\omega_4}$, while $V_{\omega_1}$ is the adjoint representation. 
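The identities \cref{slime,cedar,boysenberry} established above are concrete enough to be checked numerically. The sketch below is ours and not part of the development: it builds the octonions by Cayley--Dickson doubling of the quaternions, realizes $A$ as $3 \times 3$ octonionic matrices, and tests the three identities on random elements.

```python
# Numerical check of tr(a) = (1/9) Tr(L_a), associativity of the trace form,
# and pi(pi(a.a).a) = (1/6) tr(a.a) a, on random elements of the real Albert
# algebra.  Octonions are 8-tuples; the construction is our own sketch.
import random

def qmul(p, q):  # Hamilton quaternion product
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)

def qconj(p):
    return (p[0], -p[1], -p[2], -p[3])

def omul(x, y):  # Cayley-Dickson: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    a, b, c, d = x[:4], x[4:], y[:4], y[4:]
    lo = tuple(s - t for s, t in zip(qmul(a, c), qmul(qconj(d), b)))
    hi = tuple(s + t for s, t in zip(qmul(d, a), qmul(b, qconj(c))))
    return lo + hi

def oconj(x):
    return (x[0],) + tuple(-t for t in x[1:])

O0 = (0.0,) * 8

def mat_mul(X, Y):
    return [[tuple(sum(v) for v in zip(*(omul(X[i][k], Y[k][j]) for k in range(3))))
             for j in range(3)] for i in range(3)]

def mat_comb(c1, X, c2, Y):  # c1*X + c2*Y entrywise
    return [[tuple(c1*s + c2*t for s, t in zip(X[i][j], Y[i][j]))
             for j in range(3)] for i in range(3)]

def jordan(X, Y):
    return mat_comb(0.5, mat_mul(X, Y), 0.5, mat_mul(Y, X))

def tr(X):
    return X[0][0][0] + X[1][1][0] + X[2][2][0]

def rand_albert():  # random self-adjoint 3x3 octonionic matrix
    X = [[O0] * 3 for _ in range(3)]
    for i in range(3):
        X[i][i] = (random.uniform(-1, 1),) + (0.0,) * 7
    for i in range(3):
        for j in range(i + 1, 3):
            x = tuple(random.uniform(-1, 1) for _ in range(8))
            X[i][j], X[j][i] = x, oconj(x)
    return X

random.seed(0)
IDENT = [[(float(i == j),) + (0.0,) * 7 for j in range(3)] for i in range(3)]

def basis():  # B-orthogonal basis of A with squared norms
    for i in range(3):
        E = [[O0] * 3 for _ in range(3)]
        E[i][i] = (1.0,) + (0.0,) * 7
        yield E, 1.0
    for i in range(3):
        for j in range(i + 1, 3):
            for k in range(8):
                x = tuple(float(t == k) for t in range(8))
                E = [[O0] * 3 for _ in range(3)]
                E[i][j], E[j][i] = x, oconj(x)
                yield E, 2.0

a = rand_albert()
TrLa = sum(tr(jordan(jordan(a, b), b)) / nrm for b, nrm in basis())
err_slime = abs(TrLa - 9.0 * tr(a))

b, c = rand_albert(), rand_albert()
err_cedar = abs(tr(jordan(jordan(a, b), c)) - tr(jordan(a, jordan(b, c))))

def pi(X):  # projection onto the trace-zero part
    return mat_comb(1.0, X, -tr(X) / 3.0, IDENT)

v = pi(rand_albert())
lhs = pi(jordan(pi(jordan(v, v)), v))
rhs = mat_comb(tr(jordan(v, v)) / 6.0, v, 0.0, IDENT)
err_boysen = max(abs(s - t) for i in range(3) for j in range(3)
                 for s, t in zip(lhs[i][j], rhs[i][j]))

assert max(err_slime, err_cedar, err_boysen) < 1e-9
```

The operator trace $\Tr(L_a)$ is computed here by summing $B(a \circ b, b)/B(b,b)$ over a $B$-orthogonal basis, which also exercises the orthogonality claims implicit in the proofs above.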
We have tensor product decompositions \begin{align} \label{2decomp} V^{\otimes 2} &= V_0 \oplus V_{\omega_1} \oplus V_{\omega_3} \oplus V_{\omega_4} \oplus V_{2 \omega_4}, \\ \label{3decomp} V^{\otimes 3} &= V_0 \oplus V_{\omega_1}^{\oplus 2} \oplus V_{\omega_2} \oplus V_{\omega_3}^{\oplus 4} \oplus V_{\omega_4}^{\oplus 5} \oplus V_{\omega_1+\omega_4}^{\oplus 3} \oplus V_{\omega_3+\omega_4}^{\oplus 2} \oplus V_{2\omega_4}^{\oplus 3} \oplus V_{3 \omega_4}. \end{align} This follows, for example, from the table given in \cite[Ch.~11,~Table~7]{MPR90}. By Schur's lemma, we thus have \begin{equation} \label{measure} \dim \Hom_\fg(V^{\otimes 2}, V^{\otimes 2}) = 5 \quad \text{and} \quad \dim \Hom_\fg(V^{\otimes 3}, V^{\otimes 2}) = 15. \end{equation} These dimensions appear in the assumption \cref{cinco} of \cref{demayo}. \section{The functor\label{sec:functor}} In this section we describe a natural functor from the category $\Fcat$ to the category $\fg$-mod of finite-dimensional modules over the complex Lie algebra $\fg$ of type $F_4$. We do this by first defining a functor from $\Tcat$, and then showing that it factors through $\Fcat$. Throughout this section we work over the field $\kk = \C$. \begin{theo} \label{magneto} There is a unique monoidal functor \[ \Phi \colon \Tcat_{7/3,26} \to \fg\md \] given on objects by $\go \mapsto V$ and on morphisms by \begin{align} \Phi(\mergemor) &\colon V \otimes V \to V,& a \otimes b &\mapsto \pi(a \circ b), \\ \Phi(\crossmor) &\colon V \otimes V \to V \otimes V,& a \otimes b &\mapsto b \otimes a, \\ \Phi(\cupmor) &\colon \C \to V \otimes V,& 1 &\mapsto \sum_{b \in \B_V} b \otimes b^\vee, \\ \Phi(\capmor) &\colon V \otimes V \to \C,& a \otimes b &\mapsto B(a \otimes b) = \tr(a \circ b). \end{align} Furthermore, \begin{equation} \label{Gsplit} \Phi \left( \splitmor \right) \colon V \to V \otimes V,\qquad a \mapsto \sum_{b \in \B_V} b \otimes \pi (b^\vee \circ a) = \sum_{b \in \B_V} \pi(a \circ b) \otimes b^\vee.
\end{equation} \end{theo} \begin{proof} We must verify that $\Phi$ respects the defining relations in \cref{Tdef}. The verification of the relations \cref{venom} and the first two equalities in \cref{vortex} is straightforward. For the third equality in \cref{vortex}, we first use \cref{teleport} to see that, for $a \in V$, \[ \sum_{b \in \B_V} a \circ b \otimes b^\vee + a \otimes \tfrac{1}{3} = \sum_{b \in \B_V} b \otimes b^\vee + \tfrac{1}{3} \otimes a. \] Applying $\pi \otimes \pi$ yields \[ \sum_{b \in \B_V} \pi(a \circ b) \otimes b^\vee = \sum_{b \in \B_V} b \otimes \pi(b^\vee \circ a). \] Thus, for $a \in V$, we have \[ \Phi \left( \begin{tikzpicture}[anchorbase] \draw (-0.4,0.2) to[out=down,in=180] (-0.2,-0.2) to[out=0,in=225] (0,0); \draw (0,0) -- (0,0.2); \draw (0.3,-0.3) -- (0,0); \end{tikzpicture} \right) (a) = \sum_{b \in \B_V} b \otimes \pi (b^\vee \circ a) = \sum_{b \in \B_V} \pi(a \circ b) \otimes b^\vee = \Phi \left( \begin{tikzpicture}[anchorbase] \draw (0.4,0.2) to[out=down,in=0] (0.2,-0.2) to[out=180,in=-45] (0,0); \draw (0,0) -- (0,0.2); \draw (-0.3,-0.3) -- (0,0); \end{tikzpicture} \right) (a). \] This shows that $\Phi$ preserves the third equality in \cref{vortex} and that it satisfies \cref{Gsplit}. For the fourth equality in \cref{vortex}, we compute \[ \Phi \left( \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,-0.1) arc(180:0:0.2) -- (0.2,-0.3); \draw (-0.3,0.3) \braiddown (0,-0.3); \end{tikzpicture} \right) (a \otimes b \otimes c) = B(a \otimes c) b = \Phi \left( \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,-0.1) arc(180:0:0.2) -- (0.2,-0.3); \draw (0.3,0.3) \braiddown (0,-0.3); \end{tikzpicture} \right) (a \otimes b \otimes c). \] The fact that $\Phi$ respects the first two relations in \cref{chess} follows immediately from the fact that $a \circ b = b \circ a$ for $a,b \in V$. Now consider the third relation in \cref{chess}. 
Since $V$ is an irreducible $\fg$-module, there exists $\alpha \in \C$ such that \[ \Phi \left( \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,-0.2) to[out=45,in=down] (0.15,0) to[out=up,in=-45] (0,0.2) -- (0,0.4); \draw (0,-0.2) to[out=135,in=down] (-0.15,0) to[out=up,in=-135] (0,0.2); \end{tikzpicture} \right) (a) = \alpha a \quad \text{for all } a \in V, \] and we must show that $\alpha = \tfrac{7}{3}$. It suffices to show this for some specific choice of $a$, so we choose $a = E_{11} - E_{22}$. Now, choose the basis \begin{align*} \B_V &= \{b_1 = \tfrac{1}{\sqrt{2}} E_{11} - \tfrac{1}{\sqrt{2}} E_{22},\, b_2 = \tfrac{1}{\sqrt{6}} E_{11} + \tfrac{1}{\sqrt{6}} E_{22} - \tfrac{2}{\sqrt{6}} E_{33}\} \sqcup \B_V',\quad \text{where} \\ \B_V' &= \{ \tfrac{1}{\sqrt{2}}(x E_{ij} + \bar{x} E_{ji}) : 1 \le i < j \le 3,\ x \in \B_\OO\}, \end{align*} and where $\B_\OO$ is the usual basis of $\OO$ (in particular, $\bar{x} x = x \bar{x} = 1$ for $x \in \B_\OO$). Then $\B_V$ is an orthonormal basis for $V$, that is, $b^\vee = b$ for all $b \in \B_V$. Furthermore, for $b = \frac{1}{\sqrt{2}}(xE_{ij} + \bar{x}E_{ji})$, with $x \in \B_\OO$ and $1 \le i < j \le 3$, we have \[ (a \circ b) \circ b^\vee = \tfrac{1}{2}(\delta_{i1} - \delta_{i2} - \delta_{j2}) b \circ b = \tfrac{1}{4}(\delta_{i1} - \delta_{i2} - \delta_{j2})(E_{ii} + E_{jj}). \] Therefore, \begin{multline*} \sum_{b \in \B_V} (a \circ b) \circ b^\vee = (a \circ b_1) \circ b_1 + (a \circ b_2) \circ b_2 + \sum_{b \in \B_V'} (a \circ b) \circ b \\ = \tfrac{1}{2}(E_{11}-E_{22}) + \tfrac{1}{6}(E_{11}-E_{22}) + 2 \sum_{i<j} (\delta_{i1}-\delta_{i2}-\delta_{j2}) (E_{ii}+E_{jj}) = \tfrac{8}{3}a.
\end{multline*} Thus we have \begin{multline*} \Phi \left( \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,-0.2) to[out=45,in=down] (0.15,0) to[out=up,in=-45] (0,0.2) -- (0,0.4); \draw (0,-0.2) to[out=135,in=down] (-0.15,0) to[out=up,in=-135] (0,0.2); \end{tikzpicture} \right) (a) = \sum_{b \in \B_V} \pi \left( \pi(a \circ b) \circ b^\vee \right) \overset{\cref{blueberry}}{=} \sum_{b \in \B_V} \pi((a \circ b) \circ b^\vee) - \tfrac{1}{3} \sum_{b \in \B_V} B(a \otimes b) \pi(b^\vee) \\ = \sum_{b \in \B_V} \pi((a \circ b) \circ b^\vee) - \tfrac{1}{3} \pi(a) = \tfrac{8}{3} a - \tfrac{1}{3} a = \tfrac{7}{3} a, \end{multline*} as desired. That $\Phi$ respects the fourth equality in \cref{chess} follows immediately from the fact that $\dim_\C V = 26$. Since $\Phi \left( \lollydrop \right)$ is a homomorphism of $\fg$-modules from the trivial module to $V$, it must be zero by Schur's lemma, proving that $\Phi$ respects the fifth equality in \cref{chess}. \end{proof} For a linear category $\cC$, let $\Kar(\cC)$ denote its additive Karoubi envelope. Thus, objects of $\Kar(\cC)$ are pairs $(X,e)$, where $X$ is an object in the additive envelope $\Add(\cC)$ of $\cC$, and $e \in \cC(X,X)$ is an idempotent endomorphism. Morphisms in $\Kar(\cC)$ are given by \[ \Kar(\cC) \big( (X,e),(X',e') \big) = e' \Add(\cC)(X,X') e. \] Composition is as in $\cC$. \begin{prop} \label{FunctorFull} The functor $\Phi$ is full. \end{prop} \begin{proof} We follow the method of the proof of \cite[Th.~5.1]{Kup96}. Since the category $\fg$-mod is idempotent complete, we have an induced functor \begin{equation} \label{river} \Kar(\Tcat_{7/3,26}) \to \fg\md. \end{equation} Let $\cC$ be the image of this functor. Then $\cC$ is a rigid symmetric monoidal category. We claim that $\End_\cC(V^{\otimes n})$ is a semisimple algebra for all $n \ge 0$. 
Indeed, consider the conjugate-linear contravariant monoidal endofunctor $\Xi$ of $\Tcat$ determined on objects by $\go \mapsto \go$ and on morphisms by \[ \mergemor \mapsto \splitmor,\quad \crossmor \mapsto \crossmor,\quad \cupmor \mapsto \capmor,\quad \capmor \mapsto \cupmor. \] Intuitively, $\Xi$ is given by reflecting diagrams in a horizontal axis and taking the complex conjugate of all coefficients. Then $\Phi$ intertwines $\Xi$ with the hermitian adjoint. It follows that $\End_\cC(V^{\otimes n})$ is closed under hermitian adjoint, and hence is semisimple. Thus $\cC$ satisfies the hypotheses of the Tannaka--Krein duality theorem \cite[p.~177]{Kir76}, and must be the category of finite-dimensional representations of some compact group $H$. Since all morphisms of $\cC$ are homomorphisms of $G$-modules, we have $G \subseteq H$. On the other hand, $G$ is precisely the group of automorphisms of $V$ preserving $\Phi(\mergemor)$ and $\Phi(\capmor)$. Thus $G=H$ and so \cref{river} is full. Viewing $\Tcat$ as a full subcategory of $\Kar(\Tcat)$ in the usual way, we conclude that $\Phi$ is full. \end{proof} \begin{theo} \label{baja} The functor $\Phi$ from \cref{magneto} factors through $\Fcat_{7/3, 26}$. \end{theo} \begin{proof} Let $\cI$ be the kernel of the functor $\Phi$. Then $\cI$ is a tensor ideal of $\Tcat$, and we must show that the relations \cref{magic,sqburst,pentburst} are satisfied in $\Tcat/\cI$. By \cref{FunctorFull,measure}, it follows that condition \cref{cinco} is satisfied. Then the theorem follows from \cref{demayo}. \end{proof} \begin{rem} \label{prestige} It is also possible to give a more direct proof that the image of \cref{magic} under $\Phi$ holds in $\fg$-mod.
Bending the top right endpoint to the bottom of the diagrams by tensoring on the right with $\go$ and attaching a cup to the two rightmost endpoints at the top of the diagram (an operation which is invertible by the first relation in \cref{vortex}), we see that \cref{magic} is equivalent to \[ \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.4) -- (0,0) -- (0,0.3); \draw (0,-0.4) -- (-0.2,-0.2); \draw (0.4,-0.4) -- (0,0); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.4) -- (0,0) -- (0,0.3); \draw (0,-0.4) -- (0.2,-0.2); \draw (0.4,-0.4) -- (0,0); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (-0.4,-0.4) -- (0,0) -- (0,0.3); \draw (0.4,-0.4) -- (-0.2,-0.2); \draw (0,-0.4) -- (0.2,-0.2) -- (0,0); \end{tikzpicture} = \frac{1}{6} \left(\, \begin{tikzpicture}[anchorbase] \draw (0,-0.4) -- (0,-0.3) arc(180:0:0.2) -- (0.4,-0.4); \draw (-0.4,-0.4) \braidup (0,0.3); \end{tikzpicture} + \begin{tikzpicture}[anchorbase] \draw (0,-0.4) -- (0,-0.3) arc(0:180:0.2) -- (-0.4,-0.4); \draw (0.4,-0.4) \braidup (0,0.3); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,-0.1) arc(180:0:0.2) -- (0.2,-0.3); \draw (-0.3,0.3) \braiddown (0,-0.3); \end{tikzpicture} \, \right). \] The fact that $\Phi$ respects this relation follows from \cref{mango} and the fact that the operation $\circ$ on $A$ is commutative. Hence, we see that \cref{magic} essentially corresponds to the Cayley--Hamilton theorem for the Albert algebra. \end{rem} Since the category $\fg$-mod is idempotent complete, we have an induced functor \begin{equation} \label{lake} \Kar(\Phi) \colon \Kar \left( \Fcat_{7/3,26} \right) \to \fg\md. \end{equation} \begin{prop} \label{splat} The functor $\Kar(\Phi)$ of \cref{lake} is full and essentially surjective. \end{prop} \begin{proof} Fullness follows from \cref{FunctorFull}, so it remains to show that $\Kar(\Phi)$ is essentially surjective. 
If $\lambda = \sum_{i=1}^4 \lambda_i \omega_i$, with $\lambda_i \in \Z_{\ge 0}$, then $V_\lambda$ is the submodule of $\bigotimes_{i=1}^4 V_{\omega_i}^{\otimes \lambda_i}$ generated by the one-dimensional $\lambda$ weight space. Since the category $\fg$-mod is semisimple, this implies that $V_\lambda$ is a direct summand of $\bigotimes_{i=1}^4 V_{\omega_i}^{\otimes \lambda_i}$. Therefore, it suffices to show that the image of $\Kar(\Phi)$ contains the fundamental representations $V_{\omega_i}$ for $i \in \{1,2,3,4\}$. We see from \cref{2decomp} that $V_{\omega_1}$ and $V_{\omega_3}$ are contained in $V_{\omega_4}^{\otimes 2}$. It also follows from \cite[Ch.~11,~Table~7]{MPR90} that $V_{\omega_2}$ is contained in $V_{\omega_3} \otimes V_{\omega_4}$. \end{proof} \begin{cor} \label{SW} We have a surjective algebra homomorphism \[ \Fcat_{7/3,26}(\go^{\otimes k}, \go^{\otimes k}) \twoheadrightarrow \End_\fg(V^{\otimes k}),\quad k \in \N. \] \end{cor} \Cref{SW} implies that the endomorphism algebras of $\Fcat$ play the role in type $F_4$ that the group algebra of the symmetric group (or the oriented Brauer algebras if one includes the dual of the natural module) plays in type $A$ and that the Brauer algebras play in types $BCD$. We conclude this section with some conjectures. \begin{conj} \label{faithful} The functor $\Kar(\Phi)$ is faithful, and hence is an equivalence of categories. \end{conj} A string diagram built from the generating morphisms \cref{lego} via tensor product and composition can be viewed as a graph. Here we view $\mergemor$ as a trivalent vertex and $\crossmor$ as two edges (that is, we do \emph{not} view the crossing as a vertex). We say such a graph is \emph{component-planar} if its connected components are planar graphs. 
For example, \cref{bigfive} is a complete list of the $5$ acyclic component-planar graphs $\go^{\otimes 2} \to \go^{\otimes 2}$, and there are precisely $15$ acyclic component-planar graphs $\go^{\otimes 2} \to \go^{\otimes 3}$: \begin{equation} \label{brutal} \begin{tikzpicture}[anchorbase] \draw (-0.2,0) -- (0,0.25) -- (0.2,0); \draw (0,0.25) -- (0,0.4); \draw (-0.2,-0.25) -- (-0.2,0) -- (-0.3,0.4); \draw (0.2,-0.25) -- (0.2,0) -- (0.3,0.4); \end{tikzpicture} \ ,\ \begin{tikzpicture}[anchorbase] \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[anchorbase] \draw (0.3,0.3) -- (0,0) -- (-0.3,0.3); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0,-0.15) -- (0.15,-0.3); \draw (0,-0.15) -- (-0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[anchorbase] \draw (-0.3,-0.3) -- (0.3,0.3); \draw (-0.3,0.3) -- (-0.15,-0.15); \draw (0,0.3) -- (0.15,0.15); \draw (0,0) -- (0.3,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[anchorbase] \draw (0.3,-0.3) -- (-0.3,0.3); \draw (0.3,0.3) -- (0.15,-0.15); \draw (0,0.3) -- (-0.15,0.15); \draw (0,0) -- (-0.3,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (-0.15,-0.3) -- (-0.15,-0.23) arc(180:0:0.15) -- (0.15,-0.3); \draw (-0.3,0.3) -- (0,0.08) -- (0.3,0.3); \draw (0,0.3) -- (0,0.08); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,0.3); \draw (0,0.3) -- (0.15,0) -- (0.3,0.3); \draw (0.15,0) -- (0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0.2,-0.3) -- (0.2,0.3); \draw (0,0.3) -- (-0.15,0) -- (-0.3,0.3); \draw (-0.15,0) -- (-0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.3,0.23) arc(180:360:0.15) -- (0,0.3); \draw (0.3,0.3) -- (0.15,0) -- (-0.2,-0.3); \draw (0.2,-0.3) -- (0.15,0); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.3,0.23) 
arc(360:180:0.15) -- (0,0.3); \draw (-0.3,0.3) -- (-0.15,0) -- (0.2,-0.3); \draw (-0.2,-0.3) -- (-0.15,0); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0,0.3) -- (0,-0.15) -- (-0.15,-0.3); \draw (0,-0.15) -- (0.15,-0.3); \draw (-0.2,0.3) -- (-0.2,0.25) arc(180:360:0.2) -- (0.2,0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=-45,in=70] (0.15,-0.3); \draw (-0.3,0.3) -- (0,0) -- (0.3,0.3); \draw (0,0) -- (-0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0,0.3) to[out=225,in=110] (-0.15,-0.3); \draw (0.3,0.3) -- (0,0) -- (-0.3,0.3); \draw (0,0) -- (0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (-0.3,0.3) -- (-0.15,0.15) -- (0,0.3); \draw (-0.15,0.15) -- (0.15,-0.3); \draw (0.3,0.3) -- (-0.15,-0.3); \end{tikzpicture} \ ,\ \begin{tikzpicture}[centerzero] \draw (0.3,0.3) -- (0.15,0.15) -- (0,0.3); \draw (0.15,0.15) -- (-0.15,-0.3); \draw (-0.3,0.3) -- (0.15,-0.3); \end{tikzpicture} \ . \end{equation} These are the morphisms appearing on the right-hand side of \cref{pentburst}. \begin{conj} \label{pipe} For $m,n \in \N$, a basis for $\Fcat(\go^{\otimes m}, \go^{\otimes n})$ is given by the component-planar graphs whose cycles are all of length at least six. \end{conj} Recall from \cref{measure} that \[ \dim \Hom_\fg(V^{\otimes 2}, V^{\otimes 2}) = 5 \quad \text{and} \quad \dim \Hom_\fg(V^{\otimes 3}, V^{\otimes 2}) = 15. \] One can also count that there are $70$ component-planar graphs $\go^{\otimes 3} \to \go^{\otimes 3}$ whose cycles are all of length at least six, and that $\dim \Hom_\fg(V^{\otimes 3}, V^{\otimes 3}) = 70$ (using \cref{3decomp} and Schur's lemma). Thus, it would follow from \cref{FunctorFull,twirl,pipe} that the functor $\Phi$ induces isomorphisms \[ \Fcat(\go^{\otimes m}, \go^{\otimes n}) \xrightarrow{\cong} \Hom_\fg(V^{\otimes m}, V^{\otimes n}) \quad \text{for } m+n \le 6. \] This is related to \cref{faithful}. 
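The dimension counts $5$, $15$, and $70$ quoted above follow from Schur's lemma: for semisimple modules, the dimension of the Hom space is the sum over common simple constituents of the products of their multiplicities. The following is a short numerical sanity check, illustrative only and independent of the formal argument; the multiplicity tables are transcribed from the decompositions of $V^{\otimes 2}$ and $V^{\otimes 3}$ and should be treated here as input data.

```python
# Sanity check of the Hom-space dimensions 5, 15, 70 via Schur's lemma.
# Simple constituents are keyed by highest weight (l1, l2, l3, l4); the
# multiplicities below are transcribed from the tensor product
# decompositions of V^{(x)2} and V^{(x)3} and are assumptions here.

dims = {  # dimensions of the simple constituents
    (0, 0, 0, 0): 1,    (1, 0, 0, 0): 52,   (0, 1, 0, 0): 1274,
    (0, 0, 1, 0): 273,  (0, 0, 0, 1): 26,   (1, 0, 0, 1): 1053,
    (0, 0, 1, 1): 4096, (0, 0, 0, 2): 324,  (0, 0, 0, 3): 2652,
}

mult2 = {(0, 0, 0, 0): 1, (1, 0, 0, 0): 1, (0, 0, 1, 0): 1,
         (0, 0, 0, 1): 1, (0, 0, 0, 2): 1}

mult3 = {(0, 0, 0, 0): 1, (1, 0, 0, 0): 2, (0, 1, 0, 0): 1,
         (0, 0, 1, 0): 4, (0, 0, 0, 1): 5, (1, 0, 0, 1): 3,
         (0, 0, 1, 1): 2, (0, 0, 0, 2): 3, (0, 0, 0, 3): 1}

# consistency: total dimensions must be 26^2 and 26^3
assert sum(m * dims[lam] for lam, m in mult2.items()) == 26**2
assert sum(m * dims[lam] for lam, m in mult3.items()) == 26**3

def hom_dim(a, b):
    """dim Hom of two semisimple modules = sum of products of multiplicities."""
    return sum(m * b.get(lam, 0) for lam, m in a.items())

print(hom_dim(mult2, mult2), hom_dim(mult3, mult2), hom_dim(mult3, mult3))
# -> 5 15 70
```

The three printed values match the counts of component-planar graphs discussed above.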
\section{Fundamental modules\label{sec:fundamental}} Our goal in this final section is to describe the objects in $\Kar(\Fcat)$ sent, under the functor $\Kar(\Phi)$ of \cref{lake}, to the four fundamental $\fg$-modules. Some of our intermediate results will be valid for more general $\alpha$ and $\delta$. However, throughout this section we suppose that \[ \alpha \ne 0 \qquad \text{and} \qquad \delta \notin \{-10,-2,0\}. \] We continue to work over the field $\kk=\C$. Recall the definition of the antisymmetrizers in \cref{boxes}. Let \begin{gather*} e_0 = \frac{1}{\delta}\, \hourglass,\qquad e_1 = \frac{8}{\delta+10} \left( \begin{tikzpicture}[centerzero] \draw (-0.2,-0.5) -- (-0.2,0.5); \draw (0.2,-0.5) -- (0.2,0.5); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} + \frac{\delta+2}{4\alpha}\, \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} \right) ,\\ e_3 = \frac{\delta+2}{\delta+10} \left( \begin{tikzpicture}[centerzero] \draw (-0.2,-0.5) -- (-0.2,0.5); \draw (0.2,-0.5) -- (0.2,0.5); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} - \frac{2}{\alpha}\, \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} \right) ,\qquad e_4 = \frac{1}{\alpha} \Imor ,\qquad \tilde{e} = \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \symbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} - \frac{1}{\delta}\, \hourglass - \frac{1}{\alpha}\, \Imor. \end{gather*} \begin{lem} \label{sponge} We have $ef,fe \in \kk e$ for all \[ f \in \left\{ \jail\, ,\ \hourglass,\ \crossmor,\ \Imor,\ \Hmor \right\}, \qquad e \in \{e_0,e_1,e_3,e_4,\tilde{e}\}. \] \end{lem} \begin{proof} Since the given choices of $e$ and $f$ are invariant under rotation by $180\degree$, it suffices to show that $ef \in \kk e$. 
This is a straightforward computation. For example, \[ \jail \circ e_1 = e_1,\quad \hourglass \circ e_1 \overset{\cref{pomegranate}}{\underset{\cref{ladderslip}}{=}} 0,\quad \crossmor \circ e_1 \overset{\cref{pomegranate}}{\underset{\cref{ladderslip}}{=}} - e_1,\quad \Imor \circ e_1 \overset{\cref{pomegranate}}{\underset{\cref{ladderslip}}{=}} 0,\quad \Hmor \circ e_1 \overset{\cref{sqburst}}{\underset{\cref{pomegranate}}{=}} \frac{\alpha}{2} e_1 \] and \begin{gather*} \jail \circ \tilde{e} = \tilde{e},\quad \hourglass \circ \tilde{e} \overset{\cref{pomegranate}}{\underset{\cref{chess}}{=}} 0,\quad \crossmor \circ \tilde{e} \overset{\cref{pomegranate}}{\underset{\cref{chess}}{=}} \tilde{e},\quad \Imor \circ \tilde{e} \overset{\cref{pomegranate}}{\underset{\cref{chess}}{=}} 0, \\ \Hmor \circ \tilde{e} \overset{\cref{chess}}{\underset{\cref{triangle}}{=}} \frac{1}{2} \left( \Hmor + \dotcross \right) - \frac{\alpha}{\delta}\, \hourglass + \frac{\delta-2}{2(\delta+2)}\, \Imor \overset{\cref{magic}}{=} \frac{2\alpha}{\delta+2} \tilde{e}. \end{gather*} The computations for $e \in \{e_0,e_3,e_4\}$ are similar. \end{proof} \begin{lem} \label{neanderthal} We have a decomposition \begin{equation} \label{coals} 1_{\go \otimes \go} = e_0 + e_1 + e_3 + e_4 + \tilde{e} \end{equation} of $1_{\go \otimes \go}$ as a sum of pairwise orthogonal idempotents. \end{lem} \begin{proof} A straightforward computation verifies \cref{coals}. Next we show that $e_0,e_1,e_3,e_4$ are idempotent. We have $e_0^2=e_0$ and $e_4^2=e_4$ by the fourth and third relations in \cref{chess}, respectively. 
Next, \begin{multline*} e_1^2 = \frac{64}{(\delta+10)^2} \left( \begin{tikzpicture}[centerzero] \draw (-0.2,-0.5) -- (-0.2,0.5); \draw (0.2,-0.5) -- (0.2,0.5); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} + \frac{\delta+2}{4\alpha}\, \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} + \frac{(\delta+2)^2}{16 \alpha^2} \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.2) -- (0.2,0.2); \draw (-0.2,0.4) -- (0.2,0.4); \antbox{-0.3,-0.2}{0.3,0}; \end{tikzpicture} \right) \\ \overset{\mathclap{\cref{sqburst}}}{\underset{\cref{pomegranate}}{=}}\ \frac{64}{(\delta+10)^2} \left( \frac{\delta+10}{8} \begin{tikzpicture}[centerzero] \draw (-0.2,-0.5) -- (-0.2,0.5); \draw (0.2,-0.5) -- (0.2,0.5); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} + \frac{(\delta+10)(\delta+2)}{32\alpha}\, \begin{tikzpicture}[anchorbase] \draw (-0.2,-0.4) -- (-0.2,0.6); \draw (0.2,-0.4) -- (0.2,0.6); \draw (-0.2,0.35) -- (0.2,0.35); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} \right) = e_1. \end{multline*} Since \[ e_3 = \begin{tikzpicture}[centerzero] \draw (-0.15,-0.35) -- (-0.15,0.35); \draw (0.15,-0.35) -- (0.15,0.35); \antbox{-0.25,-0.1}{0.25,0.1}; \end{tikzpicture} \, - e_1, \] it then follows that $e_3^2=e_3$. Now, by \cref{sponge}, we have $e f \in \kk e \cap \kk f$ for $e,f \in \{e_0,e_1,e_3,e_4,\tilde{e}\}$. Since no two of the idempotents $e_0,e_1,e_3,e_4,\tilde{e}$ are scalar multiples of each other, it follows that they are orthogonal. \end{proof} \begin{rem} If we knew that the morphisms \cref{bigfive} spanned $\Fcat(\go^{\otimes 2}, \go^{\otimes 2})$, then it would follow from \cref{sponge} that \[ \dim_\kk \left( e \Fcat(\go^{\otimes 2},\go^{\otimes 2}) e \right) = 1 \] and hence that $e_0,e_1,e_3,e_4$ are primitive. For example, this would be the case if \cref{pipe} holds. 
\end{rem} Recall from \cref{sec:functor} our conventions for labeling of weights of $\fg$ and that $V = V_{\omega_4}$. \begin{theo} \label{hacky} Suppose $\alpha = \frac{7}{3}$ and $\delta = 26$. Then \[ \Kar(\Phi)(\go^{\otimes 2}, e_1) = V_{\omega_1},\quad \Kar(\Phi)(\go^{\otimes 2}, e_3) = V_{\omega_3},\quad \Kar(\Phi)(\go, 1_\go) = V_{\omega_4}, \quad \Kar(\Phi)(\go^{\otimes 2},\tilde{e}) = V_{2\omega_4}. \] \end{theo} \begin{proof} In $\fg$-mod, we have the tensor product decomposition \cref{2decomp} and dimensions \[ \dim V_0 = 1,\quad \dim V_{\omega_1} = 52,\quad \dim V_{\omega_3} = 273,\quad \dim V_{\omega_4} = 26,\quad \dim V_{2 \omega_4} = 324. \] See, for example, \cite[Ch.~6,~Table~2]{MPR90}. By \cref{neanderthal}, we also have the decomposition \[ V^{\otimes 2} \cong \Kar(\Phi)(\go^{\otimes 2},e_0) \oplus \Kar(\Phi)(\go^{\otimes 2}, e_1) \oplus \Kar(\Phi)(\go^{\otimes 2}, e_3) \oplus \Kar(\Phi)(\go^{\otimes 2}, e_4) \oplus \Kar(\Phi)(\go^{\otimes 2}, \tilde{e}). \] Thus, if we can show that the images of the given elements of $\Kar(\Fcat)$ have the expected dimensions, the theorem follows. Since the dimension of a $\fg$-module is the trace of its identity endomorphism, the dimension of $\Kar(\Phi)(\go^{\otimes n}, e)$ is the image under $\Phi$ of \[ \begin{tikzpicture}[centerzero] \draw (-0.7,-0.2) rectangle (0,0.2); \node at (-0.35,0) {$e$}; \draw (-0.2,0.2) arc(180:0:0.2) -- (0.2,-0.2) arc(360:180:0.2); \draw (-0.4,0.2) arc(180:0:0.4) -- (0.4,-0.2) arc(360:180:0.4); \draw (-0.6,0.2) arc(180:0:0.6) -- (0.6,-0.2) arc(360:180:0.6); \end{tikzpicture} \in \Fcat(\go,\go), \] where there are $n$ strands. (Here, and in what follows, we identify $\lambda 1_\one$ with $\lambda \in \kk$.) 
Note that \begin{gather*} \begin{tikzpicture}[centerzero] \draw (-0.2,0.1) arc(180:0:0.2) -- (0.2,-0.1) arc(360:180:0.2); \draw (-0.4,0.1) arc(180:0:0.4) -- (0.4,-0.1) arc(360:180:0.4); \antbox{-0.5,-0.1}{-0.1,0.1}; \end{tikzpicture} = \frac{1}{2}\, \begin{tikzpicture}[centerzero] \draw (-0.2,-0.1) -- (-0.2,0.1) arc(180:0:0.2) -- (0.2,-0.1) arc(360:180:0.2); \draw (-0.4,-0.1) -- (-0.4,0.1) arc(180:0:0.4) -- (0.4,-0.1) arc(360:180:0.4); \end{tikzpicture} - \frac{1}{2}\, \begin{tikzpicture}[centerzero] \draw (-0.4,-0.15) \braidup (-0.2,0.15) arc(180:0:0.15) -- (0.1,-0.15) arc(360:180:0.15); \draw (-0.2,-0.15) \braidup (-0.4,0.15) arc(180:0:0.35) -- (0.3,-0.15) arc(360:180:0.35); \end{tikzpicture} \overset{\cref{chess}}{=} \frac{\delta(\delta-1)}{2}, \\ \begin{tikzpicture}[anchorbase] \draw (-0.2,0.1) -- (-0.2,0.3) arc(180:0:0.2) -- (0.2,-0.1) arc(360:180:0.2); \draw (-0.4,0.1) -- (-0.4,0.3) arc(180:0:0.4) -- (0.4,-0.1) arc(360:180:0.4); \draw (-0.4,0.25) -- (-0.2,0.25); \antbox{-0.5,-0.1}{-0.1,0.1}; \end{tikzpicture} = \frac{1}{2}\, \begin{tikzpicture}[centerzero] \draw (0,0) circle(0.2); \draw (0,0) circle(0.4); \draw (-0.4,0) -- (-0.2,0); \end{tikzpicture} -\frac{1}{2}\!\!\! \begin{tikzpicture}[centerzero] \draw (0,0) to[out=60,in=90,looseness=1.5] (0.4,0) to[out=-90,in=-60,looseness=1.5] (0,0); \draw (0,0) to[out=135,in=90,looseness=2.5] (0.6,0) to[out=-90,in=-135,looseness=2.5] (0,0); \opendot{0,0}; \end{tikzpicture} \overset{\cref{chess}}{=} 0 - \frac{1}{2}\!\!\! \begin{tikzpicture}[centerzero] \draw (0,-0.1) -- (0,0.1) -- (0.1,0.2) to[out=45,in=90] (0.4,0) to[out=-90,in=-45] (0.1,-0.2) -- (0,-0.1); \draw (0,0.1) -- (-0.1,0.2) to[out=135,in=90,looseness=2] (0.6,0) to[out=-90,in=-135,looseness=2] (-0.1,-0.2) -- (0,-0.1); \end{tikzpicture} \overset{\cref{chess}}{=} - \frac{\alpha \delta}{2}. 
\end{gather*} Therefore, we have \begin{gather*} \dim (\go^{\otimes 2}, e_1) = \frac{8}{\delta+10} \left( \frac{\delta(\delta-1)}{2} - \frac{\delta (\delta+2)}{8} \right) = \frac{3 \delta (\delta-2)}{\delta+10} = 52, \\ \dim (\go^{\otimes 2}, e_3) = \frac{\delta+2}{\delta+10} \left( \frac{\delta(\delta-1)}{2} + \delta \right) = \frac{\delta(\delta+1)(\delta+2)}{2(\delta+10)} = 273. \end{gather*} It is also clear that $\dim(\go,1_\go) = \delta = 26$ and that $\dim(\go^{\otimes 2},e_0) = 1$. It then follows that $\dim(\go^{\otimes 2},\tilde{e}) = 26^2 - 1 - 52 - 273 - 26 = 324$. \end{proof} Note that \cref{hacky} does not describe an object in $\Kar(\Fcat)$ that is mapped to the second fundamental representation $V_{\omega_2}$. In $\fg$-mod, the lowest tensor power of $V$ in which $V_{\omega_2}$ appears is $V^{\otimes 3}$. Thus, we expect that there exists an idempotent $e_2 \in \Fcat(\go^{\otimes 3}, \go^{\otimes 3})$ such that $\Kar(\Phi)(\go^{\otimes 3},e_2) = V_{\omega_2}$. (Such an idempotent is guaranteed to exist if \cref{faithful} holds.) In fact, we can say a bit more. A computation similar to the ones in the proof of \cref{hacky} shows that \begin{equation} \begin{tikzpicture}[centerzero] \antbox{-0.7,-0.1}{-0.1,0.1}; \draw (-0.2,0.1) arc(180:0:0.2) -- (0.2,-0.1) arc(360:180:0.2); \draw (-0.4,0.1) arc(180:0:0.4) -- (0.4,-0.1) arc(360:180:0.4); \draw (-0.6,0.1) arc(180:0:0.6) -- (0.6,-0.1) arc(360:180:0.6); \end{tikzpicture} = \frac{\delta(\delta-1)(\delta-2)}{6} = 2600 = 1274 + 273 + 1053 \quad \text{when } \delta = 26. \end{equation} The tensor product decomposition of $V^{\otimes 3}$ is given in \cref{3decomp}. 
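The closed-form dimension formulas appearing in the proof above are easy to evaluate mechanically. The following is a small sketch using exact rational arithmetic at the specialization $\delta = 26$; it is purely a numerical sanity check and not part of the proof.

```python
from fractions import Fraction as F

d = F(26)  # the type-F4 specialization delta = 26

dim_e1 = 3 * d * (d - 2) / (d + 10)              # image of e_1
dim_e3 = d * (d + 1) * (d + 2) / (2 * (d + 10))  # image of e_3
dim_tilde = d**2 - 1 - dim_e1 - dim_e3 - d       # image of e-tilde
dim_asym3 = d * (d - 1) * (d - 2) / 6            # closed 3-strand antisymmetrizer

print(dim_e1, dim_e3, dim_tilde, dim_asym3)  # -> 52 273 324 2600
assert dim_asym3 == 1274 + 273 + 1053
```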
Since \begin{gather*} \dim V_0 = 1,\ \dim V_{\omega_1} = 52,\ \dim V_{\omega_2} = 1274,\ \dim V_{\omega_3} = 273,\ \dim V_{\omega_4} = 26,\\ \dim V_{\omega_1+\omega_4} = 1053,\ \dim V_{\omega_3+\omega_4} = 4096,\ \dim V_{2\omega_4} = 324, \text{ and } \dim V_{3\omega_4} = 2652, \end{gather*} (see, for example, \cite[Ch.~6,~Table~2]{MPR90}) the only submodule of $V^{\otimes 3}$ of dimension $2600$ is $V_{\omega_2} \oplus V_{\omega_3} \oplus V_{\omega_1+\omega_4}$. Thus, we expect that the antisymmetrizer $ \begin{tikzpicture}[centerzero] \draw (-0.2,-0.3) -- (-0.2,0.3); \draw (0,-0.3) -- (0,0.3); \draw (0.2,-0.3) -- (0.2,0.3); \antbox{-0.3,-0.1}{0.3,0.1}; \end{tikzpicture} $ decomposes as a sum of three orthogonal minimal idempotents, one of which is $e_2$. (This is guaranteed to happen if \cref{faithful} holds.) Unfortunately, the computations required to find this idempotent explicitly are unwieldy. \begin{lem} We have \[ \begin{tikzpicture}[centerzero] \draw (-0.6,-0.2) rectangle (-0.1,0.2); \node at (-0.35,0) {$e_1$}; \draw (0.6,-0.2) rectangle (0.1,0.2); \node at (0.35,0) {$e_1$}; \draw (-0.25,0.2) arc(180:0:0.25); \draw (0.25,-0.2) arc(360:180:0.25); \draw (-0.45,0.2) to[out=up,in=225] (0,0.7) -- (0,0.9); \draw (0.45,0.2) to[out=up,in=-45] (0,0.7); \draw (-0.45,-0.2) to[out=down,in=135] (0,-0.7) -- (0,-0.9); \draw (0.45,-0.2) to[out=down,in=45] (0,-0.7); \end{tikzpicture} = \frac{3 \alpha (2-\delta) (\delta - 26)}{4 (\delta + 10)^2} 1_\go. 
\] \end{lem} \begin{proof} We compute \begin{gather*} \begin{tikzpicture}[centerzero] \antbox{-0.6,-0.1}{-0.1,0.1}; \antbox{0.6,-0.1}{0.1,0.1}; \draw (-0.25,0.1) arc(180:0:0.25); \draw (0.25,-0.1) arc(360:180:0.25); \draw (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0.45,0.1) to[out=up,in=-45] (0,0.6); \draw (-0.45,-0.1) to[out=down,in=135] (0,-0.6) -- (0,-0.8); \draw (0.45,-0.1) to[out=down,in=45] (0,-0.6); \end{tikzpicture} = \frac{1}{4} \left( \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \end{tikzpicture} - \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \end{tikzpicture} - \begin{tikzpicture}[centerzero,xscale=-1] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.5) to[out=135,in=down] (-0.4,-0.2) to[out=up,in=180] (0,0.2) to[out=0,in=up] (0.4,-0.2) to[out=down,in=45] (0,-0.5); \draw (0,0.8) -- (0,0.5) to[out=-135,in=up] (-0.4,0.2) to[out=down,in=180] (0,-0.2) to[out=0,in=down] (0.4,0.2) to[out=up,in=-45] (0,0.5); \end{tikzpicture} \right) \overset{\cref{chess}}{=} \frac{\alpha(\delta-2)}{4} 1_\go, \\ \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- 
cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \antbox{-0.6,-0.1}{-0.1,0.1}; \antbox{0.1,-0.25}{0.6,-0.05}; \draw (0.25,0.1) -- (0.45,0.1); \end{tikzpicture} = \frac{1}{4} \left( \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \draw (0.25,0) -- (0.45,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \draw (0.25,0) -- (0.45,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero,xscale=-1] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \opendot{-0.4,0}; \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.5) to[out=135,in=down] (-0.4,-0.2) to[out=up,in=180] (0,0.2) to[out=0,in=up] (0.4,-0.2) to[out=down,in=45] (0,-0.5); \draw (0,0.8) -- (0,0.5) to[out=-135,in=up] (-0.4,0.2) to[out=down,in=180] (0,-0.2) to[out=0,in=down] (0.4,0.2) to[out=up,in=-45] (0,0.5); \opendot{0.34,0}; \end{tikzpicture} \right) \overset{\cref{chess}}{=} \frac{1}{4} \left( 0 - \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,-0.2) to[out=135,in=down] (-0.15,0) to[out=up,in=225] (0,0.2) -- (0,0.4); \draw (0,-0.2) 
to[out=45,in=down] (0.15,0) to[out=up,in=-45] (0,0.2); \draw (-0.15,0) -- (0.15,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero] \draw (0,-0.7) -- (0,-0.5) to[out=45,in=-45] (0.2,-0.15) -- (0.2,0.15) to[out=45,in=-45] (0,0.5) -- (0,0.7); \draw (0.2,0.15) to[out=135,in=up,looseness=1.5] (0,0) to[out=down,in=225,looseness=1.5] (0.2,-0.15); \draw (0,-0.5) to[out=135,in=down] (-0.2,0) to[out=up,in=225] (0,0.5); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,-0.7) -- (0,-0.5) to[out=45,in=-45,looseness=1.5] (0,-0.1) -- (0,0.1) to[out=45,in=-45,looseness=1.5] (0,0.5) -- (0,0.7); \draw (0,-0.5) to[out=135,in=-135,looseness=1.5] (0,-0.1) -- (0,0.1) to[out=135,in=-135,looseness=1.5] (0,0.5); \end{tikzpicture} \right) \overset{\cref{chess}}{=} \frac{\alpha^2(\delta-2)}{8(\delta+2)} 1_\go, \\ \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \antbox{-0.6,-0.25}{-0.1,-0.05}; \antbox{0.1,-0.25}{0.6,-0.05}; \draw (0.25,0.1) -- (0.45,0.1); \draw (-0.25,0.1) -- (-0.45,0.1); \end{tikzpicture} = \frac{1}{4} \left( \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \draw (0.25,0) -- (0.45,0); \draw (-0.25,0) -- (-0.45,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) 
to[out=up,in=-45] (0,0.6) -- (0,0.8); \draw (0.25,0) -- (0.45,0); \opendot{-0.4,0}; \end{tikzpicture} - \begin{tikzpicture}[centerzero,xscale=-1] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.2) to[out=up,in=180] (0,0.25) arc(90:-90:0.25) to[out=180,in=down] (-0.45,0.2) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \opendot{-0.4,0}; \draw (0.25,0) -- (0.45,0); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.5) to[out=135,in=down] (-0.4,-0.2) to[out=up,in=180] (0,0.2) to[out=0,in=up] (0.4,-0.2) to[out=down,in=45] (0,-0.5); \draw (0,0.8) -- (0,0.5) to[out=-135,in=up] (-0.4,0.2) to[out=down,in=180] (0,-0.2) to[out=0,in=down] (0.4,0.2) to[out=up,in=-45] (0,0.5); \opendot{0.34,0}; \opendot{-0.34,0}; \end{tikzpicture} \right) \overset{\cref{chess}}{=} \frac{1}{4} \left( \alpha\, \begin{tikzpicture}[centerzero] \draw (0,-0.4) -- (0,-0.2) to[out=135,in=down] (-0.15,0) to[out=up,in=225] (0,0.2) -- (0,0.4); \draw (0,-0.2) to[out=45,in=down] (0.15,0) to[out=up,in=-45] (0,0.2); \draw (-0.15,0) -- (0.15,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero,xscale=-1] \draw (0,-0.7) -- (0,-0.5) to[out=45,in=-45] (0.2,-0.15) -- (0.2,0.15) to[out=45,in=-45] (0,0.5) -- (0,0.7); \draw (0.2,0.15) to[out=135,in=up,looseness=1.5] (0,0) to[out=down,in=225,looseness=1.5] (0.2,-0.15); \draw (0,-0.5) to[out=135,in=down] (-0.2,0) to[out=up,in=225] (0,0.5); \draw (-0.2,0) -- (0,0); \end{tikzpicture} - \begin{tikzpicture}[centerzero] \draw (0,-0.7) -- (0,-0.5) to[out=45,in=-45] (0.2,-0.15) -- (0.2,0.15) to[out=45,in=-45] (0,0.5) -- (0,0.7); \draw (0.2,0.15) to[out=135,in=up,looseness=1.5] (0,0) to[out=down,in=225,looseness=1.5] (0.2,-0.15); \draw (0,-0.5) to[out=135,in=down] (-0.2,0) to[out=up,in=225] (0,0.5); \draw (-0.2,0) -- (0,0); \end{tikzpicture} + \begin{tikzpicture}[centerzero] \draw (0,-0.8) -- (0,-0.6) to[out=135,in=225] (-0.2,-0.1) -- (-0.2,0.1) 
to[out=135,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=-45] (0.2,-0.1) -- (0.2,0.1) to[out=45,in=-45] (0,0.6); \draw (-0.2,0.1) to[out=45,in=135,looseness=1.5] (0.2,0.1); \draw (-0.2,-0.1) to[out=-45,in=-135,looseness=1.5] (0.2,-0.1); \end{tikzpicture} \right) \\ \qquad \qquad \qquad \overset{\cref{chess}}{=} \frac{\alpha^3 (2 - \delta)(3 \delta + 2)}{16(\delta+2)^2} 1_\go. \end{gather*} Thus \begin{align*} \begin{tikzpicture}[centerzero] \draw (-0.6,-0.2) rectangle (-0.1,0.2); \node at (-0.35,0) {$e_1$}; \draw (0.6,-0.2) rectangle (0.1,0.2); \node at (0.35,0) {$e_1$}; \draw (-0.25,0.2) arc(180:0:0.25); \draw (0.25,-0.2) arc(360:180:0.25); \draw (-0.45,0.2) to[out=up,in=225] (0,0.7) -- (0,0.9); \draw (0.45,0.2) to[out=up,in=-45] (0,0.7); \draw (-0.45,-0.2) to[out=down,in=135] (0,-0.7) -- (0,-0.9); \draw (0.45,-0.2) to[out=down,in=45] (0,-0.7); \end{tikzpicture} &= \frac{64}{(\delta+10)^2} \left( \begin{tikzpicture}[centerzero] \antbox{-0.6,-0.1}{-0.1,0.1}; \antbox{0.6,-0.1}{0.1,0.1}; \draw (-0.25,0.1) arc(180:0:0.25); \draw (0.25,-0.1) arc(360:180:0.25); \draw (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0.45,0.1) to[out=up,in=-45] (0,0.6); \draw (-0.45,-0.1) to[out=down,in=135] (0,-0.6) -- (0,-0.8); \draw (0.45,-0.1) to[out=down,in=45] (0,-0.6); \end{tikzpicture} + \frac{\delta+2}{4\alpha} \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \antbox{-0.6,-0.1}{-0.1,0.1}; \antbox{0.1,-0.25}{0.6,-0.05}; \draw (0.25,0.1) -- (0.45,0.1); \end{tikzpicture} + \frac{\delta+2}{4\alpha} \begin{tikzpicture}[centerzero,xscale=-1] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] 
(-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \antbox{-0.6,-0.1}{-0.1,0.1}; \antbox{0.1,-0.25}{0.6,-0.05}; \draw (0.25,0.1) -- (0.45,0.1); \end{tikzpicture} + \frac{(\delta+2)^2}{16 \alpha^2} \begin{tikzpicture}[centerzero] \draw (-0.25,-0.1) -- (-0.25,0.1) arc(180:0:0.25) -- (0.25,-0.1) arc(360:180:0.25) -- cycle; \draw (0,-0.8) -- (0,-0.6) to[out=135,in=down] (-0.45,-0.1) -- (-0.45,0.1) to[out=up,in=225] (0,0.6) -- (0,0.8); \draw (0,-0.6) to[out=45,in=down] (0.45,-0.1) -- (0.45,0.1) to[out=up,in=-45] (0,0.6) -- (0,0.8); \antbox{-0.6,-0.25}{-0.1,-0.05}; \antbox{0.1,-0.25}{0.6,-0.05}; \draw (0.25,0.1) -- (0.45,0.1); \draw (-0.25,0.1) -- (-0.45,0.1); \end{tikzpicture} \right) \\ &= \frac{3 \alpha (2-\delta) (\delta - 26)}{4 (\delta + 10)^2} 1_\go. \qedhere \end{align*} \end{proof} \begin{cor} \label{twentysix} If $\alpha \ne 0$ and $\delta \notin \{-2,-10,2,26\}$, then \[ \frac{4(\delta+10)^2}{3 \alpha (2-\delta)(\delta-26)} \begin{tikzpicture}[centerzero] \draw (-0.25,-1.15) -- (-0.25,-1); \draw (-0.45,-1.15) -- (-0.45,-1); \draw (0.25,-1.15) -- (0.25,-1); \draw (0.45,-1.15) -- (0.45,-1); \draw (-0.6,-1) rectangle (-0.1,-0.6); \node at (-0.35,-0.8) {$e_1$}; \draw (0.6,-1) rectangle (0.1,-0.6); \node at (0.35,-0.8) {$e_1$}; \draw (-0.25,-0.6) arc(180:0:0.25); \draw (-0.45,-0.6) to[out=up,in=225] (0,-0.1) -- (0,0.1) to[out=135,in=down] (-0.45,0.6); \draw (0.45,-0.6) to[out=up,in=-45] (0,-0.1); \draw (0,0.1) to[out=45,in=down] (0.45,0.6); \draw (-0.25,0.6) arc(180:360:0.25); \draw (-0.6,1) rectangle (-0.1,0.6); \draw (0.6,1) rectangle (0.1,0.6); \node at (-0.35,0.8) {$e_1$}; \node at (0.35,0.8) {$e_1$}; \draw (-0.25,1.15) -- (-0.25,1); \draw (-0.45,1.15) -- (-0.45,1); \draw (0.25,1.15) -- (0.25,1); \draw (0.45,1.15) -- (0.45,1); \end{tikzpicture} \] is a nonzero idempotent endomorphism. 
In particular, $\go$ is a direct summand of $(\go^{\otimes 2}, e_1)^{\otimes 2}$ in $\Kar(\Fcat)$. \end{cor} \begin{rem} \label{sack} \Cref{twentysix,hacky} explain the importance of the choice $\delta=26$. The $\fg$-module $V_{\omega_1}$ is the adjoint representation, and $V_{\omega_1}^{\otimes 2}$ does not contain a copy of the natural representation $V = V_{\omega_4}$. In fact, a lengthy computation shows that the morphism \[ \begin{tikzpicture}[centerzero] \draw (-0.25,-1.15) -- (-0.25,-1); \draw (-0.45,-1.15) -- (-0.45,-1); \draw (0.25,-1.15) -- (0.25,-1); \draw (0.45,-1.15) -- (0.45,-1); \draw (-0.6,-1) rectangle (-0.1,-0.6); \node at (-0.35,-0.8) {$e_1$}; \draw (0.6,-1) rectangle (0.1,-0.6); \node at (0.35,-0.8) {$e_1$}; \draw (-0.25,-0.6) arc(180:0:0.25); \draw (-0.45,-0.6) to[out=up,in=225] (0,-0.1) -- (0,0.1); \draw (0.45,-0.6) to[out=up,in=-45] (0,-0.1); \end{tikzpicture} \] can be written as a linear combination of the morphisms \cref{brutal} with the two leftmost top strands brought to the bottom using caps. The coefficients in this linear combination all vanish \emph{if and only if} $\delta=26$. \end{rem} \begin{rem} \label{webs} Suppose we have an idempotent $e_2 \in \Fcat(\go^{\otimes 3}, \go^{\otimes 3})$ such that $\Kar(\Phi)(\go^{\otimes 3}, e_2) = V_{\omega_2}$. Let $\Wcat = \Wcat(\alpha,\delta)$ be the full monoidal subcategory of $\Kar(\Fcat_{\alpha,\delta})$ generated by \begin{equation} \label{list} (\go^{\otimes 2}, e_1),\quad (\go^{\otimes 3},e_2),\quad (\go^{\otimes 2},e_3),\quad \go. \end{equation} Then morphisms in $\Wcat$ can be depicted as string diagrams with strands labeled by elements of $\{1,2,3,4\}$, with a strand labeled $i$ corresponding to the identity morphism of the $i$-th object in the list \cref{list}. The category $\Wcat$ should be a degenerate (that is, $q=1$) web category of type $F_4$. \end{rem} \bibliographystyle{alphaurl}
\section{Appendix} The implementation details are as follows. The word embeddings are initialized from GloVe~\citep{pennington2014glove:emnlp2014} and are not updated during training. Embeddings of words not found in GloVe are initialized to zero. The dimensionality $l$ of the hidden layers is set to 150. We use ADAMAX~\citep{kingma2014adam:iclr2015} with the coefficients $\beta_1=0.9$ and $\beta_2=0.999$ to optimize the model. The batch size is set to 30 and the learning rate to 0.002. We do not use L2 regularization. The hyper-parameter we tuned is the dropout on the embedding layer. For WikiQA, which is a relatively small dataset, we also tune the learning rate and the batch size. The convolutional window sizes for MovieQA, InsuranceQA, WikiQA and SNLI are [1,3,5], [1,2,3], [1,2,3,4,5] and [1,2,3,4,5], respectively. \section{Conclusions} In this paper, we systematically analyzed the effectiveness of the ``compare-aggregate'' model on four different datasets representing different tasks. Moreover, we compared several kinds of word-level comparison functions and found that some element-wise comparison functions outperform the others. Our experimental results show that many different tasks can share the same ``compare-aggregate'' structure. In future work, we would like to test its effectiveness on multi-task learning. 
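The ADAMAX update used in the training setup described in the appendix (coefficients $\beta_1=0.9$, $\beta_2=0.999$, learning rate 0.002) can be sketched in a few lines. This is a generic illustration of the update rule of \citep{kingma2014adam:iclr2015}, not the implementation used in our experiments.

```python
# Generic sketch of one ADAMAX parameter update (Kingma & Ba, 2014) with the
# coefficients stated in the appendix; illustrative only, not our actual code.

def adamax_step(theta, grad, m, u, t, lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """theta: parameter, m: 1st-moment estimate, u: infinity-norm estimate, t: 1-based step."""
    m = beta1 * m + (1 - beta1) * grad          # exponentially decaying first moment
    u = max(beta2 * u, abs(grad))               # exponentially weighted infinity norm
    theta = theta - (lr / (1 - beta1 ** t)) * m / (u + eps)
    return theta, m, u

# toy run: minimize f(theta) = theta^2 starting from theta = 1
theta, m, u = 1.0, 0.0, 0.0
for t in range(1, 1001):
    theta, m, u = adamax_step(theta, 2 * theta, m, u, t)
print(theta)  # -> a value close to 0, the minimizer
```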
\section{Experiments} \begin{table}[] \centering \footnotesize \begin{tabular}{ccccccccccccc} \toprule \multirow{2}{*}{} & \multicolumn{3}{c}{MovieQA} & \multicolumn{3}{c}{InsuranceQA} & \multicolumn{3}{c}{WikiQA} & \multicolumn{3}{c}{SNLI} \\ \cline{2-13} & train & dev & test & train & dev & test & train & dev & test & train & dev & test \\ \midrule \#Q & 9848 & 1958 & 3138 & 13K & 1K & 1.8K*2 & 873 & 126 & 243 & 549K & 9842 & 9824 \\ \#C & 5 & 5 & 5 & 50 & 500 & 500 & 10 & 9 & 10 & - & - & - \\ \begin{tabular}[c]{@{}c@{}}\#w in P\end{tabular} & 873 & 866 & 914 & - & - & - & - & - & - & - & - & - \\ \begin{tabular}[c]{@{}c@{}}\#w in Q\end{tabular} & 10.6 & 10.6 & 10.8 & 7.2 & 7.2 & 7.2 & 6.5 & 6.5 & 6.4 & 14 & 15.2 & 15.2 \\ \begin{tabular}[c]{@{}c@{}}\#w in A\end{tabular} & 5.9 & 5.6 & 5.5 & 92.1 & 92.1 & 92.1 & 25.5 & 24.7 & 25.1 & 8.3 & 8.4 & 8.3 \\ \bottomrule \end{tabular} \caption{The statistics of the different datasets. Q:question/hypothesis, C:candidate answers for each question, A:answer/hypothesis, P:plot, w:word (average).} \label{table:stat} \end{table} In this section, we evaluate our model on four different datasets representing different tasks. The first three are question answering tasks, while the last one is a textual entailment task. The statistics of the four datasets are shown in Table~\ref{table:stat}. We will first introduce the task settings and the way we customize the ``compare-aggregate'' structure for each task. Then we will present the baselines for the different datasets. Finally, we discuss the experimental results shown in Table~\ref{table:res}. 
\begin{table}[] \centering \label{my-label} \begin{small} \begin{tabular}{lccccccccc} \toprule \multirow{2}{*}{Models} & \multicolumn{2}{c}{MovieQA} & \multicolumn{3}{c}{InsuranceQA} & \multicolumn{2}{c}{WikiQA} & \multicolumn{2}{c}{SNLI} \\ \cline{2-10} & dev & test & dev & test1 & test2 & MAP & \multicolumn{1}{l}{MRR} & train & test \\ \midrule \multicolumn{1}{l}{Cosine Word2Vec} & \multicolumn{1}{l}{46.4} & \multicolumn{1}{l}{45.63} & - & - & - & - & - & - & - \\ Cosine TFIDF & 47.6 & \textbf{47.36} & - & - & - & - & - & - & - \\ SSCB TFIDF & \textbf{48.5} & - & - & - & - & - & - & - & - \\ IR model & - & - & 52.7 & 55.1 & 50.8 & - & - & - & - \\ CNN with GESD & - & - & 65.4 & 65.3 & 61.0 & - & - & - & - \\ Attentive LSTM & - & - & 68.9 & 69.0 & 64.8 & - & - & - & - \\ IARNN-Occam & - & - & 69.1 & 68.9 & \textbf{65.1} & \textbf{0.7341} & \textbf{0.7418} & - & - \\ IARNN-Gate & - & - & \textbf{70.0} & \textbf{70.1} & 62.8 & 0.7258 & 0.7394 & - & - \\ CNN-Cnt & - & - & - & - & - & 0.6520 & 0.6652 & - & - \\ ABCNN & - & - & - & - & - & 0.6921 & 0.7108 & - & - \\ CubeCNN & - & - & - & - & - & 0.7090 & 0.7234 & - & - \\ W-by-W Attention & - & - & - & - & - & - & - & 85.3 & 83.5 \\ match-LSTM & - & - & - & - & - & - & - & 92.0 & 86.1 \\ LSTMN & - & - & - & - & - & - & - & 88.5 & 86.3 \\ Decomp Attention & - & - & - & - & - & - & - & 90.5 & 86.8 \\ EBIM+TreeLSTM & - & - & - & - & - & - & - & 93.0 & \textbf{88.3} \\ \midrule NN & 31.6 &- & 76.8 & 74.9 & 72.4 & 0.7102 & 0.7224 & 89.3 & 86.3 \\ NTN & 31.6 &- & 75.6 & 75.0 & 72.5 & 0.7349 & 0.7456 &91.6 & 86.3 \\ \textsc{EucCos} & 71.9 &- & 70.6 & 70.2 & 67.9 & 0.6740 & 0.6882 & 87.1 & 84.0 \\ \textsc{Sub} & 64.9 &- & 70.0 & 71.3 & 68.2 & 0.7019 & 0.7151 &89.8 & \textbf{86.8} \\ \textsc{Mult} & 66.4 &- & 76.0 & 75.2 & \textbf{73.4} & \textbf{0.7433} & \textbf{0.7545} & 89.7 & 85.8 \\ \textsc{SubMult+NN} & \textbf{72.1} & \textbf{72.9} & \textbf{77.0} & \textbf{75.6} & 72.3 & 0.7332 & 0.7477 & 89.4 & \textbf{86.8} \\ 
\bottomrule \end{tabular} \end{small} \caption{Experiment Results} \label{table:res} \end{table} \subsection{Task-specific Model Structures} In all these tasks, we use matrix $\mathbf{Q} \in \mathbb{R}^{d \times Q}$ to represent the question or premise and matrix $\mathbf{A}_k \in \mathbb{R}^{d \times A_k}$ ($k \in [1, K]$) to represent the $k^\text{th}$ answer or the hypothesis. For the machine comprehension task \textbf{MovieQA}~\citep{MovieQA:cvpr2016}, there is also a matrix $\mathbf{P} \in \mathbb{R}^{d \times P}$ that represents the plot of a movie. Here $Q$ is the length of the question or premise, $A_k$ the length of the $k^\text{th}$ answer, and $P$ the length of the plot. For the \textbf{SNLI}~\citep{bowman2015large:EMNLP} dataset, the task is text entailment, which identifies the relationship (entailment, contradiction or neutral) between a premise sentence and a hypothesis sentence. Here $K = 1$, and there are exactly two sequences to match. The actual model structure is what we have described before. For the \textbf{InsuranceQA}~\citep{feng2015applying} dataset, the task is answer selection, i.e., selecting the correct answer for a question from a candidate pool. For the \textbf{WikiQA}~\citep{yang2015wikiqa:emnlp} dataset, we need to rank the candidate answers according to a question. For both tasks, there are $K$ candidate answers for each question. Let us use $\mathbf{r}_k$ to represent the resulting vector produced by Eqn.~\ref{eqn:aggregate} for the $k^\text{th}$ answer. In order to select one of the $K$ answers, we first define $\mathbf{R} = [\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_K]$.
We then compute the probability of the $k^\text{th}$ answer to be the correct one as follows: \begin{eqnarray} \label{eqn:pqasoft} p(k | \mathbf{R}) & = & \text{softmax}( \mathbf{w}^\text{T} \tanh(\mathbf{W}^{\text{s}}\mathbf{R} + \mathbf{b}^\text{s} \otimes \mathbf{e}_{K}) + b \otimes \mathbf{e}_{K}), \end{eqnarray} where $\mathbf{W}^{\text{s}}\in \mathbb{R}^{l\times nl}$, $\mathbf{w}\in \mathbb{R}^{l}$, $\mathbf{b}^\text{s}\in \mathbb{R}^{l}$, $b\in \mathbb{R}$ are parameters to be learned. For the machine comprehension task \textbf{MovieQA}, each question is related to a plot synopsis written by fans after watching the movie, and each question has five candidate answers. So for each candidate answer there are three sequences to be matched: the plot $\mathbf{P}$, the question $\mathbf{Q}$ and the answer $\mathbf{A}_k$. For each $k$, we first match $\mathbf{Q}$ and $\mathbf{P}$ and refer to the matching result at position $j$ as $\mathbf{t}^\text{q}_{j}$, as generated by one of the comparison functions $f$. Similarly, we also match $\mathbf{A}_k$ with $\mathbf{P}$ and refer to the matching result at position $j$ as $\mathbf{t}^\text{a}_{k, j}$. We then define \begin{eqnarray*} \mathbf{t}_{k, j} & = & \begin{bmatrix} \mathbf{t}^\text{q}_j \\ \mathbf{t}^\text{a}_{k, j} \end{bmatrix}, \end{eqnarray*} and \begin{eqnarray*} \mathbf{r}_k & = & \text{CNN}([\mathbf{t}_{k, 1}, \ldots, \mathbf{t}_{k, P}]). \end{eqnarray*} To select an answer from the $K$ candidate answers, again we use Eqn.~\ref{eqn:pqasoft} to compute the probabilities. \subsection{Baselines} Here, we introduce the baselines for each dataset. We did not re-implement these models but simply took the reported performance for the purpose of comparison. \textbf{SNLI:} $\bullet$ \textbf{W-by-W Attention}: The model by \cite{rock:ICLR2016}, who first introduced the attention mechanism into textual entailment.
$\bullet$ \textbf{match-LSTM}: The model by \cite{wang:NAACL2016}, which concatenates the matched words as the inputs of an LSTM. $\bullet$ \textbf{LSTMN}: Long short-term memory-networks proposed by \cite{cheng2016long}. $\bullet$ \textbf{Decomp Attention}: Another ``compare-aggregate'' model proposed by \cite{parikh:emnlp2016}. $\bullet$ \textbf{EBIM+TreeLSTM}: The state-of-the-art model proposed by \cite{chen2016enhancing:arxiv} on the SNLI dataset. \noindent \textbf{InsuranceQA:} $\bullet$ \textbf{IR model}: This model by \cite{bendersky2010:wsdm} learns concept information to help rank the candidates. $\bullet$ \textbf{CNN with GESD}: This model by \cite{feng2015applying} uses Euclidean distance and dot product between sequence representations built through convolutional neural networks to select the answer. $\bullet$ \textbf{Attentive LSTM}: \cite{tanimproved:acl2016} used a soft-attention mechanism to select the most important information from the candidates according to the representation of the question. $\bullet$ \textbf{IARNN-Occam}: This model by \cite{wanginner:acl2016} adds regularization on the attention weights. $\bullet$ \textbf{IARNN-Gate}: This model by \cite{wanginner:acl2016} uses the representation of the question to build the GRU gates for each candidate answer. \noindent \textbf{WikiQA:} $\bullet$ \textbf{IARNN-Occam} and \textbf{IARNN-Gate} as introduced before. $\bullet$ \textbf{CNN-Cnt}: This model by \cite{yang2015wikiqa:emnlp} combines sentence representations built by a convolutional neural network with logistic regression. $\bullet$ \textbf{ABCNN}: This model is the attention-based convolutional neural network proposed by \cite{yin2015abcnn:tacl}. $\bullet$ \textbf{CubeCNN} proposed by \citet{he:naacl16} builds a CNN over the similarities of all word pairs.
\noindent \textbf{MovieQA:} All the baselines we consider come from \cite{MovieQA:cvpr2016}'s work: $\bullet$ \textbf{Cosine Word2Vec}: A sliding window is used to select the answer according to the similarities computed through Word2Vec between the sentences in the plot and the question/answer. $\bullet$ \textbf{Cosine TFIDF}: This model is similar to the previous method but uses bag-of-words with tf-idf scores to compute similarity. $\bullet$ \textbf{SSCB TFIDF}: Instead of using the sliding window method, a convolutional neural network is built on the sentence-level similarities. \subsection{Analysis of Results} We use accuracy as the evaluation metric for the MovieQA, InsuranceQA and SNLI datasets, as there is only one correct answer or one label for each instance. For WikiQA, there may be multiple correct answers, so the evaluation metrics we use are Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We observe the following from the results. (1) Overall, we can find that our general ``compare-aggregate'' structure achieves the best performance on the \textbf{MovieQA}, \textbf{InsuranceQA} and \textbf{WikiQA} datasets and very competitive performance on the \textbf{SNLI} dataset. Especially for the \textbf{InsuranceQA} dataset, with any comparison function we use, our model can outperform all the previous models. (2) The comparison function \textsc{SubMult+NN} is the best in general. (3) Some simple comparison functions can achieve better performance than the neural network or neural tensor network comparison functions. For example, the simplest comparison function \textsc{EucCos} achieves nearly the best performance on the \textbf{MovieQA} dataset, and the element-wise comparison functions, which do not need parameters, can achieve the best performance on the \textbf{WikiQA} dataset.
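For reference, the two ranking metrics can be sketched in a few lines of plain Python. This follows the standard definitions of MAP and MRR rather than the official WikiQA evaluation script:

```python
def average_precision(labels):
    """labels: 0/1 relevance of candidates sorted by model score, best first."""
    hits, precisions = 0, []
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant rank
    return sum(precisions) / max(hits, 1)

def reciprocal_rank(labels):
    """1 / rank of the first relevant candidate, 0 if none is relevant."""
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(ranked_label_lists):
    """Mean Average Precision and Mean Reciprocal Rank over all questions."""
    n = len(ranked_label_lists)
    mean_ap = sum(average_precision(ls) for ls in ranked_label_lists) / n
    mean_rr = sum(reciprocal_rank(ls) for ls in ranked_label_lists) / n
    return mean_ap, mean_rr

# two questions: correct answer at rank 2; correct answers at ranks 1 and 3
m, r = map_mrr([[0, 1, 0], [1, 0, 1]])  # m = 2/3, r = 0.75
```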
\subsection{Further Analyses} \begin{figure}[] \centering \includegraphics[width=5.5in]{analysis2} \includegraphics[width=5.5in]{analysis1} \caption{A visualization of the largest value in each dimension of the convolutional layer of the CNN. The top figure is an example from the dataset \textbf{MovieQA} with CNN window size 5. The bottom figure is an example from the dataset \textbf{InsuranceQA} with CNN window size 3.} \label{fig:visual} \end{figure} To further explain how our model works, we visualize the max values in each dimension of the convolutional layer. We use the two examples shown in Table~\ref{sample}, from the MovieQA and InsuranceQA datasets respectively. In the top of Figure~\ref{fig:visual}, we can see that the plot words that also appear in either the question or the answer draw more attention from the CNN. We hypothesize that if the nearby words in the plot can match both the words in the question and the words in one answer, then this answer is more likely to be the correct one. Similarly, the bottom of Figure~\ref{fig:visual} shows that the CNN focuses more on the matched word representations. If the words in one answer continuously match the words in the question, this answer is more likely to be the correct one. \section{Introduction} \label{sec:intro2} Many natural language processing problems involve matching two or more sequences to make a decision. For example, in textual entailment, one needs to determine whether a hypothesis sentence can be inferred from a premise sentence~\citep{bowman2015large:EMNLP}. In machine comprehension, given a passage, a question needs to be matched against it in order to find the correct answer~\citep{richardsonmctest:EMNLP2013,MovieQA:cvpr2016}. Table~\ref{sample} gives two example sequence matching problems. In the first example, a passage, a question and four candidate answers are given.
We can see that to get the correct answer, we need to match the question against the passage and identify the last sentence to be the answer-bearing sentence. In the second example, given a question and a set of candidate answers, we need to find the answer that best matches the question. Because of the fundamental importance of comparing two sequences of text to judge their semantic similarity or relatedness, sequence matching has been well studied in natural language processing. With recent advances in neural network models, a standard practice for sequence modeling now is to encode a sequence of text as an embedding vector using models such as RNNs and CNNs. To match two sequences, a straightforward approach is to encode each sequence as a vector and then to combine the two vectors to make a decision~\citep{bowman2015large:EMNLP,feng2015applying}. However, it has been found that using a single vector to encode an entire sequence is not sufficient to capture all the important information from the sequence, and therefore advanced techniques such as attention mechanisms and memory networks have been applied to sequence matching problems~\citep{hermann:nips2015,hill:ICLR2016,rock:ICLR2016}. A common trait of a number of these recent studies on sequence matching problems is the use of a ``compare-aggregate'' framework~\citep{wang:NAACL2016, he:naacl16, parikh:emnlp2016}. In such a framework, comparison of two sequences is not done by comparing two vectors each representing an entire sequence. Instead, these models first compare vector representations of smaller units such as words from these sequences and then aggregate these comparison results to make the final decision. For example, the match-LSTM model proposed by \cite{wang:NAACL2016} for textual entailment first compares each word in the hypothesis with an attention-weighted version of the premise. The comparison results are then aggregated through an LSTM.
\cite{he:naacl16} proposed a pairwise word interaction model that first takes each pair of words from two sequences and applies a comparison unit on the two words. It then combines the results of these word interactions using a similarity focus layer followed by a multi-layer CNN. \cite{parikh:emnlp2016} proposed a decomposable attention model for textual entailment, in which words from each sequence are compared with an attention-weighted version of the other sequence to produce a series of comparison vectors. The comparison vectors are then aggregated and fed into a feedforward network for final classification. Although these studies have shown the effectiveness of such a ``compare-aggregate'' framework for sequence matching, there are at least two limitations with these previous studies: (1) Each of the models proposed in these studies is tested on one or two tasks only, but we hypothesize that this general framework is effective on many sequence matching problems. There has not been any study that empirically verifies this. (2) More importantly, these studies did not pay much attention to the comparison function that is used to compare two small textual units. Usually a standard feedforward network is used~\citep{hu2014convolutional,wang:NAACL2016} to combine two vectors representing two units that need to be compared, e.g., two words. However, based on the nature of these sequence matching problems, we essentially need to measure how semantically similar the two sequences are. Presumably, this property of these sequence matching problems should guide us in choosing more appropriate comparison functions. Indeed, \cite{he:naacl16} used cosine similarity, Euclidean distance and dot product to define the comparison function, which seem better justified. But they did not systematically evaluate these similarity or distance functions or compare them with a standard feedforward network.
In this paper, we argue that the general ``compare-aggregate'' framework is effective for a wide range of sequence matching problems. We present a model that follows this general framework and test it on four different datasets, namely, MovieQA, InsuranceQA, WikiQA and SNLI. The first three datasets are for question answering, but the setups of the tasks are quite different. The last dataset is for textual entailment. More importantly, we systematically present and test six different comparison functions. We find that overall a comparison function based on element-wise subtraction and multiplication works the best on the four datasets. The contributions of this work are twofold: (1) Using four different datasets, we show that our model following the ``compare-aggregate'' framework is very effective when compared with the state-of-the-art performance on these datasets. (2) We conduct a systematic evaluation of different comparison functions and show that a comparison function based on element-wise operations, which is not widely used for word-level matching, works the best across the different datasets. We believe that these findings will be useful for future research on sequence matching problems. We have also made our code available online.\footnote{\url{https://github.com/shuohangwang/SeqMatchSeq}} \begin{table}[] \small \centering \begin{tabular}{ll} \begin{tabular}{l} \toprule \multicolumn{1}{p{6cm}}{ \textbf{Plot}: ... Aragorn is crowned King of Gondor and taking Arwen as his queen before all present at his coronation bowing before Frodo and the other Hobbits . The Hobbits return to \textbf{the Shire} where Sam marries Rosie Cotton . ...} \\ \midrule \multicolumn{1}{p{6cm}}{\textbf{Question}: Where does Sam marry Rosie? } \\ \midrule \multicolumn{1}{p{6cm}}{\textbf{Candidate answers}: 0) Grey Havens. 1) Gondor. \textbf{2) The Shire}. 3) Erebor. 4) Mordor.
}\\ \bottomrule \end{tabular} & \begin{tabular}{l} \toprule \multicolumn{1}{p{6cm}}{\textbf{Question}: can i have auto insurance without a car} \\ \midrule \multicolumn{1}{p{6cm}}{ \textbf{Ground-truth answer}: yes, it be possible have auto insurance without own a vehicle. you will purchase what be call a name ... } \\ \midrule \multicolumn{1}{p{6cm}}{ \textbf{Another candidate answer}: insurance not be a tax or merely a legal obligation because auto insurance follow a car... }\\ \bottomrule \end{tabular} \end{tabular} \normalsize \caption{The example on the left is a machine comprehension problem from MovieQA, where the correct answer is \textbf{The Shire}. The example on the right is an answer selection problem from InsuranceQA.} \label{sample} \end{table} \section{Method} \begin{figure}[] \centering \includegraphics[width=5.5in]{model1} \caption{The left-hand side is an overview of the model. The right-hand side shows the details of the different comparison functions. The rectangles in dark represent parameters to be learned. $\times$ represents matrix multiplication.} \label{fig:model} \end{figure} In this section, we propose a general model following the ``compare-aggregate'' framework for matching two sequences. This general model can be applied to different tasks. We focus our discussion on six different comparison functions that can be plugged into this general ``compare-aggregate'' model. In particular, we hypothesize that two comparison functions based on element-wise operations, \textsc{Sub} and \textsc{Mult}, are a good middle ground between highly flexible functions using standard neural network models and highly restrictive functions based on cosine similarity and/or Euclidean distance. As we will show in the experiment section, these comparison functions based on element-wise operations can indeed perform very well on a number of sequence matching problems.
\subsection{Problem Definition and Model Overview} The general setup of the sequence matching problem we consider is the following. We assume there are two sequences to be matched. We use two matrices $\mathbf{Q} \in \mathbb{R}^{d \times Q}$ and $\mathbf{A} \in \mathbb{R}^{d \times A}$ to represent the word embeddings of the two sequences, where $Q$ and $A$ are the lengths of the two sequences, respectively, and $d$ is the dimensionality of the word embeddings. In other words, each column vector of $\mathbf{Q}$ or $\mathbf{A}$ is an embedding vector representing a single word. Given a pair of $\mathbf{Q}$ and $\mathbf{A}$, the goal is to predict a label $y$. For example, in textual entailment, $\mathbf{Q}$ may represent a premise and $\mathbf{A}$ a hypothesis, and $y$ indicates whether $\mathbf{Q}$ entails $\mathbf{A}$ or contradicts $\mathbf{A}$. In question answering, $\mathbf{Q}$ may be a question and $\mathbf{A}$ a candidate answer, and $y$ indicates whether $\mathbf{A}$ is the correct answer to $\mathbf{Q}$. We treat the problem as a supervised learning task. We assume that a set of training examples in the form of $(\mathbf{Q}, \mathbf{A}, y)$ is given and we aim to learn a model that maps any pair of $(\mathbf{Q}, \mathbf{A})$ to a $y$. An overview of our model is shown in Figure~\ref{fig:model}. The model can be divided into the following four layers: \begin{enumerate} \item \textbf{Preprocessing:} We use a preprocessing layer (not shown in the figure) to process $\mathbf{Q}$ and $\mathbf{A}$ to obtain two new matrices $\overline{\mathbf{Q}} \in \mathbb{R}^{l \times Q}$ and $\overline{\mathbf{A}} \in \mathbb{R}^{l \times A}$. The purpose is to obtain a new embedding vector for each word in each sequence that captures some contextual information in addition to the word itself. 
For example, $\overline{\mathbf{q}}_i \in \mathbb{R}^l$, which is the $i^\text{th}$ column vector of $\overline{\mathbf{Q}}$, encodes the $i^\text{th}$ word in $\mathbf{Q}$ together with its context in $\mathbf{Q}$. \item \textbf{Attention:} We apply a standard attention mechanism on $\overline{\mathbf{Q}}$ and $\overline{\mathbf{A}}$ to obtain attention weights over the column vectors in $\overline{\mathbf{Q}}$ for each column vector in $\overline{\mathbf{A}}$. With these attention weights, for each column vector $\overline{\mathbf{a}}_j$ in $\overline{\mathbf{A}}$, we obtain a corresponding vector $\mathbf{h}_j$, which is an attention-weighted sum of the column vectors of $\overline{\mathbf{Q}}$. \item \textbf{Comparison:} We use a comparison function $f$ to combine each pair of $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$ into a vector $\mathbf{t}_j$. \item \textbf{Aggregation:} We use a CNN layer to aggregate the sequence of vectors $\mathbf{t}_j$ for the final classification. \end{enumerate} Although this model follows more or less the same framework as the model proposed by \cite{parikh:emnlp2016}, our work has some notable differences. First, we pay close attention to the comparison function $f$ and compare a number of options, including some uncommon ones based on element-wise operations. Second, we apply our model to four different datasets representing four different tasks to evaluate its general effectiveness for sequence matching problems. There are also some other differences from the work by \cite{parikh:emnlp2016}. For example, we use a CNN layer instead of summation and concatenation for aggregation. Our attention mechanism is one-directional instead of two-directional. In the rest of this section, we will present the model in detail. We will focus mostly on the comparison functions we consider. \subsection{Preprocessing and Attention} Our preprocessing layer uses a recurrent neural network to process the two sequences.
We use a modified version of LSTM/GRU in which we keep only the input gates for remembering meaningful words: \begin{eqnarray} \nonumber \overline{\mathbf{Q}} & = & \sigma(\mathbf{W}^\text{i} \mathbf{Q} + \mathbf{b}^{\text{i}} \otimes \mathbf{e}_Q) \odot \tanh(\mathbf{W}^{\text{u}}\mathbf{Q}+\mathbf{b}^{\text{u}}\otimes \mathbf{e}_Q), \\ \overline{\mathbf{A}} & = & \sigma(\mathbf{W}^\text{i} \mathbf{A} + \mathbf{b}^{\text{i}} \otimes \mathbf{e}_A) \odot \tanh(\mathbf{W}^{\text{u}}\mathbf{A}+\mathbf{b}^{\text{u}}\otimes \mathbf{e}_A), \end{eqnarray} where $\odot$ is element-wise multiplication, and $\mathbf{W}^\text{i}, \mathbf{W}^\text{u}\in \mathbb{R}^{l\times d}$ and $\mathbf{b}^\text{i},\mathbf{b}^\text{u}\in \mathbb{R}^{l}$ are parameters to be learned. The outer product $(\cdot \otimes \mathbf{e}_X)$ produces a matrix or row vector by repeating the vector or scalar on the left for $X$ times. The attention layer is built on top of the resulting $\overline{\mathbf{Q}}$ and $\overline{\mathbf{A}}$ as follows: \begin{eqnarray} \nonumber \mathbf{G} & = & \text{softmax} \left( ( \mathbf{W}^{\text{g}} \overline{\mathbf{Q}} + \mathbf{b}^{\text{g}} \otimes \mathbf{e}_Q)^{\text{T}} \overline{\mathbf{A}} \right), \\ \label{eqn:alpha} \mathbf{H} & = & \overline{\mathbf{Q}} \mathbf{G}, \end{eqnarray} where $\mathbf{W}^{\text{g}} \in \mathbb{R}^{l\times l}$ and $\mathbf{b}^{\text{g}} \in \mathbb{R}^{l}$ are parameters to be learned, $\mathbf{G}\in \mathbb{R}^{Q\times A}$ is the attention weight matrix, and $\mathbf{H} \in \mathbb{R}^{l\times A}$ are the attention-weighted vectors. Specifically, $\mathbf{h}_j$, which is the $j^\text{th}$ column vector of $\mathbf{H}$, is a weighted sum of the column vectors of $\overline{\mathbf{Q}}$ and represents the part of $\mathbf{Q}$ that best matches the $j^\text{th}$ word in $\mathbf{A}$. Next we will combine $\mathbf{h}_j$ and $\overline{\mathbf{a}}_j$ using a comparison function. 
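The preprocessing and attention layers above can be sketched in a few lines of NumPy. Shapes and random parameters are illustrative only; the $\mathbf{b} \otimes \mathbf{e}_X$ outer products become column-wise broadcasting:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def preprocess(X, Wi, bi, Wu, bu):
    """Gated projection: sigma(Wi X + bi) * tanh(Wu X + bu).

    X is (d, length); the output is (l, length)."""
    return sigmoid(Wi @ X + bi[:, None]) * np.tanh(Wu @ X + bu[:, None])

def attend(Qbar, Abar, Wg, bg):
    """Attention layer: G = softmax((Wg Qbar + bg)^T Abar), H = Qbar G.

    The softmax normalizes over the Q dimension for each column of Abar."""
    S = (Wg @ Qbar + bg[:, None]).T @ Abar          # shape (Q, A)
    S = S - S.max(axis=0, keepdims=True)            # numerical stability
    G = np.exp(S) / np.exp(S).sum(axis=0, keepdims=True)
    return Qbar @ G                                  # H, shape (l, A)

# toy dimensions: word embeddings d = 6, hidden size l = 4, lengths Q = 7, A = 5
d, l, Q, A = 6, 4, 7, 5
rng = np.random.default_rng(1)
Wi, Wu = rng.standard_normal((l, d)), rng.standard_normal((l, d))
bi, bu = rng.standard_normal(l), rng.standard_normal(l)
Qbar = preprocess(rng.standard_normal((d, Q)), Wi, bi, Wu, bu)
Abar = preprocess(rng.standard_normal((d, A)), Wi, bi, Wu, bu)
H = attend(Qbar, Abar, rng.standard_normal((l, l)), rng.standard_normal(l))
```

Each column of `H` is a convex combination of the columns of `Qbar`, matching the interpretation of $\mathbf{h}_j$ as the part of $\mathbf{Q}$ that best matches the $j^\text{th}$ word in $\mathbf{A}$.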
\subsection{Comparison} The goal of the comparison layer is to match each $\overline{\mathbf{a}}_j$, which represents the $j^\text{th}$ word and its context in $\mathbf{A}$, with $\mathbf{h}_j$, which represents a weighted version of $\mathbf{Q}$ that best matches $\overline{\mathbf{a}}_j$. Let $f$ denote a comparison function that transforms $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$ into a vector $\mathbf{t}_j$ to represent the comparison result. A natural choice of $f$ is a standard neural network layer that consists of a linear transformation followed by a non-linear activation function. For example, we can consider the following choice: \begin{eqnarray} \textsc{NeuralNet (NN):}& & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = \text{ReLU}(\mathbf{W} \begin{bmatrix} \overline{\mathbf{a}}_j \\ \mathbf{h}_j \end{bmatrix} + \mathbf{b}), \end{eqnarray} where matrix $\mathbf{W} \in \mathbb{R}^{l\times 2l}$ and vector $\mathbf{b}\in \mathbb{R}^{l}$ are parameters to be learned. Alternatively, another natural choice is a neural tensor network~\citep{socher2013:emnlp} as follows: \begin{eqnarray} \textsc{NeuralTensorNet (NTN):} & & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = \text{ReLU}(\overline{\mathbf{a}}_j^\text{T} \mathbf{T}^{[1 \ldots l]} \mathbf{h}_j + \mathbf{b}), \label{eqn:bilinear} \end{eqnarray} where tensor $\mathbf{T}^{[1 \ldots l]}\in \mathbb{R}^{l\times l\times l}$ and vector $\mathbf{b} \in \mathbb{R}^l$ are parameters to be learned. However, we note that for many sequence matching problems, we intend to measure the semantic similarity or relatedness of the two sequences. So at the word level, we also intend to check how similar or related $\overline{\mathbf{a}}_j$ is to $\mathbf{h}_j$. For this reason, a more natural choice used in some previous work is Euclidean distance or cosine similarity between $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$.
We therefore consider the following definition of $f$: \begin{eqnarray} \textsc{Euclidean+Cosine (EucCos):} & & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = \begin{bmatrix} \Vert \overline{\mathbf{a}}_j - \mathbf{h}_j \Vert _{2} \\ \cos(\overline{\mathbf{a}}_j, \mathbf{h}_j)\end{bmatrix}. \label{eqn:cos} \end{eqnarray} Note that with \textsc{EucCos}, the resulting vector $\mathbf{t}_j$ is only a 2-dimensional vector. Although \textsc{EucCos} is a well-justified comparison function, we suspect that it may lose some useful information from the original vectors $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$. On the other hand, \textsc{NN} and \textsc{NTN} are too general and thus do not capture the intuition that we care mostly about the similarity between $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$. To use something that is a good compromise between the two extreme cases, we consider the following two new comparison functions, which operate on the two vectors in an element-wise manner. These functions have been used previously by \cite{tai2015improved:acl}. \begin{eqnarray} \textsc{Subtraction (Sub):} & & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = (\overline{\mathbf{a}}_j - \mathbf{h}_j) \odot (\overline{\mathbf{a}}_j - \mathbf{h}_j), \\ \textsc{Multiplication (Mult):} & & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = \overline{\mathbf{a}}_j \odot \mathbf{h}_j. \end{eqnarray} Note that the operator $\odot$ is element-wise multiplication. For both comparison functions, the resulting vector $\mathbf{t}_j$ has the same dimensionality as $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$. We can see that \textsc{Sub} is closely related to Euclidean distance in that the squared Euclidean distance is the sum of all the entries of the vector $\mathbf{t}_j$ produced by \textsc{Sub}. But by not summing up these entries, \textsc{Sub} preserves some information about the different dimensions of the original two vectors.
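To make the comparison-layer options concrete, the five functions defined so far can be sketched as follows. Dimensions and random parameters are illustrative, not our experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
l = 4                                    # hidden dimension (illustrative)
a_j = rng.standard_normal(l)             # word vector from the answer side
h_j = rng.standard_normal(l)             # attention-weighted question vector

def relu(x):
    return np.maximum(x, 0.0)

# NN: ReLU(W [a; h] + b), with W in R^{l x 2l}
W, b = rng.standard_normal((l, 2 * l)), rng.standard_normal(l)
t_nn = relu(W @ np.concatenate([a_j, h_j]) + b)

# NTN: ReLU(a^T T^[1..l] h + b), one bilinear form per output dimension
T = rng.standard_normal((l, l, l))
t_ntn = relu(np.einsum('i,kij,j->k', a_j, T, h_j) + b)

# EucCos: a 2-dimensional comparison vector
t_euccos = np.array([
    np.linalg.norm(a_j - h_j),
    a_j @ h_j / (np.linalg.norm(a_j) * np.linalg.norm(h_j)),
])

# Sub and Mult: element-wise, no parameters to learn
t_sub = (a_j - h_j) * (a_j - h_j)
t_mult = a_j * h_j
```

Note that `t_sub.sum()` equals the squared Euclidean distance `t_euccos[0] ** 2`, which is the sense in which \textsc{Sub} refines the distance-based comparison while keeping per-dimension information.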
Similarly, \textsc{Mult} is closely related to cosine similarity but preserves some information about the original two vectors. Finally, we consider combining \textsc{Sub} and \textsc{Mult} followed by an NN layer as follows: \begin{eqnarray} \textsc{SubMult+NN:} & & \mathbf{t}_j = f(\overline{\mathbf{a}}_j, \mathbf{h}_j) = \text{ReLU}(\mathbf{W} \begin{bmatrix} (\overline{\mathbf{a}}_j - \mathbf{h}_j) \odot (\overline{\mathbf{a}}_j - \mathbf{h}_j) \\ \overline{\mathbf{a}}_j \odot \mathbf{h}_j \end{bmatrix} + \mathbf{b}). \end{eqnarray} In summary, we consider six different comparison functions: \textsc{NN}, \textsc{NTN}, \textsc{EucCos}, \textsc{Sub}, \textsc{Mult} and \textsc{SubMult+NN}. Among these functions, the last three (\textsc{Sub}, \textsc{Mult} and \textsc{SubMult+NN}) have not been widely used in previous work for word-level matching. \subsection{Aggregation} After we apply the comparison function to each pair of $\overline{\mathbf{a}}_j$ and $\mathbf{h}_j$ to obtain a series of vectors $\mathbf{t}_j$, we finally aggregate these vectors using a one-layer CNN~\citep{kim:emnlp14}: \begin{eqnarray} \mathbf{r} & = & \text{CNN}([\mathbf{t}_1, \ldots, \mathbf{t}_A]). \label{eqn:aggregate} \end{eqnarray} $\mathbf{r}\in \mathbb{R}^{nl}$ is then used for the final classification, where $n$ is the number of windows in the CNN. \section{Related Work} We review related work in three types of general structures for matching sequences. \textbf{Siamese network:} These kinds of models use the same structure, such as an RNN or CNN, to build the representations for the sequences separately and then use them for classification. Then cosine similarity~\citep{feng2015applying,yang2015wikiqa:emnlp}, element-wise operations~\citep{tai2015improved:acl,mou2015:emnlp} or neural network-based combination~\citep{bowman2015large:EMNLP} are used for sequence matching.
\textbf{Attentive network:} The soft-attention mechanism~\citep{bahdanau:ICLR2015} has been widely used for sequence matching in machine comprehension~\citep{hermann:nips2015}, text entailment~\citep{rock:ICLR2016} and question answering~\citep{tanimproved:acl2016}. Instead of using the final state of an RNN to represent a sequence, these studies use a weighted sum of all the states as the sequence representation. \textbf{Compare-Aggregate network:} This kind of framework performs word-level matching~\citep{wang2016machine,parikh:emnlp2016,he:naacl16,trischler:acl2016}. Our work falls under this framework, but our structure differs from previous models, our model can be applied to different tasks, and we systematically analyze different word-level comparison functions.
\section{Introduction} \label{sec:introduction} Seven confirmed planets have been found orbiting six binary stars \citep{Doyle11, Orosz12a, Orosz12b, Welsh12, Schwamb13}, igniting interest in the possibility of terrestrial planets in circumbinary radiative habitable zones (RHZs) \citep{Mason12, Kane13, Clark13}. Planetary atmosphere erosion by stellar winds and intense XUV fluxes must also be considered when assessing circumbinary habitability. These erosive processes could obliterate planetary atmospheres or produce desiccation, akin to Venus \citep{Zuluaga13}. In this letter, we present a mechanism by which stellar activity in binaries can be reduced (increased) due to tidal braking of the stellar components, potentially enhancing (restricting) the protection of planetary atmospheres against these stellar aggression factors. Relations between age, rotation rate, and magnetic activity of single stars have been established both theoretically and observationally \citep{Basri87,Wood05}. Rapidly rotating stars are luminous XUV sources and undergo significant mass-loss \citep{Wood05}, posing high risks to weakly magnetized planets. These relationships have been used to evaluate the evolution of stellar aggression and its role in terrestrial planet habitability \citep{Griebmeier07, Zuluaga13}. When these relationships are applied to binaries, we find that early tidal spin-down of one or both stars produces an effective stellar rotational aging, thus reducing stellar aggression, abating mass loss from planetary atmospheres, and potentially promoting habitability. In other cases, the stellar aging effect increases activity and reduces the chances for habitability. To explore this effect, we extend single-star RHZ limits \citep{Kasting93, Kopparapu13} to planets in circumbinary orbits (Section \ref{sec:circumbinary}) and investigate the magnitude of stellar aggression towards these planets (Section \ref{sec:planet-binary-interaction}).
An ensemble of main sequence binaries with components from 0.2$M_{\Sun}$ to 1.5$M_{\Sun}$, and a range of binary periods and eccentricities, is modeled (Section \ref{sec:results}). Synchronization times are computed for stars in both twin and disparate binaries, including six Kepler binaries with known circumbinary planets (Section \ref{sec:results}). \section{Circumbinary Habitability} \label{sec:circumbinary} Circumbinary habitability is a five-dimensional parameter problem, even for circular planetary orbits. The parameters are the masses of the primary and secondary, $M_1$ and $M_2$, the binary eccentricity and period, $e$ and $P_{bin}$, and the planetary semi-major axis $a$. Five dimensions are reduced to four by considering only orbits in the middle of the circumbinary habitable zone. Two cross-sections of the remaining four-variable problem are examined: (1) twins, i.e. $M_1 = M_2$, and (2) binaries with a solar-mass primary and a lower mass companion. Examination of these cross-sections and specific case studies provided by the six Kepler binaries elucidate circumstances for which enhanced, or reduced, habitability induced by the tidal braking mechanism operates. \subsection{Orbital Stability} \label{subsec:orbital-stability} Orbital stability is a prerequisite for habitability. To be stable, the semi-major axis of a circumbinary planet must be larger than a critical value $a_{c}$. For a large range of binary semi-major axes $a_{\rm bin}$ and eccentricities $e$, $a_c$ is calculated from numerical fits, such as that provided by Eq. (3) of \citet{Holman99}. Orbital stability within the RHZ is of greatest concern for similar-mass binaries with large separations. For twin binaries in circular orbits, i.e. mass ratio $\mu \equiv M_2/(M_1+M_2)=1/2$ and $e=0$, the stability criterion simplifies to $a_{c} \sim 2.4\,a_{\rm bin}$.
In this case, if $a_{\rm bin}>l\sub{out}/2.4$, where $l\sub{out}$ is the outer edge of the RHZ, planets throughout the habitable zone have unstable orbits, rendering the binary uninhabitable. \subsection{Circumbinary Radiative Habitable Zone} \label{subsec:hz} RHZ limits are found by calculating the fluxes which allow the existence of liquid water. We follow the results of \citet{Kopparapu13}, which, for single stars, provide limiting fluxes $S\sub{eff}$ (in units of the present Earth solar flux) as a function of the stellar effective temperature, through $T_{*}=T_{\rm eff}-5780\,{\rm K}$: \begin{equation} \label{eq:Seff} S\sub{eff}=S_{{\rm eff}\odot}+aT_{*}+bT^{2}_{*}+cT^{3}_{*}+dT^{4}_{*} \end{equation} Here $S\sub{eff\odot}$, $a$, $b$, $c$ and $d$ are constants which depend on the physical criterion defining a given limit and are tabulated in Table 3 of \citet{Kopparapu13}. The most generous limits are obtained by assuming that Venus had surface liquid water until a few Gyr ago (the recent Venus criterion) and that Mars was also habitable at the beginning of solar system evolution (the early Mars criterion). The limits of the RHZ, either around single stars or around binaries, are defined as the distance $d$ where the averaged flux $\langle S(d)\rangle$ equals the critical flux: \beq{eq:HZ} \langle S(d)\rangle = S\sub{eff} \end{equation} For single stars, circular orbits, and fast-rotating planets, $\langle S(d)\rangle=L_{*}/d^2$, where $L_{*}$ is the stellar luminosity in solar units and $d$ is in AU. In order to apply Eq. (\ref{eq:Seff}) to binaries, we first verify that an effective temperature can be associated with the combined stellar flux. For twins, the result is trivial, but for disparate binaries, a more complex situation arises. We have verified numerically that the peak flux in disparate systems remains close to that of the primary. So, conservatively, we assume that the effective temperature is that of the most luminous star, since it has the dominant effect on the RHZ. 
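This averaged-flux condition can be checked numerically. The sketch below is illustrative Python with hypothetical helper names (luminosities in solar units and distances in AU, so that a single star gives $S=L/d^2$; $q=M_2/M_1$): it phase-averages the flux of the two stars around the center of mass and bisects for the RHZ edge.

```python
from math import pi, sin, cos

def avg_flux(d, l1, l2, a_bin, q, nphase=720):
    """Phase-averaged combined flux at distance d from the barycenter.
    q = M2/M1; r1, r2 are the star-barycenter distances."""
    r2 = a_bin/(1.0 + q)
    r1 = q*r2
    total = 0.0
    for k in range(nphase):
        th = 2.0*pi*k/nphase
        r1sq = (d + r1*sin(th))**2 + (r1*cos(th))**2
        r2sq = (d - r2*sin(th))**2 + (r2*cos(th))**2
        total += l1/r1sq + l2/r2sq
    return total/nphase

def rhz_edge(s_eff, l1, l2, a_bin, q, lo=0.05, hi=50.0):
    """Distance where the averaged flux equals s_eff (bisection;
    the flux decreases monotonically with d outside the binary)."""
    for _ in range(60):
        mid = 0.5*(lo + hi)
        if avg_flux(mid, l1, l2, a_bin, q) > s_eff:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

In the single-star limit ($L_2=0$, $q\to0$) this recovers $d=\sqrt{L_1/S\sub{eff}}$, while for close twins the edge approaches $d=\sqrt{2L/S\sub{eff}}$.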
The flux at a distance $d$ from the center of mass (CM) varies with the binary phase angle $\theta$ as: \beq{eq:BinaryFlux} S\sub{bin}(d,\theta)=\frac{L_1}{R_1^2(d,\theta)}+\frac{L_2}{R_2^2(d,\theta)} \end{equation} where $R_1$ and $R_2$ are the planetary distances to the primary and secondary components. Assuming that the planetary orbit is in the same plane as the binary, \begin{eqnarray*} R_1^2(d,\theta) & = & (d+r_1\sin\theta)^2+r_1^2\cos^2\theta\\ R_2^2(d,\theta) & = & (d-r_2\sin\theta)^2+r_2^2\cos^2\theta \end{eqnarray*} Here $r_2=a\sub{bin}/(1+q)$ and $r_1=q r_2$ are the average star-CM distances and $q = M_2 /M_1$ is the binary mass ratio. To calculate the RHZ limits we compare the combined flux (Eq. \ref{eq:BinaryFlux}), averaged over the binary orbit, with the effective flux for a given RHZ edge: \beq{eq:HZbin} \frac{1}{2\pi}\int_0^{2\pi}\left(\frac{L_1}{R_1^2(d,\theta)}+ \frac{L_2}{R_2^2(d,\theta)}\right)d\theta=S_{eff} \end{equation} The result of solving Equation (\ref{eq:HZbin}) in the case of twins and disparate binaries is presented in Figure \ref{fig:binaryHZ}. \begin{figure} \epsscale{1.15} \plotone{Fig1_BinaryHZ.eps} \caption{Habitable zones for single stars (dashed lines) and for binaries (solid lines) are shown for twins (left) and solar primaries (right). The vertical axis shows the mass in the twin cases and the secondary mass for the solar primary cases. The binary period is 15 days. The inner RHZ edge is the recent Venus limit (green) and the outer edge is the early Mars limit (red). The average of these extremes is shown in blue. The solar system is shown as the shaded region, on the right, corresponding to the limit of the binary RHZ as the secondary mass approaches 0.0 $M_{\Sun}$. 
Critical distances for orbital stability assuming an $e=0.5$ binary orbit are also shown for reference.} \label{fig:binaryHZ} \end{figure} \bigskip Orbital stability and proper insolation do not guarantee the most important condition for habitability: the presence of a dense and wet atmosphere. Although important intrinsic factors are involved in the formation and preservation of planetary atmospheres, planet-star interactions play key roles in the fate of gaseous planetary envelopes, especially during the early, active phases of stellar evolution. In the next section, we model several aspects of planet-star interactions and apply them to the survival of circumbinary planet atmospheres. \section{Planet--star interactions} \label{sec:planet-binary-interaction} Winds from low-mass stars ($M_\star\lesssim 1M_\odot$) play a central role in planetary atmosphere retention. Planets could lose their atmospheres or be desiccated on a time-scale much shorter than that required for the evolution of complex life \citep{Zendejas10,Lammer12}. Even if planets have magnetic fields comparable to Earth's, but are located in the RHZ of K-M stars, they could be subject to intense XUV irradiation \citep{Segura10}. The resulting water loss is important, especially in the case of developing nitrogen-rich atmospheres during the early phases of planetary evolution \citep{Lammer09}. This is not a second-order effect, but can become the dominant factor determining the habitability of terrestrial planets. \subsection{Stellar activity and rotational aging} \label{subsec:binary-sw} A rigorous treatment of binary stellar winds is challenging \citep{Siscoe1974}. The problem has been extensively studied in the case of early type binaries \citep{Stevens92}, but less attention has been paid to the case of low-mass stars in binaries. Here, we assume a simplified non-interacting model for the combined stellar wind. 
Since for binary periods of $P_{bin}\sim 10-20$ days, orbital velocities ($v\sim$ 80-100 km/s) are much lower than the wind velocities measured near the stellar surface ($v_{\rm sw}>$ 3000 km/s), we neglect orbital motion effects in calculating the stellar wind properties. For twins, we assume a wind source with a coronal temperature and wind velocity profile equal to those of a single star. Mass-loss rates are assumed to be double those calculated for single stars of the same type. For disparate binaries, we simply sum the stellar wind pressure from each component. Stellar wind properties are calculated using Parker's model \citep{Parker58}. It has been shown \citep{Zuluaga13} that planetary magnetospheric properties computed with this model differ by only $\sim 10\%$ from those obtained with more realistic models. The stellar wind's average particle velocity, $v_{\rm sw}(d)$, at a distance $d$ from the host star is obtained by solving Parker's equation (Equation (35) in \citealt{Zuluaga13}). The density profile, $n_{\rm sw}(d)$, is obtained from mass conservation, given the velocity profile and the mass-loss rate $\dot{M_\star}$: \begin{equation} n_{\rm sw}=\frac{\dot{M_\star}}{4\pi d^2 v_{\rm sw} m} \end{equation} A procedure to estimate the Parker model parameters as a function of stellar age for single stars was devised by \citet{Griebmeier07}. It relies on empirical relationships between age, X-ray flux, and rotation of single stars \citep{Wood05}. 
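An isothermal Parker-type wind gives a concrete version of these profiles. The sketch below is illustrative Python, not the exact implementation of Equation (35) of \citealt{Zuluaga13}; the coronal temperature and the solar-like mass-loss rate are assumed values.

```python
from math import log, sqrt, pi

G, KB, MP = 6.674e-11, 1.381e-23, 1.673e-27      # SI constants
MSUN, AU, YR = 1.989e30, 1.496e11, 3.156e7

def parker_speed(r, T=1.0e6, mstar=1.0):
    """Wind speed at radius r (m) from the isothermal Parker equation
    (v/cs)^2 - 2 ln(v/cs) = 4 ln(r/rc) + 4 rc/r - 3, supersonic branch
    (valid for r larger than the sonic radius rc)."""
    cs = sqrt(2.0*KB*T/MP)                 # isothermal sound speed, H plasma
    rc = G*mstar*MSUN/(2.0*cs*cs)          # sonic (critical) radius
    rhs = 4.0*log(r/rc) + 4.0*rc/r - 3.0
    lo, hi = 1.0, 50.0                     # bracket for u = v/cs > 1
    for _ in range(100):
        u = 0.5*(lo + hi)
        if u*u - 2.0*log(u) < rhs:
            lo = u
        else:
            hi = u
    return 0.5*(lo + hi)*cs

def wind_density(r, mdot, v):
    """Number density from mass conservation, n = Mdot / (4 pi r^2 v m)."""
    return mdot/(4.0*pi*r*r*v*MP)

v1au = parker_speed(AU)                        # a few hundred km/s for T = 1 MK
n1au = wind_density(AU, 2e-14*MSUN/YR, v1au)   # a few particles per cm^3
```

With a 1 MK corona and $\dot{M}\approx 2\times10^{-14}\,M_\odot\,{\rm yr}^{-1}$, this reproduces solar-wind-like conditions at 1 AU.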
According to these relations, the product of mass-loss rate and wind velocity $\mbox{\.{M}} v\sub{sw}$ scales with the stellar rotation period $P\sub{rot}$ following: \beq{eq:Protscaling} \mbox{\.{M}} v\sub{sw}\propto P\sub{rot}^{-3.3} \end{equation} We adapt these results to binaries by introducing the so-called ``rotational age'', $\tau_{\rm rot}$, defined as the equivalent stellar age at which the rotational period of a single star, $\Prot{single}$, equals the rotational period of a star in the binary, $P\sub{rot,bin}$: \begin{equation} \label{eq:rotage} \Prot{single}(\tau_{\rm rot})=P\sub{rot,bin} \end{equation} For non-negligible binary eccentricity, stars eventually reach a pseudo-synchronous state with $P\sub{rot,bin}=P_{\rm bin}/n\sub{sync}$ (see Section \ref{subsec:tidal-interaction}), where $n\sub{sync}$ is a real number depending on eccentricity \citep{Hut81}. Applying these isolated star relationships to binaries is reasonable, since the physical mechanisms connecting rotation, age, and activity would not be different for stars in binaries with separations of tens to hundreds of stellar radii, i.e. for $P\sub{bin}>5$ days. Rotational ages for stars in binaries are depicted in Figure \ref{fig:rotational-aging}. We see that F, G and late K stars ($M_\star>0.8\,M_\odot$) in binaries with orbital periods $P\sub{bin}<20$ days could experience premature aging if they are quickly tidally locked. They would appear as old as single stars with $\tau>3$ Gyr, in terms of magnetic and stellar activity. Premature aging might be an advantage for circumbinary terrestrial planet atmospheres. On the other hand, lower mass stars in binaries with similar periods exhibit a forever-young effect, i.e. components freeze at rotational ages less than approximately 2 Gyr, thereby reducing habitability. \begin{figure} \epsscale{1.15} \plotone{Fig2_RotationalAge.eps} \caption{Rotational ages for single stars as a function of rotational period. 
We assume that a similar relationship applies to stars in binaries, where the rotational period is replaced by a multiple of the binary period.} \label{fig:rotational-aging} \end{figure} Stellar XUV luminosity depends on chromospheric and coronal activity, which in turn depends on rotation. Since the rotation of single MS stars slows down with age, XUV luminosity should also decrease with time \citep{Garces11}. The rotational aging mechanism will reduce XUV luminosities and their potentially harmful effects on planetary atmospheres. Despite large uncertainties in measured XUV stellar emission \citep{Pizzolato03}, several authors have developed simple empirical laws expressing XUV luminosity, or its proxy, X-ray luminosity, as a function of age. To be conservative, we use the law of \citet{Garces11}, which provides the X-ray luminosity of GKM-types as a power-law of stellar age: \beq{eq:LXFunc} L_X=\left\{ \begin{array}{ll} 6.3\times 10^{-4} L_\star & \rm{if}\;\tau<\tau_i \\ 1.89\times 10^{28}\;\tau^{-1.55} & \rm{otherwise} \end{array} \right. \end{equation} where $L_\star$ is the bolometric luminosity and $\tau_i$ is the so-called saturation time, which scales with $L_\star$ according to: \beq{eq:ti} \tau_i=0.06\,\rm{Gyr}\;\left(\frac{L_\star}{L_\odot}\right)^{-0.65} \end{equation} For high X-ray luminosities we approximate $L\sub{XUV}\approx L\sub{X}$ \citep{Guinan09}. Using this model, we verify that the XUV Present Earth Level (PEL) is 0.88 erg cm$^{-2}$ s$^{-1}$, in agreement with the observed value \citep{Judge03}. We also predict that at $\tau_\odot\approx 1$ Gyr the Earth XUV flux was $F\sub{XUV}=8$ PEL, in agreement with previous estimates (see e.g. \citealt{Kulikov06}). \subsection{Binary tidal interaction} \label{subsec:tidal-interaction} Benefits are maximized if tidal locking occurs, at least for the primary component, before the rise of the planet's secondary atmosphere. 
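The X-ray law above (Eqs. \ref{eq:LXFunc} and \ref{eq:ti}) is straightforward to evaluate; a minimal Python sketch with illustrative function names:

```python
LSUN = 3.846e33   # solar bolometric luminosity in erg/s

def tau_sat(lstar):
    """Saturation time tau_i in Gyr; lstar in solar units."""
    return 0.06*lstar**(-0.65)

def l_xray(tau, lstar=1.0):
    """X-ray luminosity in erg/s from the piecewise law quoted above;
    tau is the (rotational) age in Gyr: saturated for tau < tau_i,
    then declining as tau^-1.55."""
    if tau < tau_sat(lstar):
        return 6.3e-4*lstar*LSUN
    return 1.89e28*tau**(-1.55)
```

For a solar-luminosity star this gives $L_X(1\,{\rm Gyr})/L_X(4.5\,{\rm Gyr})=4.5^{1.55}\approx 10$, the order of magnitude of the early-Earth XUV enhancement quoted above.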
For solar system terrestrial planets, this time is estimated as $\tau\sub{atm}\sim 0.3-0.8\,{\rm Gyr}$ \citep{Hart78,Hunten93}. Therefore, in order to evaluate if circumbinary planets benefit from rotational aging, we estimate synchronization times. For a target star with initial rotational and orbital angular velocities $\Omega=2\pi/{P\sub{rot}}$ and $\omega=2\pi/P_{\rm bin}$, subject to the tides of a companion star in an orbit with eccentricity $e$, the synchronization time $t_{sync}$ is calculated following \citet{Zahn08}, \begin{eqnarray} \label{eq:tsync} \frac{1}{t_{sync}}&=&\frac{1}{t_{diss}} \frac{f_2(e^2)}{(1-e^2)^6}\times\\\nonumber & & \times\left[1-\frac{(1-e^2)^{3/2}f_5(e^2)}{f_2(e^2)}\frac{\omega}{\Omega}\right]\times\\\nonumber & & \times\left(\frac{M\sub{field}}{M\sub{targ}}\right)^{2}\frac{M\sub{targ}R\sub{targ}^{2}}{I}\left(\frac{R\sub{targ}}{a}\right)^{6} \end{eqnarray} where: \begin{eqnarray*} f_2(e^2) & = & 1+\frac{15}{2}e^2+\frac{45}{8}e^4+\frac{5}{16}e^6\\ f_5(e^2) & = & 1+3e^2+\frac{3}{8}e^4 \end{eqnarray*} Here $I$ is the moment of inertia of the target star, and $M\sub{targ}$ and $R\sub{targ}$ are its mass and radius. $M\sub{field}$ is the mass of the star producing the tidal field. Moments of inertia ${\rm MoI}\equiv I/MR^2$ have been calculated from stellar evolution models \citep{Claret90}. ZAMS values of ${\rm MoI}=0.08$ for solar-mass stars, ${\rm MoI}=0.1$ for a 0.8 $M_\odot$ star and ${\rm MoI}=0.14$ for 0.6 $M_\odot$ stars have been used to interpolate the value of this parameter for other masses. For less massive stars we use values close to 0.23, which is the limit for less centrally concentrated substellar objects \citep{Leconte11}. Since we are interested in low mass binaries, i.e. $M_\star<1.5 \mbox{$M_{\odot}$}$, for which convection occurs throughout the whole star or in the outer envelope \citep{Baraffe97}, we assume that turbulent convection is the dominant tidal dissipation mechanism. 
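Equation (\ref{eq:tsync}) can be sketched directly (illustrative Python; dimensionless inputs, with $t\sub{diss}$ supplied externally). Setting the square bracket to zero also yields the pseudo-synchronous spin ratio:

```python
def f2(e2):  # 1 + 15/2 e^2 + 45/8 e^4 + 5/16 e^6, argument is e^2
    return 1 + 7.5*e2 + 5.625*e2**2 + 0.3125*e2**3

def f5(e2):  # 1 + 3 e^2 + 3/8 e^4
    return 1 + 3*e2 + 0.375*e2**2

def inv_t_sync(t_diss, e, omega_over_Omega, mass_ratio, moi, R_over_a):
    """1/t_sync from the weak-friction expression above.
    mass_ratio = M_field/M_targ; moi = I/(M_targ R_targ^2)."""
    e2 = e*e
    bracket = 1 - (1 - e2)**1.5*f5(e2)/f2(e2)*omega_over_Omega
    return (f2(e2)/(1 - e2)**6*bracket
            *mass_ratio**2*(1.0/moi)*R_over_a**6/t_diss)

def n_sync(e):
    """Spin/orbit ratio at which the bracket vanishes (pseudo-synchronization)."""
    e2 = e*e
    return f2(e2)/((1 - e2)**1.5*f5(e2))
```

For a circular orbit with $\Omega=\omega$ the bracket vanishes and the star no longer evolves, as expected.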
We compute the viscous dissipation time $t\sub{diss}$ directly from convective overturn times, following Eq. (2.8) in \citet{Zahn08}. For non-negligible eccentricities, tidal braking drives stars to a pseudo-synchronous final state whose rotational angular velocity is not exactly the average orbital angular velocity. The final rotational angular velocity in eccentric binaries is obtained when the term in the square brackets in Eq. (\ref{eq:tsync}) becomes zero: \begin{equation} n\sub{sync}\equiv\frac{\Omega\sub{sync}}{\omega}= \frac{1+\frac{15}{2}e^{2}+\frac{45}{8}e^{4}+\frac{5}{16}e^{6}} {(1-e^{2})^{\frac{3}{2}}(1+3e^{2}+\frac{3}{8}e^{4})} \end{equation} For eccentricities in the range 0-0.5, $1<n\sub{sync}<2.8$. Note that this equation is also Eq. (42) of \citet{Hut81}, where we use the average angular velocity rather than the instantaneous periastron velocity. \subsection{Planetary magnetospheres} \label{subsec:PM} For magnetic protection, we use the models of \citet{Zuluaga13}. Magnetosphere sizes, quantified by the standoff distance $R_S$, scale with the stellar wind dynamical pressure $P\sub{sw}=m n\sub{sw} v\sub{sw}^2$ and the planetary dipole moment ${\cal M}$ according to: \beq{eq:RS} \frac{R_S}{R_\oplus} = 9.75 \left(\frac{\cal M}{{\cal M}_\oplus}\right)^{1/3} \left(\frac{P\sub{sw}}{P\sub{sw\odot}}\right)^{-1/6} \end{equation} where ${\cal M}_\oplus = 7.768\times 10^{22}$ A m$^2$ and $P\sub{sw\odot}=2.24\times 10^{-9}$ Pa are the present Earth dipole moment and the average dynamical pressure of the solar wind at Earth. We note that during the first two Gyr, our thermal evolution models predict a dipole moment for Earth analogues of around 0.6 ${\cal M}_\oplus$ (see Figure 4 in \citealt{Zuluaga13}). Strong magnetic fields were probably required to create magnetosphere cavities large enough to protect early bloated atmospheres. 
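The standoff scaling (Eq. \ref{eq:RS}) in code form (illustrative Python; inputs in present-Earth units):

```python
def standoff_distance(dipole, p_sw):
    """Magnetopause standoff R_S in Earth radii; dipole in units of the
    present Earth dipole moment, p_sw in units of the present solar-wind
    dynamical pressure at Earth."""
    return 9.75*dipole**(1.0/3.0)*p_sw**(-1.0/6.0)

rs_earth = standoff_distance(1.0, 1.0)   # 9.75 R_E for the present Earth
rs_young = standoff_distance(0.6, 1.0)   # ~8.2 R_E for a 0.6 M_E dipole
```

The weak $-1/6$ pressure exponent is why even large changes in stellar aggression translate into modest changes of $R_S$.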
For a magnetosphere comparable in size to, or smaller than, that predicted for early Venus ($R\sub{S,Venus}=3.8\,R_p$), subject to similar levels of XUV radiation, we assume that magnetic protection is insufficient to prevent water loss similar to Venus. On the other hand, if the magnetosphere is larger than that of the early Earth ($R\sub{S,Earth}=4.5\,R_p$), the planet is magnetically protected. The Earth and Venus limits are drawn in the contour plots for $R_S$ in Figures \ref{fig:FXUV-Rs-1} and \ref{fig:FXUV-Rs-2}. We stress that while $R_S$ depends on the dipole moment, an intrinsic planetary property, it also depends on the stellar wind pressure and velocity. This is the reason why we can display $R_S$ in the $e-P_{bin}$ plane for Earth-like magnetic properties. \begin{figure} \epsscale{1.15} \plotone{Fig3_SyncTime.eps} \caption{Rotational synchronization times. The mass of the twin stars (left) and the mass of the secondary, with a solar-type primary (right), is on the vertical axis. Left: 1 Gyr (solid lines) and 5 Gyr (dashed lines) synchronization times for eccentricities 0.1, 0.3, and 0.5 are shown. Stars in binaries with high eccentricities become rotationally synchronized more quickly than those with low eccentricities. Right: shaded regions show where one or both stars become tidally locked during the first Gyr.} \label{fig:synctime} \end{figure} \section{Results and discussion} \label{sec:results} The relevant models are applied to study tidal synchronization effects on circumbinary habitability. Results for Kepler 34, Kepler 47 and Kepler 35, showing enhanced habitability, are given in Figure \ref{fig:FXUV-Rs-1}. Kepler 16, Kepler 38 and Kepler 64 results are given in Figure \ref{fig:FXUV-Rs-2}. The Kepler binaries (black triangles) are plotted along with the level of stellar aggression experienced by planets in the middle of the RHZ. Dark blue areas in the $e-P_{bin}$ plane correspond to binaries with reduced early XUV flux and enhanced magnetic protection (large $R_S$). 
We stress that the known planets of the Kepler binary systems are not studied here; rather, hypothetical Earth-like planets in circumbinary habitable zones are illustrated. Solar mass twins, like Kepler 34 (black triangle in the upper left panel of Figure \ref{fig:FXUV-Rs-1}), for example, provide an RHZ that has $\sim$ 60\% of the averaged XUV flux of the early Earth, $\langle F\sub{XUV,Earth}\rangle$. However, there is a spot, a binary habitable niche, at $P\sub{bin}\sim 15$ days and $e=0$ for which a mid-RHZ potentially habitable planet receives merely 20\% of the XUV flux of the early Earth. Fluxes less than that experienced by Venus exist for planets at mid-RHZ for all but the shortest period and highest eccentricity solar twins (orange region). Magnetospheric standoff radii show a similar trend. Binaries that synchronize in less than 1 Gyr may provide reduced stellar aggression. However, this is true only if the synchronization time is not too short (see the upper left corner of the $e-P_{bin}$ plane), due to the forever-young effect. In fact, Earth-like conditions exist near the inner edge of the RHZ for solar-like twins. A Venus-like planet with less magnetic protection than Earth could potentially maintain habitability in a system that could also have an Earth-twin and even a water world farther out in the RHZ. Hence multi-planet habitability is possible. Lower mass twins, or binaries with a solar-like primary plus a low mass companion, Kepler 35 and Kepler 47 respectively, provide habitable conditions for certain binary period and eccentricity combinations. However, the trend is towards less habitable conditions for lower mass binaries. Kepler 35 consists of 0.89 $M_{\sun}$ and 0.81 $M_{\sun}$ stars in a nearly circular orbit with a 21 day period. Its RHZ is exposed to about twice the XUV flux as the early Earth, but is still magnetically protected. The same result applies to Kepler 47. 
Binary habitable niches for analogues of those systems are found around $P\sub{bin}\sim 12$ days and $e\sim 0.2$. Results for the Kepler 16, Kepler 38 and Kepler 64 binaries are shown in Figure \ref{fig:FXUV-Rs-2}. These range from the very inhospitable Kepler 16, with two low-mass stars, to the planet-friendly Kepler 64. A dwarf K and M pair (dK and dM), like Kepler 16, has an RHZ which is exposed to a much higher XUV flux than Earth. Kepler 38 ($M_{1} = 0.98 M_{\sun}$ and $M_{2} = 0.25 M_{\sun}$) is a marginal case with Venus-like conditions at best. Single dM stars are generally considered uninhabitable based on their deadly XUV flux; however, when paired with a solar-type primary, a dM companion may tidally synchronize the primary, thereby providing a reduction of XUV flux. Kepler 64 ($M_{1} = 1.53 M_{\sun}$ and $M_{2} = 0.38 M_{\sun}$) provides reduced XUV flux and enhanced magnetic protection. It is, however, limited by the relatively short lifetime of the primary star. \bigskip Our results show that if tidal locking is capable of rotationally synchronizing stars in binaries within the first Gyr, then the RHZs may have reduced XUV and stellar wind fluxes. Planets with Earth-like magnetic fields in the RHZ of these binaries will likely retain atmospheric water. This effect is especially strong for solar-like (or larger) primaries. Planets that are less magnetically protected than the Earth may survive desiccation even in the inner region of the binary habitable zone. Not only is it possible for Earth-like planets to exist in circumbinary orbits for a wide range of binary parameters, but also atmospheres experiencing less erosion than Earth's are possible. This has implications for both the number of habitable planets in the Galaxy and the number of habitable planets per stellar system. We suggest that the paradigm that binaries are not as suitable as single stars for life should be shifted to include a significant number of potentially habitable circumbinary planets. 
\acknowledgments For additional material and updates please visit {\small \url{http://astronomia.udea.edu.co/binary-habitability}}. We appreciate comments by Ren\'e Heller and an anonymous referee. This research is supported in part by NSF grant 0958783. J.I. Zuluaga and P.A. Cuartas-Restrepo are supported by CODI-UdeA.
\section{Introduction}\label{sec:intro} The extension of mirror symmetry for Fano varieties beyond the toric context, where it is well-understood due to foundational work by Hori--Vafa \cite{horivafa}, Givental \cite{Givental98}, Lian--Liu--Yau \cite{lian} and others, is an area of active research. Grassmannians and flag varieties are central examples here, as Fano GIT quotients with a rich geometric and combinatorial structure. One of the oldest proposals for a mirror, or superpotential, for the Grassmannian $\Gr(n,r)$ was given by Eguchi--Hori--Xiong \cite{eguchi}. This was later generalized to type A flag varieties by Batyrev--Ciocan-Fontanine--Kim--van Straten \cite{flagdegenerations} (for simplicity, we refer to these mirrors as EHX mirrors). These proposals are motivated by taking toric degenerations of Grassmannians and flag varieties, and then applying toric methods to the singular fiber. However, the toric degeneration approach has not been successful in proving the required properties of these mirrors -- partial verification was completed for Grassmannians and flag varieties by Rietsch in \cite{lietheoretic} using the Lie theoretic superpotential, and a full verification for Grassmannians by Marsh--Rietsch \cite{MarshRietsch} using the Pl\"ucker coordinate mirror. The Pl\"ucker coordinate mirror for the Grassmannian is the most promising approach to mirror symmetry beyond the toric context. This remarkable construction connects the earlier, Lie theoretic proposals of the Grassmannian with the conjectures of the Fanosearch program and the toric degeneration approach. Extending the construction beyond the Grassmannian is thus an important problem. In \cite{kflags}, the second author introduces a conjectural Pl\"ucker coordinate mirror for type A flag varieties (see \cite{spacek, spacekwang} for recent progress on the subject in other types). 
As a first test of its validity, the second author proves in \cite{kflags} that the Pl\"ucker coordinate mirror is compatible with the EHX mirror. More is required, however: a superpotential or mirror should compute quantum information about the variety -- determining both the quantum relations and certain genus 0 Gromov--Witten invariants. In this paper, we prove a theorem in this direction. We show that partial derivatives of the Pl\"ucker coordinate mirror of a type A flag variety give quantum cohomology relations. To state the result carefully, we need some more background. The Pl\"ucker coordinate mirror of the Grassmannian is a rational function on the Grassmannian. As for toric varieties, there is a map from the Cox ring of the Grassmannian (the ring generated by Pl\"ucker coordinates) to the cohomology ring of the Grassmannian. Pl\"ucker coordinates of the Grassmannian of quotients $\Gr(n,r)$ are indexed by the same set as Schubert classes of the Grassmannian -- i.e. by partitions fitting into an $r \times (n-r)$ box -- and this map takes the Pl\"ucker coordinate $p_\lambda$ to the Schubert class $s_\lambda$. Under this map, partial derivatives of the Pl\"ucker coordinate mirror give quantum cohomology relations \cite{MarshRietsch}. For $n=:r_0$ and $\mathbf{r}=(r_1>\cdots>r_\rho>r_{\rho+1}:=0)$, let $\Fl(n;\mathbf{r}):=\Fl(n;r_1,\dots,r_\rho)$ be the partial flag variety of successive quotients of $\mathbb{C}^n$ of dimension $r_i$. The Pl\"ucker coordinate mirror of this flag variety proposed in \cite{kflags} is a rational function on a product of Grassmannians $Y=\prod_{i=1}^\rho \Gr(r_{i-1},r_i)$, with the convention $r_0:=n$. Using the cluster structures of the Grassmannian factors, this superpotential can be written as a Laurent polynomial in certain Pl\"ucker coordinates of each factor. 
We index Pl\"ucker coordinates on $Y$ by $p^i_\lambda$, where $i=1,\dots,\rho$ and $\lambda$ is a partition that fits into an $r_i \times (r_{i-1}-r_i)$ box. Schubert classes of the Grassmannian are indexed by partitions, and Schubert classes $\sigma_{\vec{\lambda}}$ in a flag variety are indexed by \emph{tuples} ${\vec{\lambda}}=(\lambda_1,\dots,\lambda_n)$ of partitions. To interpret partial derivatives of the Pl\"ucker coordinate mirror requires a map from the Cox ring of $Y$ to the cohomology of the flag variety. This is not the natural map given by \[p^i_\lambda \mapsto s_{\vec{\mu}},\] where $\vec{\mu}_j$ is $\lambda$ if $i=j$ and $\emptyset$ otherwise. Instead, we require the \emph{Schubert map} $F$ (see Definition \ref{def:Schubertmap}). Our main result is then the following. \begin{theorem} \label{thm:thmA} Let $W_P$ be the Pl\"ucker coordinate mirror of a flag variety, and $W_{P,C}$ the expression of $W_P$ in any choice of cluster charts. Then \[F\left(\frac{\partial}{\partial p^i_{\lambda}} W_{P,C}\right)=0\] in quantum cohomology, for any $i=1,\dots,\rho$ and $p^i_\lambda$ in the cluster chart. \end{theorem} This result represents a significant step towards a full verification of the Pl\"ucker coordinate mirror of flag varieties. If similar structure holds beyond the type A case, this result may also be important in extending candidate mirrors from cominuscule varieties to any homogeneous space. It elucidates the increased complexity relative to the Grassmannian case. It also demonstrates a previously unobserved structure relating the mirrors of Grassmannians and flag varieties: although not at all obvious from the description of the Schubert map, we show that it precisely interpolates between the Pl\"ucker coordinate mirror of the flag variety $\Fl(n;\mathbf{r})$ and containing Grassmannians $\Gr(N,r_1)$, $N\gg 0$. It is this property of the Schubert map which is key to the proof of Theorem A, as it essentially allows us to reduce to the Grassmannian case. 
This interpolation result is a corollary of Theorem B below, a purely quantum cohomology statement. Following the approach of \cite{fgp} and \cite{cf}, we use a ``quantization'' approach for the quantum cohomology ring of the flag variety. This and other descriptions will be reviewed in Sections \ref{sec:background} and \ref{sec:background2}. There is a natural ring homomorphism from the ring $\Lambda_{r_1}$ of symmetric polynomials in $r_1$ variables to $\mathrm{QH}^*\Fl(n;\mathbf{r})$, defined by sending elementary symmetric polynomials to certain quantum elementary symmetric polynomials. We write $s^1_\lambda \in \mathrm{QH}^*\Fl(n;\mathbf{r})$ for the image of a Schur polynomial $s_\lambda$ (see \eqref{eq:si} for more details). The first part of Theorem B states that for certain partitions $\lambda$, the image is a Schubert class (up to multiplication by quantum parameters), and the second part of Theorem B states that for another class of partitions, the image is zero. For $0<b\leq n$, let $0\leq I\leq \rho$ be such that $n-r_I<b\leq n-r_{I+1}$. In \S \ref{sec:theoremB}, we define the \emph{quantum hook} or \emph{$q$-hook} of width $b$ to be the partition $H_b := (b^{b-n+r_1},(b-n+r_I)^{n-r_{I+1}-b})$, and set $R_b:=(b^{b-n+r_1})$ to be the maximal width rectangle contained in $H_b$, with $H_b=R_b=\emptyset$ if $b<n-r_1$. Set $$q^{H_{b}}:= q_1^{r_1-r_2}\cdots(q_1\cdots q_{I-1})^{r_{I-1}-r_I}(q_1\cdots q_I)^{b-(n-r_I)}.$$ For a partition $\lambda$ that contains the $q$-hook of width equal to the width of $\lambda$, we associate a tuple of partitions $\vec{\mu}=(\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$ by subdividing the skew shape $\lambda/H_{\lambda_1}$ as in Figure \ref{fig:mu}, where $\mu^i\in P(r_{i-1},r_i)$ is of width $r_{i-1}-r_i$. (See Definition \ref{def:mu-lambda} for more details.) \begin{figure}[h!] 
\centering \begin{tikzpicture}[scale=.4] \fill[color=gray!40] (0,0) --(9, 0) -- (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- cycle; \draw[thick] (6, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7) -- (0, -7)--(0,-3) -- (0,0) -- (6,0); \draw[thick] (0,-7) -- (0,-9.67) -- (1.33,-9.67)-- (1.33,-9) -- (2,-9)-- (3.33,-9) -- (3.33,-8) -- (6.5,-8)--(6.5,-7)--(8.2,-7) -- (8.2,-6)--(9,-6) --(9,-5); \draw (2,-6) -- (2,-9); \draw (4.67,-5) -- (4.67,-8); \node[scale=.8] at (5, -6.1) {$\cdots$}; \draw (5.3,-5) -- (5.3,-8); \draw (7,-5) -- (7,-7); \node[scale=.8] at (1, -8) {$\mu^{I+1}$}; \node[scale=.8] at (3.3,-6.1) {$\mu^I$}; \node[scale=.8] at (6.1,-6.1) {$\mu^2$}; \node[scale=.8] at (7.6,-6.1) {$\mu^1$}; \node[scale=.8] at (4.5,-3) {$H_{\lambda_1}$}; \end{tikzpicture} \caption{A partition $\lambda$ containing $H_{\lambda_1}$, the skew shape $\lambda/H_{\lambda_1}$, and the associated tuple of partitions $\vec{\mu}=(\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$. } \label{fig:mu} \end{figure} \begin{theorem}\label{thm:thmB} Let $\lambda\subseteq r_1\times n$ be a partition, and let $I$ be such that $n-r_I<\lambda_1\leq n-r_{I+1}$. \begin{enumerate} \item[(a)] If $H_{\lambda_1}\subseteq \lambda$, then \[ s^1_\lambda = q^{H_{\lambda_1}}\sigma_{\vec{\mu}} \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}), \] where $\vec{\mu}=(\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$ is the tuple of partitions associated to $\lambda$ above. \item[(b)] If $\lambda$ contains $R_{\lambda_1}$, but $H_{\lambda_1}\not\subseteq \lambda$, then $$s^1_\lambda =0 \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}).$$ \end{enumerate} In particular, $s^1_{H_{\lambda_1}} = q^{H_{\lambda_1}}$ since $H_{\lambda_1}/H_{\lambda_1}=\emptyset$, so $\mu^j=\emptyset$ for all $j$ and $\sigma_{(\emptyset,\ldots,\emptyset)} = 1$. 
\addtocounter{theorem}{-1} \end{theorem} In \S \ref{sec:background}, we review the necessary background on the quantum cohomology of Grassmannians and flag varieties, and in \S \ref{sec:background2}, we discuss the EHX and Pl\"ucker coordinate mirrors of the Grassmannian. In \S \ref{sec:theoremA}, we describe the Schubert map and prove Theorem A, and in \S \ref{sec:theoremB}, we study $q$-hooks and prove Theorem B. \subsection*{Acknowledgements} The authors would like to thank Konstanze Rietsch, Dave Anderson, and Jennifer Morse for helpful conversations. \section{Quantum cohomology of flag varieties}\label{sec:background} \subsection{Permutations and Schubert classes} Fix an $n$-dimensional vector space $V$ and a tuple of integers $\mathbf{r}= (n>r_1>\cdots > r_\rho>0)$ and let $\Fl(n;\mathbf{r})= \Fl(n;r_1,\dots,r_\rho) $ denote the partial flag variety parametrizing successive quotient flags of $V$ of dimensions $r_i$. It comes equipped with a tautological sequence of quotient bundles $V_{\Fl(n;\mathbf{r})} \twoheadrightarrow Q_1 \twoheadrightarrow \cdots \twoheadrightarrow Q_\rho$ of ranks $r_1,\dots,r_\rho$. The basis of Schubert classes for $\Fl(n;\mathbf{r})$ consists of geometrically described cohomology classes that are commonly indexed by \[ S(n;\mathbf{r}):= \{w\in S_n: w(i)<w(i+1) \text{ if } i\not\in\mathbf{r}\},\] the set of permutations in $S_n$ whose descent set is contained in $\{r_1,\ldots,r_\rho\}$. If $S_{n,\mathbf{r}}$ is the parabolic subgroup of $S_n$ generated by the simple transpositions $(i,i+1)$ for $i\not\in\mathbf{r}$, then $S(n;\mathbf{r})$ is a set of coset representatives for $S_n/S_{n,\mathbf{r}}$. For a permutation $w\in S(n;\mathbf{r})$, define $r_w(p,q) = \#\{i \leq p \,|\, w(i)\leq q\}.$ This is the rank of the upper-left $p\times q$ submatrix of the permutation matrix corresponding to $w$ (which has $1$'s in positions $(i,w(i))$ and $0$'s elsewhere). The \emph{length} of $w$ is the number $\ell(w) = \#\{ i<j \,|\, w(i)>w(j) \}$. 
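For concreteness, the rank function and length admit a direct sketch (illustrative Python, not from the paper; permutations in one-line notation with 1-indexed values):

```python
def rank_fn(w, p, q):
    """r_w(p, q) = #{ i <= p : w(i) <= q }, the rank of the upper-left
    p x q submatrix of the permutation matrix of w."""
    return sum(1 for i in range(p) if w[i] <= q)

def length(w):
    """l(w) = #{ i < j : w(i) > w(j) }, the number of inversions."""
    return sum(1 for i in range(len(w))
               for j in range(i + 1, len(w)) if w[i] > w[j])
```

For example, $w=[2,4,1,3]$ has $\ell(w)=3$ and $r_w(2,3)=1$.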
Let $E_\bullet$ be a flag of trivial vector bundles on $\Fl(n;\mathbf{r})$. Then the Schubert variety $\Omega_w = \Omega_w(E_\bullet) \subseteq \Fl(n;\mathbf{r})$ is defined by \[ \Omega_w = \{ x\in \Fl(n;\mathbf{r}) \,|\, \rk(E_q \to Q_p) \leq r_w(r_p,q) \text{ for all } 1\leq q\leq n \text{ and } r_p\in\mathbf{r}\}. \] We write $\sigma_w$ for the corresponding Schubert class in $\mathrm{H}^{2\ell(w)}\Fl(n;\mathbf{r})$. The unique permutation of longest length in $S(n;\mathbf{r})$ is given explicitly by \[ w^\circ = [\, n-r_\rho+1,\ldots,n,\; \ldots,\; n-r_{1}+1, \ldots, n-r_2,\; 1,2,\ldots,n-r_1\,]. \] Its length is $\ell(w^\circ) = \dim\Fl(n;\mathbf{r})$. There is an involution on $S(n;\mathbf{r})$, defined using the longest element $w_\circ$ of $S_n$ and the longest element $w_\circ^{\mathbf{r}}$ of $S_{n,\mathbf{r}}$: \[ w^\vee = w_\circ \cdot w \cdot w_\circ^\mathbf{r}. \] This is an element of $S(n;\mathbf{r})$, with $\ell(w^\vee) = \dim\Fl(n;\mathbf{r}) - \ell(w)$. The classes $\sigma_{w^\vee}$ form a Poincar\'e dual basis: $\int_{\Fl(n;\mathbf{r})} \sigma_w \cup \sigma_{v^\vee} = \delta_{w,v}$. \subsection{Another basis and tuples of partitions} We describe another basis for $\mathrm{H}^*\Fl(n;\mathbf{r})$ in terms of tuples of partitions. Consider the set \[P(n,\mathbf{r}):=\prod_{i=1}^\rho P(r_{i-1},r_{i}),\] where we set $r_0:=n$ and $r_{\rho+1}:=0$, and where $P(a,b)$ denotes the set of partitions inside a $b \times (a-b)$ rectangle. \begin{rem} \label{rem:bijection} There is a bijection between permutations in $S(n,\mathbf{r})$ and tuples of partitions in $P(n,\mathbf{r})$. Given a tuple $\vec{\mu}=(\mu^1,\ldots,\mu^\rho)\in P(n,\mathbf{r})$, for $1\leq i\leq \rho$, denote by $w^i$ the Grassmannian permutation in $S_n$ with possible descent at $r_i$ defined by the partition $\mu^i\in P( r_{i-1},r_i)\subseteq P(n,r_i)$, i.e.
$\mu^i = (w^i(r_i)-r_i,\ldots, w^i(1)-1)$, so that $w^i = w_{(\emptyset,\ldots, \mu^i, \ldots,\emptyset)}$. Then the tuple $\vec{\mu} \in P(n,\mathbf{r})$ corresponds to the permutation \[ w_{\vec{\mu}} := w_{(\mu^1,\emptyset,\ldots)} \cdots w_{(\emptyset,\ldots,\mu^\rho)} = w^1w^2\cdots w^\rho.\] On the other hand, given $w\in S(n,\mathbf{r})$, we can produce a tuple $\vec{\mu}=(\mu^1,\ldots,\mu^\rho)$ by reversing this procedure. (See also \cite{WiAG}.) \end{rem} If a tuple of partitions $\vec{\mu}=(\mu^1,\ldots,\mu^\rho)\in P(n,\mathbf{r})$ corresponds to the permutation $w$ under the bijection in Remark \ref{rem:bijection}, we also write the Schubert class $\sigma_w$ as $\sigma_{\vec{\mu}}$. \begin{eg} \label{eg:permutation} Consider the flag variety $\Fl(8;6,4,3)$ with $n=8$ and $\mathbf{r}=(6,4,3)$. For the tuple $\left(\yng(2,1),\yng(1,1),\emptyset\right)$ in $P(n,\mathbf{r})$, we have $w^1=[123457|68]$, $w^2=[1245|3678]$, and $w^3=\mathrm{id}$, with descents marked at $r_1=6$ and $r_2=4$. The corresponding permutation in $S(n,\mathbf{r})$ is $w=w^1w^2w^3=[1245|37|68]$. Similarly, $\left( \yng(2,1),\yng(2,2,1),\yng(1,1,1)\right)$ corresponds to the permutation $ [123468|57]\cdot [1356|2478]\cdot [234|15678]=[368|1|24|57]$, and $\left(\yng(1),\yng(2,1),\yng(1,1)\right)$ corresponds to $[123457|68]\cdot [1246|3578]\cdot[134|25678]=[147|2|35|68]$. \end{eg} For a partition $\lambda\in P(r_{i-1},r_i)$, we define the class $s^i_\lambda$ to be the Schur polynomial associated to the partition $\lambda$ in the Chern roots of $Q_i$, the rank $r_i$ tautological quotient bundle on $\Fl(n;\mathbf{r})$: \begin{equation} \label{eq:s-def} s^i_\lambda=\det(s^i_{1^{\lambda'_k+l-k}}). \end{equation} Note that $s^i_{1^a}=c_a(Q_i)$ is the $a$th Chern class of the bundle $Q_i$, so that $s^i_{1^a}=e_a(r_i)$.
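To illustrate \eqref{eq:s-def}, take $\lambda=(2,1)$, so that $\lambda'=(2,1)$ and the matrix is $2\times 2$ with rows $(s^i_{1^2},\, s^i_{1^3})$ and $(s^i_{1^0},\, s^i_{1^1})$; hence \[ s^i_{(2,1)} = s^i_{1^2}\,s^i_{1^1}-s^i_{1^3} = c_1(Q_i)\,c_2(Q_i)-c_3(Q_i), \] recovering the dual Jacobi--Trudi identity $s_{(2,1)}=e_1e_2-e_3$.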
Via the bijection in Remark \ref{rem:bijection}, $s^i_\lambda$ is equal to the Schubert class $$s^i_\lambda= \sigma_{(\emptyset,\ldots,\lambda,\ldots,\emptyset)}$$ associated to the tuple of partitions whose $i$th entry is $\lambda$ and whose other entries are empty. Note that we can use \eqref{eq:s-def} to define $s^i_\lambda$ even when $\lambda \not \in P(r_{i-1},r_i)$, although it is no longer a Schubert class in general. \begin{rem} Given a tuple of partitions $\vec{\mu}=(\mu^1,\ldots,\mu^\rho)$, we obtain another important class $$s_{\vec{\mu}}:= s^1_{\mu^1} \cdots s^\rho_{\mu^\rho}.$$ Running over all $\vec{\mu}$, we obtain another basis for the cohomology of the flag variety. The two bases $\{\sigma_{\vec{\mu}}\}$ and $\{s_{\vec{\mu}}\}$ are distinct, except in the case of the Grassmannian. \end{rem} \subsection{Quantum cohomology} The \emph{quantum cohomology ring} $\mathrm{QH}^*\Fl(n;\mathbf{r})$ is a commutative and associative graded algebra over $\mathbb{Z}[q_1,\ldots,q_\rho]$, where $q_i$ is a parameter of degree $r_{i-1}-r_{i+1}$. As a module, $\mathrm{QH}^*\Fl(n;\mathbf{r})$ is simply $\mathbb{Z}[q]\otimes_\mathbb{Z} H^*\Fl(n;\mathbf{r})$, so it has a $\mathbb{Z}[q]$-basis of Schubert classes $\sigma_w$: \[ \mathrm{QH}^*\Fl(n;\mathbf{r}) = \bigoplus_{w\in S(n;\mathbf{r})} \mathbb{Z}[q]\cdot \sigma_w. \] The quantum product is a deformation of the usual product. For permutations $u,v\in S(n;\mathbf{r})$, define a product by \[ \sigma_u * \sigma_v = \sum_{w,\dd} \qq^\dd\, c_{u,v}^{w,\dd}\, \sigma_w, \] where $\dd$ ranges over $\rho$-tuples of nonnegative integers, and the \emph{three-pointed Gromov-Witten invariant} $c_{u,v}^{w,\dd}$ is defined as follows.
Let $\overline{M}_{0,3}(\Fl(n;\mathbf{r}),\dd)$ be the Kontsevich moduli space of three-pointed genus-zero stable maps to $\Fl(n;\mathbf{r})$ of degree $\dd$, parametrizing data $(f,C,(x_1,x_2,x_3))$, where $C$ is a genus-zero curve with marked points $x_i$, $f:C\to\Fl(n;\mathbf{r})$ is a map of degree $\dd$, and a certain stability condition is imposed \cite{Kontsevich95}. The space of stable maps is of dimension $\dim \Fl(n;\mathbf{r}) + \sum_{i=1}^\rho d_i(r_{i-1}-r_{i+1})$, and comes with natural evaluation morphisms \[\ev_i: \overline{M}_{0,3}(\Fl(n;\mathbf{r}),\dd)\to \Fl(n;\mathbf{r}) \] for $1\leq i\leq 3$ that send $(f,C,(x_1,x_2,x_3))$ to $f(x_i)$. Now one defines $c_{u,v}^{w,\dd}= \pi_*( \ev_1^*\sigma_u \cdot \ev_2^*\sigma_v \cdot \ev_3^*\sigma_{w^\vee} ),$ where $\pi_*$ denotes the pushforward to a point. This defines an associative product. See \cite{fp} for more details on quantum cohomology. \subsection{Quantum cohomology of flag varieties} \label{sec:qh} The Schubert polynomials of Lascoux and Sch\"utzenberger are defined inductively, starting from $\Sch_{w_\circ}(x) = x_1^{n-1} x_2^{n-2} \cdots x_{n-1}$ and moving down Bruhat order using divided difference operators \cite{ls}. For any $w\in S_n$, the polynomial $\Sch_w(x)$ has a unique expansion in terms of elementary symmetric polynomials: \begin{equation}\label{e:sch-elem} \Sch_w(x) = \sum a_{k_1\ldots k_{n-1}}\,e_{k_1}(1) \cdots e_{k_{n-1}}(n-1) \end{equation} over sequences $(k_1,\ldots,k_{n-1})$ with $0\leq k_j\leq j$ and $\sum k_j=\ell(w)$, where the $a_{k_1\ldots k_{n-1}}$ are integers and $e_k(j):=e_k(x_1,\ldots,x_j)$ is the $k$th elementary symmetric polynomial in the variables $x_1,\ldots,x_j$. Let \[ \sigma_1^\rho,\ldots,\sigma_{r_\rho}^\rho,\,\sigma_1^{\rho-1},\ldots,\sigma_{r_{\rho-1}-r_\rho}^{\rho-1},\ldots,\sigma_1^{0},\ldots,\sigma_{n-r_1}^{0} \] be $n$ independent variables, with $\sigma_i^j$ of degree $i$.
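Before passing to the quantum deformation, we illustrate \eqref{e:sch-elem}: for $w=[3,1,2]\in S_3$ one has $\Sch_w(x)=x_1^2$, and since $e_1(1)e_1(2)=x_1^2+x_1x_2$ and $e_2(2)=x_1x_2$, the expansion reads \[\Sch_{[3,1,2]}(x)=e_1(1)e_1(2)-e_2(2);\] in particular, the integers $a_{k_1\ldots k_{n-1}}$ may be negative.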
To form quantum polynomials for $\Fl(n;\mathbf{r})$, one replaces $e_k(j)$ with {\em quantum elementary polynomials} $e^q_k(r_l)$, which are defined for $r_l \in\mathbf{r}$ and $r_0=n$ recursively by \begin{equation} \label{eq:eq-recursion} e^{\mathbf{r},q}_a(r_{l-1}) = \sum_{m=0}^{r_{l-1}-r_l} \sigma_m^l e^{\mathbf{r},q}_{a-m}(r_l) + (-1)^{r_{l-1}-r_l+1}q_l \, e^{\mathbf{r},q}_{a-(r_{l-1}-r_{l+1})}(r_{l+1}), \end{equation} where we set $e^q_0(r_l)=1$ and $e^q_m(r_l)=0$ if either $m<0$ or $m>r_l$. When $\mathbf{r}$ is understood, we simply write $e^q_k(r_l)$ for $e^{\mathbf{r},q}_k(r_l)$. (Our conventions here differ from those found elsewhere in the literature, e.g. our $r_l$ and $\sigma_i^j$ correspond to $n_{\rho+1-l}$ and $\sigma_i^{\rho+1-j}$ in \cite{cf}.) From \cite{cf,kimpresentation}, we know a presentation of the quantum cohomology ring and polynomial representatives of the quantum Schubert classes: \[ \mathrm{QH}^*\Fl(n;\mathbf{r}) \cong \mathbb{Z}[q][\ \sigma_1^\rho,\ldots,\sigma_{r_\rho}^\rho,\ldots,\sigma_1^{0},\ldots,\sigma_{n-r_1}^{0}]/I^q, \] where $I^q$ is the ideal $({e}_1^{\mathbf{r},q}(n),\ldots,{e}_n^{\mathbf{r},q}(n))$ generated by the $n$ relations ${e}_a^{\mathbf{r},q}(r_0)=0$, $a=1,\ldots,n$, which specialize to the known relations defining $H^*\Fl(n;\mathbf{r})$ when $q \mapsto 0$, and \[ \sigma_w = \Sch^{\mathbf{r},q}_w(\sigma)\] for $w\in S(n;\mathbf{r})$, where the {\em quantum Schubert polynomial} $\Sch^{\mathbf{r},q}_w(\sigma)$ is formed by substituting $e^{\mathbf{r},q}_k(r_l)$ for $e_k(j)$ on the RHS of \eqref{e:sch-elem} whenever $j\in[r_l,r_{l-1})$. The quantum structure constants of the alternate basis, $s_{\vec{\mu}}$, can be computed using rim-hook removals via the Abelian/non-Abelian correspondence \cite{gukalashnikov}. \subsection{Determinantal formulas} In Section \ref{sec:theoremB}, we will study certain skew shapes $\lambda/\mu$ along with a labeling $\omega(i,j)=r_1+i-j$.
By \cite{bjs}, associated to $(\lambda/\mu,\omega)$ is a 321-avoiding permutation $w$ whose corresponding Schubert polynomial is equal to a \emph{flagged skew Schur polynomial} that can be expressed as a determinant: \begin{equation}\label{eq:schub-det} \Sch_w(x) = \left| e_{\lambda'_i-\mu'_j+j-i}(f_j) \right|_{1\leq i,j\leq t} \end{equation} where $f_j=\omega(j,\lambda'_j)=r_1+j-\lambda'_j$ is the ``flagging'' associated to $w$. For a skew shape $\lambda/\mu$ and $\phi=(\phi_1,\ldots,\phi_t)$ with $1\leq \phi_i \leq \rho$, define \begin{equation} \label{eq:q-determinant} \Delta_{\lambda/\mu}(e^q(\phi)):=\left| e^q_{\lambda'_i-\mu'_j+j-i}(r_{\phi_j}) \right|_{1\leq i,j\leq t}. \end{equation} When $\phi_j$ is defined by $r_{\phi_j} \leq f_j <r_{\phi_j -1}$, substituting $e_k(j)=e^q_k(r_l)$ in \eqref{eq:schub-det} as in the discussion in Section \ref{sec:qh}, we obtain a determinantal expression for the quantum Schubert class: \begin{equation} \label{eq:q-Schubert}\sigma_w = \Delta_{\lambda/\mu}(e^q(\phi)) \text{ in }QH^*\Fl(n;\mathbf{r}). \end{equation} We can also define quantum classes $s^i_\lambda$ for partitions $\lambda$ by computing the determinant \eqref{eq:s-def} using the quantum product. When $\lambda\in P(r_{i-1},r_i)$, this gives the quantum Schubert class $\sigma_{(\emptyset,\ldots,\lambda,\ldots,\emptyset)}$, but $s^i_\lambda$ is also defined when $\lambda\not\in P(r_{i-1},r_i)$. In particular, since $s^i_{1^a}=e_a(r_i)$ classically, we have $s^i_{1^a} = e^q_a(r_i)$ in $\mathrm{QH}^*\Fl(n;\mathbf{r})$ and \begin{equation}\label{eq:si} s^i_\lambda= \left| s^i_{1^{\lambda'_k+l-k}} \right|_{1\leq k,l\leq \lambda_1} =\Delta_\lambda(e^q(\phi)), \end{equation} where $\phi=(i,\ldots,i)$. \begin{rem} \label{rem:skewzero} If $\mu\not\subseteq\lambda$, then $\lambda'_k<\mu'_k$ for some $1\leq k\leq t$. If $i\geq k$ and $j\leq k$, the $(i,j)$th entry of the matrix in $\Delta_{\lambda/\mu}$ is indexed by $\lambda'_i-\mu'_j+j-i\leq\lambda'_k-\mu'_k<0$, and so is zero.
The $t-k+1$ rows $k,\ldots,t$ of the matrix are therefore supported on the $t-k$ columns $k+1,\ldots,t$, so they are linearly dependent and $\Delta_{\lambda/\mu} =0$. \end{rem} \section{Cluster structure and superpotentials}\label{sec:background2} \subsection{The cluster structure of the Grassmannian} In this section, we recall some brief facts about the cluster structure of the Grassmannian. Good references include \cite{scott, RW}. Fix a Grassmannian of quotients $\Gr(n,r)$. Pl\"ucker coordinates on the Grassmannian are indexed by partitions $\lambda$ fitting in an $r \times (n-r)$ box, i.e. by $\lambda \in P(n,r)$. The homogeneous coordinate ring of the Grassmannian is generated by the $p_\lambda$, $\lambda \in P(n,r)$, and relations are given by the Pl\"ucker relations. This ring, as well as certain localizations of it, has a cluster structure. Certain sets of algebraically independent Pl\"ucker coordinates are clusters. An important example of a cluster is the \emph{rectangles cluster}. \begin{mydef} The \emph{rectangles cluster} is the set of Pl\"ucker coordinates indexed by all partitions $\lambda \in P(n,r)$ such that $\lambda$ is a rectangle. \end{mydef} One cluster can be obtained from another via \emph{mutation}. These mutations arise from three-term Pl\"ucker relations \cite{scott}. The three term quadratic Pl\"ucker relations are of the form \[p_\lambda p_\mu = p_a p_b + p_c p_d,\] where $\lambda, \mu, a, b, c, d \in P(n,r)$ are six partitions related in a particular way. A cluster containing $p_\lambda, p_a, p_b, p_c,$ and $p_d$ can be mutated to one containing $p_\mu, p_a, p_b, p_c,$ and $p_d$. Any cluster is related to any other by a series of mutations of this form. \subsection{Superpotentials of the Grassmannian} The Eguchi--Hori--Xiong (EHX) superpotential for Grassmannians is described by building a ladder diagram for the Grassmannian, and superimposing a dual quiver on the diagram. The ladder diagram for the Grassmannian $\Gr(n,r)$ is an $r \times (n-r)$ grid.
There is a toric degeneration of the Grassmannian to the quiver moduli space described by a quiver originating from the ladder diagram. The superpotential is given by a head-over-tails process on the dual quiver. We illustrate this briefly in the example $\Gr(5,2)$: the ladder diagram is a $2 \times 3$ grid: \[\begin{tikzpicture}[scale=0.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,1) rectangle (3,2); \draw (2,0) rectangle (3,1); \end{tikzpicture}.\] The dual quiver is then: \[\begin{tikzpicture}[scale=0.6] \draw[gray] (0,0) rectangle (1,1); \draw[gray] (1,1) rectangle (2,2); \draw[gray] (0,1) rectangle (1,2); \draw[gray] (1,0) rectangle (2,1); \draw[gray] (2,1) rectangle (3,2); \draw[gray] (2,0) rectangle (3,1); \node[circle,fill,scale=0.3] at (0.5,0.5) (a) {}; \node[circle,fill,scale=0.3] at (1.5,0.5) (b) {}; \node[circle,fill,scale=0.3] at (2.5,0.5) (c) {}; \node[circle,fill,scale=0.3] at (0.5,1.5) (e) {}; \node[circle,fill,scale=0.3] at (0.5,2.5) (f) {}; \node[circle,fill,scale=0.3] at (1.5,1.5) (i) {}; \node[circle,fill,scale=0.3] at (2.5,1.5) (k) {}; \node[circle,fill,scale=0.3] at (3.5,0.5) (m) {}; \draw[<-] (c)--(b); \draw[<-] (b)--(a); \draw[<-] (m)--(c); \draw[<-] (k)--(i); \draw[<-] (i)--(e); \draw[->] (e)--(a); \draw[->] (f)--(e); \draw[->] (i)--(b); \draw[->] (k)--(c); \end{tikzpicture}.\] In general, to form the dual quiver, place a vertex in each box of the ladder diagram, as well as one at the top left and bottom right of the diagram, and then add arrows oriented down and right. To obtain the superpotential, assign to each of the vertices a variable $z_{ij}$, where $i$ indicates the row (starting at 0) and $j$ the column (starting at 1). We set $z_{01}=1$ and $z_{r(n-r+1)}=q$.
The EHX superpotential is then: \[W_{EHX}=\sum_{a} \frac{z_{h(a)}}{z_{t(a)}}.\] The sum is over the arrows of the quiver, and $h(a)$ and $t(a)$ indicate the head and tail of an arrow respectively. \begin{eg} The EHX superpotential for $\Gr(4,2)$ is \[z_{11}+\frac{z_{12}}{z_{11}}+\frac{z_{21}}{z_{11}}+\frac{z_{22}}{z_{12}}+\frac{z_{22}}{z_{21}}+\frac{q}{z_{22}}.\] \end{eg} A superpotential is a mirror to a Fano manifold if information about the genus 0 Gromov--Witten invariants of the Fano manifold can be computed by the superpotential. More precisely, one or both of the following conditions might hold: \begin{enumerate} \item The period sequence of the superpotential is equal to the regularized quantum period of the Fano manifold (see \cite{fanomanifolds} for definitions and details). \item The Jacobi ring of the superpotential computes the quantum cohomology ring of the Fano manifold. \end{enumerate} The first condition was the original conjecture of Eguchi--Hori--Xiong, later proved by Marsh--Rietsch \cite{MarshRietsch} for the Grassmannian. This conjecture remains open for flag varieties. The second condition -- that the superpotential produces relations in the quantum cohomology ring -- is the central focus of the paper. We first discuss the proof in the case of the Pl\"ucker coordinate mirror for the Grassmannian, introduced by Marsh--Rietsch in \cite{MarshRietsch}; the same statement for the EHX mirror is obtained as a corollary. To construct the Pl\"ucker coordinate mirror of the Grassmannian $\Gr(n,r)$, take $n$ equations of the form \[s_{\yng(1)} * s_\lambda=q^i s_{\mu}\] where $i=0,1$ depending on the partition. Here $\lambda=(a,\dots,a) \in P(n,r)$ is either the empty partition or a rectangular partition that is either maximally wide or maximally tall: we denote the set of such partitions by $M(n,r)$. \begin{rem} $M(n,r)$ is the set of \emph{frozen variables} in the cluster structure of the Grassmannian: they appear in every cluster.
\end{rem} Note that the sum \[ \sum_{\lambda \in M(n,r)} \frac{q^{i} s_\mu}{s_\lambda}\] is equal to $n s_{\yng(1)}=-K_{\Gr(n,r)}$, the anti-canonical class of the Grassmannian. An analogous statement is true for the Hori--Vafa mirror of a toric variety. To transform the sum into a (rational) function, every Schubert class $s_\lambda$ is replaced with the Pl\"ucker coordinate $p_\lambda$. \begin{eg} The Marsh--Rietsch Pl\"ucker coordinate superpotential for $\Gr(4,2)$ is \[\frac{p_{\yng(1)}}{p_{\emptyset}}+\frac{p_{\yng(2,1)}}{p_{\yng(2)}}+\frac{p_{\yng(2,1)}}{p_{\yng(1,1)}}+\frac{q p_{\yng(1)}}{p_{\yng(2,2)}}.\] \end{eg} Following \cite{MarshRietsch}, we denote the open subvariety on which the Pl\"ucker coordinate superpotential is a function (i.e. where $p_\lambda \neq 0$ for all $\lambda \in M(n,r)$) as $\Gr(n,n-r)^\circ$. Using Pl\"ucker relations, we can expand the Pl\"ucker coordinate mirror into a Laurent polynomial in each cluster chart in the cluster structure on the coordinate ring of the Grassmannian. \begin{eg} We can use the three term Pl\"ucker relation \[p_{\yng(1)} p_{\yng(2,1)}=p_{\yng(1,1)}p_{\yng(2)}+p_{\emptyset} p_{\yng(2,2)}\] to find that in the rectangles cluster chart, the mirror for $\Gr(4,2)$ is \[\frac{p_{\yng(1)}}{p_{\emptyset}}+\frac{p_{\yng(2)}}{p_{\yng(1)}}+\frac{p_{\yng(1,1)}}{p_{\yng(1)}}+\frac{p_{\yng(2,2)}}{p_{\yng(2)}}+\frac{p_{\yng(2,2)}}{p_{\yng(1,1)}}+\frac{q p_{\yng(1)}}{p_{\yng(2,2)}}.\] \end{eg} In each cluster chart, one can compute the critical locus by setting the partial derivatives to zero: $\frac{\partial}{\partial p_\lambda}W_C=0$ for each $p_\lambda$ in the cluster $C$. It is clear how to interpret these equations as candidate relations in quantum cohomology: both Pl\"ucker coordinates and Schubert classes of the Grassmannian $\Gr(n,r)$ are indexed by the same set of partitions, $\lambda \subseteq r \times (n-r)$. \begin{thm}[\cite{MarshRietsch}] The Jacobi ring of the Pl\"ucker coordinate mirror is isomorphic to the quantum cohomology ring of the Grassmannian.
\end{thm} The Pl\"ucker coordinate mirror is a compactification of the EHX mirror: that is, \begin{prop}[\cite{MarshRietsch}]\label{pro:isogr} The Pl\"ucker coordinate mirror in the rectangles cluster chart is isomorphic to the EHX mirror under the map \[z_{ij} \mapsto \frac{p_{i \times j}}{p_{(i-1)\times {(j-1)}}}.\] \end{prop} This proposition and theorem can be combined to show the following theorem: \begin{thm}[\cite{MarshRietsch}] \label{thm:thmMR} Let $F: \mathbb{C}[z_{ij}] \to \mathrm{QH}^*\Gr(n,r)[s_\lambda^{-1}: \lambda \in R]$ be the map given by $z_{ij} \mapsto \frac{s_{i \times j}}{s_{(i-1)\times {(j-1)}}}$, where $R$ denotes the set of rectangular partitions in $P(n,r)$. Then for any $z_{ij}$, \[F\left(\frac{\partial}{\partial z_{ij}} W_{EHX}\right)=0.\] \end{thm} \subsection{Superpotentials of flag varieties} We first recall the Batyrev--Ciocan-Fontanine--Kim--van Straten generalization of the EHX mirror to flag varieties \cite{flagdegenerations}. Fixing $\Fl(n;r_1,\dots,r_\rho)$, for each Grassmannian step $\Gr(r_{i-1},r_i)$ draw an $r_i \times (r_{i-1}-r_i)$ grid of boxes, placing them together corner to corner. For example, the ladder diagram of $\Fl(5;4,2,1)$ is \[\begin{tikzpicture}[scale=0.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,1) rectangle (3,2); \draw (0,2) rectangle (1,3); \draw (0,3) rectangle (1,4); \draw (2,0) rectangle (3,1); \draw (3,0) rectangle (4,1); \end{tikzpicture}.\] The dual quiver is similar to the Grassmannian case. There are vertices inside each box, as well as at the top left and bottom right corners and in the inner corner of each step of the diagram.
In this example, it is \[\begin{tikzpicture}[scale=0.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,1) rectangle (3,2); \draw (0,2) rectangle (1,3); \draw (0,3) rectangle (1,4); \draw (2,0) rectangle (3,1); \draw (3,0) rectangle (4,1); \node[circle,fill,scale=0.3] at (0.5,0.5) (a) {}; \node[circle,fill,scale=0.3] at (1.5,0.5) (b) {}; \node[circle,fill,scale=0.3] at (2.5,0.5) (c) {}; \node[circle,fill,scale=0.3] at (3.5,0.5) (d) {}; \node[circle,fill,scale=0.3] at (0.5,4.5) (l) {}; \node[circle,fill,scale=0.3] at (0.5,1.5) (e) {}; \node[circle,fill,scale=0.3] at (0.5,2.5) (f) {}; \node[circle,fill,scale=0.3] at (0.5,3.5) (g) {}; \node[circle,fill,scale=0.3] at (4.5,0.5) (h) {}; \node[circle,fill,scale=0.3] at (1.5,1.5) (i) {}; \node[circle,fill,scale=0.3] at (1.5,2.5) (j) {}; \node[circle,fill,scale=0.3] at (2.5,1.5) (k) {}; \node[circle,fill,scale=0.3] at (3.5,1.5) (m) {}; \draw[<-] (h)--(d); \draw[<-] (d)--(c); \draw[<-] (c)--(b); \draw[<-] (b)--(a); \draw[<-] (m)--(k); \draw[<-] (k)--(i); \draw[<-] (i)--(e); \draw[<-] (j)--(f); \draw[->] (e)--(a); \draw[->] (f)--(e); \draw[->] (g)--(f); \draw[->] (i)--(b); \draw[->] (j)--(i); \draw[->] (k)--(c); \draw[->] (l)--(g); \draw[->] (m)--(d); \end{tikzpicture}.\] Assigning to each of the vertices a variable $z_v$, the EHX superpotential is: \[W_{EHX}=\sum_{a} \frac{z_{h(a)}}{z_{t(a)}}.\] In \cite{kflags}, the second author proposes a generalization of the Pl\"ucker coordinate mirror from Grassmannians to type A flag varieties. We recall the construction now. Fix a flag variety $\Fl(n;r_1,\dots,r_\rho)$. For each $i=1,\dots,\rho$, we can consider $r_{i-1}$ equations \[s^i_{\yng(1)} * s^i_\lambda=F^i_\lambda,\] where $\lambda \in M(r_{i-1},r_i)$, and $F^i_\lambda$ is simply the expansion of the left hand side in quantum Schubert calculus. This can be described explicitly -- see \cite{kflags} for details.
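For instance, for $\Fl(4;2,1)$, the equation attached to $\lambda=\yng(2)\in M(4,2)$ takes the form \[s^1_{\yng(1)} * s^1_{\yng(2)}=s^1_{\yng(2,1)}+q_1,\] so that $F^1_{\yng(2)}=s^1_{\yng(2,1)}+q_1$.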
As in the Marsh--Rietsch construction, we can use this to obtain an expression of the anti-canonical class of the flag variety: \[\sum_{i=1}^\rho \left(\left(\sum_{\lambda \in M(r_{i-1},r_i)} \frac{F^i_\lambda}{s^i_{\lambda}}\right)-r_{i+1}\, s^i_{\yng(1)}\right).\] The set $P(n,\mathbf{r})$ naturally indexes elements of the coordinate ring of the product of Grassmannians \[Y(n,\mathbf{r}):=\prod_{i=1}^\rho \Gr(r_{i-1},r_i).\] Let $Q_i$ be the tautological quotient bundle pulled back to $Y(n,\mathbf{r})$ from the $i^{th}$ Grassmannian factor. A basis of sections of $\det(Q_i)$ is indexed by $\lambda \in P(r_{i-1},r_i)$. We write $p^i_\lambda$ for the Pl\"ucker coordinate associated to $i$ and $\lambda$. We denote by $Y(n,\mathbf{r})^\circ:=\prod_{i=1}^\rho \Gr(r_{i-1},r_{i-1}-r_i)^\circ$ the locus in $Y(n,\mathbf{r})$ where $p^i_\lambda \neq 0$ for all $i$ and $\lambda \in M(r_{i-1},r_i)$. This is the complement of an anti-canonical divisor on $Y(n,\mathbf{r})$. To each basis element $s_{\vec{\mu}}$ we associate the product \[p_{\vec{\mu}}:=\prod_{i=1}^\rho p^i_{\mu^i}.\] We denote the polynomial in the coordinate ring of $Y(n,\mathbf{r})$ and the $q_1,\dots,q_\rho$ obtained by replacing the Schubert classes in $F^i_\lambda$ with Pl\"ucker coordinates in this way by $G^i_\lambda$. \begin{mydef}\label{def:mirror} The Pl\"ucker coordinate superpotential $W_P$ of the flag variety is \[\sum_{i=1}^\rho \left(\left(\sum_{\lambda \in M(r_{i-1},r_i)} \frac{G^i_\lambda}{p^i_{\lambda}}\right)-r_{i+1}\, p^i_{\yng(1)}\right).\] \end{mydef} \begin{eg} Consider the flag variety $\Fl(6;4,2,1)$.
The Pl\"ucker coordinate superpotential is \begin{align*} \frac{p^1_{\yng(1)}}{p^1_\emptyset}+\frac{p^1_{\yng(2,1,1,1)}}{p^1_{\yng(1,1,1,1)}}+\frac{p^1_{\yng(2,1)}}{p^1_{\yng(2)}}+\frac{p^1_{\yng(2,2,1)}+q_1 p^1_{\yng(1)}}{p^1_{\yng(2,2)}}+\frac{p^1_{\yng(2,2,2,1)}+q_1 p^1_{\yng(1,1)} p^2_{\yng(1)}}{p^1_{\yng(2,2,2)}}+\frac{q_1 p^1_{\yng(1,1,1)} p^2_{\yng(1,1)}}{p^1_{\yng(2,2,2,2)}} \\+\frac{p^2_{\yng(1)}}{p^2_\emptyset}+\frac{p^2_{\yng(2,1)}}{p^2_{\yng(1,1)}}+\frac{p^2_{\yng(2,1)}+q_2}{p^2_{\yng(2)}}+\frac{q_2 p^2_{\yng(1)} p^3_{\yng(1)}}{p^2_{\yng(2,2)}}+\frac{p^3_{\yng(1)}}{p^3_{\emptyset}}+\frac{q_3}{p^3_{\yng(1)}}. \end{align*} \end{eg} By choosing a cluster chart for each Grassmannian factor of $Y(n,\mathbf{r})$, we can expand the Pl\"ucker coordinate mirror of the flag variety into algebraically independent sets of coordinates on $Y(n,\mathbf{r})$. In \cite{kflags}, a first check of the validity of the Pl\"ucker coordinate mirror is carried out by demonstrating that the Pl\"ucker coordinate mirror is a compactification of the EHX mirror (that is, Proposition \ref{pro:isogr} in the flag case). Fix a flag variety $\Fl(n;\mathbf{r})$. Recall that the ladder diagram is made up of the ladder diagrams of $\rho$ Grassmannians, i.e. an $r_i \times (r_{i-1} - r_i)$ grid for each $i$. Given a vertex $v$ in the $i^{th}$ block of the dual quiver, let $\phi(z_v)$ be as prescribed in the Grassmannian case for $\Gr(r_{i-1},r_i)$, and then scale by $q_1 \cdots q_{i-1}$.
\begin{eg}\label{eg:labels} To demonstrate, we label the vertices with $\phi(z_v)$ in the following example (where the flag variety is $\Fl(5;3,2,1)$): \[\begin{tikzpicture}[scale=1.8] \draw[gray] (0,0) rectangle (1,1); \draw[gray] (1,1) rectangle (2,2); \draw[gray] (0,1) rectangle (1,2); \draw[gray] (1,0) rectangle (2,1); \draw[gray] (2,1) rectangle (3,2); \draw[gray] (0,2) rectangle (1,3); \draw[gray] (1,2) rectangle (2,3); \draw[gray] (2,0) rectangle (3,1); \draw[gray] (3,0) rectangle (4,1); \node at (0.5,0.5) (a) {$\frac{p^1_{\yng(1,1,1)}}{p^1_\emptyset}$}; \node at (1.5,0.5) (b) {$\frac{p^1_{\yng(2,2,2)}}{p^1_{\yng(1,1)}}$}; \node at (2.5,0.5) (c) {$\frac{q_1 p^2_{\yng(1,1)}}{p^2_\emptyset}$}; \node at (3.5,0.5) (d) {$ \frac{q_1 q_2 p^3_{\yng(1)}}{p^3_\emptyset}$}; \node at (0.5,1.5) (e) {$\frac{p^1_{\yng(1,1)}}{p^1_\emptyset}$}; \node at (0.5,2.5) (f) {$\frac{p^1_{\yng(1)}}{p^1_\emptyset}$}; \node at (0.5,3.5) (g) {1}; \node at (4.5,0.5) (h) {$q_1 q_2 q_3$}; \node at (1.5,1.5) (i) {$\frac{p^1_{\yng(2,2)}}{p^1_{\yng(1)}}$}; \node at (1.5,2.5) (j) {$\frac{p^1_{\yng(2)}}{p^1_\emptyset}$}; \node at (2.5,1.5) (k) {$\frac{q_1 p^2_{\yng(1)}}{p^2_\emptyset}$}; \node at (2.5,2.5) (l) {$q_1$}; \node at (3.5,1.5) (m) {$q_1 q_2$}; \draw[<-] (h)--(d); \draw[<-] (d)--(c); \draw[<-] (c)--(b); \draw[<-] (b)--(a); \draw[<-] (m)--(k); \draw[<-] (k)--(i); \draw[<-] (i)--(e); \draw[<-] (l)--(j); \draw[<-] (j)--(f); \draw[->] (e)--(a); \draw[->] (f)--(e); \draw[->] (g)--(f); \draw[->] (i)--(b); \draw[->] (j)--(i); \draw[->] (k)--(c); \draw[->] (l)--(k); \draw[->] (m)--(d); \end{tikzpicture}.\] \end{eg} \begin{thm}[\cite{kflags}] \label{thm:flagMR} For any type A flag variety, the Pl\"ucker coordinate mirror in the rectangles cluster chart is isomorphic to the EHX mirror under the isomorphism \[z_v \mapsto \phi(z_v).\] \end{thm} \section{Quantum cohomology and mirrors of the flag variety}\label{sec:theoremA} To summarize the situation for the Grassmannian, there are two mirrors
-- the Pl\"ucker coordinate mirror and the EHX mirror -- the first of which is isomorphic to the second in a particular cluster chart. Because the same partitions index Pl\"ucker coordinates and Schubert classes, partial derivatives of the Pl\"ucker coordinate mirror can easily be interpreted as -- and indeed give -- quantum cohomology relations. Up until the last clause, the same is true for a multi-step flag variety: there are two mirrors -- the Pl\"ucker coordinate mirror and the EHX mirror -- the first of which is isomorphic to the second in a particular cluster chart. The same partitions index Pl\"ucker coordinates and Schubert classes -- and indeed, the Abelian/non-Abelian basis of the cohomology as well. But consider the following example. \begin{eg} The Pl\"ucker coordinate mirror of $\Fl(4;2,1)$ is \[\frac{p^1_{\yng(1)}}{p^1_{\emptyset}}+\frac{p^1_{\yng(2,1)}+q_1}{p^1_{\yng(2)}}+\frac{p^1_{\yng(2,1)}}{p^1_{\yng(1,1)}}+\frac{q_1 p^1_{\yng(1)} p^2_{\yng(1)}}{p^1_{\yng(2,2)}}+\frac{p^2_{\yng(1)}}{p^2_{\emptyset}}+\frac{q_2}{p^2_{\yng(1)}}.\] Expanding in the rectangles cluster and applying $p^2_{\yng(1)} \frac{\partial}{\partial p^2_{\yng(1)}}$, we obtain \[\frac{q_1 p^1_{\yng(1)} p^2_{\yng(1)}}{p^1_{\yng(2,2)}}+p^2_{\yng(1)}-\frac{q_2}{p^2_{\yng(1)}}.\] The most natural way to interpret this as a quantum cohomology relation is \[\frac{q_1 s^1_{\yng(1)} s^2_{\yng(1)}}{s^1_{\yng(2,2)}}+s^2_{\yng(1)}-\frac{q_2}{s^2_{\yng(1)}}=0;\] however, this relation does not hold. One could attempt to use Schubert classes instead, for example: \[\frac{q_1 \sigma_{\yng(1),\yng(1)}}{\sigma_{\yng(2,2),\emptyset}}+\sigma_{\emptyset,\yng(1)}-\frac{q_2}{\sigma_{\emptyset,\yng(1)}}=0.\] However, this relation also does not hold, and at any rate there will quickly be ambiguity with this approach for multi-step flag varieties.
\end{eg} The above example demonstrates the central difficulty in the flag case: the Pl\"ucker coordinate mirror is built out of quantum Schubert calculus, but is written in Pl\"ucker coordinates, which have the same multiplicative structure as the $s_{\vec{\mu}}$ basis. By multiplicative structure, we mean the property that the basis element associated to a tuple $(\lambda_1,\dots,\lambda_\rho)$ is the product of the $\rho$ basis elements given by tuples with a single non-empty partition $\lambda_i$ in the $i^{th}$ spot, as $i$ runs from $1$ to $\rho$. For the flag variety, we must instead use the \emph{Schubert map}, which we introduce now. Fix a flag variety $\Fl(n;\mathbf{r})$, where $\mathbf{r}:=r_1,\dots,r_\rho$ as usual. Abusing notation slightly, we also write $P(n,\mathbf{r})$ for the set of Pl\"ucker coordinates $p^i_\lambda$ on $Y(n,\mathbf{r})$ such that $\lambda$ is a rectangle. Let $U_{P(n,\mathbf{r})}$ be the open subvariety of $Y(n,\mathbf{r})$ where the $p^i_\lambda$, $\lambda \in P(n,\mathbf{r})$, do not vanish. Let $\widetilde{QH}^*(\Fl(n;\mathbf{r}))$ denote the localization of the quantum cohomology ring at the rectangular Schubert classes. The ring of functions $\mathbb{C}[U_{P(n,\mathbf{r})}]$ is generated (as an algebra) by $P(n,\mathbf{r})$, as every Pl\"ucker coordinate can be written as a Laurent polynomial in the rectangular Pl\"ucker coordinates using three term Pl\"ucker relations. We extend the coefficients to the ring $R=\mathbb{C}[q_1,\dots,q_\rho]$. We define a map \[F: R[U_{P(n,\mathbf{r})}] \to \widetilde{QH}^*(\Fl(n;\mathbf{r}))\] -- a morphism of $\mathbb{C}[q_1,\dots,q_\rho]$-algebras -- by setting the images of the rectangular Pl\"ucker coordinates. Fix some $p^i_{j \times k}$, where the rectangle $j \times k$ is an element of $P(r_{i-1},r_i)$. We define two tuples of partitions. For $l=1,\dots,i-1$, let $R_l$ be the $(j-k+r_{i-1}-r_{i})\times (r_{l-1}-r_{l})$ rectangle, and set $R_i:= j \times k$.
Set $\vec{\mu}_1:=(R_1,\dots,R_i,\emptyset,\dots,\emptyset)$ and $\vec{\mu}_2:=(R_1,\dots,R_{i-1},\emptyset,\emptyset,\dots,\emptyset)$. \begin{mydef}\label{def:Schubertmap}The \emph{Schubert map} \[F: \mathbb{C}[U_{P(n,\mathbf{r})}][q_1,\dots,q_\rho] \to \widetilde{QH}^*(\Fl(n;\mathbf{r}))\] is defined by setting \[F(p^i_{j \times k})=\frac{\sigma_{\vec{\mu}_1}}{\sigma_{\vec{\mu}_2}}.\] \end{mydef} \begin{rem} Note that the Schubert map in the Grassmannian case agrees with the map defined in Theorem \ref{thm:thmMR}. \end{rem} The Schubert map allows partial derivatives of the Pl\"ucker coordinate mirror to be interpreted as quantum relations. We are now ready to prove Theorem \ref{thm:thmA} as stated in the introduction, which we restate here. \begin{theoremrepeat} Let $C=(C_1,\dots,C_\rho)$ be a choice of clusters for each Grassmannian factor in $Y$, and let $W_C$ be the expansion of the Pl\"ucker coordinate mirror in this chart. For all $i$ and $p^i_\lambda \in C_i$, \[F\left(\frac{\partial}{\partial p^i_{\lambda}} W_{C}\right)=0.\] \end{theoremrepeat} The proof of this theorem requires two propositions. \begin{prop}\label{pro:reduction} Let $C=(C_1,\dots,C_\rho)$ and $C'=(C'_1,\dots,C'_\rho)$ be two choices of clusters for $Y$ connected by a mutation. Let $W_C$ and $W_{C'}$ be the expansions of $W$ in $C$ and $C'$ respectively. Suppose Theorem \ref{thm:thmA} holds for $C$. Then it holds for $C'$. \end{prop} \begin{proof} For some $i=1,\dots,\rho$, there are $\lambda, \mu, a, b, c, d \in P(r_{i-1},r_i)$ such that $C_i'$ is obtained from $C_i$ via the three term Pl\"ucker relation \[p^i_\lambda p^i_\mu = p^i_a p^i_b + p^i_c p^i_d.\] That is, $p^i_\lambda \in C_i$ and $p^i_\mu \in C_i'$, and $p^i_a,p^i_b, p^i_c$ and $p^i_d$ are elements of both $C_i$ and $C_i'$.
The Laurent polynomial $W_{C'}$ is obtained from $W_C$ by replacing $p^i_\lambda$ with \[\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}.\] Note that by construction, \[ F(p^i_\lambda)=F\left(\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}\right).\] For any $p^i_\alpha \in C_i'$, we can then compute using the multi-variable chain rule that \[\frac{\partial}{\partial p^i_\alpha} W_{C'}=\frac{\partial}{\partial p^i_\alpha} \left(\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}\right) \frac{\partial}{\partial p^i_\lambda} W_{C}|_{p^i_\lambda=\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}}+\frac{\partial}{\partial p^i_\alpha} W_{C}|_{p^i_\lambda=\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}}.\] It follows that \[F\left(\frac{\partial}{\partial p^i_\alpha} W_{C'}\right)=0,\] as \[F\left( \frac{\partial}{\partial p^i_\lambda} W_{C}|_{p^i_\lambda=\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}}\right)=F\left( \frac{\partial}{\partial p^i_\lambda} W_{C}\right)=0\] and \[F\left( \frac{\partial}{\partial p^i_\alpha} W_{C}|_{p^i_\lambda=\frac{ p^i_a p^i_b + p^i_c p^i_d}{p^i_\mu}}\right)=F\left( \frac{\partial}{\partial p^i_\alpha} W_{C}\right)=0.\] \end{proof} The implication of this proposition is that we can reduce Theorem \ref{thm:thmA} to the statement for a single cluster, the rectangles cluster. The next proposition is the main ingredient in the proof of Theorem \ref{thm:thmA}, and is a corollary of the second theorem proved in this paper. This proposition uses the fact that the ladder diagram of a flag variety $\Fl(n;\mathbf{r})$ can be viewed naturally as a subquiver of the ladder diagram of a Grassmannian $\Gr(N,r_1)$, where $N\gg 0$ (or we can think of $\Gr(\infty,r_1)$ if we wish).
For example, below, the ladder diagram of the flag variety $\Fl(5;3,2,1)$ is superimposed on that of $\Gr(\infty,3)$ (the second is drawn dashed in grey): \[\begin{tikzpicture}[scale=0.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,1) rectangle (3,2); \draw (0,2) rectangle (1,3); \draw (1,2) rectangle (2,3); \draw (2,0) rectangle (3,1); \draw (3,0) rectangle (4,1); \draw[gray,dashed] (2,2) rectangle (3,3); \draw[gray,dashed] (3,1) rectangle (4,2); \draw[gray,dashed] (3,2) rectangle (4,3); \draw[gray,dashed] (4,0) rectangle (5,1); \draw[gray,dashed] (4,1) rectangle (5,2); \draw[gray,dashed] (4,2) rectangle (5,3); \draw[gray,dashed] (5,0) rectangle (6,1); \draw[gray,dashed] (5,1) rectangle (6,2); \draw[gray,dashed] (5,2) rectangle (6,3); \draw[gray,dashed] (5,3)--(8,3); \draw[gray,dashed] (5,2)--(8,2); \draw[gray,dashed] (5,1)--(8,1); \draw[gray,dashed] (5,0)--(8,0); \end{tikzpicture}.\] We now have two $\phi$ maps, as defined in Theorem \ref{thm:flagMR}, both with domain $\mathbb{C}[z_v]$, where $v$ ranges over the vertices of the dual ladder quiver of the flag variety. Let $\phi_{\Fl}:\mathbb{C}[z_v] \to \mathbb{C}[U_{P(n,\mathbf{r})}]$ denote the homomorphism obtained by viewing vertices as vertices in the flag quiver. If we view a vertex as a vertex of a Grassmannian quiver, then we obtain a map $\phi_{\Gr}$ from $\mathbb{C}[z_v]$ to a localization of the coordinate ring of $\Gr(\infty, r_1)$. More precisely, this is just the ring generated by minors of the infinite matrix \[ \begin{bmatrix} x_{1 1} & x_{12} & x_{13} & x_{14} & \cdots \\ x_{2 1} & x_{22} & x_{23} & x_{24} & \cdots \\ \vdots & & & \vdots \\ x_{r_1 1} & x_{r_1 2} & x_{r_1 3} & x_{r_1 4} & \cdots \\ \end{bmatrix} ,\] which we can index by all partitions of length at most $r_1$, localized at the rectangular partitions appearing in the flag quiver.
Abusing notation, we call this ring $\mathbb{C}[U_{P(\infty,r_1)}].$ By taking limits, we can see that there is a well-defined map from the ring of minors of the infinite matrix above to the symmetric polynomial ring in $r_1$ variables, $\Lambda_{r_1}$, given by \[ p_\lambda \mapsto s_\lambda.\] Let $\Lambda_{r_1}^\circ$ be the localization of $\Lambda_{r_1}$ at the rectangular Schur polynomials. The map above gives rise to a natural generalization of the Schubert map \[F_{\Gr}: \mathbb{C}[U_{P(\infty,r_1)}] \to \Lambda_{r_1}^\circ.\] We also have the Schubert map for the flag variety: \[F_{\Fl}: \mathbb{C}[U_{P(n,\mathbf{r})}][q_1,\dots,q_\rho] \to \widetilde{QH}^*(\Fl(n;\mathbf{r})).\] \begin{prop}\label{pro:commutes} Consider the natural map \[\pi:\Lambda_{r_1}^\circ \to \widetilde{QH}^*(\Fl(n;\mathbf{r})), \hspace{5mm} s_\lambda \mapsto s^1_\lambda\] discussed in the introduction and in \eqref{eq:si}. Then the following diagram commutes. \[ \begin{tikzcd} {} & \mathbb{C}[U_{P(\infty,r_1)}] \arrow{r}{F_{\Gr}} &\Lambda_{r_1}^\circ \arrow{dd}{\pi}& {} \\ \mathbb{C}[z_v] \arrow{ru}{\phi_{\Gr}} \arrow[swap]{rd}{\phi_{\Fl}} & {} & {}\\ {}&\mathbb{C}[U_{P(n,\mathbf{r})}][q_1,\dots,q_\rho] \arrow{r}{F_{\Fl}}& \widetilde{QH}^*(\Fl(n;\mathbf{r})) \end{tikzcd} \] \end{prop} \begin{eg} \label{eg:twostep-quiver} Consider the labeled dual ladder quiver for the flag variety $\Fl(4;2,1)$: \[\begin{tikzpicture}[scale=1.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,0) rectangle (3,1); \draw[gray,dashed] (3,2)--(3,1); \draw[gray,dashed] (4,2)--(4,0); \draw[gray,dashed] (2,2)--(6,2); \draw[gray,dashed] (3,1)--(6,1); \draw[gray,dashed] (3,0)--(6,0); \node at (1.5,1.5) (d) {$\frac{p^1_{\yng(2)}}{p^1_\emptyset}$}; \node at (0.5,0.5) (a) {$\frac{p^1_{\yng(1,1)}}{p^1_\emptyset}$}; \node at (1.5,0.5) (b) {$\frac{p^1_{\yng(2,2)}}{p^1_{\yng(1)}}$}; \node at (2.5,0.5) (c) {$\frac{q_1 p^2_{\yng(1)}}{p^2_\emptyset}$}; \node at (0.5,1.5)
(f) {$\frac{p^1_{\yng(1)}}{p^1_\emptyset}$}; \node at (2.5,1.5) (h) {$q_1$}; \node at (3.5,0.5) (i) {$q_1 q_2$}; \node at (0.5,2.5) (g) {1}; \draw[->] (g)--(f); \draw[->] (f)--(a); \draw[->] (f)--(d); \draw[->] (d)--(b); \draw[->] (a)--(b); \draw[->] (d)--(h); \draw[->] (b)--(c); \draw[->] (h)--(c); \draw[->] (c)--(i); \end{tikzpicture}.\] The Grassmannian labels are given by: \[\begin{tikzpicture}[scale=1.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (2,0) rectangle (3,1); \draw[gray,dashed] (3,2)--(3,1); \draw[gray,dashed] (4,2)--(4,0); \draw[gray,dashed] (2,2)--(6,2); \draw[gray,dashed] (3,1)--(6,1); \draw[gray,dashed] (3,0)--(6,0); \node at (1.5,1.5) (d) {$\frac{p^1_{\yng(2)}}{p^1_\emptyset}$}; \node at (0.5,0.5) (a) {$\frac{p^1_{\yng(1,1)}}{p^1_\emptyset}$}; \node at (1.5,0.5) (b) {$\frac{p^1_{\yng(2,2)}}{p^1_{\yng(1)}}$}; \node at (2.5,0.5) (c) {$\frac{p^1_{\yng(3,3)}}{p^1_{\yng(2)}}$}; \node at (0.5,1.5) (f) {$\frac{p^1_{\yng(1)}}{p^1_\emptyset}$}; \node at (2.5,1.5) (h) {$\frac{p^1_{\yng(3)}}{p^1_\emptyset}$}; \node at (3.5,0.5) (i) {$\frac{p^1_{\yng(4,4)}}{p^1_{\yng(3)}}$}; \node at (0.5,2.5) (g) {1}; \draw[->] (g)--(f); \draw[->] (f)--(a); \draw[->] (f)--(d); \draw[->] (d)--(b); \draw[->] (a)--(b); \draw[->] (d)--(h); \draw[->] (b)--(c); \draw[->] (h)--(c); \draw[->] (c)--(i); \end{tikzpicture}.\] Proposition \ref{pro:commutes} states that if we apply the Schubert map to the Grassmannian labels and then apply $\pi$, we obtain the same cohomology class as applying the Schubert map to the flag labels. This is trivially true for the labels in the first block. Consider the vertex labeled $p_{\yng(3)}/p_\emptyset$. 
One can check using Theorem \ref{thm:thmB} (see Example \ref{eg:twostep-thm}) that \[\pi\left(F_{\Gr}\left(\frac{p_{\yng(3)}}{p_\emptyset}\right)\right)=\frac{s^1_{\yng(3)}}{s^1_\emptyset}=q_1,\] which is indeed the image under $F_{\Fl}$ of the label corresponding to the same vertex in the flag diagram. Similarly, from Example \ref{eg:twostep-thm}, we also have \[\pi\left(F_{\Gr}\left(\frac{p_{\yng(3,3)}}{p_{\yng(2)}}\right)\right)=\frac{s^1_{\yng(3,3)}}{s^1_{\yng(2)}}=\frac{q_1 \sigma_{\yng(2),\yng(1)}}{\sigma_{\yng(2),\emptyset}}=F_{\Fl}\left( \frac{q_1 p^2_{\yng(1)}}{p^2_\emptyset}\right),\] and \[\pi\left(F_{\Gr}\left(\frac{p_{\yng(4,4)}}{p_{\yng(3)}}\right)\right)=\frac{s^1_{\yng(4,4)}}{s^1_{\yng(3)}}=\frac{q_1^2 q_2}{q_1}=q_1 q_2=F_{\Fl}(q_1 q_2).\] \end{eg} To summarize, the ladder diagram of any flag variety is a subquiver of the ladder diagram of a sufficiently large Grassmannian. Using this inclusion of ladder diagrams, we can induce an inclusion of dual ladder quivers. For the Grassmannian, Theorem \ref{thm:thmMR} gives a map from vertices of the Grassmannian ladder quiver to the cohomology of the Grassmannian. Theorem \ref{thm:flagMR} together with the Schubert map gives a map from vertices of the flag quiver to the quantum cohomology of the flag variety. There is a natural map from the cohomology of the Grassmannian to the quantum cohomology of the flag variety. Proposition \ref{pro:commutes} states that the Schubert map is precisely the map that makes this diagram commute. We'll delay the proof of Proposition \ref{pro:commutes} to the next section, where it will be an easy corollary of Theorem \ref{thm:thmB}.
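To make Definition \ref{def:Schubertmap} concrete, we record a direct computation of one of the labels above; everything below follows from unwinding the definition. For $\Fl(4;2,1)$ and the coordinate $p^2_{\yng(1)}$, we have $i=2$ and $j\times k=1\times 1$, so $R_1$ is the $(1-1+r_1-r_2)\times(r_0-r_1)=1\times 2$ rectangle and $R_2=1\times 1$. Thus $\vec{\mu}_1=(\yng(2),\yng(1))$ and $\vec{\mu}_2=(\yng(2),\emptyset)$, and \[F_{\Fl}(p^2_{\yng(1)})=\frac{\sigma_{\yng(2),\yng(1)}}{\sigma_{\yng(2),\emptyset}}.\] For $p^2_\emptyset$ the two tuples coincide, so $F_{\Fl}(p^2_\emptyset)=1$, and hence \[F_{\Fl}\left(\frac{q_1 p^2_{\yng(1)}}{p^2_\emptyset}\right)=\frac{q_1\sigma_{\yng(2),\yng(1)}}{\sigma_{\yng(2),\emptyset}},\] matching the computation in Example \ref{eg:twostep-quiver}.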
\begin{proof}[Proof of Theorem \ref{thm:thmA}] By Proposition \ref{pro:reduction}, it suffices to show that for $C=(C_1,\dots,C_\rho)$ the rectangles cluster, and for all $i$ and $p^i_\lambda \in C_i$, \[F\left(\frac{\partial}{\partial p^i_{\lambda}} W_{C}\right)=0.\] Recall that Theorem \ref{thm:flagMR} implies that $W_C$ can be computed using the dual ladder quiver, together with the labels as in Example \ref{eg:labels}: that is, \begin{equation}\label{eqn:hot} W_C=\sum_{a} \frac{L(v_{t(a)})}{L(v_{s(a)})} \end{equation} where $a$ ranges over the arrows in the quiver, $v_{s(a)}$ and $v_{t(a)}$ are the vertices that are the source and target of the arrow $a$, and $L(v_{s(a)})$ and $L(v_{t(a)})$ are the labels of these vertices. Fixing a rectangle $j \times k$, in either the Grassmannian or the flag case, the partial derivative $p^i_{j \times k} \frac{\partial}{\partial p^i_{j \times k}}$ can be computed using the ladder diagram as well, just as in \eqref{eqn:hot}. In this case, it is a signed sum involving the arrows with source or target at one of the two vertices where $p^i_{j \times k}$ appears in the numerator or denominator of the label. That is, the sum is over the following eight arrows, and it is a signed sum -- red arrows have a negative sign and black arrows a positive sign: \begin{equation}\label{pic1} \begin{tikzpicture}[scale=1.6] \node at (0,0) (a) {$\frac{p^i_{j \times k}}{p^i_{(j-1) \times (k-1)}}$}; \node at (1,-1) (b) {$\frac{p^i_{(j+1) \times (k+1)}}{p^i_{j \times k}}$}; \node at (0,-1) (c) {}; \node at (1,0) (d) {}; \node at (0,1) (e) {}; \node at (-1,0) (f) {}; \node at (2,-1) (g) {}; \node at (1,-2) (h) {}; \draw[<-,red] (c)--(a); \draw[<-,red] (d)--(a); \draw[<-] (a)--(e); \draw[<-] (a)--(f); \draw[<-] (g)--(b); \draw[<-] (h)--(b); \draw[->,red] (c)--(b); \draw[->,red] (d)--(b); \end{tikzpicture} \end{equation} If a vertex is on the border of the diagram, some arrows do not appear.
For example, a variable of the form $p^i_{r_i \times k}$ appears in the label of only one vertex, and that vertex is in the bottom row. In this case, the diagram is simply \begin{equation*} \begin{tikzpicture}[scale=1.6] \node at (0,0) (a) {$\frac{p^i_{j \times k}}{p^i_{(j-1) \times (k-1)}}$}; \node at (1,0) (d) {}; \node at (0,1) (e) {}; \node at (-1,0) (f) {}; \draw[<-,red] (d)--(a); \draw[<-] (a)--(e); \draw[<-] (a)--(f); \end{tikzpicture} \end{equation*} Notice that if we consider two variables $p^i_{j \times k}$ and $p^i_{(j+1) \times (k+1)}$, the arrows involved overlap, and therefore the corresponding equations share half their terms. By starting with a variable of the form $p^i_{r_i \times k}$ and then considering consecutive variables \[p^i_{r_i \times k}, p^i_{(r_i-1) \times (k-1)}, p^i_{(r_i-2) \times (k-2)}, \dots\] for some $k$, we can easily see that the partial derivatives vanish under the Schubert map if and only if diagrams of the following form vanish: \begin{equation}\label{pic2} \begin{tikzpicture}[scale=1.4] \node at (0,0) (a) {$\frac{p^i_{j \times k}}{p^i_{(j-1) \times (k-1)}}$}; \node at (0,-1) (c) {}; \node at (1,0) (d) {}; \node at (0,1) (e) {}; \node at (-1,0) (f) {}; \draw[<-,red] (c)--(a); \draw[<-,red] (d)--(a); \draw[<-] (a)--(e); \draw[<-] (a)--(f); \end{tikzpicture}. \end{equation} Again, some arrows may not appear depending on the position of the middle vertex in the quiver. To summarize, it suffices to show that for every internal vertex in the dual ladder quiver of the flag variety, the equation arising from \eqref{pic2} vanishes under the Schubert map. Let $E_{\Fl}$ be such an equation for a fixed vertex $v$. Let $E_{\Gr}$ be the corresponding equation for the Grassmannian for the same vertex. By Theorem \ref{thm:thmMR}, \[F_{\Gr}(E_{\Gr})=0.\] Our claim is that \[0=\pi(F_{\Gr}(E_{\Gr}))=F_{\Fl}(E_{\Fl}).
\] If the arrows with source or target at $v$, viewed as a vertex in the Grassmannian quiver, are also arrows in the flag quiver, this follows immediately from Proposition \ref{pro:commutes}. For some vertices along the border, however, there may be extra arrows in the Grassmannian quiver that contribute extra terms to $E_{\Gr}$. For example, the vertex in the red box is such a vertex in the following diagram: \[\begin{tikzpicture}[scale=0.6] \draw (0,0) rectangle (1,1); \draw (1,1) rectangle (2,2); \draw (0,1) rectangle (1,2); \draw (1,0) rectangle (2,1); \draw (0,2) rectangle (1,3); \draw (1,2) rectangle (2,3); \draw (2,0) rectangle (3,1); \draw[red] (3,0) rectangle (4,1); \draw[gray,dashed] (2,2) rectangle (3,3); \draw[gray,dashed] (3,1) rectangle (4,2); \draw[gray,dashed] (3,2) rectangle (4,3); \draw[gray,dashed] (4,0) rectangle (5,1); \draw[gray,dashed] (4,1) rectangle (5,2); \draw[gray,dashed] (4,2) rectangle (5,3); \draw[gray,dashed] (5,0) rectangle (6,1); \draw[gray,dashed] (5,1) rectangle (6,2); \draw[gray,dashed] (5,2) rectangle (6,3); \draw[gray,dashed] (5,3)--(8,3); \draw[gray,dashed] (5,2)--(8,2); \draw[gray,dashed] (5,1)--(8,1); \draw[gray,dashed] (5,0)--(8,0); \end{tikzpicture}.\] We claim, however, that these extra terms vanish under the Schubert map, and so the above equation still holds. In the example above, the extra term in $E_{\Gr}$ comes from the vertical arrow into the red box, and is \[ \frac{p_{\yng(4,4,4)} p_{\yng(3)}}{p_{\yng(3,3)} p_{\yng(4,4)}}.\] Note that $\pi(F_{\Gr}(p_{\yng(3)}))=s^1_{\yng(3)}=0,$ so the whole term vanishes as required. In general, these extra arrows come in two forms: vertical arrows along the top of a step in the ladder diagram and horizontal arrows along the side.
Fixing a block or step $i \geq 1$ of the quiver, vertical arrows contribute the factor below to an extra term: \[\frac{p_{(r_1-r_{i+1}-1) \times (n-r_{i}+k) }}{p_{(r_1-r_{i+1}) \times (n-r_{i}+1+k) }}\] for $k=1, \dots, r_{i}-r_{i+1}-1.$ Horizontal arrows contribute a factor of the form \[\frac{p_{(r_1-r_{i}+k) \times (n-r_{i}+1)}}{p_{(r_1-r_{i}+k-1) \times (n-r_{i})}},\] for $k=1,\dots, r_{i}-r_{i+1}-1.$ Since \[s^1_{(r_1-r_{i+1}-1) \times (n-r_{i}+k) }=0, \hspace{1mm} k=1, \dots, r_{i}-r_{i+1}-1 \] and \[s^1_{(r_1-r_{i}+k) \times (n-r_{i}+1)}=0, \hspace{1mm} k=1,\dots, r_{i}-r_{i+1}-1\] in $\Fl(n;\mathbf{r})$ by part (b) of Theorem \ref{thm:thmB}, the extra terms vanish as claimed. \end{proof} \section{Quantum hooks and quantum cohomology}\label{sec:theoremB} In this section, we study a natural ring homomorphism from the ring $\Lambda_{r_1}$ of symmetric polynomials in $r_1$ variables to $\mathrm{QH}^*\Fl(n;\mathbf{r})$ given by mapping the $k$th elementary symmetric polynomial in $r_1$ variables $e_k(r_1)$ to the $k$th \emph{quantum elementary polynomial} $e^q_k(r_1)$, defined by the recursion \eqref{eq:eq-recursion} as in Section \ref{sec:background}. We have a $\mathbb{Z}$-basis of $\Lambda_{r_1}$ given by the Schur polynomials indexed by partitions $\lambda$ of height at most $r_1$. Using the identity $s_\lambda = \det(s_{1^{\lambda'_i+j-i}})=\det(e_{\lambda'_i+j-i}(r_1))$, where $\lambda'$ is the transpose of $\lambda$, we write $s^1_\lambda$ for the image of $s_\lambda$ under the map $\Lambda_{r_1} \to \mathrm{QH}^*\Fl(n;\mathbf{r})$: \begin{align*} s_\lambda & \mapsto s^1_\lambda:= \det(e^q_{\lambda'_i+j-i}(r_1)). \end{align*} For $\lambda\in P(n,r_1)$, $s^1_\lambda$ represents a quantum Schubert class.
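To illustrate the determinantal formula in a small case, take $\lambda=(2,1)$, so that $\lambda'=(2,1)$ and the determinant has size $\lambda_1=2$: \[ s^1_{(2,1)}=\det\begin{pmatrix} e^q_{2}(r_1) & e^q_{3}(r_1)\\ e^q_{0}(r_1) & e^q_{1}(r_1)\end{pmatrix} = e^q_1(r_1)e^q_2(r_1)-e^q_3(r_1), \] with the convention $e^q_0(r_1)=1$; specializing all $q_i$ to $0$ recovers the classical identity $s_{(2,1)}=e_1e_2-e_3$.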
When $\lambda$ has width greater than $n-r_1$, $s^1_\lambda$ is still defined, and Theorem \ref{thm:thmB} states that for a particular class of partitions $\lambda$, $s^1_\lambda$ is equal to a Schubert class, up to a power of $q$, and that for another class of partitions, $s^1_\lambda=0$. We begin with some terminology. For $0 < b\leq n$, write $\bar{b}:=b-(n-r_I)$ for $I$ such that $n-r_I < b\leq n-r_{I+1}$ (with the convention $r_{\rho+1}:=0$). As in the introduction, set the \emph{quantum hook} (or \emph{$q$-hook}) \emph{of width $b$} to be the partition \[ H_b := (b^{b-n+r_1},(b-n+r_I)^{n-r_{I+1}-b}) = (b^{r_1-r_I+\bar{b}},\bar{b}^{n-r_{I+1}-b}). \] In the proof of our results, we will often consider the column heights of $H_b$, which we can read from the transpose of $H_b$: \begin{equation}\label{eq:Hb-transpose} H'_b= ((r_1-r_{I+1})^{\bar{b}},(r_1-r_I+\bar{b})^{n-r_I}) = ((r_1-r_{I+1})^{b-n+r_I},(b-n+r_1)^{n-r_I}). \end{equation} The $q$-hook $H_b$ can also be described as the partition obtained from a $(r_1-r_I)\times (n-r_I)$ rectangle after adding $\bar{b}$ rim-hooks of length $n+r_1-r_I-r_{I+1}$, each beginning in row $r_1-r_{I+1}$ and ending in row $1$ (see Figure \ref{fig:justqhook}, also Figure \ref{fig:qhook}). \begin{figure}[h!]
\centering \begin{subfigure}{0.3\textwidth} \begin{tikzpicture}[scale=.5] \filldraw[fill=blue!30, draw=black] (0,0) -- (3,0) -- (3,-1) -- (0,-1)-- cycle; \filldraw[fill=red!30, draw=black] (3,0) -- (3,-1) -- (0,-1) -- (0,-2) -- (4,-2) -- (4,0) -- cycle; \end{tikzpicture} \end{subfigure} \begin{subfigure}{0.3\textwidth} \begin{tikzpicture}[scale=.5] \filldraw[fill=blue!30, draw=black] (0,0) -- (3,0) -- (3,-1) -- (1,-1) -- (1,-2) -- (0,-2) -- cycle; \filldraw[fill=red!30, draw=black] (3,0) -- (3,-1) -- (1,-1) -- (1,-2) -- (4,-2) -- (4,0) -- cycle; \filldraw[fill=green!30, draw=black] (4,0)-- (5,0) -- (5,-3) --(0,-3) --(0,-2) -- (4,-2)--cycle; \filldraw[fill=orange!40, draw=black] (5,0) -- (6,0) -- (6,-4) -- (1,-4) --(1,-6) --(0,-6) -- (0,-3) -- (5,-3) --cycle; \filldraw[fill=orange!40, draw=black] (6,0)-- (7,0) -- (7,-5) -- (2,-5) -- (2,-6) -- (1,-6) -- (1,-4)--(6,-4) -- cycle; \filldraw[fill=orange!40, draw=black] (7,0) -- (7,-5) -- (2,-5) -- (2,-6) -- (8,-6) -- (8,0) --cycle; \end{tikzpicture} \end{subfigure} \caption{$q$-hooks $H_b$ of width $b=3,4$ for $\Fl(4;2,1)$ and of width $3\leq b\leq8$ for $\Fl(8;6,4,3)$. \label{fig:justqhook} } \end{figure} For a $q$-hook $H_b$ of width $b$, set $q^{H_{b}}:= q_1^{r_1-r_2}\cdots(q_1\cdots q_{I-1})^{r_{I-1}-r_I}(q_1\cdots q_I)^{b-(n-r_I)}.$ With this definition, note that \begin{equation}\label{eq:qpower} q^{H_b}=q^{H_{b-1}}\cdot q_1\cdots q_I. \end{equation} \begin{eg} \label{eg:twostep} Consider $\mathrm{QH}^*\Fl(4;2,1)$ with $\deg q_1=3$ and $\deg q_2=2$. For the $q$-hooks $H_3=(3,0)$ and $H_4=(4,4)$ shown in Figure \ref{fig:justqhook}, we have $q^{H_3}=q_1$ and $q^{H_4}=q_1(q_1q_2)=q_1^2q_2$. \end{eg} \begin{eg} \label{eg:qhook} Consider $\mathrm{QH}^*\Fl(8;6,4,3)$ with $\deg q_1=4, \deg q_2=3$, and $\deg q_3=4$. Let $I$ be such that $n-r_I < b\leq n-r_{I+1}$. 
For the $q$-hooks of width $3\leq b\leq8$ (depicted in Figure \ref{fig:justqhook}), we have: \begin{table}[h] \[ \begin{array}{|c|c|c|c|} \hline b & I & H_b & q^{H_b}\\ \hline \hline 3& 1& (3,1) & q_1\\ \hline 4 & 1& (4,4) & q_1^2 \\ \hline 5&2& (5,5,5) & q_1^2(q_1q_2)=q_1^3q_2\\ \hline 6 & 3& (6,6,6,6,1,1) & q_1^2(q_1q_2)(q_1q_2q_3)= q_1^4q_2^2q_3 \\ \hline 7 & 3& (7,7,7,7,7,2) & q_1^2(q_1q_2)(q_1q_2q_3)^2= q_1^5q_2^3q_3^2 \\ \hline 8 & 3& (8,8,8,8,8,8) &q_1^2(q_1q_2)(q_1q_2q_3)^3= q_1^6q_2^4q_3^3\\ \hline \end{array} \] \end{table} \end{eg} Let $R_b := (b^{r_1-r_I+\bar{b}}) = (b^{r_1-(n-b)})$ be the maximal rectangle of width $b$ contained in $H_b$, with $H_b=R_b=\emptyset$ if $b\leq n-r_1$. \begin{rem}\label{rem:qhook} For a partition $\lambda\subseteq r_1\times n$, let $I$ be such that $n-r_I < \lambda_1\leq n-r_{I+1}$. Then $R_{\lambda_1}\subseteq \lambda$ if condition (i) below holds, and $H_{\lambda_1}\subseteq \lambda$ if conditions (i) and (ii) below hold. \begin{enumerate} \item[(i)] $\lambda_{\lambda_1}'\geq \lambda_1-(n-r_1)$ \item[(ii)]$ \lambda_{\lambda_1-(n-r_I)}' \geq r_1 -r_{I+1}$. \end{enumerate} (Note that if $\lambda_1=n-r_{I+1}$, then condition (ii) is redundant.) \end{rem} Conditions (i) and (ii) are illustrated in the left diagram of Figure \ref{fig:qhook} by $\lambda$ containing the southeast corner boxes of the $q$-hook marked by $+$ and $\times$, respectively. Here, $b=\lambda_1$, $\bar{b}:=\lambda_1-(n-r_I)$, and $r_1-r_I+\bar{b}=\lambda_1-(n-r_1)$. \begin{mydef} \label{def:compatible}A partition $\lambda\subseteq r_1\times n$ is \emph{compatible with a $q$-hook} if $H_{\lambda_1}\subseteq \lambda$, i.e. conditions (i) and (ii) of Remark \ref{rem:qhook} hold. \end{mydef} \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=.5] \draw (0, -3) -- (7, -3) -- (7, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- cycle; \fill[color=gray!40] (0, -3) -- (7, -3) -- (7, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- cycle; \draw[thick] (0, 0) rectangle (7, -3); \fill[color=gray!40] (0, 0) rectangle (7, -3); \draw[thick] (6, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7) -- (0, -7)--(0,-3) -- (0,0) -- (6,0); \draw[dashed] (7.67,0) -- (7.67,-3.67) -- (.67,-3.67) -- (.67,-7); \draw[dashed] (8.33,0) -- (8.33,-4.33) -- (1.33,-4.33) -- (1.33,-7); \node[scale=.8] at (1.65, -6.65) {$\times$}; \node[scale=.8] at (8.65, -4.65) {$+$}; \node[scale=.8] at (4, .5) {$n-r_I$}; \draw [-|] (5.2, .5) -- (7, .5); \draw [|-] (0, .5) -- (2.8, .5); \node[scale=.8] at (8, .5) {$\bar{b}$}; \draw [-|] (8.5, .5) -- (9, .5); \draw [|-] (7, .5) -- (7.5, .5); \node[scale=.8, rotate = 90] at (-.5, -1.5) {$r_1-r_I$}; \draw [-|] (-.5, -2.5) -- (-.5, -3); \draw [|-] (-.5, 0) -- (-.5, -.5); \node[scale=.8, rotate = 90] at (-.5, -5) {$r_I-r_{I+1}$}; \draw [-|] (-.5, -6.5) -- (-.5, -7); \draw [|-] (-.5, -3) -- (-.5, -3.5); \node[scale=.8] at (1, -7.5) {$\bar{b}$}; \draw [-|] (1.5, -7.5) -- (2, -7.5); \draw [|-] (0, -7.5) -- (.5, -7.5); \node[scale=.8] at (5.5, -7.5) {$n-r_I$}; \draw [-|] (6.5, -7.5) -- (9, -7.5); \draw [|-] (2, -7.5) -- (4.5, -7.5); \end{tikzpicture} \hspace{.5in} \begin{tikzpicture}[scale=.5] \fill[color=gray!40] (0,0) --(9, 0) -- (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- cycle; \draw[thick] (6, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7) -- (0, -7)--(0,-3) -- (0,0) -- (6,0); \draw[thick] (0,-7) -- (0,-9.67) -- (1.33,-9.67)-- (1.33,-9) -- (2,-9)-- (3.33,-9) -- (3.33,-8) -- (6.5,-8)--(6.5,-7)--(8.2,-7) -- (8.2,-6)--(9,-6) --(9,-5); \draw[dashed] (2,-6) -- (2,-9); \draw[dashed] (4.5,-5) -- (4.5,-8); \node[scale=.5] at (4.9, -6.1) {$\cdots$}; \draw[dashed] (5.3,-5) -- (5.3,-8); \draw[dashed] (7,-5) -- (7,-7); \node[scale=.8] at (1, -8) {$\mu^{I+1}$}; 
\node[scale=.8] at (3.3,-6.1) {$\mu^I$}; \node[scale=.8] at (6.1,-6.1) {$\mu^2$}; \node[scale=.8] at (7.6,-6.1) {$\mu^1$}; \node[scale=.8] at (4.5,-3) {$H_{\lambda_1}$}; \node[scale=.8] at (4.5, .5) {$\lambda_1$}; \draw [-|] (5.5, .5) -- (9, .5); \draw [|-] (0, .5) -- (3.5, .5); \end{tikzpicture} \caption{A $q$-hook $H_b$ of width $b$ and a skew shape $\lambda/H_{\lambda_1}$ with associated tuple of partitions $\vec{\mu}_\lambda=(\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$. } \label{fig:qhook} \end{figure} \begin{rem}\label{rem:emptyhook} The partition $H_b$ has height $r_1-r_{I+1}$. By convention, $r_0=n$, so when $0=n-r_0< b\leq n-r_1$, $H_b$ is the empty partition, and so every partition $\lambda$ of width at most $n-r_1$ is compatible with a $q$-hook. \end{rem} For a partition $\lambda\subseteq r_1\times n$ that is compatible with a $q$-hook, define partitions $\mu^1,\ldots,\mu^I$ by subdividing the skew shape $\lambda/H_{\lambda_1}$, where $\mu^1$ is the partition consisting of the rightmost $n-r_1$ columns of $\lambda/H_{\lambda_1}$, $\mu^2$ is the partition consisting of the second rightmost $r_1-r_2$ columns of $\lambda/H_{\lambda_1}$, etc. If $I<\rho$, let $\mu^{I+1}$ be the partition consisting of the leftmost $\bar{b}$ columns. (See Figure \ref{fig:qhook}.) \begin{mydef} \label{def:mu-lambda} For a partition $\lambda\subseteq r_1\times n$ that is compatible with a $q$-hook, define \emph{the tuple of partitions associated to $\lambda/H_{\lambda_1}$} to be $\vec{\mu}_\lambda= (\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$ if $I<\rho$ and $\vec{\mu}_\lambda= (\mu^1,\ldots,\mu^{I})$ if $I=\rho$, as described above (see Figures \ref{fig:qhook} and \ref{fig:eg}). Here, $\vec{\mu}_\lambda\in P(n,\mathbf{r})$ since $\mu^\ell\subseteq {r_\ell\times (r_{\ell-1}- r_\ell)}$ for $1\leq \ell\leq \rho$.
\end{mydef} \begin{lem}\label{lem:bijections} For a partition $\lambda$ that is compatible with a $q$-hook, let $w$ be the (321-avoiding) permutation corresponding to $(\lambda/H_{\lambda_1},\omega)$ with labeling $\omega(i,j)=r_1+i-j$ under the bijection in \cite{bjs}. Then $w$ is equal to the permutation corresponding to the tuple $\vec{\mu}_\lambda$ via the bijection described in Remark \ref{rem:bijection}. Moreover, $w$ is either Grassmannian with descent at $r_{I+1}$ or has descents at exactly $r_I$ and $r_{I+1}$, where $I$ is such that $n-r_{I}<\lambda_1\leq n-r_{I+1}$. \end{lem} \begin{proof} A reduced expression for the (321-avoiding) permutation $w$ corresponding to $(\lambda/H_{\lambda_1},\omega)$ is given by \cite{bjs} as the product of simple transpositions obtained from reading the labeling from bottom to top, beginning with the rightmost column. This product respects the subdivision of $\lambda/H_{\lambda_1}$ into the tuple of labeled partitions $\mu^1,\ldots,\mu^I,\mu^{I+1}$ with labeling $\omega^\ell(i,j) = r_\ell + i - j$ for $1\leq \ell\leq I+1$. (From Definition \ref{def:mu-lambda}, if $I=\rho$, then the tuple consists of only $\mu^1,\ldots,\mu^I$.) Again, by \cite{bjs} (see also \cite{kmy}), a reduced word for $\mu^\ell$ is the product of simple transpositions obtained by reading the labeling of $\mu^\ell$ from bottom to top, beginning with the rightmost column. Concatenating these expressions recovers $w$. Moreover, define the partition $\mu^{[I]}$ to be the partition obtained by appending the partitions $\mu^I,\ldots,\mu^1$ together; this consists of the last $n-r_I$ columns of $\lambda/H_{\lambda_1}$. Let $w^{I+1}$ and $w^{[I]}:=w^1\cdots w^I$ be the Grassmannian permutations associated to $\mu^{I+1}$ and $\mu^{[I]}$; these have possible descents at $r_{I+1}$ and $r_{I}$, respectively, and so their product has possible descents at only $r_{I+1}$ and $r_{I}$. 
\end{proof} \begin{rem} \label{rem:lambda-perm} For a partition $\lambda$ that is compatible with a $q$-hook with corresponding tuple $\vec{\mu}_\lambda$ and permutation $w$, we denote the associated Schubert class by $\sigma_{\vec{\mu}_\lambda}$, $\sigma_{w}$, or simply $\sigma_{\lambda/H_{\lambda_1}}$. \end{rem} \begin{eg} \label{eg:twostep-cont} Consider $\Fl(4;2,1)$ as in Example \ref{eg:twostep}. The partition $(3,3)$ is compatible with the $q$-hook $H_3=(3,0)$. \[ \begin{tikzpicture}[scale=.5] \filldraw[fill=gray!40, draw=black] (0,0) -- (3,0) -- (3,-1) -- (0,-1)-- cycle; \draw (0,-1) rectangle (3,-2); \draw[dashed] (1,-1)--(1,-2); \end{tikzpicture} \] The associated tuple of partitions $(\yng(2),\yng(1))$ is read from right to left from the skew shape $(3,3)/H_3$. \end{eg} \begin{eg} \label{eg:qhook-cont}Consider $\Fl(8;6,4,3)$ as in Example \ref{eg:qhook} and partitions $\eta=(3,3,2,1), \lambda=(5,5,5,5,4,2)$ and $\nu= (6,6,6,6,5,3)$. Then $\eta$ is compatible with the $q$-hook $H_3=(3,1)$, $\lambda$ is compatible with the $q$-hook $H_5= (5,5,5)$ and $\nu$ is compatible with the $q$-hook $H_6=(6,6,6,6,1,1)$. The associated tuples of partitions to $\eta/H_3$, $\lambda/H_5$ and $\nu/H_6$ are $\vec{\mu}_\eta=(\yng(2,1),\yng(1,1),\emptyset)$, $\vec{\mu}_\lambda=\left( \yng(2,1),\yng(2,2,1),\yng(1,1,1)\right)$ and $\vec{\mu}_\nu=\left(\yng(1),\yng(2,1),\yng(1,1)\right) $, as seen in Figure \ref{fig:eg} by reading the associated tuple of partitions from right to left. Note that $I=\rho=3$ in Definition \ref{def:mu-lambda} for $\nu$ since $n-r_3=5<\nu_1$. Also note that as in Lemma \ref{lem:bijections} and Example \ref{eg:permutation}, the first permutation has descents at $r_1=6$ and $r_2=4$ and the other two permutations are Grassmannian with descent at $r_3=3$. \end{eg} \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=.5] \filldraw[fill=gray!40, draw=black] (0,0) -- (3,0) -- (3,-1) -- (1,-1) -- (1,-2) -- (0,-2) -- cycle; \draw (3,-1) -- (3,-2) -- (2,-2) -- (2,-3) -- (1,-3) -- (1,-4) --(0,-4)-- (0,-2); \draw[dashed] (1,-2) -- (1,-4); \node[scale=.8] at (1.5, -.5) {$H_3$}; \end{tikzpicture} \hspace{.5in} \begin{tikzpicture}[scale=.5] \filldraw[fill=gray!40, draw=black] (0,0) rectangle (5,-3); \draw (5,-3) -- (5,-4) -- (4,-4) -- (4,-5) --(2,-5) --(2,-6) -- (0,-6) -- (0,-3) --cycle; \draw[dashed] (3,-3) -- (3,-5); \draw[dashed] (1,-3) -- (1,-6); \node[scale=.8] at (2.5, -1.5) {$H_5$}; \end{tikzpicture} \hspace{.5in} \begin{tikzpicture}[scale=.5] \filldraw[fill=gray!40, draw=black] (0,0) -- (6,0) -- (6,-4) -- (1,-4) --(1,-6) --(0,-6) -- (0,-3) --cycle; \draw (5,-4)-- (5,-5) -- (3,-5) -- (3,-6) -- (1,-6)--(1,-4) --cycle; \draw[dashed] (4,-4) -- (4,-5); \draw[dashed] (2,-4) -- (2,-6); \node[scale=.8] at (3, -2) {$H_6$}; \end{tikzpicture} \caption{Partitions $\eta , \lambda$, and $\nu$, skew shapes $\eta/H_3$, $\lambda/H_5$ and $\nu/H_6,$ and their associated tuples of partitions $\vec{\mu}_\eta, \vec{\mu}_\lambda$ and $\vec{\mu}_\nu$ for $\Fl(8;6,4,3)$.} \label{fig:eg} \end{figure} Before proving Theorem \ref{thm:thmB}, we introduce and study the following auxiliary partitions. \begin{mydef} \label{def:qlambda} Given a partition $\lambda\subseteq r_1\times n$ and $1\leq m\leq \lambda_1$, define $\lambda^{(m)}$ to be the partition obtained from $\lambda$ by removing column $m$ from $\lambda$ and adding 1 to columns $1,\ldots,m-1$, i.e. \[(\lambda^{(m)})' =(\lambda_1'+1,\cdots, \lambda_{m-1}'+1,{\lambda}_{m+1}',\cdots,\lambda_{\lambda_1}'), \] where $\lambda'$ is the transpose of $\lambda$, i.e. $(\lambda^{(m)})'_i = \lambda_i'+1$ for $i<m$ and $(\lambda^{(m)})'_i=\lambda'_{i+1}$ for $i\geq m$. (See Figure \ref{fig:lambdamhook}.) \end{mydef} \begin{figure}[h!]
\begin{tikzpicture}[scale=.6] \node[scale=.7] at (4.5, .5) {$\lambda_1$}; \draw [-|] (5, .5) -- (9, .5); \draw [|-] (0, .5) -- (4, .5); \draw[thick] (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- (0,-9.67) -- (.67,-9.67) -- (.67,-9)-- (2,-9)-- (3.33,-9) -- (3.33,-7.67) -- (6.5,-7.67)--(6.5,-7)--(8.2,-7) -- (8.2,-6)--(9,-6) -- cycle; \fill[color=orange!40] (0, 0)--(9,0) -- (9,-4.33) -- (1.33,-4.33) -- (1.33,-7)-- (0, -7) -- cycle; \node[scale=.8] at (1.65, -6.65) {$\times$}; \node[scale=.8] at (8.65, -4.65) {$+$}; \node[scale=.8] at (8.65, -4) {$\oplus$}; \node[scale=.8] at (1, -6 ) {$\otimes$}; \draw[thick] (6, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7) -- (0, -7)--(0,-3) -- (0,0) -- (6,0); \draw (1.33,-7) -- (1.33,-4.33) -- (9,-4.33 ) ; \draw[pattern=north west lines] (4.67,0) rectangle (5.33,-7.67); \node[scale=.7] at (7.7,-8.5) {{$m$th column of $\lambda$}}; \draw[->] (5.7,-8.5) to [out=20,in=10, bend left, out looseness=1] (5 ,-7.8); \draw[fill=red!30] (0,-9.67) rectangle (.67,-10.33); \draw[fill=red!30] (.67,-9) rectangle (3.33,-9.67); \draw[fill=red!30] (3.33,-7.67) rectangle (4.67,-8.33); \end{tikzpicture} \caption{The skew shape $\lambda/H_{\lambda_1}$ and the skew shape $\lambda^{(m)}/H_{\lambda_1-1}$. } \label{fig:lambdamhook} \end{figure} \begin{lem}\label{lem:lambdamcompatible} If a partition $\lambda\subseteq r_1\times n$ is compatible with a $q$-hook, then $\lambda^{(m)}$ is compatible with a $q$-hook for $1\leq m\leq \lambda_1$. \end{lem} \begin{proof} This follows from Remark \ref{rem:qhook} and Definition \ref{def:qlambda}, where conditions (i) and (ii) of Remark \ref{rem:qhook} for $\lambda^{(m)}$ are illustrated by $\otimes$ and $\oplus$ in Figure \ref{fig:lambdamhook}. \end{proof} For a partition $\lambda\subseteq r_1\times n$, consider $s^1_\lambda :=\det (s^1_{1^{\lambda'_i+j-i}})$ and the determinants $\Delta$ as in \eqref{eq:q-determinant} and \eqref{eq:si}.
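As a small illustration of Definition \ref{def:qlambda}: for $\lambda=(3,2)$, so $\lambda'=(2,2,1)$, and $m=2$, removing the second column and adding a box to the first gives \[(\lambda^{(2)})'=(\lambda'_1+1,\lambda'_3)=(3,1), \qquad \text{i.e. } \lambda^{(2)}=(2,1,1).\]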
\begin{prop} \label{prop:expand} Given a partition $\lambda\subseteq r_1\times n$ of width $b:=\lambda_1$, let $I$ be such that $n-r_{I}<b\leq n-r_{I+1}$ and let $\bar{b}=b-(n-r_I)$. Then \[ \sum_{m=1}^b (-1)^{m-1} s^1_{1^{\lambda'_m-m+1}} * \Delta_{\lambda^{(m)}/H_{b-1}}(\overline{\psi}) = q_1\cdots q_I \, \Delta_{\lambda/H_b}(e^q(\phi)) \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}),\] where $\phi=((I+1)^{\bar{b}},I^{r_{I-1}-r_I},\ldots,1^{n-r_1})$ and $\overline{\psi}=((I+1)^{\bar{b}-1}, I^{r_{I-1}-r_I},\ldots,1^{n-r_1})$. \end{prop} \begin{proof} From \eqref{eq:q-determinant}, we have \begin{align} \Delta_{\lambda/H_b}(\phi) &= \det\left(e^q_{\lambda'_i-(H_b)'_j+j-i}(\phi_j) \right)=: \det[v_1,\ldots,v_b] \label{eq:determinant} \\ \Delta_{\lambda^{(m)}/H_{b-1}}(\overline{\psi}) &=\det\left(e^q_{(\lambda^{(m)})'_i-(H_{b-1})'_j+j-i}(\overline{\psi}_j) \right), \label{eq:mdeterminant} \end{align} where $\phi$ and $\overline{\psi}$ are as in the statement of the proposition, and where we write $v_j$ for the $j$th column of the matrix in \eqref{eq:determinant}. (Note that $\lambda$ and $\lambda^{(m)}$ need not contain $H_b$ and $H_{b-1}$, respectively.) Since $s^1_{1^a} = e^q_a(r_1)$ in $\mathrm{QH}^*\Fl(n;\mathbf{r})$, the left hand side of the proposition can be rewritten as the determinant: \begin{equation}\label{eq:intermediate} \det\left(e^q_{\lambda'_i-(0,(H_{b-1})')_j+j-i}\left( (1,\overline{\psi})_j \right)\right)_{i,j} \end{equation} where $(1,\overline{\psi})=(1,(I+1)^{\bar{b}-1}, I^{r_{I-1}-r_I},\ldots,1^{n-r_1})$. We proceed by reordering the columns of this matrix and then comparing the resulting determinant to \eqref{eq:determinant}. More concretely, let $\tau\in S_b$ be the permutation defined by \[ \tau(j) =\left\{ \begin{array}{ll} {\bar{b}+r_1-r_I} &\text{if } j=1 \\ j & \text{if } 2\leq j\leq \bar{b} \\ {\bar{b}+r_{\ell+1}-r_I +1}&\text{if } j = \bar{b}+r_{\ell-1}-r_I \text{ for } 1\leq \ell< I\\ 1 &\text{if } j = \bar{b}+r_{I-1}-r_I\\ {j+1} &\text{otherwise}.
\end{array} \right. \] Reordering columns using the permutation $\tau$, from the description of $H'_b$ and $H'_{b-1}$ in \eqref{eq:Hb-transpose}, \eqref{eq:intermediate} is equal to $\sgn(\tau)$ times \begin{equation} \label{eq:otherdeterminant} \det\left(e^q_{\lambda'_i-\kappa_j+j-i}( \psi_j )\right)_{1\leq i,j\leq b} =: \det[\overline{v}_1,\ldots,\overline{v}_b] , \end{equation} where $\displaystyle \kappa=H'_b - (r_I-r_{I+1}){\bf e}_1-\sum_{1\leq \ell<I}(r_{\ell-1}-r_\ell){\bf e}_{\bar{b}+1+r_\ell-r_I}$ and $\displaystyle\psi=\phi- {\bf e}_1 -\sum_{1\leq \ell<I}{\bf e}_{\bar{b}+1+r_\ell-r_I}$. Here, ${\bf e}_j$ denotes the sequence that is $1$ in position $j$ and $0$ elsewhere, and $\overline{v}_j$ is the $j$th column of the determinant in \eqref{eq:otherdeterminant}. Thus, column $v_j$ of \eqref{eq:determinant} is equal to column $\overline{v}_j$ of \eqref{eq:otherdeterminant} except when $j=1$ or $j=\bar{b}+1+(r_{\ell-1}-r_I)$ with $1\leq \ell<I$. We rewrite \eqref{eq:eq-recursion} as \begin{equation}\label{eq:eq-recursion-l} e^{q}_a(r_\ell) = e^{q}_a(r_{\ell-1}) -\left( \sum_{m=1}^{r_{\ell-1}-r_\ell} \sigma_m^\ell e^{q}_{a-m}(r_\ell) \right) + (-1)^{r_{\ell-1}-r_\ell}q_\ell \, e^{q}_{a-(r_{\ell-1}-r_{\ell+1})}(r_{\ell+1}). \end{equation} Note that for $\ell=1$, the first term vanishes since $e^q_a(r_0)=e^q_a(n)=0$ is a relation in the quantum cohomology ring for all $a>0$. We now describe the transition matrix between the vectors $v_j$ and $\overline{v}_j$. Consider the $b\times b$ matrix $A=(a_{ij})$ with entries \begin{equation} \label{eq:A-matrix-defn} a_{ij} = \begin{cases} -\sigma_{\bar{b}+r_{I-1}-r_I+1-i}^I&\text{ if } j=1\\ -\sigma^\ell_{\bar{b}+r_{\ell-1}-r_I+1-i}&\text{ if } j=\bar{b}+1+r_{\ell+1}-r_I \text{ for } 1\leq \ell<I \\ 0 & \text{ otherwise}, \end{cases} \end{equation} with the convention that $\sigma_0^\ell=1$ and $\sigma_i^\ell=0$ for $i<0$ and $i>{r_{\ell-1}-r_\ell}$. Then $A$ is a lower triangular matrix with zeros along the diagonal.
Let $D = (d_{ij})$ be the $b\times b$ diagonal matrix with entries \begin{equation} \label{eq:D-matrix-defn} d_{jj} = \begin{cases} (-1)^{r_{I-1}-r_{I}}q_I &\text{ if } j=1\\ (-1)^{r_{\ell-1}-r_\ell}q_\ell &\text{ if } j=\bar{b}+1+r_{\ell+1}-r_I \text{ for } 1\leq \ell<I \\ 1 & \text{ otherwise}. \end{cases} \end{equation} With this notation, the relation between the vectors $v_j$ and $\overline{v}_j$ is given by matrix multiplication \[ [\overline{v}_1,\ldots,\overline{v}_b] =(A+D) [v_1,\ldots,v_b].\] Since $A+D$ is lower triangular with diagonal entries $d_{jj}$, $\det(A+D) = \prod_{\ell=1}^I (-1)^{r_{\ell-1}-r_\ell}q_\ell$, and so \[ \det[\overline{v}_1,\ldots,\overline{v}_b] = (-1)^{n-r_I}q_1\ldots q_I \cdot \det[v_1,\ldots,v_b] \] and hence by \eqref{eq:intermediate} and \eqref{eq:otherdeterminant}, the left hand side of the proposition is equal to \[ (-1)^{n-r_I}q_1\ldots q_I \,\sgn(\tau) \det[v_1,\ldots,v_b]. \] Since the signature $\sgn(\tau)$ of the permutation $\tau$ is $(-1)^{n-r_I}$, we conclude that \eqref{eq:intermediate} is equal to $q_1\cdots q_I$ times the determinant \eqref{eq:determinant}, as needed. \end{proof} \section{Proof of Theorem \ref{thm:thmB}} In this section, we use the set up and results from Section \ref{sec:theoremB}, including Proposition \ref{prop:expand}, to prove Theorem \ref{thm:thmB}, which we restate here. \begin{theoremrepeat} Let $\lambda\subseteq r_1\times n$ be a partition, and let $I$ be such that $n-r_I<\lambda_1\leq n-r_{I+1}$. \begin{enumerate} \item[(a)] If $H_{\lambda_1}\subseteq \lambda$, then \[ s^1_\lambda = q^{H_{\lambda_1}}\sigma_{\vec{\mu}} \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}), \] where $\vec{\mu}=(\mu^1,\ldots,\mu^{I+1},\emptyset,\ldots,\emptyset)$ is the tuple of partitions associated to $\lambda$ above. In particular, $s^1_{H_{b}} = q^{H_{b}}$ since $H_{b}/H_{b}=\emptyset$, so $\mu^j=\emptyset$ for all $j$ and $\sigma_{(\emptyset,\ldots,\emptyset)} = 1$.
\item[(b)] If $\lambda$ contains $R_{\lambda_1}$, but $H_{\lambda_1}\not\subseteq \lambda$, then $$s^1_\lambda =0 \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}).$$ \end{enumerate} \end{theoremrepeat} \begin{eg}\label{eg:twostep-thm} For $\mathrm{QH}^*\Fl(4;2,1)$, by part (a) of the theorem and Examples \ref{eg:twostep} and \ref{eg:twostep-cont}, we have \begin{align*} s^1_{\yng(2)} & = \sigma_{\yng(2),\emptyset} \\ s^1_{\yng(3)} & = q_1 \\ s^1_{\yng(3,3)} &= q_1\sigma_{\yng(2),\yng(1)}\\ s^1_{\yng(4,4)} & = q_1^2q_2. \end{align*} (See also Example \ref{eg:twostep-quiver}.) \end{eg} \begin{eg} \label{eg:thm} Consider $\mathrm{QH}^*\Fl(8;6,3,2)$ as in Examples \ref{eg:qhook} and \ref{eg:qhook-cont}. By part (a) of the theorem, for the partitions $\eta=(3,3,3,2),\lambda=(5,5,5,5,5,4,2)$ and $\nu= (6,6,6,6,5,3)$, we have \[ s^1_\eta=q_1\sigma_{\yng(2,1),\yng(1,1),\emptyset}, \, s^1_\lambda = q_1^3q_2 \sigma_{\yng(2,1),\yng(2,2,1),\yng(1,1,1)} \, \text{ and } \, s^1_\nu = q_1^4q_2^2q_3 \sigma_{\yng(1),\yng(2,1),\yng(1,1)}. \] From Remark \ref{rem:bijection} and Example \ref{eg:permutation}, we can also write this in terms of the indexing of Schubert classes by permutations as \[ s^1_\eta=q_1\sigma_{12453768}, \,s^1_\lambda = q_1^3q_2 \sigma_{36812457}\, \text{ and } \, s^1_\nu = q_1^4q_2^2q_3 \sigma_{14723568 } . \] On the other hand, for the partition $\gamma=(6,6,6,6,3)$, we have $s^1_\gamma=0$ by part (b) of the theorem since $\gamma$ contains $R_6=(6,6,6,6)$ but not $H_6=(6,6,6,6,1,1)$. \end{eg} We now prove part (a) of Theorem \ref{thm:thmB} and then use part (a) to prove part (b). \begin{proof}[Proof of part (a) of Theorem \ref{thm:thmB}] We proceed by induction on the width $b:=\lambda_1$ of $\lambda$. For the base cases, when $0<b\leq n-r_1$, by Remark \ref{rem:emptyhook}, $H_b$ is the empty partition, and we have the equality $s^1_\lambda=\sigma_{\lambda}=\sigma_{\lambda/\emptyset}$. Now assume the result for partitions of width at most $b-1$. 
Given a partition $\lambda\subseteq r_1\times n$, expanding the determinant $s^1_\lambda :=\det (s^1_{\lambda'_i+j-i})$ along the first column gives \begin{equation} \label{eq:expansion0} s^1_\lambda =\sum_{m=1}^b (-1)^{m-1} s^1_{1^{\lambda'_m-m+1}} * s^1_{\lambda^{(m)}}. \end{equation} From Lemma \ref{lem:lambdamcompatible}, $\lambda^{(m)}$ is compatible with a $q$-hook, so by the induction hypothesis, $s^1_{\lambda^{(m)} }= q^{H_{b-1}}\sigma_{\lambda^{(m)}/H_{b-1}}$, and \eqref{eq:expansion0} becomes \begin{align*} s^1_\lambda &=\sum_{m=1}^b (-1)^{m-1} s^1_{1^{\lambda'_m-m+1}} * q^{H_{b-1}}\sigma_{\lambda^{(m)}/H_{b-1}} \\ &= q^{H_{b-1}}*(q_1\cdots q_I \, \sigma_{\lambda/H_b}) = q^{H_b} \, \sigma_{\lambda/H_b}, \end{align*} where the second and third equalities follow from Proposition \ref{prop:expand}, Lemma \ref{lem:bijections}, \eqref{eq:q-Schubert}, and \eqref{eq:qpower}. \end{proof} We can now prove Proposition \ref{pro:commutes}. \begin{proof}[Proof of Proposition \ref{pro:commutes}] Choose a vertex in the $(I+1)^{th}$ step of the ladder quiver (choosing this notation for compatibility with Theorem \ref{thm:thmB}), and let $z_v$ be the associated variable in the EHX mirror. We need to show that \begin{equation} \label{eq:commutes} \pi(F_{\Gr}(\phi_{\Gr}(z_v)))=F_{\Fl}(\phi_{\Fl}(z_v)).\end{equation} Suppose $v$ is in the $j^{th}$ row of the $k^{th}$ column of the $(I+1)^{th}$ block of the ladder quiver. Then \[\phi_{\Fl}(z_v)=q_1 \dots q_{I} \frac{p^{I+1}_{j \times k}}{p^{I+1}_{(j-1) \times (k-1)}}.\] As in the definition of the Schubert map, for $\ell=1,\dots,{I}$, let $R_\ell$ be the \[c \times (r_{\ell-1}-r_{\ell}), \hspace{2mm} c:=r_I-r_{I+1}+j-k\] rectangle, and set $R_{I+1}:= j \times k$. Set $\overline{R}_{I+1}:= (j-1) \times (k-1)$. Set $\vec{\mu}_a:=(R_1,\dots,R_{I+1},\emptyset,\dots,\emptyset)$, $\vec{\mu}_b:=(R_1,\dots,\overline{R}_{I+1},\emptyset,\dots,\emptyset)$, and $\vec{\mu}_2:=(R_1,\dots,R_{I},\emptyset,\emptyset,\dots,\emptyset)$. 
Then the right hand side of \eqref{eq:commutes} is \[q_1 \dots q_{I} \frac{\sigma_{\vec{\mu}_a}}{\sigma_{\vec{\mu}_2}} \frac{\sigma_{\vec{\mu}_2}}{\sigma_{\vec{\mu}_b}}= q_1 \dots q_{I}\frac{\sigma_{\vec{\mu}_a}}{\sigma_{\vec{\mu}_b}}.\] Next, we compute the left hand side of \eqref{eq:commutes}. It is not hard to see that the vertex under consideration is in the $(n-r_{I}+k)$th column and the $(r_1-r_{I+1}+j)$th row of the Grassmannian quiver. Let $\lambda_a= (r_1-r_{I+1}+j) \times (n-r_{I}+k) $, and let $\lambda_b= (r_1-r_{I+1}+j-1)\times (n-r_{I}+k-1)$. The left hand side of \eqref{eq:commutes} is therefore \[ \frac{s^1_{\lambda_a}}{s^1_{\lambda_b}}.\] Both $\lambda_a$ and $\lambda_b$ are compatible with a $q$-hook. The partition $\lambda_a$ is compatible with the $q$-hook $H_{n-r_I+k}$. Note that we can partition $\lambda_a$ as in Figure \ref{fig:comparehook} (so, in the notation of Theorem \ref{thm:thmB}, $\overline{b}=k$): \begin{figure}[h!] \centering \begin{tikzpicture}[scale=.5] \fill[color=gray!40] (0,0) --(9, 0) -- (9,-5) -- (2, -5) -- (2, -7)-- (0, -7) -- cycle; \node[scale=.8] at (2,-1.5) {$R_{n-r_I+k}$}; \draw[thick] (6, 0) -- (9, 0) -- (9,-5) -- (2, -5) -- (2, -7) -- (0, -7)--(0,-3) -- (0,0) -- (6,0); \draw (7,0)--(7,-3)--(0,-3); \draw[thick] (0,-7) -- (0,-9) --(9,-9)--(9,-5); \draw (2,-6) -- (2,-9); \draw (4.67,-5) -- (4.67,-9); \node[scale=.5] at (5, -6.1) {$\cdots$}; \draw (5.3,-5) -- (5.3,-9); \draw (7,-5) -- (7,-9); \node[scale=.8] at (1, -8) {$R^{I+1}$}; \node[scale=.8] at (3.3,-6.1) {$R^I$}; \node[scale=.8] at (6.1,-6.1) {$R^2$}; \node[scale=.8] at (7.6,-6.1) {$R^1$}; \node[scale=.6] at (1, -9.5) {$k$}; \draw [-|] (1.3, -9.5) -- (2, -9.5); \draw [|-] (0, -9.5) -- (.7, -9.5); \node[scale=.8] at (4.5, .5) {$n-r_I+k$}; \draw [-|] (6, .5) -- (9, .5); \draw [|-] (0, .5) -- (3, .5); \node[scale=.6, rotate = 90] at (-.5, -5) {$r_I-r_{I+1}$}; \draw [-|] (-.5, -6) -- (-.5, -7); \draw [|-] (-.5, -3) -- (-.5, -4); \node[scale=.6, rotate = 90] at (-.5, -1.5)
{$r_1-r_I$}; \draw [-|] (-.5, -2.5) -- (-.5, -3); \draw [|-] (-.5, 0) -- (-.5, -0.5); \node[scale=.6, rotate = 90] at (-.5, -8) {$j$}; \draw [-|] (-.5, -8.5) -- (-.5, -9); \draw [|-] (-.5, -7) -- (-.5, -7.5); \node[scale=.6, rotate = 90] at (9.5, -2.5) {$r_1-r_I+k$}; \draw [-|] (9.5, -4) -- (9.5, -5); \draw [|-] (9.5, 0) -- (9.5, -1); \node[scale=.6, rotate = 90] at (9.5, -7) {$c$}; \draw [-|] (9.5, -7.5) -- (9.5, -9); \draw [|-] (9.5, -5) -- (9.5, -6.5); \end{tikzpicture} \caption{The partitioning of $\lambda_a$. } \label{fig:comparehook} \end{figure} Therefore \[ s^1_{\lambda_a}=q^{H_{n-r_I+k}} \sigma_{\vec{\mu}_a}.\] Similarly, $\lambda_b$ is compatible with the $q$-hook $H_{n-r_I+k-1}$, and \[ s^1_{\lambda_b}=q^{H_{n-r_I+k-1}} \sigma_{\vec{\mu}_b}.\] It finally follows by comparing the $q$ factors that \[ \frac{s^1_{\lambda_a}}{s^1_{\lambda_b}}=q_1 \dots q_{I}\frac{\sigma_{\vec{\mu}_a}}{\sigma_{\vec{\mu}_b}}\] as required. \end{proof} We now prove the second part of Theorem \ref{thm:thmB}, that for $\lambda\subseteq r_1\times n$ such that $R_{\lambda_1}\subseteq \lambda$, but $H_{\lambda_1}\not\subseteq \lambda$, $s^1_\lambda =0$ in $\mathrm{QH}^*\Fl(n;\mathbf{r})$. \begin{proof}[Proof of part (b) of Theorem \ref{thm:thmB}] We proceed by induction on the width of $\lambda$. If $\lambda_1\leq n-r_1$, then $\lambda$ is compatible with a $q$-hook since $H_{\lambda_1}=\emptyset$, so the statement holds vacuously. Now assume the result holds for partitions of width $b-1$, and consider a partition $\lambda$ of width $b:=\lambda_1$ that contains $R_b$ but not $H_b$. Let $I$ be such that $n-r_I<\lambda_1\leq n-r_{I+1}$. We use the expansion \eqref{eq:expansion0} of the determinant $s^1_\lambda$. From Remark \ref{rem:qhook} and Definition \ref{def:qlambda}, $\lambda^{(m)}$ contains $R_{b-1}$ for $1\leq m\leq \lambda_1$. If $\lambda_1=n-r_{I+1}$, then $H_b=R_b$ and there is nothing to prove, so assume $n-r_I<\lambda_1< n-r_{I+1}$ and write $\bar{b}=\lambda_1-(n-r_I)$.
First consider the case where $\bar{b}>1$ and $\lambda'_{\bar{b}-1}< r_1-r_{I+1}-1$. This corresponds to the cell marked $\otimes$ in Figure \ref{fig:lambdamhook} not being contained in $\lambda$. In this case, $(\lambda^{(m)})'_{\bar{b}-1}<r_1-r_{I+1}$ for $1\leq m\leq \lambda_1$ by Remark \ref{rem:qhook} so that $\lambda^{(m)}$ does not contain $H_{b-1}$. Then by the inductive hypothesis, $s^1_{\lambda^{(m)}} =0$. Since all the summands in \eqref{eq:expansion0} are zero, $s^1_\lambda=0$. Now if $\bar{b}>1$ and $\lambda'_{\bar{b}-1}\geq r_1-r_{I+1}-1$, i.e. $\lambda$ contains the cell marked $\otimes$ in Figure \ref{fig:lambdamhook}, then by Remark \ref{rem:qhook} and Definition \ref{def:qlambda}, if $m<\bar{b}$, then $\lambda^{(m)}$ does not contain $H_{b-1}$, so by the inductive hypothesis, $s^1_{\lambda^{(m)}}=0$. On the other hand, if $m\geq \bar{b}$, then $\lambda^{(m)}$ contains $H_{b-1}$, and so by part (a) of Theorem \ref{thm:thmB}, the expansion \eqref{eq:expansion0} becomes \[ s^1_\lambda =\sum_{m=\bar{b}}^b (-1)^{m-1} s^1_{1^{\lambda'_m-m+1}} * q^{H_{b-1}}\sigma_{\lambda^{(m)}/H_{b-1}}. \] Since $H_{b-1}\not\subseteq\lambda^{(m)}$ for $m<\bar{b}$, by Remark \ref{rem:skewzero} and \eqref{eq:q-Schubert}, we have \begin{equation} \label{eq:expansionm} s^1_\lambda =\sum_{m=1}^b (-1)^{m-1} s^1_{1^{\lambda'_m-m+1}} * q^{H_{b-1}}\Delta_{\lambda^{(m)}/H_{b-1}}(e^q(\psi)), \end{equation} where $\psi=((I+1)^{\bar{b}-1}, I^{r_{I-1}-r_I},\ldots,1^{n-r_1})$. Similarly, if $\bar{b}=1$, then $H_{b-1}=R_{b-1}$ and $\lambda^{(m)}$ contains $H_{b-1}$ for all $1\leq m\leq b$, so by part (a) of Theorem \ref{thm:thmB} and \eqref{eq:q-Schubert}, we have \eqref{eq:expansionm} as well. Moreover, by Proposition \ref{prop:expand}, \eqref{eq:expansionm} becomes \[ s^1_\lambda= q_1\cdots q_I \, \Delta_{\lambda/H_b}(e^q(\phi)) \text{ in } \mathrm{QH}^*\Fl(n;\mathbf{r}),\] where $\phi=((I+1)^{\bar{b}},I^{r_{I-1}-r_I},\ldots,1^{n-r_1})$.
Since $H_b\not\subseteq \lambda$, we conclude that $s^1_\lambda=q_1\cdots q_I \cdot 0=0$ by Remark \ref{rem:skewzero}. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} The analysis of the experimental data associated with the production of the quark-gluon plasma (QGP) in heavy ion collisions at RHIC (Brookhaven, USA) and LHC (CERN) is interpreted, despite the large errors involved, as evidence that this state of matter is a strongly interacting fluid at high temperature ($\sim200$ MeV), composed of deconfined adjoint (gluons) and fundamental (quarks) matter. The QGP is supposed to have existed in the immediate moments after the big bang, hence the importance of understanding its behaviour. Unfortunately, due to the strong nature of the interaction, the well-known perturbative methods of QCD are not sufficient to study the QGP. Lattice calculations have proved a valuable tool; however, they are not well suited to study real-time properties of the system. These properties include the transport coefficients which govern the hydrodynamic behaviour at long distances and times as compared to the inverse temperature. Were these coefficients known, especially the viscosities and relaxation times, we would be able to run computer simulations and compare the theoretical predictions with the observed experimental behaviour. The AdS/CFT correspondence \cite{malda, gkp, witten} exploits the holographic principle to study strongly coupled $D$-dimensional conformal quantum field theories by means of dual $D+1$-dimensional gravitational models in asymptotically anti-de Sitter spaces. Within this correspondence (which can be extended to many non-conformal setups as well) a thermal gauge theory is associated with a black hole background. Each fluid mode in the plasma has a corresponding gravity mode, whose fluctuations, governed by gravity equations, can be used to compute retarded correlators, from which the transport coefficients can be extracted.
The main ingredient in the AdS/CFT correspondence is the relation \begin{equation} \langle e^{-\int \phi_0 {\cal O}} \rangle = e^{- S_{gravity}(\phi_0)} \, , \end{equation} where the source $\phi_0$ of the field theory operator $\cal O$ is identified with the value of the dual gravitational mode at the AdS boundary $\phi_0 = \lim_{r\to\infty}\phi(r)$, and $r$ is the AdS radial coordinate. The correspondence is actually a limit of a conjectured more general equivalence between quantum field theories and higher dimensional string models having a gravitational description at low energy and weak coupling. \section{Hydrodynamics from AdS/CFT} At long distances and times as compared to the inverse temperature, field theories admit a hydrodynamic description dictated by the conservation of energy and momentum. This hydrodynamic description can be organized in a derivative expansion. Up to second derivatives the expansion of the energy-momentum tensor for a relativistic uncharged fluid reads\cite{baier,romatschke} \begin{equation}\label{enmom} T^{\mu\nu}=\varepsilon u^\mu u^\nu + p \Delta^{\mu\nu} + \pi^{\mu\nu} + \Delta^{\mu\nu}\Pi \, , \end{equation} where $\varepsilon$ is the energy density, $u^\mu$ the velocity field, $p(\varepsilon)$ the pressure, $\Delta^{\mu\nu} = h^{\mu\nu}+u^\mu u^\nu$ with $h^{\mu\nu}$ the $4$-dimensional metric and \begin{eqnarray} \pi^{\mu\nu} &=&- \eta \sigma^{\mu\nu} +\eta \tau_\pi \Bigl[\langle D \sigma^{\mu\nu}\rangle + \frac{\nabla \cdot u}{3}\sigma^{\mu\nu} \Bigr] + \kappa \Bigl[ R^{<\mu\nu>}-2 u_\alpha u_\beta R^{\alpha <\mu\nu> \beta} \Bigr] \nonumber \\ && + \lambda_1 \sigma^{<\mu}_{\lambda} \sigma^{\nu>\lambda} + \lambda_2 \sigma^{<\mu}_{\lambda} \Omega^{\nu>\lambda} + \lambda_3 \Omega^{<\mu}_{\lambda} \Omega^{\nu>\lambda} + \kappa^* 2u_\alpha u_\beta R^{\alpha <\mu\nu> \beta} \nonumber \\ && + \eta \tau_\pi^* \frac{\nabla \cdot u}{3}\sigma^{\mu\nu} + \lambda_4 \nabla^{<\mu} \log{s} \nabla^{\nu >} \log{s} \, , \end{eqnarray} 
\begin{eqnarray} \Pi &=&- \zeta (\nabla \cdot u) + \zeta \tau_\Pi D (\nabla \cdot u) + \xi_1 \sigma^{\mu\nu}\sigma_{\mu\nu}+ \xi_2 (\nabla \cdot u)^2 + \xi_3 \Omega^{\mu\nu}\Omega_{\mu\nu} \nonumber \\ && + \xi_4 \nabla_{\mu}^{\perp} \log{s} \nabla^{\mu}_{\perp} \log{s}+ \xi_5 R + \xi_6 u^\alpha u^\beta R_{\alpha \beta}\, . \end{eqnarray} We refer the reader to \cite{romatschke} for the precise definitions of the structures in these formulas, which will not be necessary for the rest of this note. The shear viscosity $\eta$ and the second order coefficients $\tau_\pi$ (``shear" relaxation time), $\kappa$, $\lambda_1, \lambda_2, \lambda_3$ are the only ones defined in conformal fluids. All other coefficients, i.e. the bulk viscosity $\zeta$ and the second order coefficients $\kappa^*, \tau_\pi^*, \lambda_4, \tau_\Pi$ (``bulk" relaxation time), $\xi_1, \xi_2, \xi_3, \xi_4, \xi_5, \xi_6$, are only defined in non-conformal plasmas. Holographic methods allow one to extract these transport coefficients in classes of strongly coupled plasmas having a dual gravity description. Moreover, in the regime where higher derivative corrections to the gravity action can be neglected, the corresponding plasmas display some relevant universal features. For example, they all have the same shear viscosity over entropy density ratio, $\eta/s=1/4\pi$, as the ${\cal N}=4$ supersymmetric Yang-Mills (SYM) plasma \cite{pss}. Remarkably, this ratio is compatible with the one which can be deduced for the QGP at RHIC and LHC. This raises the hope that, at least in some limits, holographic results (despite being strictly valid for theories still quite far from QCD) can be used as benchmarks for realistic simulations of real-time properties of the QGP. A sketch of the relevant holographic methods is as follows. Consider a fluid moving along one (say, $z$) of the 3 spatial directions $x,y,z$.
For any field $\psi$ on the dual gravity background, consider fluctuations of the form $\exp(-i \omega t+ i q z)\psi(r)$, with $\omega$ and $q$ the frequency and momentum. The fluctuations $\psi(r)$ are classified according to their transformation under $SO(2)_{x-y}$. Solving the equations of motion (with suitably chosen boundary conditions) for the fluctuations, one can get the dispersion relations and thus deduce the transport coefficients, taking into account general expressions like \begin{equation}\label{vecdiff2} \omega = c_s q - i \Gamma q^2 + \frac{\Gamma}{c_s}\Bigl(c_s^2\tau^{eff}-\frac{\Gamma}{2}\Bigr)q^3 + {\cal O}(q^4)\qquad {\rm where} \qquad \Gamma=\frac{\eta}{sT} \left( \frac{2}{3} + \frac{\zeta}{2\eta} \right)\,, \end{equation} which holds for the scalar hydrodynamic modes \cite{baier,romatschke}.\footnote{Here $T$ and $c_s$ are the temperature and speed of sound of the plasma. Finally, $\tau^{eff}$ is an ``effective relaxation time''.} Another source of information comes from the study of retarded correlators of the stress-energy tensor. For the tensorial mode \cite{baier,romatschke} \begin{equation}\label{retcorr} G_R^{xy,xy}=p-i \eta \omega + \Bigl( \eta \tau_\pi -\frac{\kappa}{2} +\kappa^* \Bigr)\omega^2 -\frac{\kappa}{2}q^2 + {\cal O}(q^3,\omega^3)\, , \end{equation} where $p$ is the pressure. The holographic computation of these correlators gives direct access to the related transport coefficients. \section{A flavored ${\cal N}=4$ SYM plasma} Theories like (thermal) ${\cal N}=4$ $SU(N_c)$ SYM, which has a dual $AdS_5\times S^5$ (black hole) description, do not have matter fields transforming in the fundamental representation. The inclusion of fundamental matter has a precise counterpart in the dual string/gravity setup. It amounts to adding extended sources (like $N_f$ ``flavor" D7-branes) on the background.
In the 't Hooft limit (where $N_c\to\infty$ whereas the 't Hooft coupling $\lambda$ and $N_f$ are kept fixed), the branes can be treated as probes \cite{kk} and thus do not deform the original background. This corresponds to taking the quenched approximation for the flavor fields in the dual gauge theory. Going beyond this approximation requires accounting for the backreaction of the flavor branes on the background. This is a difficult task in general, since the branes (which have codimension 2 in the D7 case and thus are localized at some angles of the $5$-sphere $S^5$) enter as delta function sources in the supergravity equations of motion and Bianchi identities. This gives rise to a set of partial differential equations to be solved. In \cite{noncritical} a method named ``smearing technique'' was introduced. This method is appropriate in the Veneziano limit in which $N_c\to\infty$ and $N_f\to \infty$ with their ratio fixed. Instead of considering $N_f$ localized branes, one homogeneously distributes them in the transverse space, in such a way as to replace delta function sources with a density distribution 2-form and to recover (most of) the isometries of the original unflavored background. In this way one often has to solve just ordinary differential equations in a radial variable (see \cite{npr} for a review). In \cite{Bigazzi:2009bk} this method was applied to thermal ${\cal N}=4$ SYM with massless fundamental hypermultiplets (and then extended to more general flavored quivers), finding a solution which takes into account the D7-brane backreaction in a perturbative expansion in the parameter $ \epsilon_h \propto \lambda_h \frac{N_f}{N_c} \, , $ with $\lambda_h$ the 't Hooft coupling at the energy scale set by the temperature $T$. This parameter would weight the internal quark loops in a perturbative expansion of, say, gluon polarization diagrams. In the string setup, it has to be taken very small in order for the gravity description to be reliable.
The solution in \cite{Bigazzi:2009bk} was analytically found to order $\epsilon_h^2$. It is relevant to notice that the flavored plasmas considered here are examples of non-conformal models. The breaking of conformal invariance, driven by quantum effects since the flavors are massless, is precisely encoded in the beta function for $\epsilon_h$: $ \ T \frac{d \epsilon_h}{d T} = \epsilon_h^2+ {\cal O}(\epsilon_h^3). $ The gravity solution in \cite{Bigazzi:2009bk} has a warped black hole metric of the form \begin{equation} ds^2 = \frac{r^2}{R^2} \left[-\left(1-\frac{r_h^4}{r^4}\right)\,dt^2 + dx_idx_i\right] + \frac{R^2}{r^2}\left[\frac{S^8F^2}{1-\frac{r_h^4}{r^4}}dr^2 + r^2 \left(S^2 ds_{KE}^2 + F^2 (d\tau + A_{KE})^2\right)\right]\,, \label{thebac} \end{equation} where $r_h$ is the horizon radius. The flavor brane backreaction is accounted for by the functions $S(r),F(r)$ and the metric of the original $S^5$ is expressed as a $U(1)$ fibration over a K\"ahler-Einstein base $CP^2$ ($dA_{KE}/2=J_{KE}$ is the K\"ahler form of the four-dimensional base of $S^5$). Moreover \begin{eqnarray} F &=& 1 - \frac{\epsilon_h}{24} + \frac{17}{1152}\epsilon_h^2 -\frac{\epsilon_h^2}{24}\log\frac{r}{r_h} \,,\\ S &=& 1 + \frac{\epsilon_h}{24} + \frac{1}{128}\epsilon_h^2 + \frac{\epsilon_h^2}{24}\log\frac{r}{r_h}\,,\\ \Phi &=& \Phi_h + \epsilon_h \log\frac{r}{r_h} + \frac{\epsilon_h^2}{6} \log\frac{r}{r_h} + \frac{\epsilon_h^2}{2} \log^2\frac{r}{r_h} + \frac{\epsilon_h^2}{16} Li_2\left(1-\frac{r_h^4}{r^4}\right)\,, \label{simple} \end{eqnarray} where we have also included the running dilaton $\Phi$. The solution contains $F_5$ and $F_1$ Ramond-Ramond field strengths too. We refer to \cite{Bigazzi:2009bk} for details and comments on the UV behaviour of the solution. The solution described above allows us to study a number of effects of dynamical flavors in a strongly coupled thermal theory in a completely controllable setting.
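As a side remark (this is our illustration, with an arbitrary choice of initial condition $\epsilon_0=0.1$), the leading-order running $T\,d\epsilon_h/dT=\epsilon_h^2$ quoted above is solved by $\epsilon_h(T)=\epsilon_0/(1-\epsilon_0\log(T/T_0))$, which can be verified numerically:

```python
import math

def eps_h(T, eps0=0.1, T0=1.0):
    # leading-order solution of T * d(eps)/dT = eps^2 with eps_h(T0) = eps0;
    # eps0 = 0.1 is an arbitrary illustrative value, small as required for
    # the gravity description to be reliable
    return eps0 / (1.0 - eps0 * math.log(T / T0))

# central finite-difference check of the beta function at T = 2 T0
T, h = 2.0, 1e-6
lhs = T * (eps_h(T + h) - eps_h(T - h)) / (2 * h)
rhs = eps_h(T) ** 2
assert abs(lhs - rhs) < 1e-9
```

The effective coupling slowly grows towards the UV, consistently with the marginally irrelevant nature of the flavor deformation discussed below.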
Some thermodynamic quantities (entropy density $s$, energy density $\varepsilon$, free energy density $f$, speed of sound $c_s$) are \cite{Bigazzi:2009bk} \begin{eqnarray} s&=&\frac12 \pi^2 N_c^2 T^3 \left[1+\frac12 \epsilon_h +\frac{7}{24}\epsilon_h^2 \right]\, ,\\ f &=&-p=-\frac18 \pi^2 N_c^2 T^4 \left[1+\frac12 \epsilon_h +\frac16 \epsilon_h^2\right]\, , \\ \varepsilon-3p&=&\frac{1}{16}\pi^2 N_c^2 T^4 \epsilon_h^2\, ,\\ c_s^2 &=& \frac13 \left[1-\frac{1}{6} \epsilon_h^2\right]\, . \end{eqnarray} The transport coefficients up to ${\cal O}(\epsilon_h^2)$ obtained by studying the gravitational fluctuations, as sketched in the previous section, are \cite{noi} \begin{eqnarray}\label{resultbulk} \frac{\zeta}{\eta}&=&\frac19 \epsilon^2_h\,,\\ \label{resulttau} \tau^{eff}T&=&\tau_{\pi,0}T_{0} + \frac{16-\pi^2}{128\pi}\epsilon_h^2\,,\\ \label{resultk} \frac{T^2}{p}\kappa &=&\frac{T_{0}^2}{p_{0}}\kappa_{0}\,,\\\label{resultkstar} \frac{T^2}{p}(\kappa^*+\eta\tau_\pi)&=&\frac{T_{0}^2}{p_{0}}\eta_{0}\tau_{\pi,0} + \frac{T_{0}^2}{p_{0}}\eta_{0}\Bigl(\frac{\tau_{\pi,0}}{8}-\frac{1}{8\pi T_{0}}\Bigr)\epsilon_h^2\,, \end{eqnarray} where $ \tau_{\pi,0}T_{0}=\frac{2-\log{2}}{2\pi} $ and $ \frac{T_{0}^2}{p_{0}}\kappa_{0}=\frac{1}{\pi^2} $ are the corresponding values in the conformal plasmas. \section{Hydrodynamics from AdS/CFT revisited} There is a simple way to obtain, holographically, all the second order transport coefficients in the above flavored plasmas, avoiding the explicit study of fluctuating modes and correlators. The flavored ${\cal N}=4$ plasma has a dual effective $5$-dimensional description in terms of a metric and three scalars \cite{Benini:2006hh,noi}. One of these scalars is the dilaton. The others describe the volume of the compact deformed $S^5$ and the squashing between the fiber and the base. The corresponding field theory operators have dimensions $\Delta=4,8,6$ at the unflavored conformal fixed point.
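The thermodynamic expressions above can be cross-checked against basic identities (this is our consistency check, not part of the derivation in \cite{Bigazzi:2009bk}): using the running $T\,d\epsilon_h/dT=\epsilon_h^2$, one verifies $s=-\partial f/\partial T$, the trace anomaly $\varepsilon-3p$, and $c_s^2=s/(T\,\partial s/\partial T)$ order by order in $\epsilon_h$. In Python, with exact series coefficients:

```python
from fractions import Fraction as F

# series in eps_h truncated at O(eps^2), stored as [c0, c1, c2]
f_coeffs = [F(1), F(1, 2), F(1, 6)]    # f = -(pi^2/8) N_c^2 T^4 * (...)
s_coeffs = [F(1), F(1, 2), F(7, 24)]   # s =  (pi^2/2) N_c^2 T^3 * (...)

def running_piece(series):
    # contribution of T d(eps)/dT = eps^2 to T d/dT of a series in eps:
    # (c1 + 2 c2 eps) * eps^2 = c1 eps^2 + O(eps^3)
    return [F(0), F(0), series[1]]

# s = -df/dT: the T^4 prefactor gives 4 T^3, plus the eps-running piece;
# compare in units of (pi^2/8) N_c^2 T^3, noting pi^2/2 = 4 * (pi^2/8)
derived_s = [4 * a + b for a, b in zip(f_coeffs, running_piece(f_coeffs))]
assert derived_s == [4 * c for c in s_coeffs]

# trace anomaly: eps - 3p = T s - 4 p, in units of (pi^2/8) N_c^2 T^4
trace = [4 * a - 4 * b for a, b in zip(s_coeffs, f_coeffs)]
assert trace == [F(0), F(0), F(1, 2)]   # i.e. (pi^2/16) N_c^2 T^4 eps^2

# speed of sound: c_s^2 = dp/d(energy) = s / (T ds/dT)
TdsdT = [3 * a + b for a, b in zip(s_coeffs, running_piece(s_coeffs))]
q0 = s_coeffs[0] / TdsdT[0]
q1 = (s_coeffs[1] - q0 * TdsdT[1]) / TdsdT[0]
q2 = (s_coeffs[2] - q0 * TdsdT[2] - q1 * TdsdT[1]) / TdsdT[0]
assert [q0, q1, q2] == [F(1, 3), F(0), F(-1, 18)]   # (1/3)(1 - eps^2/6)
```

In particular, the last assertion reproduces $c_s^2=\frac13(1-\epsilon_h^2/6)$, so that the value $\zeta/\eta=\epsilon_h^2/9$ in \eqref{resultbulk} can equivalently be written as $\frac23(1-3c_s^2)$.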
Thus, giving a non trivial profile to the dilaton around the AdS background corresponds to turning on a (marginally irrelevant) deformation in the field theory. The other scalars, instead, would drive irrelevant deformations. At order $\epsilon_h^2$ the breaking of conformality can be accounted for just by the dilaton, so that the $5D$ model reduces effectively to a single scalar one. Crucially, the latter is in the Chamblin-Reall class \cite{Chamblin:1999ya}, some of whose hydrodynamical properties were already studied in \cite{Gubser:2008sz}. In more generality, Chamblin-Reall models (which are characterized by a simple exponential potential for the scalar field) provide good effective holographic descriptions, at leading order in the deformation, for classes of strongly coupled conformal gauge theories slightly deformed by marginally (ir)relevant operators \cite{stima}. Importantly, for the Chamblin-Reall theories, all the hydrodynamic transport coefficients up to second order can be extracted \cite{stima} using the results in \cite{Kanitscheider:2009as}.
With the definition \begin{equation}\label{Delta} \delta \equiv 1-3c_s^2\,, \end{equation} where $c_s$ is the speed of sound, and referring to the hydrodynamic stress-energy tensor in (\ref{enmom}), the transport coefficients are given in Table \ref{relations}.\footnote{The flavored ${\cal N}=4$ SYM plasma has these same coefficients with $\delta=\epsilon_h^2/6$.} \begin{table}[h] \begin{center} \caption{The transport coefficients at leading order in the conformality deformation parameter $\delta \equiv 1-3c_s^2$.}\label{relations} \begin{tabular}{||c|c||c|c||c|c||} \hline & & & & & \\ $ \frac{\eta}{s} $ & $\frac{1}{4\pi}$ & $T\tau_{\pi} $ & $ \frac{2-\log{2}}{2\pi} + \frac{3(16-\pi^2)}{64\pi}\delta $ & $ \frac{T\kappa}{s} $ & $ \frac{1}{4\pi^2}\Bigl(1-\frac34 \delta \Bigr) $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \lambda_1}{s} $ & $\frac{1}{8\pi^2}\Bigl(1+\frac34 \delta \Bigr) $ & $\frac{T \lambda_2}{s} $ & $-\frac{1}{4\pi^2}\Bigl( \log{2}+\frac{3\pi^2}{32}\delta \Bigr) $ & $\frac{T \lambda_3}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T\kappa^*}{s} $ & $-\frac{3}{8\pi^2}\delta $ & $T\tau_{\pi}^* $ & $-\frac{2-\log{2}}{2\pi}\delta $ & $\frac{T \lambda_4}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{\zeta}{\eta} $ & $\frac23 \delta $ & $T\tau_{\Pi} $ & $\frac{2-\log{2}}{2\pi} $ & $\frac{T \xi_{1}}{s} $ & $\frac{1}{24\pi^2}\delta $ \\ & & & & & \\ \hline \hline & & & & & \\ $ \frac{T \xi_{2}}{s} $ & $\frac{2-\log{2}}{36\pi^2}\delta $ & $\frac{T \xi_{3}}{s} $ & $0 $ & $\frac{T \xi_{4}}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \xi_{5}}{s} $ & $\frac{1}{12\pi^2}\delta $ & $\frac{T \xi_{6}}{s} $ & $\frac{1}{4\pi^2}\delta $ & & \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} Considering the difficulty of dealing with such coefficients in QCD, this information\footnote{In particular, the behavior with the temperature and the speed of sound of the shear and bulk
relaxation times $\tau_\pi, \tau_\Pi$ is both potentially relevant and unexpected.} could be useful in numerical simulations of the hydrodynamic evolution of the QGP, provided at some stage of its thermalization (well above the critical temperature $T_c$ for deconfinement) it can be approximated by a small deformation of a conformal plasma in the class described above. Indeed, some of the thermodynamical properties of the QGP, as deduced from the lattice, in the temperature window $1.5 T_c \leq T \leq 4 T_c$ (relevant at RHIC and LHC), suggest that the QGP can be treated as a nearly conformal system. In order to provide a numerical example, taking $c_s^2\sim 0.26$ at $T\sim 1.5\,T_c$ as a sensible estimate from lattice studies for the current RHIC experiment \cite{Katz:2005br1,Katz:2005br2,nuovo}, we would get the results collected in Table~\ref{tab:numerics} (updating those in \cite{stima}). \begin{table}[h] \begin{center} \caption{The transport coefficients at $T\sim 1.5\,T_c$ and $c_s^2 \sim 0.26$.}\label{tab:numerics} \begin{tabular}{||c|c||c|c||c|c||} \hline & & & & & \\ $ \frac{\eta}{s} $ & $\frac{1}{4\pi}$ & $T\tau_{\pi} $ & $0.228 $ & $ \frac{T\kappa}{s} $ & $0.021 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \lambda_1}{s} $ & $0.015 $ & $\frac{T \lambda_2}{s} $ & $-0.023 $ & $\frac{T \lambda_3}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T\kappa^*}{s} $ & $-0.008 $ & $T\tau_{\pi}^* $ & $-0.046 $ & $\frac{T \lambda_4}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{\zeta}{\eta} $ & $0.147 $ & $T\tau_{\Pi} $ & $0.208 $ & $\frac{T \xi_{1}}{s} $ & $0.001 $ \\ & & & & & \\ \hline \hline & & & & & \\ $ \frac{T \xi_{2}}{s} $ & $0.001 $ & $\frac{T \xi_{3}}{s} $ & $0 $ & $\frac{T \xi_{4}}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \xi_{5}}{s} $ & $0.002 $ & $\frac{T \xi_{6}}{s} $ & $0.006 $ & & \\ & & & & & \\ \hline \end{tabular} \end{center} \end{table} \begin{acknowledgement}We thank A. Paredes and C. Ratti for useful observations.
F.B. and A.L.C. have received funding from the European Community Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 253937 and 253534 respectively. J.T. has been supported by the Netherlands Organization for Scientific Research (NWO) under the FOM Foundation research program. J.T. is thankful to the Front of Galician-speaking Scientists for encouragement. \end{acknowledgement}
\section{Introduction} Methods involving monotone evolution of simply connected domains in the plane have been successfully applied to a range of problems in complex analysis. The Riemann mapping theorem and the notion of Carath\'eodory convergence allow one to encode the dynamics by a continuous family of conformal maps from a reference domain such as the unit disk $\m D$ onto the evolving domains. Loewner proved in 1923 \cite{Loewner23} that the family of maps generated by progressively slitting a domain along a simple curve satisfies a differential equation with a real-valued control function. Loewner's work was motivated by the Bieberbach conjecture and was ultimately instrumental in its resolution by de Branges some sixty years later \cite{debranges}. This method was extended by Kufarev \cite{Kufarev} and further developed by Pommerenke \cite{Pom1965} to cover general evolution families beyond slitted domains. In this case, the dynamics is described by the Loewner-Kufarev equation, which is controlled by a family of measures. Besides the analysis of univalent functions in geometric function theory, the Loewner equation has in recent years played a key part in the rigorous analysis of conformally invariant random systems with the introduction of the Schramm-Loewner evolution (SLE) by Schramm in the late 1990s using Brownian motion as the control function, see, e.g., \cite{Schramm_LERW_UST, LSW_exponent,Schramm:ICM,Smi:ICM}. While these applications of the Loewner equation are to essentially static problems, the Loewner-Kufarev equation provides a rather general device for generating and describing shape evolution and it has been employed to study physical systems as well. Examples include aggregation processes and models related to Laplacian growth as well as integrable systems, see, e.g., \cite{carleson-makarov, Markina2011EvolutionOS, Amaba-Fr1, gustafsson-vasilev}, the last of which has many additional references and historical remarks.
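To illustrate the measure-driven dynamics in the radial normalization used throughout this paper, here is a minimal numerical sketch (ours, purely illustrative). It integrates the uniformizing flow $\mathrm{d}w/\mathrm{d}t = w\,H_t(w)$, where $H_t$ is the Herglotz integral of the driving measure $\rho_t$; in the simplest case of the uniform measure, $H_t \equiv 1$ and the flow is $g_t(z) = e^t z$, so the evolving domains are concentric disks.

```python
import cmath
import math

# a driving probability measure on S^1, discretized as N equal point masses;
# here we take the uniform measure
N = 200
ATOMS = [cmath.exp(2j * math.pi * k / N) for k in range(N)]
WEIGHTS = [1.0 / N] * N

def herglotz(w):
    """Herglotz integral H(w) = int (e^{i t} + w) / (e^{i t} - w) d rho(t), |w| < 1."""
    return sum(p * (a + w) / (a - w) for a, p in zip(ATOMS, WEIGHTS))

def radial_flow(z, T=1.0, steps=1000):
    """Integrate dw/dt = w * H(w), w(0) = z, with a classical RK4 scheme.

    The result approximates g_T(z), the uniformizing map of the Loewner chain."""
    w, dt = complex(z), T / steps
    for _ in range(steps):
        k1 = w * herglotz(w)
        w2 = w + 0.5 * dt * k1
        k2 = w2 * herglotz(w2)
        w3 = w + 0.5 * dt * k2
        k3 = w3 * herglotz(w3)
        w4 = w + dt * k3
        k4 = w4 * herglotz(w4)
        w += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return w

z = 0.3 + 0.1j
print(radial_flow(z), math.e * z)  # for the uniform measure, g_1(z) = e * z
```

Replacing the uniform weights by any discretized probability measure gives the flow for a general driving measure; the conformal radius normalization is automatic since the weights sum to $1$.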
The recently introduced Quantum Loewner evolution \cite{MilShe2016}, of importance in Liouville quantum gravity, is a stochastic version of Laplacian growth-type dynamics. In this paper, we study the Loewner-Kufarev equation driven by members of a natural and interesting class of measures defined by the requirement to have finite Loewner-Kufarev energy. This quantity arises as a large-deviation rate function for SLE$_\kappa$ when $\kappa \to \infty$ \cite{APW} and can in a certain sense be considered dual to the Loewner energy of a single Jordan curve, a quantity that also appears in the context of large deviations for SLE$_\kappa$, but this time when $\kappa \to 0+$ \cite{W1,RW,W2}. (Our methods and conclusions are entirely deterministic, however, and no results from probability are needed in this paper.) We show that the boundaries of the evolving domains driven by measures of finite Loewner-Kufarev energy are Weil-Petersson quasicircles (which is equivalent to having finite Loewner energy) that foliate the twice-punctured Riemann sphere $\mathbb C \smallsetminus \{0\}$. This class of measures seems to be one of very few for which a complete description of the geometry of the generated non-smooth interfaces is possible. Moreover, the induced dynamical process on Weil-Petersson quasicircles sweeping out the sphere exhibits several remarkable features and symmetries. \subsection{Loewner energy and Weil-Petersson quasicircles} \label{intro_wp} Suppose $\gamma$ is a Jordan curve in $\m C$ that separates $0$ from $\infty$ and write $D$ and $D^*$ for the bounded and unbounded component, respectively, and set $\m D^* = \hat{\m C} \smallsetminus \ad{\m D}$.
If $f: \m D \to D$ and $h: \m D^* \to D^*$ are conformal maps fixing $0$ and $\infty$ respectively, then the M\"obius invariant \emph{Loewner energy} of $\gamma$ can be defined as \begin{equation}\label{eq:loop_LE_def} I^L(\gamma) = \mc D_{\m D} (\log \abs{f'}) + \mc D_{\m D^*} (\log \abs{h'})+4 \log \abs{f'(0)/h'(\infty)}, \end{equation} where \[\mc{D}_{D}(u)=\frac{1}{\pi}\int_D |\nabla u|^2 \mathrm{d} A\] is the Dirichlet integral. (Here and below we write $\mathrm{d} A$ for two-dimensional Lebesgue measure.) The quantity $I^L$ was originally introduced in a different form \cite{W1, RW}, see also Section~\ref{sec:further}. The identity \eqref{eq:loop_LE_def} is established in \cite{W2}\footnote{The right-hand side of \eqref{eq:loop_LE_def} was introduced in \cite{TT06} and there referred to as the universal Liouville action (up to a constant factor $\pi$). As we will discuss connections to random conformal geometry, we choose to use the term Loewner energy.}. A Jordan curve $\gamma$ has finite Loewner energy if and only if it belongs to the set of Weil-Petersson quasicircles \cite{TT06}, a class of non-smooth chord-arc Jordan curves that has a number of equivalent characterizations from various different perspectives, see, e.g., \cite{Cui,TT06,Shen_WP_1,Shen_Grunsky,W3}, and in particular the recent preprint \cite{bishop-WP} and the references therein. One way to characterize Weil-Petersson quasicircles is to say that their welding homeomorphisms $h^{-1} \circ f|_{S^1}$ (more precisely, the equivalence class modulo left-action by the group of M\"obius transformations preserving $S^1$) belong to the Weil-Petersson Teichm\"uller space $T_0(1)$, defined as the completion of $\operatorname{M\"ob}(S^1) \backslash \operatorname{Diff}(S^1)$ under its unique homogeneous K\"ahler metric. 
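A quick worked example (ours, for orientation): directly from \eqref{eq:loop_LE_def}, the Loewner energy vanishes on circles centered at the origin.

```latex
\[
\gamma = \{|z|=r\}: \qquad f(z) = rz, \qquad h(z) = rz,
\]
so $\log|f'| \equiv \log|h'| \equiv \log r$ are constant and both Dirichlet
integrals in \eqref{eq:loop_LE_def} vanish, while
\[
4 \log \abs{f'(0)/h'(\infty)} = 4 \log (r/r) = 0,
\]
hence $I^L(\gamma) = 0$. By the M\"obius invariance of $I^L$, the energy
vanishes on every circle separating $0$ from $\infty$.
```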
The space $T_0(1)$ carries an infinite dimensional K\"ahler-Einstein manifold structure and the Loewner energy itself is a K\"ahler potential for the Weil-Petersson metric, see \cite{TT06} for definitions and a thorough discussion. The effort to find a mathematical foundation for non-perturbative string theory seems to have motivated the initial interest in $\operatorname{M\"ob}(S^1) \backslash \operatorname{Diff}(S^1)$ and its K\"ahler structure, see, e.g., \cite{bowick1987holomorphic,Witten, nag1991non,Hong-Rajeev} as well as the survey \cite{pekonen}. More recently, Weil-Petersson Teichm\"uller space has served as a fundamental building block in a construction of conformal field theory interpreted in the sense of Segal, as proposed in \cite{RSS_CFT}. Other applications include analysis of the periodic KdV equation \cite{STZ_KdV} and Sharon and Mumford's work on computer vision \cite{sharon20062d} which both consider geodesics in $T_0(1)$ that correspond to evolutions of Weil-Petersson quasicircles. \subsection{Loewner-Kufarev energy} Let $\mc N$ be the set of Borel measures $\rho$ on the cylinder $S^1 \times \mathbb{R}$ with the property that $\rho(S^1 \times I)$ equals $|I|$ for any interval $I$. Each $\rho \in \mc N$ can be disintegrated into a measurable family of probability measures $(\rho_t)_{t \in \m R}$ on $S^1$. Let $(D_t)_{t \in \m R}$ be a family of simply connected domains such that $ 0 \in D_t \subset D_s$ for all $s \le t$. We assume that for each $t$, the conformal radius of $D_t$ at $0$ equals $e^{-t}$ which implies $\bigcup_{t\in \m R} D_t = \m C$. Let $(f_t: \m D \to D_t)_{t \in \m R}$ be the associated family of conformal maps normalized so that $f_t(0)=0$ and $f'_t(0) = e^{-t}$. (Here and below we write $'$ for $\partial_z$.) 
By a result of Pommerenke \cite{Pom1965}, there exists $\rho\in \mc N$ such that $(f_t)_{t \ge s}$ satisfies the \emph{Loewner-Kufarev equation} $$ \partial_t f_t(z) = -z f'_t(z)H_t(z), \qquad H_t(z) = \int_{S^1} \frac{e^{i\theta}+z}{e^{i\theta}-z} \mathrm{d} \rho_t(\theta). $$ Here the Herglotz integral $H_t$ is holomorphic in $\m D$ with positive real part. The equation is interpreted in the sense that $t \mapsto f_t(z)$ is absolutely continuous. Conversely, for any $\rho \in \mc N$ the monotone family of simply connected domains $(D_t)_{t \in \m R}$ and the corresponding family of conformal maps $(f_t)_{t \in \m R}$ can be recovered via the Loewner-Kufarev equation. We say that $(f_t)_{t \in \m R}$ is the \emph{Loewner chain} driven by $\rho$. It is sometimes convenient to work with the family of inverse maps $(g_t=f_t^{-1} : D_t \to \m D)_{t \in \m R}$, referred to as the \emph{uniformizing Loewner chain}. See Sections~\ref{sect:Loewner-Kufarev-Equation} and \ref{subsec:whole_plane_Loewner_chain} for more details. For $\rho \in \mc N$, the \emph{Loewner-Kufarev energy} $S(\rho)$ introduced in \cite{APW} is defined by \[S(\rho) = \int_{\mathbb{R}} L(\rho_t) \,\mathrm{d} t, \qquad \textrm{where} \quad L(\rho_t) = \frac{1}{2}\int_{S^1}\nu_t'(\theta)^2 \, \mathrm{d} \theta,\] whenever $\mathrm{d} \rho_t(\theta) = \nu_t^2(\theta) \mathrm{d} \theta$ and $S(\rho)$ is set to $\infty$ otherwise. Notice that $L(\rho_t)$ is the Dirichlet energy of $\nu_t$ on $S^1$. We refer to measures with finite Loewner-Kufarev energy simply as \emph{finite energy measures}. We will discuss how the Loewner-Kufarev energy relates to conformally invariant random systems in Section~\ref{sec:further}. \subsection{Main results} \label{sec:main_results} We now describe our main results. We call a monotone and continuous\footnote{Monotone is meant in the following sense: if $s < t$ then $D_t \subset D_s$ where $D_t$ is the bounded connected component of $\m C \smallsetminus \gamma_t$ . 
Continuous is meant in the supremum norm sense modulo increasing reparametrization.} family $(\gamma_t)_{t \in \m R}$ of chord-arc Jordan curves whose union covers $\m C\smallsetminus \{0\}$ (or $\ad{\m D}\smallsetminus \{0\}$) a \emph{foliation}, see Section~\ref{sect:winding-function}. Individual curves in a foliation are called \emph{leaves}. Our definition allows more than one leaf to pass through a given point $z \in \m C \smallsetminus \{0\}$. If the family of interfaces $(\gamma_t = \partial D_t)_{t\in \m R}$ arising from the Loewner-Kufarev equation driven by the measure $\rho$ forms a foliation, we say that $\rho$ \emph{generates a foliation}. \begin{thm}[Weil-Petersson leaves]\label{thm:WP-leaf} If $S(\rho) < \infty$, then $\rho$ generates a foliation of $\m C \smallsetminus \{0\}$ in which every leaf is a Weil-Petersson quasicircle. \end{thm} See Corollary~\ref{cor:S_finite_foliation_WP} and Section~\ref{subsec:whole_plane}. Theorem~\ref{thm:WP-leaf} shows that any $\rho$ with $S(\rho) < \infty$ gives rise to a dynamical process in $T_0(1)$, see Section~\ref{sec:further}. We next associate to a foliation $(\gamma_t)_{t \in \m R}$ a real-valued function $\varphi$ as follows. Given the leaf $\gamma_t$, let $g_t$ be the conformal map of the bounded component of $\hat{\m C} \smallsetminus \gamma_t$ onto $\m D$, fixing $0$ and with positive derivative there. If $z \in \gamma_t$ is a differentiable point, we define $\varphi(z)$ to be the non-tangential limit at $z$ of the function $\arg w g'_{t}(w)/g_t(w)$. (We choose the continuous branch that equals $0$ at $0$.) This defines $\varphi$ arclength-a.e. on $\gamma_t$. Monotonicity of $(\gamma_t)_{t \in \m R}$ implies that there is no ambiguity in defining $\varphi(z)$ if $z \in \gamma_t \cap \gamma_s$ and both curves are differentiable at $z$, see Section~\ref{sect:winding-function} for more details.
Modulo $2\pi$, $\varphi(z)$ equals the difference of the argument of the tangent of $\gamma_{t}$ at $z$ and that of the tangent to the circle centered at $0$ passing through $z$. See Figure~\ref{fig:intro_fol}. We call $\varphi$ the \emph{winding function} associated with a foliation $(\gamma_t)_{t \in \m R}$. The simplest example of a winding function occurs when the measure $\rho$ has zero energy, namely when $\rho_t$ is the uniform probability measure on $S^1$ for a.e. $t \in \m R$. In this case, the associated foliation is the family of concentric circles centered at $0$, and the winding function is identically $0$. We discuss additional examples in Section~\ref{sect:examples}. \begin{figure} \centering \includegraphics[width=.45\textwidth]{intro_fol.pdf} \caption{Illustration of the winding function $\varphi$.} \label{fig:intro_fol} \end{figure} In the present non-smooth setting, a function defined on each leaf arclength-a.e.\ is not necessarily defined Lebesgue-a.e.\ in $\m C$, see Section~\ref{sect:winding-function}. However, we prove that if it is possible to extend the winding function $\varphi$ to an element in $W^{1,2}_{\mathrm{loc}}$, then the extension is unique. (This is one main reason why we choose to work in the setting of chord-arc curves.) See Proposition~\ref{prop:unique_extension}. Statements about the Dirichlet energy of $\varphi$ will be understood in terms of this extension, whose existence is implicitly part of any such statement. The following is our main theorem, which shows that the dynamically defined Loewner-Kufarev energy can be expressed by a purely geometric and static quantity. \begin{thm}[Energy duality]\label{thm:main0} Assume that $\rho \in \mc N$ generates a foliation and let $\varphi$ be the associated winding function on $\m C$. Then $\mc D_{\m C} (\varphi) <\infty$ if and only if $S(\rho) <\infty$ and \[\mc D_{\m C} (\varphi) = 16 \, S(\rho).
\] \end{thm} The proof of Theorem~\ref{thm:main0} is completed in Section~\ref{subsec:whole_plane}. \begin{rem} The factor $16$ in Theorem~\ref{thm:main0} is consistent with the SLE duality relation $\kappa \leftrightarrow 16/\kappa$ \cite{Zhan_duality, Dub_duality}. See Section~\ref{sec:further}. \end{rem} Theorem~\ref{thm:main0} has several consequences. The first is the reversibility of the Loewner-Kufarev energy. Consider $\rho \in \mc N$ and the corresponding evolution family of domains $(D_t)_{t \in \m R}$. Applying $z\mapsto 1/z$ to the complementary domains $\hat{\m C} \smallsetminus D_t$, we obtain an evolution family of domains $(\tilde D_t)_{t \in \m R}$ upon time-reversal and reparametrization, which may be described by the Loewner-Kufarev equation with an associated driving measure $\tilde \rho$. While there is no known simple description of $\tilde \rho$ in terms of $\rho$, energy duality implies remarkably that the Loewner-Kufarev energy is invariant under this transformation. \begin{thm}[Energy reversibility] \label{thm:main_rev} We have $S (\rho) = S(\tilde \rho).$ \end{thm} See Theorem~\ref{thm:energy_rev}. The Loewner energy of a simple chord connecting two boundary points in a simply connected domain also satisfies a form of reversibility, see \cite{W1}. We will comment on the relation between these results and relevant probabilistic models in Section~\ref{sec:further}. By Theorem~\ref{thm:WP-leaf}, every leaf in the foliation generated by a measure with $S(\rho)< \infty$ satisfies $I^L(\gamma) < \infty$. Conversely, the next result shows that if $I^L(\gamma) < \infty$, then $\gamma$ can always be realized as a leaf in a foliation generated by Loewner evolution driven by a measure with $S(\rho) < \infty$. We obtain a new and quantitative characterization of Weil-Petersson quasicircles. 
\begin{thm}[Characterization of Weil-Petersson quasicircles] \label{thm:main-jordan-curve} A Jordan curve $\gamma$ separating $0$ from $ \infty$ is a Weil-Petersson quasicircle if and only if $\gamma$ can be realized as a leaf in the foliation generated by a measure $\rho$ with $S(\rho) < \infty$. Moreover, the Loewner energy of $\gamma$ satisfies the identity \[I^L(\gamma) = 16 \inf_{\rho} S(\rho) + 2 \log |f'(0)/h'(\infty)|,\] where the infimum, which is attained, is taken over all $\rho \in \mc N$ such that $\gamma$ is a leaf of the generated foliation. \end{thm} See Theorem~\ref{thm:dual_Jordan_curve} and Corollary~\ref{cor:WP-characterization}. The infimum is realized for the measure generating the family of equipotentials on both sides of $\gamma$. (By equipotential we mean the image of a circle about $0$ under the Riemann map from $\m D$ to a component of $\m C \smallsetminus \gamma$ fixing $0$ or taking $0$ to $\infty$.) In this case, the winding function is harmonic in $\m C \smallsetminus \gamma$, see Section~\ref{subsec:jordan_curve}. This minimum is zero if and only if $\gamma$ is a circle centered at $0$, whereas $I^L(\gamma)$ is zero for all circles. This explains the presence of the derivative terms. Corollary~\ref{cor:WP-LE-bound} of Theorem~\ref{thm:main-jordan-curve} shows that the Loewner energies of the leaves in a foliation generated by $\rho$ with $S(\rho) < \infty$ are uniformly bounded by $16 \, S(\rho)$. Another consequence of Theorem~\ref{thm:main-jordan-curve} is the following identity that simultaneously expresses interplay between Dirichlet energies under ``welding'' and ``flow-line'' operations \cite[Thm.\,1.1, Thm.\,1.4]{VW1}, in a similar spirit as \cite[Cor.\,1.6]{VW1}. Given a chord-arc curve $\gamma$ separating $0$ and $\infty$, we define a winding function on $\gamma$ arclength-a.e.\ exactly as above. 
We say that a Weil-Petersson quasicircle $\gamma$ is compatible with $\varphi \in W^{1,2}_{\textrm{loc}}$, if the winding function of $\gamma$ coincides with the trace $\varphi|_{\gamma}$ arclength-a.e. \begin{prop}[Complex identity] \label{prop:complex_id} Let $\psi$ be a complex valued function on $\m C$ with $\mc D_{\m C}(\psi) = \mc D_{\m C}(\Re \psi) + \mc D_{\m C}(\Im \psi)<\infty$ and $\gamma$ a Weil-Petersson quasicircle separating $0$ from $\infty$ compatible with $\Im \psi$. Let \begin{equation}\label{eq:complex_transform} \zeta (z): = \psi \circ f (z) + \log \frac{f'(z) z}{ f(z)}; \quad \xi(z) : = \psi \circ h(z) + \log \frac{h'(z) z}{ h(z)}, \end{equation} where we choose the continuous branches of $\log f'(z) z/ f(z)$ and $\log h'(z) z/ h(z)$ that equal $0$ at $0$ and $\infty$, respectively. Then $\mc D_{\m C} (\psi) = \mc D_{\m D} (\zeta) + \mc D_{\m D^*} (\xi)$. \end{prop} See Section~\ref{sec:complex}. Finally, we study the change of the Loewner-Kufarev energy under local conformal distortion of the foliation. We consider the following setup. Let $\rho \in \mc N$ be a measure such that $\rho_t$ is the uniform measure for $t < 0$ and $$S_{[0,1]} (\rho) : = \int_0^1 L(\rho_t)\,\mathrm{d} t < \infty.$$ (The choice of the upper bound $t=1$ is only for notational simplicity and the result is easily generalized to other bounded time intervals.) We have $D_0 = \m D$ and write $K_t = \ad{ \m D} \smallsetminus D_t$. Let $\psi$ be a conformal map from a neighborhood $U$ of $K_1$ in $\ad {\m D}$ to a neighborhood $\tilde U$ of $\tilde K_1$, another compact hull in $\m D$, such that $\psi (K_1) = \tilde K_1$. The family of compact hulls $(\tilde K_t: = \psi (K_t))_{t \in [0,1]}$ can be generated by a measure $\tilde \rho$ with the associated uniformizing Loewner chain $\tilde g_t: \m D \smallsetminus \tilde K_t \to \m D$. 
\begin{thm}[Conformal distortion] \label{thm:conformal-distortion} We have the formulas \[ L(\tilde \rho_t) - L( \rho_t) = \frac{1}{4} \int_{S^1} e^{2i\theta} \mc S \psi_t (e^{i\theta}) \,\mathrm{d}\rho_t(\theta) + \frac{1}{8}\left(|\tilde \rho_t| - | \rho_t| \right) \] and \[ S_{[0,1]} (\tilde \rho) - S_{[0,1]} (\rho) = \frac{1}{4} \int_0^1 \int_{S^1} e^{2i\theta} \mc S \psi_t (e^{i\theta}) \,\mathrm{d}\rho_t(\theta)\mathrm{d} t + \frac{1}{8}\left(\log \tilde g_1'(0) - \log g_1'(0) \right), \] where $\psi_t = \tilde g_t \circ \psi \circ g_t^{-1}$, $\mc S \psi = (\psi''/\psi')' - (\psi''/\psi')^2/2$ is the Schwarzian derivative, and $|\tilde \rho_t| = \tilde \rho_t(S^1)$. \end{thm} The proof is given in Section~\ref{subsec:variation_LK}. Theorem~\ref{thm:conformal-distortion} is related in spirit to the notion of conformal restriction \cite{LSW_CR_chordal}. We will relate the formulas in Theorem~\ref{thm:conformal-distortion} to Brownian loop measures (see \cite{LSW_CR_chordal,LW2004loopsoup}) in a forthcoming paper with Lawler. \subsection{Core argument for Theorem~\ref{thm:main0}}\label{sect:core-argument} We now indicate the key ideas for the proof of the energy duality Theorem~\ref{thm:main0} and along the way discuss some of the tools we use and develop. We will deduce Theorem~\ref{thm:main0} from the analogous result for the unit disk and a limiting argument. To prepare for the statement, let $\mc N_+$ be defined analogously to $\mc N$ but considering measures on $S^1 \times \m R_+$, and define the corresponding Loewner-Kufarev energy $S_+(\rho) = \int_0^{\infty} L(\rho_t) \, \mathrm{d} t$. The Loewner-Kufarev equation with the initial condition $f_0 (z) = z$ generates a Loewner chain $(f_t: \m D \to D_t)_{t \ge 0}$, where $(D_t)_{t\ge 0}$ is a monotone family of simply connected domains in $\m D = D_0$.
It can be viewed as a special case of the whole-plane Loewner chain by extending the measure $\rho$ to a measure on $S^1 \times \m R$, where $\rho_t$ is the uniform probability measure on $S^1$ when $t < 0$, which implies $D_t = e^{-t} \m D$ for all $t \le 0$. \begin{thm}[Disk energy duality]\label{thm:main} Assume $\rho \in \mc N_+$ generates a foliation of $\ad{\m D} \smallsetminus \{ 0\}$ and let $\varphi$ be the associated winding function. Then $\mc D_{\m D}(\varphi) < \infty$ if and only if $S_+(\rho) <\infty$ and $\mc D_{\m D} (\varphi) = 16 \, S_+(\rho).$ \end{thm} \begin{rem} This result is a special case of Theorem~\ref{thm:main0} using the extension by the uniform measure for $t < 0$ discussed above. \end{rem} The proof of Theorem~\ref{thm:main} is completed in Section~\ref{sec:disk_duality}. The starting point of the proof is Hadamard's classical formula for the variation of the Dirichlet Green's function. We express Hadamard's formula using the Loewner-Kufarev equation: \[ -\partial_t G_{D_t}(z,w) = \int_{S^1} P_{\m D}(g_t(z), e^{i\theta})P_{\m D}(g_t(w), e^{i\theta}) \mathrm{d} \rho_t(\theta), \] where $P_{\m D}$ is the Poisson kernel for $\m D$ and $\rho \in \mc N_+$, see Lemma~\ref{loewner-hadamard}. The Sobolev space $\mc E_0 (\m D) = W^{1,2}_0 (\m D)$ is a Hilbert space when endowed with the Dirichlet inner product. Hadamard's formula and the orthogonal decomposition with respect to the Dirichlet inner product along the Loewner evolution lead to a correspondence between $\mc E_0(\m D)$ and $L^2$-integrable functions on the cylinder $S^1 \times \m R_+$. More precisely, setting $L^2(2\rho) : = L^2(S^1 \times \m R_+, 2\rho)$, we define an operator \begin{equation}\label{june24} \iota: C_c^\infty(\m D) \to L^2(2\rho), \qquad \phi \mapsto \frac{1}{2\pi}\int_{\m D} \Delta ( \phi \circ f_t) (z) P_{\m D} (z, e^{i\theta}) \,\mathrm{d} A(z). 
\end{equation} We prove the following result which is an important step in our proof and which we also believe to be of independent interest. \begin{thm}[Foliation disintegration isometry]\label{thm:intro_isom} If $\rho \in \mc N_+$ generates a foliation of $\ad{\m D} \smallsetminus\{0\}$, then \eqref{june24} extends to a bijective isometry $\iota : \mc E_0(\m D) \to L^2(2\rho)$ with the inverse mapping $\varkappa: L^2(2 \rho) \to \mc E_0 (\m D)$ \[ \varkappa [u] (w) = 2 \pi \int_0^{\tau(w)} P_{\m D}[u_t\rho_t](g_t(w)) \, \mathrm{d} t, \qquad u_t(\cdot) := u(\cdot, t). \] \end{thm} See~Lemma~\ref{lem:disintegration_general}, Proposition~\ref{prop:kappa_formula}, and Theorem~\ref{thm:bi_isometry}. We show that $P_{\m D} [u_t \rho_t] \in \mathfrak h^1$ (the harmonic Hardy space on $\m D$), and Theorem~\ref{thm:intro_isom} can be interpreted as a disintegration of finite Dirichlet energy functions into $\mathfrak h^1$ functions. This implies the formula \begin{align}\label{eq:intro_isom_T} \phi_t^0 \circ g_t = \varkappa \big[\iota [\phi] \mathbf{1}_{S^1 \times [t,\infty)}\big] \end{align} where $\phi_t^0 \in \mc E_0(\m D)$ is the zero-trace part of the function $\phi_t=\phi \circ f_t$ (so that $\phi_t - \phi_t^0$ is harmonic). See Corollary~\ref{cor:ortho_decomp_formula}. \begin{rem} If $\rho$ is smooth and $t$ is fixed, the function $\theta \mapsto \iota[\phi](\theta,t)$ can be interpreted as the inward pointing normal derivative in $\m D$ at $e^{i\theta}$ applied to $\phi_t^0$. The foliation disintegration isometry is closely related to the ``Hadamard operators'' considered in \cite{Haakan_GFF} in a $C^2$-smooth and strictly monotone setting, see Section~\ref{sec:hadamard} for further discussion. Here, we consider chord-arc foliations which in general have leaves that are not $C^1$, and are not even locally Lipschitz graphs. Moreover, $t\mapsto \gamma_t$ is only continuous and not strictly monotone. 
Theorem~\ref{thm:intro_isom} allows us to work under such rather weak regularity assumptions and this level of generality is needed in order to include Weil-Petersson quasicircles and to obtain optimal statements as in Theorem~\ref{thm:main0} and Theorem~\ref{thm:main}. While not needed in this paper, the conclusions of Theorem~\ref{thm:intro_isom} hold under even weaker regularity assumptions on the interfaces of the evolution. We will discuss this elsewhere. \end{rem} \begin{rem} By considering the Gaussian measures associated to the Hilbert spaces $\mc E_0(\m D)$ and $L^2(2\rho)$, Theorem~\ref{thm:intro_isom} immediately entails a decomposition of the Dirichlet Gaussian free field on $\m D$ into white-noise on the cylinder $S^1 \times \m R_+$ weighted by $2\rho$, generalizing the main result of \cite{Haakan_GFF}. \end{rem} The next step is to prove that $\rho \in \mathcal{N}_+$ with $S_+(\rho) < \infty$ generates a foliation so that Theorem~\ref{thm:intro_isom} can be applied. For this, we first derive a ``weak energy duality'' result: under the strong assumption that $\rho$ is piecewise constant in time and the disintegration measures are all strictly positive and smooth, we prove in Proposition~\ref{prop:weak_duality} that the winding function is defined, continuous and piecewise smooth in $\overline{\m D}$. By essentially explicit computation, the identity $16 S_+(\rho) = \mathcal{D}_{\m D}(\varphi)$ follows. This result combined with an approximation argument is then used to prove that $\rho$ with $S_+(\rho)<\infty$ generates a foliation by Weil-Petersson quasicircles, see Proposition~\ref{prop:WP-QC-final}. At this point the proof of Theorem~\ref{thm:WP-leaf} is completed in the case when $\rho$ is supported on $S^1 \times \m R_+$. Then in Section~\ref{sec:disk_duality} we prove disk energy duality in full generality. From the work in Section~\ref{sect:weak-WP} we know that $\rho$ with $S_+(\rho)< \infty$ generates a foliation. 
Using the inverse operator $\varkappa$ we prove that the winding function satisfies $\varphi=-\varkappa[2 \nu_t'/\nu_t]$, from which it follows immediately that $\varphi \in \mc E_0 (\m D)$ and that $\mc D_{\m D}(\varphi) = 16\, S_+(\rho)$. The final step assumes $\rho \in \mc{N}_+$ generates a foliation whose winding function can be extended so that $\mc D_{\m D}(\varphi) < \infty$. To indicate the difficulty, at this stage we do not know that $\rho_t$ is absolutely continuous. However, applying Theorem~\ref{thm:intro_isom}, we have that $\iota[\varphi]\in L^2(2\rho)$. Using the integrability information we deduce that $H'_t$ is in the Hardy space $\mathcal{H}^1$, and this implies that $\rho_t$ is absolutely continuous and that the density is differentiable a.e. It then follows that $S_+(\rho) < \infty$ using Theorem~\ref{thm:intro_isom}. \bigskip {\bf Structure of the paper.} We recall basic definitions in Section~\ref{sec:prelim}. The foliation disintegration isometry, Theorem~\ref{thm:intro_isom}, which is a key step in our proof of the energy duality, is established in Section~\ref{sec:hadamard}. In Section~\ref{sect:LK-energy} we define the Loewner-Kufarev energy, derive a few basic properties, and discuss examples of finite energy measures and associated foliations. Section~\ref{sect:weak-WP} is devoted to the proof of energy duality in the disk under strong regularity assumptions and here we obtain the important \emph{a priori} result that finite energy measures generate foliations whose leaves are Weil-Petersson quasicircles, see Proposition~\ref{prop:WP-QC-final}. The general energy duality result for the unit disk, Theorem~\ref{thm:main}, is proved in Section~\ref{sec:disk_duality}.
We work in the set-up of the Loewner-Kufarev equation in the unit disk in Sections~\ref{sec:prelim}-\ref{sec:disk_duality} and we introduce the whole-plane Loewner-Kufarev equation only in Section~\ref{sec:whole-plane}, where we prove whole-plane energy duality, see Theorems~\ref{thm:WP-leaf} and \ref{thm:main0}. In Section~\ref{sec:application} we derive the consequences of energy duality, see Theorem~\ref{thm:main_rev}, \ref{thm:main-jordan-curve}, and Proposition~\ref{prop:complex_id}. Section~\ref{subsec:variation_LK} is devoted to the proof of the conformal distortion formula, Theorem~\ref{thm:conformal-distortion}. Section~\ref{sec:further} collects further comments, including a discussion of additional interpretations of our results as well as open problems. \subsection*{Acknowledgments} We are very grateful to H\aa kan Hedenmalm for pointing out the reference \cite{Haakan_GFF}, which helped us improve and simplify our argument considerably. We thank David Jerison and Greg Lawler for discussions and helpful input, Paul Laurain and Andrea Seppi for discussions on the implication to minimal surfaces, and Steffen Rohde and Wendelin Werner for useful comments on an earlier version of our paper. F.V. is supported by the Knut and Alice Wallenberg Foundation, the Swedish Research Council, and the Ruth and Nils-Erik Stenb\"ack Foundation. Y.W. is supported by the NSF grant DMS-1953945. \section{Preliminaries} \label{sec:prelim} \subsection{Basic definitions} For a bounded simply connected domain $D \subset \m C$, we write $G_D(z,w)$ for the Green's function associated to the positive Laplacian $\Delta = -(\partial_{xx} + \partial_{yy})$ with Dirichlet boundary condition, where $z = x + i y$. It is convenient to normalize the Green's function so that $G_D(z,w)-\log|z-w|$ is harmonic in each variable, that is, so that $G_D = 2\pi \Delta^{-1}$. 
For sufficiently regular domains $D$, the Poisson kernel is defined for $z \in D, \zeta \in \partial D$, by \[P_D(z,\zeta) :=\partial_{n(\zeta)} G_D(z,\zeta),\] where $\partial_{n(\zeta)}$ is the inward normal derivative at $\zeta$. If $D = \m D$ we have \[G_{\m D}(z,w) = -\log \left|\frac{z-w}{1-\overline{w}z}\right|, \qquad P_{\m D}(z,e^{i\theta}) = \frac{1-|z|^2}{|z-e^{i\theta}|^2}.\] If $\sigma$ is a finite measure on the unit circle $S^1$, identified with $[0,2\pi]_{/0\sim 2\pi}$, we write \[ P_{\m D}[\sigma](z) = \frac{1}{2\pi} \int_0^{2\pi} P_{\m D} (z,e^{i\theta}) \mathrm{d} \sigma(\theta) \] for its Poisson integral. If $\mathrm{d} \sigma = u \,\mathrm{d}\theta$ for $u \in L^1(S^1)$ we write simply $P_{\m D}[u](z)$. By Fatou's theorem, $P_{\m D}[u]$ has non-tangential limit $u$ Lebesgue-a.e.\ on $S^1$. If $D$ is an open set, the Dirichlet energy of an almost everywhere defined function $\phi : D \to \mathbb{R}$ is given by \[ \mc D_{D} (\phi) := \brac{\phi,\phi}_\nabla : = \frac{1}{\pi} \int_{D} |\nabla \phi|^2 \mathrm{d} A(z) \] whenever $\phi$ has weak first derivatives in $L^2(D)$. Here and below we write $\mathrm{d} A$ for two-dimensional Lebesgue measure. We write $W^{1,2}(D)$ for the Sobolev space of real-valued functions $\phi$ such that both $\phi$ and its weak first derivatives are in $L^2(D)$ with norm $\|\phi\|_{W^{1,2}(D)} = \|\phi\|_{L^2(D)} + \|\nabla \phi\|_{L^2(D)}$ and write $W^{1,2}_0(D)$ for the closure of smooth and compactly supported functions in $D$, $C_c^\infty(D)$, in $W^{1,2}(D)$. When $D$ is bounded, $\mc D_{D}^{1/2}$ is a norm equivalent to $\norm{\cdot}_{W^{1,2}}$ on $W^{1,2}_0(D)$ by the Poincar\'e inequality. In this case, we shall write $\mc E_0(D)$ for $W^{1,2}_0(D)$ equipped with the norm $\mc D_{D}^{1/2}$ which is a Hilbert space endowed with the Dirichlet inner product $\brac{\cdot, \cdot}_{\nabla}$. 
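The disk formulas above are easy to check numerically. The following Python sketch (ours, purely illustrative and not part of the argument; all function names are our own) evaluates $G_{\m D}$ and $P_{\m D}$ and verifies the symmetry and positivity of the Green's function, together with two instances of the Poisson integral: $P_{\m D}[1]\equiv 1$ and $P_{\m D}[\cos\theta](z)=\Re z$.

```python
import cmath
import math

def green_disk(z, w):
    # G_D(z, w) = -log | (z - w) / (1 - conj(w) z) |
    return -math.log(abs((z - w) / (1 - w.conjugate() * z)))

def poisson_disk(z, theta):
    # P_D(z, e^{i theta}) = (1 - |z|^2) / |z - e^{i theta}|^2
    return (1 - abs(z) ** 2) / abs(z - cmath.exp(1j * theta)) ** 2

def poisson_integral(u, z, n=1000):
    # (1 / 2 pi) int_0^{2 pi} P_D(z, e^{i theta}) u(theta) d theta, via a Riemann sum
    h = 2 * math.pi / n
    return sum(poisson_disk(z, k * h) * u(k * h) * h for k in range(n)) / (2 * math.pi)

z, w = 0.3 + 0.4j, -0.5 + 0.1j
assert abs(green_disk(z, w) - green_disk(w, z)) < 1e-12   # symmetry
assert green_disk(z, w) > 0                               # positivity
assert abs(poisson_integral(lambda t: 1.0, z) - 1.0) < 1e-8   # P[1] = 1
assert abs(poisson_integral(math.cos, z) - z.real) < 1e-8     # P[cos] = Re z
```

The last assertion illustrates Fatou's theorem in a smooth case: the Poisson integral of the boundary values of the harmonic function $\Re z$ recovers that function.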
Every $\phi$ with $\mc D_{\m C} (\phi) < \infty$ has a unique decomposition with respect to $D$ \begin{equation}\label{eq:orthogonal} \phi = \phi^0 + \phi^{h} \quad \text{and} \quad \brac{\phi^0, \phi^{h}}_\nabla = 0 \end{equation} such that $\phi^0 \in \mc E_0(D)$ and $\phi^{h}$ with $\mc D_{\m C}(\phi^h)<\infty$ is harmonic in $D$, see, e.g., \cite[P.\,77]{adams} and \cite[App.\,B]{VW1}. The Hardy space $\mc{H}^p$ (resp. harmonic Hardy space $\mathfrak h^p$) for $p > 0$ consists of functions $f$ holomorphic (resp. harmonic) in $\m D$ satisfying \[\sup_{0 \le r <1} \int_0^{2\pi}|f(re^{i\theta})|^p \mathrm{d} \theta < \infty.\] Let $\gamma$ be a Jordan curve in $\m C$, and write $D$ and $D^*$ for the connected components of $\mathbb{C} \smallsetminus \gamma$. Let $f$ be a conformal map from $\mathbb{D}$ onto the bounded component $D$, and $h$ a conformal map from $\mathbb{D}^*$ onto the unbounded component $D^*$ fixing~$\infty$. The Loewner energy of $\gamma$ is defined as \begin{equation} \label{eq_disk_energy} I^L(\gamma) = \mc D_{\m D} (\log \abs{f'}) + \mc D_{\m D^*} (\log \abs{h'})+4 \log \abs{f'(0)} - 4 \log \abs{h'(\infty)}, \end{equation} where $h'(\infty):=\lim_{z\to \infty} h'(z) = \tilde h'(0)^{-1}$ and $\tilde h(z) := 1/h(1/z)$. The Loewner energy $I^L$ is finite if and only if either Dirichlet integral on the right-hand side in \eqref{eq_disk_energy} is finite. We summarize this as a lemma. \begin{lemma} \label{thm_TT_equiv_T01} Suppose $\gamma$ is a bounded Jordan curve. Then the following statements are equivalent: \begin{enumerate}[itemsep= -2pt, topsep= -1pt] \item $I^L (\gamma) < \infty;$ \item $\mc D_{\m D} (\log \abs{f'}) = {\displaystyle \frac{1}{\pi}\int_{\mathbb{D}} \left|\frac{f''(z)}{f'(z)}\right|^2 \mathrm{d} A(z) < \infty;}$ \item $ \mc D_{\m D^*} (\log \abs{h'}) <\infty$. 
\end{enumerate} \end{lemma} There are several additional equivalent ways to define the Loewner energy, and the class of Jordan curves with finite Loewner energy coincides with the class of Weil-Petersson quasicircles, see \cite{W2}. Henceforth we will refer to Jordan curves with finite Loewner energy as Weil-Petersson quasicircles. \subsection{The Loewner-Kufarev equation}\label{sect:Loewner-Kufarev-Equation} We first consider the version for the unit disk and then the whole-plane version in Section~\ref{sec:whole-plane}. Let $\mc M (\Omega)$ (resp. $\mc M_1 (\Omega)$) be the space of Borel measures (resp. probability measures) on $\Omega$ endowed with the topology induced by weak convergence on compact subsets. Let $$\mc N_+ = \{\rho\in \mc M (S^1 \times \m R_+): \rho(S^1\times I) = |I| \text{ for any interval } I \}.$$ Any $\rho \in \mc N_+$ can be disintegrated into a family of measures $(\rho_t)_{t \ge 0}$, measurable in $t$, such that $\rho (\mathrm{d} \theta \mathrm{d} t)= \rho_t (\mathrm{d} \theta) \mathrm{d} t$ and $\rho_t \in \mc M_1(S^1)$. The disintegration is unique in the sense that any two disintegrations $(\rho_t), ( \widetilde \rho_t)$ of $\rho$ must satisfy $\rho_t = \widetilde \rho_t$ for a.e. $t$. See, e.g., \cite[Sec.\,70]{DM_prob_potential}. We will work with the Loewner-Kufarev equation driven by measures $\rho \in \mathcal{N}_+$ and we now review some of the relevant definitions and facts. See, e.g., \cite[Ch.\,8]{rosenblum} for proofs and a detailed discussion in this general setting. Given $\rho \in \mathcal{N}_+$, consider the associated Herglotz integrals \begin{equation}\label{def:herglotz}H_t (z) := H[\rho_t](z) = \int_{0}^{2\pi} \frac{e^{i \theta} + z}{e^{i \theta} -z} \mathrm{d} \rho_t (\theta).\end{equation} The mapping $z \mapsto H_t(z)$ is a holomorphic function in $\mathbb{D}$ with positive real part and $t \mapsto H_t(z)$ is measurable.
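As a simple illustration (ours, not part of the paper), \eqref{def:herglotz} can be evaluated for two basic driving measures: the uniform probability measure, for which $H_t \equiv 1$, and a point mass at $e^{i\theta_0}$, for which $H_t$ is the Herglotz kernel itself; in both cases $\Re H_t > 0$ in $\m D$.

```python
import cmath
import math

def herglotz_uniform(z, n=2000):
    # H[rho](z) for the uniform probability measure d rho = d theta / (2 pi)
    h = 2 * math.pi / n
    s = sum((cmath.exp(1j * k * h) + z) / (cmath.exp(1j * k * h) - z) * h
            for k in range(n))
    return s / (2 * math.pi)

def herglotz_point_mass(z, theta0):
    # H[delta_{theta0}](z) = (e^{i theta0} + z) / (e^{i theta0} - z)
    e = cmath.exp(1j * theta0)
    return (e + z) / (e - z)

z = 0.6 - 0.2j
assert abs(herglotz_uniform(z) - 1.0) < 1e-8   # uniform measure: H = 1
assert herglotz_point_mass(z, 1.0).real > 0    # Re H = Poisson kernel > 0
```

The first assertion reflects the mean-value property of the Herglotz kernel; the second, the fact that $\Re H_t$ is a Poisson integral of a positive measure.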
In this setting, the Loewner-Kufarev partial differential equation for $\m D$ reads \begin{equation} \label{eq:loewner-pde} \partial_t f_t (z) = -z f_t'(z) H_t(z), \quad f_0 (z) = z. \end{equation} The equation \eqref{eq:loewner-pde} is understood in the sense that $t \mapsto f_t (z)$ is absolutely continuous, and \eqref{eq:loewner-pde} is required to hold for a.e. $t \ge 0$. The exceptional set can be taken to be the same for all $z \in \m D$. Throughout the paper, all differential equations are interpreted in a similar manner. The unique solution to \eqref{eq:loewner-pde} gives rise to a family of conformal maps $(f_t)_{t \ge 0}$ fixing $0$ such that for each $t$, $f_t:\m D \to D_t = f_t(\m D)$, and if $s < t$, then $D_t \subset D_s$. In the present case, where the disintegrated measures are probability measures, we have $f'_t(0) = e^{-t}$. (Indeed, $\partial_t \log f_t'(0) = - H_t (0 ) = -|\rho_t| \equiv -1$.) We refer to the family $(f_t)_{t \ge 0}$ as the \emph{Loewner chain} driven by $\rho$. We call the family of compact sets $(K_t=\overline{\m D} \smallsetminus D_t)_{t \ge 0}$ the \emph{hulls} associated to the Loewner chain. A converse statement is also true. Consider a monotone family of simply connected domains $(D_t)_{t \ge 0}$ containing $0$: $D_0 = \m D$, $0\in D_t$ for all $t$, and if $s < t$ then $D_t \subset D_s$. Let $f_t : \m D \to D_t$ be the conformal map normalized by $f_t(0)=0, f'_t(0)>0$. According to a theorem of Pommerenke \cite[Satz 4]{Pom1965} (see also \cite[Thm~6.2]{Pom_uni} and \cite{rosenblum}), if $t\mapsto f_t'(0)$ is a decreasing homeomorphism from $\m R_+$ onto $(0,1]$, one can reparametrize $(D_t)_{t\ge 0}$ so that $f_t'(0) = e^{-t}$, and there exists a measurable family of holomorphic functions $(H_t)_{t \ge 0}$ in $\m D$, uniquely defined for a.e. $t \ge 0$, with positive real part such that \eqref{eq:ODE} holds.
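For the uniform driving measure one can solve \eqref{eq:loewner-pde} in closed form: $H_t \equiv 1$ and $f_t(z) = e^{-t}z$, consistent with the normalization $f_t'(0)=e^{-t}$. The following sketch (ours, illustrative only) verifies the equation for this explicit solution by a centered finite difference in $t$.

```python
import math

# Uniform driving measure: H_t = 1 and f_t(z) = e^{-t} z solves
#     d/dt f_t(z) = -z f_t'(z) H_t(z),  f_0(z) = z.

def f(t, z):
    return math.exp(-t) * z

t, z, dt = 0.7, 0.35 + 0.2j, 1e-5
lhs = (f(t + dt, z) - f(t - dt, z)) / (2 * dt)  # d/dt f_t(z), finite difference
rhs = -z * math.exp(-t) * 1.0                   # -z f_t'(z) H_t(z) with H_t = 1
assert abs(lhs - rhs) < 1e-9
assert abs(abs(f(t, 1.0)) - math.exp(-t)) < 1e-12  # f_t'(0) = e^{-t}
```

Here the hulls $K_t = \overline{\m D} \smallsetminus e^{-t}\m D$ are closed annuli, the simplest example of a Loewner chain.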
The measurable family of probability measures $(\rho_t)_{t \ge 0}$ is obtained from the Herglotz-Riesz representation of $(H_t)_{t \ge 0}$. The upshot of the discussion is that the measure $\rho \in \mc N_+$, Loewner chain $(f_t)_{t\ge 0}$, uniformizing Loewner chain $(g_t)_{t \ge 0}$, the monotone family of simply connected domains $(D_t)_{t\ge 0}$, and the increasing family of hulls $(K_t)_{t \ge 0}$ each determine the others. Let $\mathcal{L}_+$ be the set of Loewner chains $(f_t)_{t\ge 0}$, and change notation by writing $f(z,t) = f_t(z)$. We endow $\mathcal{L}_+$ with the topology of uniform convergence of $f$ on compact subsets of $\mathbb{D} \times \m R_+$. In this setting, the continuity of the Loewner transform is well-known. \begin{lemma}[See e.g., {\cite[Prop. 6.1]{MilShe2016} and \cite{JohSolTur2012}}]\label{lem:cont-bij-loewner-transf} The Loewner transform $\rho \in \mathcal{N}_+ \mapsto f \in \mathcal{L}_+$ is a homeomorphism. \end{lemma} The \emph{uniformizing Loewner chain} $(g_t:=f_t^{-1}: D_t \to \m D)_{t \ge 0}$ driven by $\rho$ satisfies a similar equation \begin{equation} \label{eq:ODE} \partial_t g_t (z) = g_t(z) H_t(g_t(z)), \quad g_0 (z) = z, \end{equation} for a.e. $t < \tau(z)$ where for $z \in \m D$, \begin{equation} \tau(z) = \sup\{ t \ge 0, \, \text{ the solution of \eqref{eq:ODE} exists for all } s < t\} = \sup\{t \ge 0: z \in D_t\}, \label{eq:tau_def} \end{equation} and we set by convention $\tau (z) = 0$ for $z \in S^1$. Further, we can write \[D_t = \{z \in \m D, \, \tau(z) > t\}.\] An important property which follows immediately from \eqref{eq:ODE} is the \emph{domain Markov property} of the Loewner transform. \begin{lemma}\label{lem:domain_markov} Let $\rho \in \mc N_+$ and $(g_t)_{t \ge 0}$ the associated uniformizing Loewner chain. 
Fix $T \ge 0$. Then $(\tilde g_s : = g_{s +T} \circ g_T^{-1})_{s\ge 0}$ is the uniformizing Loewner chain driven by the measure $\tilde \rho \in \mc N_+$ with disintegration $\tilde \rho_s = \rho_{s+T}$. \end{lemma} \subsection{Chord-arc foliations and the winding function}\label{sect:winding-function} Let $\gamma$ be a rectifiable Jordan curve. We say that $\gamma$ is chord-arc if there exists a constant $A < \infty$ such that for all $z,w \in \gamma$, we have the inequality $|\gamma^{z,w}| \le A|z-w|$, where $\gamma^{z,w}$ is the subarc of $\gamma \smallsetminus \{z,w\}$ of smaller length. Consider a monotone family of simply connected domains $(D_t)_{t \ge 0}$ that can be described by Loewner evolution as in Section~\ref{sect:Loewner-Kufarev-Equation}. We call the family $(\gamma_t := \partial D_t)_{t \ge 0}$ a \emph{non-injective chord-arc foliation} of $\ad {\m D} \smallsetminus \{0\}$ if \begin{enumerate}[itemsep=-2pt] \item For all $t \ge 0$, $\gamma_t$ is a chord-arc Jordan curve. \item It is possible to parametrize each curve $\gamma_t, t \ge 0,$ by $S^1$ so that the mapping $t \mapsto \gamma_t$ is continuous in the supremum norm. \item For all $z \in \ad{\m D} \smallsetminus \{0\}$, $\tau(z) <\infty$, where $\tau$ is defined in \eqref{eq:tau_def}. \end{enumerate} For convenience we shall simply say foliation in what follows, but we stress that the chord-arc assumption is always in effect. We refer to the Jordan curves $\gamma_t$ as \emph{leaves}. Non-injective here means that we do not require that there is a unique leaf passing through each point of $\ad{\m D}$. The following lemma shows that a foliation in the above sense indeed foliates the punctured unit disk. \begin{lemma}\label{lem:tau_foliates} Assume that $(\gamma_t)_{t\ge 0}$ is a foliation of $\ad{\m D} \smallsetminus \{0\}$. Then for all $z \in \ad{\m D} \smallsetminus \{0\}$, we have $z \in \gamma_{\tau(z)}$. In particular, $\bigcup_{t \ge 0} \gamma_t = \ad{\m D} \smallsetminus \{0\}$.
\end{lemma} This lemma shows that the definitions of a foliation of $\ad {\m D} \smallsetminus \{0\}$ and of $\tau$ coincide with those in Section~\ref{sec:main_results}. \begin{proof} From the monotonicity of $(D_t)$ and the continuity of $t \mapsto \gamma_t$, we have $\bigcup_{t > \tau(z)} D_t = D_{\tau(z)}$. Since $z \notin D_t$ for all $t >\tau(z)$, we have $z \notin D_{\tau(z)}$. It remains to show that $z \in \ad {D_{\tau(z)}}$. Since $((\ad {D_t})^c : = \hat{\m{C}} \smallsetminus \ad{D_t})$ is also a monotone family of domains bounded by a continuous family of Jordan curves, we have $$\cap_{t < \tau (z)} \ad {D_t} = \Big( \bigcup_{t < \tau (z)} (\ad {D_t})^c \Big)^c = \Big( (\ad {D_{\tau(z)}})^c \Big)^c = \ad {D_{\tau(z)}}.$$ The monotonicity shows that for all $t < \tau(z)$, $z \in D_t$. Therefore $z \in \cap_{t < \tau (z)} \ad {D_t} = \ad {D_{\tau(z)}}$ and this completes the proof. \end{proof} If $\rho \in \mathcal{N}_+$ and the family of interfaces $(\gamma_t = \partial D_t)_{t \ge 0}$ produced by Loewner evolution forms a foliation of $\ad{\m D} \smallsetminus\{0\}$, we say that $\rho\in \mc N_+$ \emph{generates a foliation}. We will often choose the conformal parametrization for each $\gamma_t$, obtained by continuously extending $f_t: \m D \to D_t$ to $\overline{\m D}$ by Carath\'eodory's theorem and then restricting to $S^1$. We will now define the winding function $\varphi$ associated with a foliation $(\gamma_t = \partial D_t)_{t \ge 0}$. It will be convenient to use the notation \begin{equation}\label{eq:vartheta} \vartheta[f](z) = \arg \frac{z f'(z)}{f(z)} = \int_0^z \mathrm{d} \arg \frac{f(w) - f(z)}{w - z}, \end{equation} when $f$ is a conformal map (defined on a simply connected domain) fixing $0$. Here and elsewhere, the continuous branch of $\arg (z f'/f)$ is chosen to equal $0$ at the origin. The integral is taken along any continuous path from $0$ to $z$.
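As a concrete example (ours, for illustration), take the M\"obius map $f(z) = z/(1-az)$ with $|a|<1$, which is conformal on $\m D$ and fixes $0$. Then $zf'(z)/f(z) = 1/(1-az)$ lies in the right half-plane, so the continuous branch of $\vartheta[f]$ vanishing at the origin is simply the principal argument. A quick numerical check:

```python
import cmath

def vartheta(a, z):
    # theta[f](z) = arg( z f'(z) / f(z) ) for f(z) = z / (1 - a z);
    # here z f'(z) / f(z) = 1 / (1 - a z) has positive real part when |a z| < 1,
    # so the principal branch is the continuous branch vanishing at 0.
    f = z / (1 - a * z)
    fp = 1 / (1 - a * z) ** 2   # f'(z)
    return cmath.phase(z * fp / f)

a, z = 0.4 + 0.1j, 0.5 - 0.3j
assert abs(vartheta(a, z) - cmath.phase(1 / (1 - a * z))) < 1e-12
assert abs(vartheta(a, 1e-9)) < 1e-8   # theta[f] -> 0 at the origin
```

The first assertion confirms the algebraic simplification $zf'/f = 1/(1-az)$; the second, the normalization $\vartheta[f](0)=0$.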
The function $\vartheta$ satisfies a chain rule \begin{equation}\label{eq:theta_chain} \vartheta[f \circ g](z) = \vartheta [f] \circ g + \vartheta [g] \end{equation} if $f$, $g$, and $f\circ g$ are all conformal maps defined on simply connected neighborhoods of the origin which $f$ and $g$ fix. \begin{lemma}\label{lem:goluzin} Suppose $f: \m D \to D \subset \m D$ is a conformal map satisfying $f(0)=0$ and assume $\partial D \cap \partial \m D \neq \emptyset$. If $z \in \partial \m D$ is such that the non-tangential limit $\arg f'(z)$ exists and $f(z) \in \partial \m D$, then $\vartheta[f](z)=0$. \end{lemma} \begin{proof} If $z, f(z) \in \partial \m D$, then $w \mapsto (f(w) -f(z))/(w -z)$ takes values in the slit domain $\m C \smallsetminus \{- t f(z)/z, \, t\ge 0\}$. We can therefore define $w \mapsto \arg (f(w) -f(z))/(w -z)$ continuously and unambiguously for $w \in \m D$ by choosing a branch of $\arg$ on this slit domain. Consider the path $s\mapsto sz$, $s \in [0,1]$. Equation~\eqref{eq:vartheta} and \cite[Thm.\,II.4.2]{GM} show that $$\vartheta [f] (z) =\lim_{s \to 1-} \arg \frac{f(sz) - f(z)}{sz - z} - \arg \frac{f(z)}{z} = 0$$ which completes the proof. \end{proof} Let $\mathcal{T}_t$ be the set of $z \in \gamma_t$ at which $\gamma_t$ is differentiable. Since $\gamma_t$ is rectifiable, $\gamma_t \smallsetminus \mathcal{T}_t$ has arclength $0$. If $z \in \mathcal{T}_t$, the harmonic function $\vartheta[g_t](w)$ has a non-tangential limit $\vartheta[g_t](z)$ as $w$ tends to $z$ inside $D_t$. See \cite[Thm.\,II.4.2]{GM} or \cite[Thm.\,6.8]{Pommerenke_boundary}. We define \begin{equation}\label{eq:def_winding} \hat\varphi(t,z) = \vartheta[g_t](z), \quad z \in \mathcal{T}_t, \quad t \ge 0. \end{equation} \begin{lemma}\label{lem:merge} Suppose $s< t$ and $z \in \mathcal{T}_s \cap \mathcal{T}_t$ so that $z \in \gamma_s \cap \gamma_t$ and both curves are differentiable at $z$. Then $\hat \varphi(s,z) = \hat \varphi(t,z)$.
\end{lemma} \begin{proof} By the chain rule \eqref{eq:theta_chain} and Lemma~\ref{lem:domain_markov}, it is enough to assume that $s=0$ and verify that $\hat \varphi(t,z) =0$ if $z\in S^1$. This follows from Lemma~\ref{lem:goluzin}, which shows that $\vartheta [g_t] (z) = -\vartheta[f_t] (g_t(z)) = 0$. \end{proof} Let $\mathcal{T}:=\cup_{t \ge 0} \mathcal{T}_t \subset \overline{\m D}$. We define the \emph{winding function} $\varphi: \mathcal{T} \to \mathbb{R} $ by forgetting the leaf in $\hat \varphi$, namely $\varphi(z) :=\hat \varphi(t, z)$ if $z \in \mathcal{T}_t$. We set by convention $\varphi(0)=0$. By Lemma~\ref{lem:merge}, $\varphi$ is well-defined. \begin{rem} Geometrically, modulo $2 \pi$, the winding function at $z \in \mathcal T_t$ is the angle between $\gamma_{t}$ and the circle passing through $z$ centered at $0$, as described in Section~\ref{sec:main_results}. \end{rem} \begin{rem}If the leaves are all $C^1$, $\varphi$ is defined everywhere since $\mathcal{T} = \overline{\m D} \smallsetminus \{0\}$ by Lemma~\ref{lem:tau_foliates}. In the general case of a chord-arc foliation, a function defined arclength-a.e. on all leaves need not be defined on $\ad{\m D}\smallsetminus\{0\}$ Lebesgue-a.e. To illustrate the subtlety, it is possible to construct a subset $X$ of $\m D$ of full Lebesgue measure and a foliation of piecewise smooth curves such that each leaf intersects $X$ in exactly one point, see, e.g., \cite{FubiniFoiled}. On the other hand, we shall prove in Proposition~\ref{prop:unique_extension} that a function in $\mc E_0(\m D)$ is determined by its ``values'' (interpreted appropriately) on the leaves if the curves are chord-arc. \end{rem} \begin{lemma} \label{lem:harmonic} Suppose $\rho \in \mc N_+$ generates a foliation and let $T \ge 0$. If $\rho_t$ is the uniform measure for all $t \ge T$, then for all $z \in D_T$, \begin{equation}\label{eq:harmonic} \varphi (z) = \vartheta [g_T] (z).
\end{equation} In particular, $\varphi$ is defined and harmonic in $D_T$. Moreover, $(\gamma_t)_{t \ge T}$ are the equipotentials in $D_T$. \end{lemma} \begin{proof} If $T = 0$, $\rho_t$ is the uniform measure for all $t \ge 0$ which implies that $g_t(z) = e^t z$ and $\varphi = 0 = \vartheta[g_0]$. Now let $t > T \ge 0$. By Lemma~\ref{lem:domain_markov}, we have $g_t \circ f_T (z) = e^{t - T} z$. It follows from $g_t =(g_t \circ f_T) \circ g_T$, \eqref{eq:theta_chain} and $\vartheta [z \mapsto e^{t- T} z] = 0$ that $$\varphi (z) = \vartheta[g_t] (z) = \vartheta [g_T] (z) \qquad \forall z \in \partial D_t,$$ and we have $(\gamma_t = f_T (e^{T-t} S^1))_{t > T}$ which is by definition the family of equipotentials in $D_T$. We conclude the proof of \eqref{eq:harmonic} using $\cup_{t > T} \gamma_t = \cup_{t > T} g_T^{-1} (e^{T- t} S^1)= g_T^{-1} (\m D \smallsetminus \{0\}) = D_T \smallsetminus \{0\}$ (and by convention, $\varphi(0) = 0 =\vartheta [g_T] (0)$). \end{proof} \begin{lemma}\label{lem:zero_trace_varphi} Suppose $\rho \in \mc N_+$ generates a foliation and let $ \varphi$ be the corresponding winding function. For every $T \ge 0$, let $ \varphi^{(T)}$ denote the winding function associated to the truncated measure $\rho^{(T)}$ defined by $\rho^{(T)}_t : = \rho_t$ for $t \le T$ and $\rho_t^{(T)}$ is uniform for $t > T$. Then $(\varphi -\varphi^{(T)}) \circ f_T$ is the winding function generated by the measures $(\tilde \rho_s:= \rho_{T+s})_{s \ge 0}$. \end{lemma} \begin{proof} From Lemma~\ref{lem:domain_markov}, the winding function $\tilde \varphi$ associated to $\tilde \rho$ is obtained from $$\vartheta [\tilde g_s] = \vartheta [g_{s +T} \circ f_T] = \vartheta [g_{s+T}] \circ f_T - \vartheta [g_T] \circ f_T$$ where $(\tilde g_s)_{s \ge 0}$ is the Loewner chain of $\tilde \rho$. We conclude the proof using Lemma~\ref{lem:harmonic}. 
\end{proof} In other words, decomposing the driving measure $\rho$ into the truncated (and extended by uniform measure) measure on $S^1 \times [0,T]$, and the measure $\tilde \rho: s \mapsto \rho_{T+s}$ amounts to decomposing the winding function into $\varphi^{(T)}$ which is defined and harmonic in $D_T$ and $\varphi - \varphi^{(T)}$ which vanishes (arclength-a.e.) on every $\gamma_t$ for $t \le T$. \section{The foliation disintegration isometry}\label{sec:hadamard} \subsection{Hadamard's formula} \label{subsec:hadamard} In this section we use the Loewner-Kufarev equation to derive a formula for the dynamics of the Green's function. The resulting expression is a version of the classical Hadamard formula in a smooth setting. We fix $\rho \in \mc N_+$ and let $(g_t : D_t \to \m D)_{t\ge 0}$ be the associated uniformizing Loewner chain. For each $t \ge 0$, we let $G_t(z,w)$ be the Green's function for $D_t$. For $z,w$ fixed, we extend the definition of $G_t(z,w)$ to $t \ge \min\{\tau(z), \tau(w)\}$ by $0$. We will show that with this definition, $t \mapsto G_t(z,w)$ is absolutely continuous under the assumption that $\rho$ generates a foliation. Note that $t \mapsto G_t(z,w)$ is always absolutely continuous for $t \in [0,\min\{\tau(z), \tau(w)\}) $ but need not be continuous at $\min\{\tau(z), \tau(w)\}$, e.g., in the case when $\partial D_t$ disconnects an open set containing $z, w$ from $0$ as $t \to \tau = \tau(z) = \tau(w)$. \begin{lemma} \label{loewner-hadamard}\label{lem:G_to_0} Consider the Loewner-Kufarev equation driven by $\rho \in \mathcal{N}_+$. Then for a.e.\ $t < \min\{\tau(z) , \tau(w) \}$ and all $z,w \in D_t$, \[ -\partial_t G_{D_t}(z,w)= \int_0^{2\pi} P_{\m D}(g_t(z), e^{i\theta})P_{\m D}(g_t(w), e^{i\theta})\mathrm{d} \rho_t(\theta). \] If in addition $\rho$ is assumed to generate a foliation, then $t \mapsto G_t(z,w)$ is absolutely continuous for all $t$. 
\end{lemma} \begin{proof} First note that for $z,w \in D_t$, by conformal invariance of the Green's function, \begin{equation}\label{eq:G_t} G_t(z,w) = G_{D_t}(z,w)= -\log \left|\frac{g_t(z) - g_t(w)}{1- \overline{g_t(w)} g_t(z)}\right|. \end{equation} Setting $z_t=g_t(z), w_t = g_t(w)$, the Loewner-Kufarev equation \eqref{eq:ODE} implies that for a.e. $t < \min\{\tau(z) , \tau(w) \}$, $\partial_t z_t = z_t H_t(z_t)$, $\partial_t w_t = w_t H_t(w_t)$ and so \begin{align*} -\partial_t G_t(z,w) & = \Re \partial_t \log \frac{z_t - w_t}{1- \overline{w_t} z_t} \\ & = \Re \frac{\overline{w_t}z_t \left(H_t(z_t) + \overline{H_t(w_t)} \right)}{1- \overline{w_t} z_t} + \Re \frac{z_t H_t(z_t) - w_t H_t(w_t)}{z_t - w_t}. \end{align*} Next, note that \begin{align} \Re \frac{\overline{w_t}z_t \left(H_t(z_t) + \overline{H_t(w_t)} \right)}{1- \overline{w_t} z_t}& = \int_0^{2\pi}\Re \frac{\overline {w}_t z_t \cdot 2(1-z_t\overline{w_t})}{(1-z_t \overline{w_t} )(e^{i\theta} -z_t)(e^{-i\theta} - \overline{w}_t)} \mathrm{d} \rho_t(\theta) \nonumber \\ & = \int_0^{2\pi}\Re \frac{ 2 z_t \overline{w_t} }{(e^{i\theta} -z_t)(e^{-i\theta} - \overline{w}_t)} \mathrm{d} \rho_t(\theta) \nonumber \\ & = \int_0^{2\pi} \Re \frac{ 2 z_t \overline{w_t} (e^{-i\theta} -\overline{z_t})(e^{i\theta} - w_t) }{|e^{i\theta}-z_t|^2|e^{i\theta}-w_t|^2} \mathrm{d} \rho_t(\theta) \label{eq1}. \end{align} Moreover, \begin{align} & \Re \frac{z_t H_t(z_t) - w_t H_t(w_t)}{z_t - w_t} \nonumber \\ & = \int_0^{2\pi} \Re \left( \frac{e^{i\theta} + z_t}{e^{i\theta} - z_t} + \frac{2w_t e^{i\theta}}{(e^{i\theta}-z_t)(e^{i\theta}-w_t)} \right) \mathrm{d} \rho_t(\theta) \nonumber \\ & = \int_0^{2\pi} \Re \frac{(e^{i \theta} + z_t)(e^{-i\theta} -\overline{z_t}) |e^{i\theta}-w_t|^2 + 2w_t e^{i\theta}(e^{-i\theta}-\overline{z_t})(e^{-i\theta}-\overline{w_t})}{|e^{i \theta}-z_t|^2|e^{i\theta}-w_t|^2} \mathrm{d} \rho_t(\theta) \label{eq2}.
\end{align} Adding \eqref{eq1} and \eqref{eq2} and simplifying, \begin{align*} -\partial_t G_t(z,w) & = \int_0^{2\pi} \frac{1-|z_t|^2}{|e^{i\theta}-z_t|^2} \cdot \frac{1-|w_t|^2}{|e^{i\theta}-w_t|^2} \mathrm{d} \rho_t(\theta) = \int_0^{2\pi} P_{\m D}(z_t, e^{i\theta})P_{\m D}(w_t, e^{i\theta})\mathrm{d} \rho_t(\theta), \end{align*} as claimed. Let us now assume $\rho \in \mc N_+ $ generates a foliation and prove that $t \mapsto G_t(z,w)$ is absolutely continuous for all $t \ge 0$. For all $z,w \in \m D$, $z \neq w$, we need to show that $G_t(z,w) \to 0$ as $t\nearrow \min\{\tau(z),\tau(w)\}$. Without loss of generality, we assume that $\tau(z) \le \tau(w)$. If $\tau(z) < \tau (w)$, it is clear from \eqref{eq:G_t} that $G_t(z,w) \to 0$ as $t \to \tau(z)$ since $|g_t(w)|$ is bounded away from $1$ for $t \in [0,\tau(z)]$. Assume now that $\tau = \tau(z) = \tau(w)$. Lemma~\ref{lem:tau_foliates} implies that $z, w \in \partial D_\tau$. Since $g_\tau : D_\tau \to \m D$ is a conformal map between two Jordan domains, $g_\tau$ extends to a homeomorphism $\ad D_\tau \to \ad {\m D}$ and $g_\tau(z) \neq g_\tau(w) \in S^1$. We claim that $$ \lim_{t \to \tau-} g_t (z) = g_\tau(z). $$ Assuming this, it follows that $|g_t (z) - g_t(w)|$ is bounded away from $0$ on $[0,\tau]$. Therefore $G_t(z,w) \to 0$ as $t\to \tau-$ using \eqref{eq:G_t}. To prove the claim, let $t_n, n=1,2,\ldots,$ be any sequence such that $t_n \nearrow \tau$ and such that $\zeta := \lim_{n \to \infty} g_{t_n}(z)$ exists. Pick $\epsilon > 0$ and let $v \in D_\tau$ be such that the interior distance between $z$ and $v$ in $D_\tau$ is at most $\epsilon$. By the Beurling estimate there is a constant $C$ that does not depend on $z,v$ such that $|g_\tau(z) - g_\tau(v)| \le C \sqrt{\epsilon}$ and by monotonicity of the domains the same estimate holds with $\tau$ replaced by any $t_n < \tau$. Let $n_0$ be large such that if $n > n_0$ then $|g_{t_n}(v)- g_{\tau}(v)| + |g_{t_n}(z)-\zeta| < \epsilon$. 
Such $n_0$ exists by definition of $\zeta$ and since $t \mapsto g_{t}(v)$ is continuous on $[0,\tau(v))$ and $\tau(v) > \tau(z)$. Combining these estimates with the triangle inequality, it follows that $|g_\tau(z) - \zeta| \le 2C \sqrt{\epsilon}$. \end{proof} \subsection{Disintegration isometry}\label{subsec:hadamard_isometry} We assume in this section that $\rho \in \mc N_+$ generates a foliation of $\ad{\m D} \smallsetminus\{0\}$. We will write \[ L^2(2\rho) := L^2(S^1 \times \m R_+ , 2\rho) \] and $\langle \cdot, \cdot \rangle_{L^2(2\rho)}$ for the corresponding inner product. Recall also that we write $\mc{E}_0(\m D)$ for $W^{1,2}_0(\m D)$ equipped with the Dirichlet energy norm. \begin{lemma} \label{lem:disintegration_general} Suppose $\rho \in \mc N_+$ generates a foliation and that $ \phi \in C^\infty_c (\m D)$. Then \begin{equation}\label{eq:iota_general} \iota [ \phi ] (\theta,t) := \frac{1}{2\pi}\int_{\m D} \Delta ( \phi \circ f_t) (z) P_{\m D} (z, e^{i\theta}) \,\mathrm{d} A(z) \end{equation} is an element of $L^2 (2\rho)$. Moreover, for all $T > 0$, \begin{align*} \mc{D}_{\m D}(\phi) - \mc{D}_{\m D}(\phi^0_T) = \norm{\iota[\phi]\mathbf{1}_{S^1 \times [0,T]}}^2_{L^2(2\rho)} \end{align*} where $\phi_T^0 \in \mc E_0(\m D)$ is the zero-trace part of $\phi \circ f_T \in W^{1,2}(\m D)$ as in \eqref{eq:orthogonal}. In particular, the operator $\iota: \mc E_0(\m D) \cap C^\infty_c (\m D) \to L^2 (2\rho)$ is norm-preserving. \end{lemma} \begin{proof} Let $\phi \in C_c^\infty(\m D)$, set $\mu = \Delta \phi$, and write $G_0 = 2\pi \Delta^{-1}$, where we recall that we consider the positive Laplacian. (We write $G_0$ for both the operator and function.)
For $z \in \m D$, $$ \phi (z) = \frac{1}{2\pi}G_0 \mu (z) = \frac{1}{2\pi}\int_{\m D} G_0(w,z) \mu(w) \,\mathrm{d} A(w).$$ Using integration by parts, since $\phi$ is smooth and has compact support, \begin{align*} \int_{\m D} |\nabla \phi (z)|^2 \mathrm{d} A(z) & = \int_{\m D} \phi (z) \Delta \phi (z) \,\mathrm{d} A(z)\\ & = \frac{1}{2\pi} \int_{\m D} \int_{\m D} G_0(w,z) \mu(w) \mu(z) \,\mathrm{d} A (w) \,\mathrm{d} A(z). \end{align*} As before we set $G_t(z,w) = G_{D_t}(z,w) = G_0 (z_t, w_t)$ for $z,w \in D_t$ (recall that $z_t = g_t (z)$ and $w_t = g_t(w)$) and $0$ otherwise. Let \[ \phi_t := \phi \circ f_t , \quad \mu_t : = \Delta \phi_t= |f_t'|^{2} \mu \circ f_t \] and note that the zero-trace part of $\phi_t$ satisfies $\phi_t^0 = G_0 \mu_t/2\pi$. Moreover, performing a change of variable, \begin{align} \int_{\m D} |\nabla \phi_T^0 (z)|^2 \mathrm{d} A(z) & = \frac{1}{2\pi} \int_{\m D} \int_{\m D} G_0(w,z) \mu_T(w) \mu_T(z) \,\mathrm{d} A (w) \,\mathrm{d} A(z) \nonumber \\ & = \frac{1}{2\pi} \int_{\m D} \int_{\m D} G_0(w,z) \mu \circ f_T (w) \mu \circ f_T(z) |f_T'(z)|^2 |f_T'(w)|^2 \mathrm{d} A (w) \,\mathrm{d} A(z) \nonumber \\ & = \frac{1}{2\pi} \int_{D_T} \int_{D_T} G_T(w,z) \mu (w) \mu(z) \mathrm{d} A (w) \,\mathrm{d} A(z). \label{june10} \end{align} Next, note that by Lemma~\ref{loewner-hadamard} $-\partial_t G_t(z,w) = \int_0^{2 \pi} P_{\m D} (g_t(z), e^{i\theta}) P_{\m D} (g_t(w),e^{i\theta}) \mathrm{d} \rho_t(\theta) \ge 0$ for $z,w \in \mathbb{D}$ and that $P_{\m D} (g_t(w),e^{i\theta}) \ge 0$. 
Since the singularity of the Green's function is logarithmic (and therefore integrable), we obtain \begin{align*} \int_{0}^T \int_{D_t}\int_{D_t} -\partial_t G_t(z,w) \mathrm{d} A(z) \mathrm{d} A(w) \mathrm{d} t & =\int_0^T \int_{\mathbb{D}}\int_{\mathbb{D}} -\partial_t G_t(z,w) \,\mathrm{d} A(z) \,\mathrm{d} A(w)\,\mathrm{d} t \\ & =\int_{\mathbb{D}}\int_{\mathbb{D}} \int_0^T -\partial_t G_t(z,w) \,\mathrm{d} t \,\mathrm{d} A(z) \,\mathrm{d} A(w) \\ & = \int_{\mathbb{D}}\int_{\mathbb{D}} G_0(z,w)-G_T(z,w) \, \mathrm{d} A(z) \mathrm{d} A(w) < \infty. \end{align*} Therefore, using the smoothness of $\phi$, we can apply Fubini's theorem (repeatedly) and \eqref{june10} to compute \begin{align*} &\int_{\m D} |\nabla \phi (z)|^2 \mathrm{d} A(z) - \int_{\m D} |\nabla \phi_T^0 (z)|^2 \mathrm{d} A(z) \\ & = \frac{1}{2\pi}\int_{\mathbb{D}}\int_{\mathbb{D}} \left(G_0(z,w)-G_T(z,w) \right) \mu(z) \mu(w) \, \mathrm{d} A(z) \mathrm{d} A(w) \\ & =\frac{1}{2\pi} \int_{\m D} \int_{\m D} \int_0^{T} -\partial_t G_t(z,w) \mu(z) \mu(w) \,\mathrm{d} t \mathrm{d} A (w) \mathrm{d} A(z) \\ & =\frac{1}{2\pi}\int_0^{T} \int_{D_t} \int_{D_t} -\partial_t G_t(z,w) \mu(z) \mu(w)\, \mathrm{d} A (w) \mathrm{d} A(z) \mathrm{d} t \\ & = \frac{1}{2\pi}\int_0^{T} \int_{D_t} \int_{D_t} \mu(z) \mu(w) \int_0^{2 \pi} P_{\m D} (g_t(z), e^{i\theta}) P_{\m D} (g_t(w),e^{i\theta}) \, \mathrm{d} \rho_t( \theta) \mathrm{d} A (w) \mathrm{d} A(z) \mathrm{d} t \\ & = \frac{1}{2\pi}\int_0^{T} \int_0^{2 \pi} \int_{D_t} \int_{D_t} \mu(z) \mu(w) P_{\m D} (g_t(z), e^{i\theta}) P_{\m D} (g_t(w),e^{i\theta}) \, \mathrm{d} A (w) \mathrm{d} A(z) \mathrm{d} \rho_t( \theta) \mathrm{d} t \\ & = \frac{1}{2\pi}\iint_{S^1 \times [0,T]} \left(\int_{\m D} \mu_t(z) P_{\m D}(z,e^{i\theta}) \mathrm{d} A(z)\right) \left(\int_{\m D} \mu_t(w) P_{\m D}(w,e^{i\theta}) \mathrm{d} A(w)\right) \mathrm{d} \rho(\theta,t )\\ & = 2\pi\iint_{S^1 \times [0,T]} \abs{ \iota [\phi] (\theta,t) }^2 \mathrm{d} \rho(\theta, t), \end{align*} as claimed.
Letting $T \to \infty$, by monotone convergence, we obtain that the operator $\iota: \mc E_0(\m D) \cap C^\infty_c (\m D) \to L^2 (2\rho)$ is norm-preserving. \end{proof} \begin{rem} Given a signed measure $\mu$ supported on $\overline{\m D}$, the \emph{balayage} of $\mu$ to $\partial \m D$ is a measure $\nu$ on $\partial \m D$ such that the logarithmic potentials of $\mu$ and $\nu$ agree on $\m D^*$. Viewing $\mu$ as a charge density, $\nu$ represents the optimal way (with respect to logarithmic energy) to redistribute charge to $\partial \m D$ while keeping the potential fixed in the complementary domain. In our setting, for $t$ fixed, one can see that $\iota[\phi](\theta,t)$ in fact equals the density of the balayage of the ``charge density'' $\Delta \phi_t \,\mathrm{d} A$ to $\partial \m D$. See, e.g., Chapter~IV of \cite{Landkof}. \end{rem} \begin{rem} The disintegration isometry is closely related to the general approach of \cite{Haakan_GFF}, see in particular Section~3 of that paper. Hedenmalm and Nieminen consider $C^2$-smooth, strictly monotone deformations of domains, and do not employ the Loewner equation. In the smooth, strictly monotone case, a version of our Lemma~\ref{lem:disintegration_general} can be deduced from results of \cite{Haakan_GFF} via a ``polar coordinates'' change of variable which also relies on strong regularity assumptions on the interfaces. \end{rem} Since $C^\infty_c(\m D)$ is dense in $\mc{E}_0(\m D)$, the operator $\iota$ extends to $\mc{E}_0(\m D)$ and we have \[\mc D_{\m D}(\phi) = \norm{\iota[\phi]}^2_{L^2 (2\rho)}, \qquad \phi \in \mc E_0(\m D).\] We will now construct an inverse operator $\varkappa : L^2 (2\rho) \to \mc{E}_0(\m D)$ as follows.
For $u \in L^2 (2\rho)$ and $\phi \in C^{\infty}_c (\m D)$, we consider $$\Phi_u : \quad \phi \mapsto 2 \iint_{S^1 \times \m R_+} u (\theta, t) \iota[\phi](\theta,t) \mathrm{d} \rho (\theta, t) = \brac{u, \iota[\phi]}_{L^2(2\rho)}$$ which extends to a bounded linear functional on $\mc{E}_0(\m D)$. Indeed, the Cauchy-Schwarz inequality and Lemma~\ref{lem:disintegration_general} show that \begin{equation}\label{eq:Riesz_inequality} \abs{ \brac{u, \iota[\phi]}_{L^2(2\rho)}} \le \norm {u}_{L^2 (2\rho)} \norm {\iota[\phi]}_{L^2 (2\rho)} = \norm {u}_{L^2 (2\rho)} \mc D_{\m D}(\phi)^{1/2}. \end{equation} By the Riesz representation theorem there exists a unique $\varkappa [u] \in \mc{E}_0(\m D)$ with the property that for all $\phi \in C^\infty_c(\m D)$, \begin{equation}\label{eq:inverse_char} \frac{1}{\pi} \int_{\m D} \varkappa [u] \Delta \phi \,\mathrm{d} A (z) = \frac{1}{\pi} \int_{\m D} \nabla \varkappa [u] \cdot \nabla \phi \,\mathrm{d} A (z) = \Phi_u (\phi). \end{equation} It follows immediately that $\varkappa \circ \iota = \operatorname{Id}_{\mc E_0 (\m D)}$. We now give the explicit formula for $\varkappa$. In this statement and below we use the notation $u_t(\cdot) := u(\cdot, t)$ for $u \in L^2 (2\rho)$. \begin{prop}\label{prop:kappa_formula} Suppose $\rho \in \mathcal{N}_+$ generates a foliation. Let $u \in L^2 (2\rho)$. Then for a.e.\ $w \in \m D$, $$ \varkappa [u] (w) = 2 \pi \int_0^{\tau(w)} P_{\m D}[u_t\rho_t](g_t(w)) \, \mathrm{d} t, $$ and $t\mapsto P_{\m D}[u_t\rho_t](g_t(w)) \in L^1([0,\tau(w)], \mathrm{d} t)$. \end{prop} \begin{proof} By linearity and splitting $u = u^+ - u^-$ with $u^+ := \max\{ u, 0\}$ and $u^- := \max\{ -u, 0\}$, it suffices to prove the proposition when $ u \ge 0$.
We let $$\tilde \varkappa [u](w) : =2 \pi \int_0^{\tau(w)} P_{\m D}[u_t\rho_t](g_t(w)) \, \mathrm{d} t = \int_0^{\tau(w)} \int_{S^1} P_{\m D} (g_t(w), e^{i\theta}) u(\theta, t) \, \mathrm{d} \rho_t(\theta) \mathrm{d} t.$$ We first prove that for all $\phi \in C^\infty_c (\m D)$, \begin{align}\label{eq:converse_1} &2 \int_0^\infty \int_{S^1} u(\theta, t) \iota[\phi] (\theta, t) \mathrm{d} \rho_t(\theta) \mathrm{d} t = \frac{1}{\pi} \int_{\m D} \Delta \phi(w) \tilde \varkappa [u](w) \mathrm{d} A (w). \end{align} For this, we will interchange repeatedly the order of integration in the following computation. To justify this, we let $$\phi^+ := \Delta^{-1} \max\{\Delta \phi, 0\} \quad \text{ and } \quad \phi^- := \Delta^{-1} \max \{-\Delta \phi, 0 \}. $$ We have $\phi = \phi^+ - \phi^-$. Since $\phi$ is smooth, $\max\{\Delta \phi, 0\}$ is Lipschitz continuous. By elliptic regularity (see, e.g., \cite[Sec.\, 4.3]{Gilbarg_Trudinger}), $\phi^+, \phi^- \in \mc E_0 (\m D) \cap C^{2,\alpha} (\m D)$ for any $\alpha <1$ and clearly $\Delta \phi^\pm \ge 0$. Moreover, recalling the notation $\phi_t^\pm = \phi^\pm \circ f_t$, we have $\Delta \phi^\pm_t= \Delta (\phi^\pm \circ f_t) = |f_t'|^2(\Delta \phi^\pm) \circ f_t \ge 0$. So it follows that $$\iota [\phi^\pm] (\theta, t) = \frac{1}{2\pi} \int_{\m D} \Delta \phi^\pm_t (z) P_{\m D} (z, e^{i\theta}) \mathrm{d} A(z) \ge 0.$$ Since $u,\iota [\phi^\pm] \in L^2(2\rho)$, the Cauchy-Schwarz inequality shows that $u \cdot \iota [\phi^\pm] \in L^1 (2\rho)$. Using Fubini's theorem, this implies that $u (\theta,t) \Delta \phi_t (z) P_{\m D}(z,e^{i\theta})$ is in $L^1 (\m D \times S^1 \times \m R_+, \mathrm{d} A \times \mathrm{d} \rho_t(\theta) \times \mathrm{d} t)$.
Therefore the left-hand side of \eqref{eq:converse_1} equals \begin{align*} & 2 \int_0^\infty \int_{S^1} u(\theta, t) \left[\frac{1}{2\pi} \int_{\m D} \Delta \phi_t (z) P_{\m D} (z, e^{i\theta}) \mathrm{d} A(z) \right]\mathrm{d} \rho_t(\theta) \mathrm{d} t \\ & = \frac{1}{\pi} \int_0^\infty \int_{\m D} \Delta \phi_t (z) \int_{S^1} u(\theta, t) P_{\m D} (z, e^{i\theta}) \mathrm{d} \rho_t(\theta) \mathrm{d} A (z) \mathrm{d} t \\ & = \frac{1}{\pi} \int_0^\infty \int_{\m D} (\Delta \phi) \circ f_t(z) |f_t'(z)|^2 \int_{S^1} u(\theta, t) P_{\m D} (z, e^{i\theta}) \mathrm{d} \rho_t(\theta) \mathrm{d} A (z)\mathrm{d} t\\ & = \frac{1}{\pi} \int_0^\infty \int_{D_t} \Delta \phi(w) \int_{S^1} u(\theta, t) P_{\m D} (g_t(w), e^{i\theta}) \mathrm{d} \rho_t(\theta) \mathrm{d} A (w)\mathrm{d} t \\ & = \frac{1}{\pi} \int_{\m D} \Delta \phi(w) \int_{0}^{\tau(w)} \int_{S^1} u(\theta, t) P_{\m D} (g_t(w), e^{i\theta}) \mathrm{d} \rho_t(\theta) \mathrm{d} t \mathrm{d} A (w) \end{align*} which proves \eqref{eq:converse_1}. Given this, \eqref{eq:inverse_char} implies that $\varkappa [u] - \tilde\varkappa [u]$ is weakly harmonic, and by Weyl's lemma it is harmonic in $\m D$. Since we know that $\varkappa[u] \in \mc E_0(\m D)$, to complete the proof, it is therefore enough to show that $\tilde \varkappa[u] \in \mc E_0 (\m D)$. For this, we show that $\tilde \varkappa[u]$ can be extended to a function $\psi \in W^{1,2} (e \m D)$ where $\psi(w) = 0$ for all $1<|w|<e$ (see, e.g., \cite[Prop.\,9.18]{brezis}). To construct the extension, we define a measure $\hat \rho \in \mc N_+$ with $\hat \rho_{t} = \rho_{t-1}$ for $t \ge 1$ and uniform for $t \in [0,1)$. Similarly, let $\hat u (\theta, t) : = u (\theta, t-1)$ for $t \ge 1$, and $\hat u(\theta,t) = 0$ for $t \in [0,1)$. We clearly have $\hat u \in L^2 (2\hat \rho)$, and $ \psi (w): = \tilde \varkappa [\hat u] (e^{-1} w)= 0$ if $1 <|w| < e$. By construction, we have $\psi \in \mc E_0 (e \m D)$ from the proof above. 
Moreover, if $\hat g_t$ is the uniformizing Loewner chain of $\hat \rho$, then for $t \in [0,1]$ we have $\hat g_t(z) = e^t z$ and for $t \ge 1$, $\hat g_t (z) = g_{t - 1} (e z)$. It follows that $\tilde \varkappa [u] (w) = \tilde \varkappa [\hat u] (e^{-1} w) = \psi (w)$ for $w \in \m D$, and this shows that $\varkappa = \tilde \varkappa$ in $\mc E_0 (\m D)$. Finally, since $u \ge 0$, it follows that $P_{\m D} [u_t \rho_t] (g_t(w)) \in L^1([0,\tau(w)], \mathrm{d} t)$ for a.e. $w \in \m D$. \end{proof} \begin{cor} \label{cor:ortho_decomp_formula} Let $\phi \in \mathcal{E}_0 (\m D)$ and $T \ge 0$. The orthogonal decomposition of $\phi$ into a function in $\mc E_0 (D_T)$ and a function in $\mc E_0(\m D)$ harmonic in $D_T$ \eqref{eq:orthogonal} is given by $$\phi^{0,T} : = \varkappa \big[\iota[\phi] \mathbf{1}_{S^1 \times [T,\infty)}\big] \in \mc E_0 (D_T) \quad \text{and} \quad \phi^{h,T} := \varkappa \big[\iota[\phi] \mathbf{1}_{S^1 \times [0, T)}\big].$$ \end{cor} \begin{rem} Using the notation of Lemma~\ref{lem:disintegration_general}, a consequence of the corollary is that $\phi^{0,T} = \phi^0_T \circ g_T$. \end{rem} \begin{proof} Since $\varkappa \circ \iota [\phi] = \phi$, we have $\phi^{0,T} + \phi^{h,T} = \phi$. Consider the Loewner chain driven by $(\hat \rho_t : = \rho_{T+t})_{t \ge 0}$ and the associated operator $\hat \varkappa$. We obtain from Lemma~\ref{lem:domain_markov} and Proposition~\ref{prop:kappa_formula} that $$ \phi^{0,T} \circ f_T = \hat \varkappa \big[\iota[\phi](\cdot, \cdot + T)\big] \in \mc E_0 (\m D).$$ Using conformal invariance, this yields $\phi^{0,T} \in \mc E_0 (D_T)$. For $z \in D_T$, by definition, $$\phi^{h,T} (z) = 2 \pi \int_0^{T} P_{\m D}[\iota[\phi]_t \rho_t](g_t(z)) \, \mathrm{d} t.$$ Note that $P_{\m D}[\iota[\phi]_t \rho_t](g_t(\cdot))$ is harmonic in $D_T$ for $t < T$.
Therefore, by the characterization of harmonic functions in terms of the mean value property and Fubini's theorem we obtain that $\phi^{h,T}$ is harmonic in $D_T$. \end{proof} \begin{thm}\label{thm:bi_isometry} The operator $\iota : \mc{E}_0(\m D) \to L^2 (2\rho)$ is a \textnormal{(}bijective\textnormal{)} isometry. \end{thm} \begin{proof} We already know that $\varkappa \circ \iota = \operatorname{Id}_{\mc{E}_0(\m D)}$. To check that $\iota \circ \varkappa = \operatorname{Id}_{L^2(2\rho)}$, it suffices to show that $\operatorname{Ker} (\varkappa) = \{0\}$ in $L^2 (2\rho)$. Indeed, for all $u \in L^2(2\rho)$, we have $\varkappa [u] = \varkappa \circ \iota \circ \varkappa [u]$. Then $\operatorname{Ker} (\varkappa) = \{0\}$ implies that $u = \iota \circ \varkappa [u]$. Now let $u \in \operatorname{Ker} (\varkappa)$. Fix $T \ge 0$. By Corollary~\ref{cor:ortho_decomp_formula}, $\varkappa [u \mathbf{1}_{S^1 \times [0,T)}] \in \mc E_0 (\m D)$ and $\varkappa[ u \mathbf{1}_{S^1 \times [T,\infty)}]$ give the orthogonal decomposition \eqref{eq:orthogonal} of $\varkappa[u] = 0$ with respect to $D_T$. We have therefore for all $w \in D_T$, $$\varkappa[u \mathbf{1}_{S^1 \times [0,T)}] (w) = 2 \pi \int_0^{T} P_{\m D} [u_t \rho_t] (g_t(w)) \mathrm{d} t= 0.$$ Taking the derivative in $T$, we obtain that the function $P_{\m D} [u_T \rho_T]$ vanishes in $\m D$ for a.e. $T \ge 0$. (More precisely, choose a dense countable family $J$ of points in $\m D$; then for a.e. $T \ge 0$ and every $w \in D_T \cap J$ we have $P_{\m D} [u_T \rho_T] (g_T(w))= 0$. Since the Poisson integral is harmonic, hence continuous, it vanishes for a.e. $T \ge 0$ and all $w \in D_T$.) But this implies that the measure $u_T \rho_T$ on $S^1$ is the zero measure. Therefore, $u(\theta, T) = 0$ for $\rho_T$-a.e. $\theta \in S^1$. It follows that $$\iint_{S^1 \times \m R_+} \abs{u(\theta,t)}^2 \,\mathrm{d} \rho(\theta, t) = 0,$$ which shows $\operatorname{Ker}(\varkappa) = \{0\}$ and concludes the proof.
\end{proof} We will now discuss the relation between $\mc{E}_0(\m D)$ and functions defined on the leaves of a foliation. We will consider a generalization of the standard trace operator for $W^{1,2}$ on Lipschitz domains to chord-arc domains, see \cite{Jonsson-Wallin} and \cite[Appx.A]{VW1}. Suppose $\phi \in W^{1,2}_{\textrm{loc}}(\m C)$ and $\gamma$ is a chord-arc curve in $\mathbb{C}$. The \emph{Jonsson-Wallin trace} of $\phi$ on $\gamma$ is defined for arclength-a.e. $z \in \gamma$ by the following limit of averages on balls $B(z,r) = \{w: |w-z| < r\}$ \begin{equation}\label{def:trace} \phi|_{\gamma}(z):=\lim_{r \to 0+} \frac{1}{\pi r^2}\int_{B(z,r)} \phi \,\mathrm{d} A. \end{equation} A function $\varphi$ defined arclength-a.e.\ on all leaves of a foliation $(\gamma_t = \partial D_t)$ is said to have an extension $\phi$ in $\mc E_0(\m D)$ if for all $t$, the Jonsson-Wallin trace of $\phi$ on $\gamma_t$ (denoted $\phi|_{\gamma_t}$) coincides with $\varphi$ arclength-a.e. (Here and below we identify $\phi \in \mc E_0(\m D)$ with the function in $W^{1,2}(\m C)$ that is equal to $\phi$ in $\m D$ and $0$ in $\m D^*$.) \begin{prop}\label{prop:unique_extension} If a function $\varphi$ defined arclength-a.e. on each leaf of a foliation of $\m D$ has an extension to $\mc E_0(\m D)$, then the extension is unique. \end{prop} \begin{proof} Let $\phi$ be an extension of $\varphi$ in $\mc E_0(\m D)$. For a fixed $T \ge 0$, the orthogonal decomposition $\phi = \phi^{0,T} + \phi^{h,T}$ where $\phi^{0,T} \in \mc E_0 (D_T)$ and $\phi^{h,T} \in \mc E_0(\m D)$ is harmonic in $D_T$ is given by $$\phi^{0,T} = \varkappa [\iota[\phi] \mathbf 1_{ S^1 \times [T,\infty) }] \quad \text{and} \quad \phi^{h,T} = \varkappa [\iota[\phi] \mathbf 1_{ S^1 \times [0,T) }],$$ by Corollary~\ref{cor:ortho_decomp_formula}. We have that $\phi|_{\partial D_T} = \phi^{h,T}|_{\partial D_T}$ arclength-a.e. since they coincide in $\m C \smallsetminus D_T$, see \cite[Lem.\,A.2]{VW1}.
Hence, $\varphi$ determines $\phi^{h,T}|_{D_T}$ since arclength and harmonic measure are mutually absolutely continuous on chord-arc curves. Assume that $\tilde \phi$ also extends $\varphi$. Then we have for all $w \in D_T$, $$0 = \phi^{h,T}(w) - \tilde \phi^{h,T}(w) = \varkappa\left[(\iota[\phi] - \iota [\tilde \phi])\mathbf 1_{S^1 \times [0, T)}\right] (w) = 2\pi \int_0^{T} P_{\m D} [\iota[\phi-\tilde \phi]_t \rho_t] (g_t(w)) \mathrm{d} t.$$ The proof of Theorem~\ref{thm:bi_isometry} shows that $\iota[\phi] = \iota [\tilde \phi]$ in $L^2(2\rho)$. Therefore $\phi = \tilde \phi$ in $\mc E_0 (\m D)$ and this completes the proof. \end{proof} \section{Loewner-Kufarev energy}\label{sect:LK-energy} \subsection{Definitions} For each measure $\sigma \in \mc M(S^1)$ we define \begin{equation}\label{eq:ldp_local} L(\sigma) = \frac 12\int_{S^1} \nu'(\theta)^2 \,\mathrm{d}\theta \end{equation} if $\mathrm{d} \sigma(\theta)=\nu^2 (\theta)\,\mathrm{d} \theta$ with the non-negative square-root density $\nu \in W^{1,2} (S^1)$ and, by convention, $L(\sigma) = \infty$ otherwise. With this definition, $L(\sigma)$ is the usual Dirichlet energy of $\nu$ on $S^1$. \begin{rem} Note that $L (\sigma) = 0$ if and only if $\sigma$ is the uniform measure on $S^1$. \end{rem} Then for $\rho \in \mc N_+$ (see Section~\ref{sect:Loewner-Kufarev-Equation}), we define \begin{equation}\label{def:dual-loewner-energy} S_+ (\rho) =\int_0^{\infty} L(\rho_t)\, \mathrm{d} t, \end{equation} where $(\rho_t)_{t\ge 0}$ is a disintegration of $\rho$. We call $S_+(\rho)$ the \emph{Loewner-Kufarev energy} of the measure $\rho$. When $L(\rho_t) < \infty$, we write $\mathrm{d} \rho_t = \rho_t (\theta) \mathrm{d} \theta = \nu_t^2(\theta) \mathrm{d} \theta$. It is also useful to define \[ S_{[a,b]} (\rho) =\int_a^b L(\rho_t)\,\mathrm{d} t. \] \subsection{First properties} We now record a few simple properties that will be used in our proofs. 
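An elementary consequence of the definition \eqref{eq:ldp_local}, used below without further comment, is that $L$ is homogeneous of degree one under scaling of the measure; explicitly:

```latex
% Homogeneity of L: if d\sigma = \nu^2\,d\theta with square-root density
% \nu \in W^{1,2}(S^1) and c > 0, then c\sigma has square-root density
% \sqrt{c}\,\nu, so
\[
L(c\sigma)
  = \frac{1}{2}\int_{S^1} \big((\sqrt{c}\,\nu)'(\theta)\big)^2 \,\mathrm{d}\theta
  = \frac{c}{2}\int_{S^1} \nu'(\theta)^2 \,\mathrm{d}\theta
  = c\,L(\sigma).
\]
```

In particular, for a differentiable time-change $t = t(s)$ one has $L(t'(s)\rho_{t(s)}) = t'(s)\,L(\rho_{t(s)})$, which is the form in which this identity enters the reparametrization invariance of $S_+$.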
\begin{lemma}\label{lem:time-reparam} Suppose $S_+(\rho) < \infty$ and define the time-changed family of measures $(\tilde{\rho}_{s} = t'(s) \rho_{t (s)} )_{s \ge 0}$, where $$ t(s) = \int_0^s |\tilde{\rho}_u| \mathrm{d} u.$$ Then $ S_+(\tilde{\rho}) = S_+(\rho)$. Moreover, if $(g^\rho_t)_{t \ge 0}$ and $(g^{\tilde{\rho}}_s)_{s \ge 0}$ are the corresponding uniformizing Loewner chains, then $g_s^{\tilde{\rho}} = g_{t(s)}^\rho$. \end{lemma} \begin{proof} Since $L$ is homogeneous of degree one, the change of variables $t = t(s)$ gives $$S_+ (\tilde{\rho}) = \int_0^{\infty} L(\tilde{\rho}_s) \,\mathrm{d} s = \int_0^{\infty} L(t'(s) \rho_{t(s)} ) \,\mathrm{d} s = \int_0^{\infty} L(\rho_{t(s)})\, t'(s) \,\mathrm{d} s = S_+ (\rho). $$ On the other hand, the Loewner flow driven by $(\tilde{\rho}_s)$ is the solution $s \mapsto g_s^{\tilde{\rho}} (z)$ to $$ \partial_s g_s^{\tilde{\rho}} (z) = g_s^{\tilde{\rho}} (z) H[\tilde{\rho}_s](g_s^{\tilde{\rho}}(z)), \quad g_0^{\tilde{\rho}}(z) = z.$$ Since $H[\tilde{\rho}_s] = t'(s) H[\rho_{t(s)}]$ by linearity of the Herglotz integral, we therefore get $$ \partial_s g_s^{\tilde{\rho}} (z) = t'(s)\, g_s^{\tilde{\rho}} (z) H[\rho_{t(s)}](g_s^{\tilde \rho}(z)),$$ and by uniqueness of solutions this implies $g_s^{\tilde{\rho}} = g_{t(s)}^\rho$. \end{proof} One should thus view $\tilde{\rho}$ as a time-reparametrization of $\rho$: the solution to the Loewner equation associated with the measure $\tilde \rho$ is a reparametrization of the solution associated to $\rho$. The invariance property of the Loewner-Kufarev energy suggests that it is intrinsic to the foliation (once we know that finite energy measures generate foliations) and does not depend on the time-parametrization. This is further reflected in the energy duality Theorem~\ref{thm:main} expressing $S_+$ in terms of the winding function. \begin{lemma} \label{lem:foliates} If $S_+(\rho) <\infty$, then for every $z \in \overline{\m D} \smallsetminus \{0\}$ we have $\tau(z) < \infty$.
\end{lemma} \begin{proof} We claim that there exists $\varepsilon > 0$ such that for all $t \ge 0$, $L(\rho_t) < \varepsilon$ implies $\min_{\theta} \nu_t^2(\theta) > 1/4\pi$. Indeed, suppose without loss of generality that $\nu_t^2(0) \le 1/4\pi$. Then by the mean value theorem and the fact that $\rho_t \in \mc M_1(S^1)$, there is $\theta_0$ such that $\nu_t^2(\theta_0) \ge 1/2\pi$, and we obtain easily, by interpolating $\nu_t$ linearly on $[0,\theta_0]$, that $L(\rho_t) \ge (3-2\sqrt 2)/16\pi^2 = : \varepsilon$. Let $E:=\{t \ge 0 : L(\rho_t) \ge \varepsilon \}$. For $t \notin E$ the Poisson integral of $\rho_t$ is continuous on $\overline{\m D}$ and by the maximum principle, $P_{\m D}[\rho_t] \ge 1/4\pi$. Now fix $z \in \ad{\m D} \smallsetminus \{0\}$. Using \eqref{eq:ODE}, for $t < \tau(z)$, \begin{align*} \log |g_t(z)/z| & = \int_0^t \partial_s \Re \log g_s(z) \, \mathrm{d} s = \int_0^t \Re H_s(g_s(z)) \mathrm{d} s\\ &= 2\pi \int_0^t P_{\m D}[\rho_s](g_s(z)) \, \mathrm{d} s \ge \big|[0,t] \smallsetminus E\big|/2 \end{align*} since the Poisson integral is non-negative. Since $\log |g_t (z) / z|\le -\log |z|$ it follows that $$\tau(z) \le -2 \log |z| + |E| \le -2 \log |z| + S_+(\rho)/\varepsilon <\infty $$ where we also used Markov's inequality and that $S_+(\rho) = \int_0^\infty L(\rho_t) \mathrm{d} t$. \end{proof} \subsection{Examples and energy minimizers}\label{sect:examples} \begin{figure} \centering \includegraphics[scale=0.7]{doublefol6.pdf} \caption{\emph{Left:} Leaves of the foliation corresponding to the measure that equals $\pi^{-1}\sin^2(\theta / 2) \,\mathrm{d} \theta \mathrm{d} t$ for $0 \le t < 1$ and is uniform for $t \ge 1$, drawn at equidistant times. \emph{Right:} Density plot of the winding function corresponding to the same measure, where lighter color represents a larger value.
The winding function is harmonic, but non-zero, in the part foliated after time $1$.} \label{fig:my_label} \end{figure} We discussed the simple but important example of a Loewner chain driven by the constant zero-energy measure in the introduction and Section~\ref{sect:Loewner-Kufarev-Equation}. Here we give another example of a time-homogeneous driving measure, this time vanishing at $\theta = 0$, and minimizing the Loewner-Kufarev energy among all such measures. We compute explicitly the winding function of the corresponding non-injective foliation and verify energy duality by hand. While the computation for this simple example is straightforward, it is not entirely trivial. Let $T > 0$ and $\nu \in W^{1,2} ([0,2\pi])$ with $\nu(0) = \nu (2\pi) =0$ and $\int_{S^1} \nu^2 (\theta) \mathrm{d} \theta = 1$. Consider the measure $\rho \in \mc N_+$ such that $\mathrm{d} \rho_t (\theta) = \nu^2 (\theta) \mathrm{d} \theta$ for $t \in [0,T]$, and that is equal to the uniform measure on $S^1$ otherwise. Among all such measures, $\nu^2 (\theta) \mathrm{d}\theta$ minimizes $L$ if and only if $\nu$ is the first eigenfunction of the Dirichlet Laplacian on $[0,2\pi]$, namely, $ \nu^2(\theta) = \sin^2(\theta/2)/\pi$. In this case, we have $L(\rho_t ) \equiv 1/8$ for $t \in [0,T]$, so $S_+(\rho) = \int_0^T L(\rho_t)\, \mathrm{d} t= T/8$. The corresponding Herglotz integral can be evaluated explicitly and the Loewner-Kufarev equation in this case is simply \[ \partial_t f_t(z) =- zf'_t(z) \left(1- z \right) , \quad z \in \mathbb{D}, \] see, e.g.,~\cite{sola} where solutions to Loewner equations of this form were studied. We have \[ f_t(z) = \frac{e^{-t}z}{1-(1-e^{-t})z}. \] The leaf $\partial D_t$ is a circle of radius $1/(2-e^{-t})$ centered at $(1 - e^{-t})/(2-e^{-t})$ and the hull $K_t$ at time $t$ is a crescent-shaped compact set, see Figure~\ref{fig:my_label}. In particular, $z=0$ and $z=1$ are fixed points of the evolution. We now compute the corresponding winding function.
Suppose $z \in K_T$ and let $t = \tau(z)$. Then \[ \varphi(z) = \arg \frac{z g'_t(z)}{g_t(z)} = -\arg(z(1-e^{-t})+e^{-t})=\arg \frac{z}{(z-1)^2} + \pi. \] Moreover, for $z \in D_T$, we have \[ \varphi(z) = -\arg(z(1-e^{-T})+e^{-T}), \] which is harmonic in $D_T$. From these formulas one can verify directly that $\mc D_{\m D}(\varphi) = 2 T = 16 S_+ (\rho)$, which is the statement of energy duality in this case. We note that the computation of $\mc D_{\m D}(\varphi)$ is slightly technical as $\nabla \varphi$ has a singularity at $1$ (which is $L^2$-integrable). \begin{rem}Conjugating $f_t$ by $z \mapsto z^m$, it is also possible to carry out similar explicit computations for measures of the form $\pi^{-1} \sin^2(m\theta/2) \mathrm{d} \theta \mathrm{d} t$ that correspond to Laplace eigenfunctions on $S^1$ with higher eigenvalues. \end{rem} \section{Weak energy duality and Weil-Petersson interfaces}\label{sect:weak-WP} In this section we prove energy duality in the disk, for measures satisfying a strong smoothness assumption. We then use this result and an approximation argument to prove that any $\rho$ with $S_+(\rho)<\infty$ produces interfaces that are Weil-Petersson quasicircles which form a foliation of $\ad{\m D} \smallsetminus \{0\}$. \subsection{Weak energy duality} Let $\mc N_+^\infty$ denote the set of $\rho \in \mc N_+$ such that: \begin{enumerate}[itemsep=-2pt] \item There exists $T < \infty$ such that for $t > T$, $\rho_t = \mathrm{d} \theta /2\pi$ is the uniform measure on $S^1$. \item The mapping $t \mapsto \rho_t$ is piecewise constant (on finitely many time intervals). \item For all $t \ge 0$, $\rho_t$ has $C^\infty$-smooth and strictly positive density with respect to $\mathrm{d}\theta$. \end{enumerate} \begin{lemma}\label{lem:constant_rho}\label{cor:smooth_foliation} Any $\rho\in \mc N^\infty_+$ generates a foliation of $\ad{\m D}\smallsetminus\{0\}$ in which each leaf is a smooth Jordan curve.
The corresponding winding function $\varphi$ is continuous and piecewise smooth on $\overline{\m D}$. \end{lemma} \begin{proof} We first assume that $\rho_t$ is constant on $[0,T]$. Let $H := H[\rho_t]$, which is independent of $t$ for $t \le T$. The family $(f_t)_{t \in [0,T]}$ solves the equation $\partial_t f_t(z) = -z f'_t(z) H(z)$ with $f_0(z) = z$ and forms a semigroup of conformal maps with fixed point $0$. (See Lemma~\ref{lem:domain_markov}.) The solution therefore enjoys the following representation: there exists a starlike\footnote{Starlike here means that $D$ contains $0$ and for every $z \in D$, the line segment $[0,z] \subset D$.} domain $D$ and a conformal map $\psi:\m D \to D$ fixing $0$ such that \[ f_t(z) = \psi^{-1}(e^{-t}\psi(z)), \qquad \psi(z) = z\exp\left(\int_0^z \frac{1}{w}\left(\frac{1}{H(w)} -1\right) \mathrm{d} w\right). \] See, e.g., Section~3 of \cite{siskakis} and the references therein. Since $H$ is the Herglotz integral of a smooth, positive function, the maximum principle implies that it is non-zero on $\overline{\m D}$ and the smoothness implies that all derivatives exist and extend continuously to $\overline{\m D}$. (See, e.g., Corollary~II.3.3 of \cite{GM}.) It follows that $\psi$ is smooth on $\overline{\m D}$. The foliation generated by $\rho$ is given by the family $(\gamma_t = \psi^{-1} (e^{-t} \psi(S^1)))$ for $t \le T$, and for $t > T$, the leaves $\gamma_t$ are equipotential curves in $D_T$ by Lemma~\ref{lem:harmonic}. We now verify the smoothness of the winding function $z \mapsto \varphi(z)$ in $K_T$. For this, consider $(\theta,t) \in S^1\times [0,T]$ and set $z(\theta,t) = f_t (e^{i\theta})$. Then $\varphi(z(\theta,t)) = -\arg e^{i\theta} f'_t(e^{i\theta})/f_t(e^{i\theta})$. We have \begin{align*} \partial_\theta z & = i e^{i\theta} f_t '(e^{i\theta}) \\ \partial_t z & = \partial_t f_t(e^{i\theta}) = -e^{i\theta} f_t '(e^{i\theta}) H(e^{i\theta}).
\end{align*} Since $\Re H > 0$ the Jacobian is non-zero. Further differentiation using the Loewner equation shows that $z(\theta,t)$ is smooth and by the inverse function theorem, the inverse is also smooth in $K_T$, and so is $\varphi$. Since $\varphi|_{D_T} = \vartheta [g_T]$ by Lemma~\ref{lem:harmonic} and is continuous on $\ad{D_T}$, we obtain that $\varphi$ is continuous on $\ad{\m D}$ and smooth in $\ad{\m D} \smallsetminus \gamma_T$. Using Lemma~\ref{lem:domain_markov} and the fact that smoothness is preserved under composition of smooth conformal mappings, the statement immediately generalizes to arbitrary $\rho \in \mc N^\infty_+$. \end{proof} \begin{lemma}\label{lem:Loewner-formulae} If $\rho \in \mc N_+$ and $z \in \m D$, then $- \partial_t \vartheta[f_t](z) \mid_{t = 0} = \Im \big(zH_0'(z)\big).$ \end{lemma} \begin{proof} Using \eqref{eq:loewner-pde}, we have for $t \ge 0$, \[ \partial_t \log f_t(z) = -z\frac{f'_t(z)}{f_t(z)} H_t(z) \] and \[ \partial_t \log f_t'(z) = -\frac{\partial_z (zf'_t(z) H_t(z))}{f_t'(z)} =-\left(1+z \frac{f''_t(z)}{f'_t(z)}\right) H_t(z) - z H_t'(z). \] Hence, \[\partial_t \log \frac{z f'_t(z)}{f_t(z)} = -\left(1+z \frac{f''_t(z)}{f'_t(z)}-z\frac{f'_t(z)}{f_t(z)}\right) H_t(z) - z H_t'(z). \] Since $f_0(z)=z$, we have $f''_0(z) = 0$ and the claimed expression when evaluated at $t = 0$ follows by taking the imaginary part. \end{proof} \begin{prop}[Weak energy duality]\label{prop:weak_duality} Suppose $\rho \in \mc N_+^\infty$ and let $\varphi$ be the associated winding function. Then $\mc D_{\m D} (\varphi) = 16\, S_+(\rho)$. \end{prop} \begin{proof} By Lemma~\ref{lem:constant_rho} the winding function $\varphi$ is continuous and piecewise smooth on $\ad{\m D}$. Moreover, $\varphi|_{S^1} = 0$ and therefore $\varphi \in \mc E_0(\m D)$ \cite[Thm. 9.17]{brezis} and we may apply the foliation disintegration isometry to $\varphi$. We claim that for $t \in [0,T]$, $\iota [\varphi ](\theta,t) = - 2 \nu_t'(\theta)/\nu_t(\theta)$.
Given this, we can apply Theorem~\ref{thm:bi_isometry} to conclude the proof. To prove the claim, set $\varphi_t = \varphi \circ f_t$ and recall that by definition \begin{align}\label{eq:oct17.1} \iota [\varphi] (\theta,t) = \frac{1}{2\pi}\int_{\m D} \Delta \varphi_t (z) P_{\m D} (z, e^{i\theta}) \,\mathrm{d} A(z) = \frac{1}{2\pi}\int_{\m D} \Delta \varphi_t (z) \partial_{n(e^{i\theta})}G_{\m D} (z, e^{i\theta}) \,\mathrm{d} A(z). \end{align} On the other hand, by Lemma~\ref{lem:constant_rho}, $\Delta \varphi_t$ is smooth, so the last term in \eqref{eq:oct17.1} equals \begin{align*} \frac{1}{2\pi }\partial_{n} \int_{\m D} \Delta \varphi_t (z) G_{\m D} (z, e^{i\theta}) \,\mathrm{d} A(z) = \partial_{n} \varphi_t^0(e^{i \theta}). \end{align*} Recall from Lemma~\ref{lem:harmonic} and Lemma~\ref{lem:zero_trace_varphi} that $z \mapsto \varphi_t^0(z)$ is the winding function generated by the measures $s \mapsto \rho_{t+s}$ for $s \ge 0$. Hence, it suffices to consider $t=0$ and show that for a.e. $\theta \in [0, 2\pi)$, \begin{equation}\label{eq:normal_d_varphi} \partial_n \varphi_0^0(e^{i\theta}) = \partial_n \varphi_0(e^{i\theta})= -2\nu_0'(\theta)/\nu_0(\theta). \end{equation} We know that $\partial D_t$ is $C^\infty$ for all $t$ and that all (complex) derivatives of $f_t$ are continuous on $\overline{\m D}$; since the Herglotz integral $H_t$ is also continuous on $\overline{\m D}$, so is $\partial_t f_t(z)= -zf'_t(z) H_t(z)$.
It follows that the normal velocity (with respect to $\partial D_t$) of the interface at the point $f_t(e^{i \theta})$ can be written \begin{align*} \operatorname{vel}_n(t,\theta) & = - \mathrm{Re} \left( \frac{\partial_t f_t(e^{i\theta}) \overline{e^{i\theta} f'_t(e^{i\theta})}}{|f'_t(e^{i\theta})|}\right) \\ & = \mathrm{Re} \left( \frac{e^{i\theta} f_t'(e^{i\theta}) H_t(e^{i\theta}) \overline{e^{i\theta} f'_t(e^{i\theta})}}{|f'_t(e^{i\theta})|}\right) \\ & = |f'_t(e^{i\theta})| \mathrm{Re} \, H_t(e^{i\theta}) = 2\pi |f'_t(e^{i \theta})| \nu_t( \theta)^2 > 0. \end{align*} In particular, at $t = 0$ the normal velocity of the interface at $e^{i\theta}$ equals $2 \pi \nu_0(\theta)^2$. On the other hand, using the chain rule \eqref{eq:theta_chain} \begin{align*} 2 \pi \nu_0(\theta)^2 \partial_n \varphi_0(e^{i\theta})& = \partial_t [\varphi_0 (f_t (e^{i\theta}))]_{t = 0} = \partial_t [\vartheta [g_t] ( f_t (e^{i\theta}) ) ] |_{t = 0} = -\partial_t \left(\vartheta [f_t] (e^{i\theta}) \right)|_{t = 0}. \end{align*} It follows from Lemma~\ref{lem:Loewner-formulae} that $$-\partial_t \left(\vartheta [f_t] (e^{i\theta}) \right)|_{t = 0} = \Im( e^{i\theta} H_0'(e^{i\theta})) = -4\pi \nu_0(\theta) \nu_0'(\theta).$$ The last identity is not hard to verify by hand, using integration by parts and the smoothness of $\nu_0$, see also Lemma~\ref{lem:H2W12}. This proves \eqref{eq:normal_d_varphi} and concludes the proof. \end{proof} Let us cite the following special case of the generalized Grunsky inequality, which provides a useful bound to control $\mc D_{\m D}(\arg f'_t)$ in terms of $\mc D_{\m D} (\vartheta[f_t])$ (see Corollary~\ref{cor:smooth_arg_f_bound}). \begin{lemma}[{See \cite[p.70-71]{TT06}}]\label{lem:Grunsky_inequality} Suppose that $f: \m D \to \m C$ and $h : \m D^* \to \m C$ are univalent functions on $\m D$ and $\m D^*$ such that $f(0) = 0$ and $h(\infty) = \infty$, and $f(\m D) \cap h(\m D^*) = \emptyset$.
Then we have $$\int_{\m D} \abs{ \frac{f'(z)}{f (z)} - \frac{1}{z} }^2 \mathrm{d} A (z)+ \int_{\m D^*} \abs{\frac{h'(z)}{h(z)} -\frac{1}{z} }^2 \mathrm{d} A (z) \le 2 \pi \log \abs{\frac{ h'(\infty)}{f'(0)}}. $$ Equality holds if the omitted set $\m C \smallsetminus \{f (\m D) \cup h (\m D^*)\}$ has Lebesgue measure zero. \end{lemma} \begin{lemma} \label{lem:grunsky_f} Let $\rho \in \mc N_+$ and let $(f_t)_{t \ge 0}$ be the associated Loewner chain. Then \begin{equation}\label{eq:Grunsky_f} \mc D_{\m D} \left(\arg [f_t(z)/z]\right) = \frac{1}{\pi} \int_{\m D} \abs{\frac{f_t'(z)}{f_t (z) }- \frac{1}{z}}^2 \mathrm{d} A (z) \le 2 t. \end{equation} \end{lemma} \begin{proof} Since $D_t$ and $\m D^*$ are disjoint, and since $f_t'(0)=e^{-t}$, Lemma~\ref{lem:Grunsky_inequality} applied to the pair $(f_t,\operatorname{Id}_{\m D^*})$ shows that $$\int_{\m D} \abs{\frac{f_t'(z)}{f_t (z) }- \frac{1}{z}}^2 \mathrm{d} A (z) \le - 2 \pi \log |f_t ' (0)| = 2 \pi t$$ as claimed. \end{proof} \begin{cor}\label{cor:smooth_arg_f_bound} Suppose $\rho \in \mc N_+^\infty$. Then for all $t \ge 0$, $ \mc D_{\m D}(\arg f'_t) \le 32 S_{[0,t]}(\rho) + 4 t. $ \end{cor} \begin{proof} Fix $t \ge 0$. Without loss of generality, we assume that $\rho_s$ is the uniform measure for all $s > t$. Combining Proposition~\ref{prop:weak_duality}, Lemma~\ref{lem:grunsky_f}, and the Cauchy-Schwarz inequality, we obtain \begin{align*} \mc D_{\m D}(\arg f'_t) & = \mc D_{\m D}(\vartheta [f_t] + \arg [f_t(z)/z]) \le 2 \mc D_{D_t}(\vartheta [g_t]) + 2 \mc D_{\m D}(\arg [f_t(z)/z]) \\ & \le 2 \mc D_{\m D}(\varphi) + 4 t = 32 S_{[0,t]}(\rho) + 4 t \end{align*} as claimed. \end{proof} \subsection{Weil-Petersson interfaces} \label{A-priori-WP} This section proves that the interfaces of the Loewner evolution driven by a finite energy measure are Weil-Petersson quasicircles and that they form a foliation (Corollary~\ref{cor:S_finite_foliation_WP}).
We give first a quantitative upper bound for $\mc D_{\m D}(\arg f'_t)$ that depends only on $S_+(\rho)$ and $t$. \begin{lemma}\label{lem:arg-ft-energy} If $S_+(\rho)<\infty$, then for all $t \ge 0$, $ \mc D_{\m D}(\arg f'_t) \le 32 S_{[0,t]}(\rho) + 4 t. $ \end{lemma} \begin{proof} Fix $t > 0$. We will approximate $\rho$ by a sequence of measures $\rho^{(k)} \in \mc N_+^\infty$ which converges to $\rho$ weakly, and such that $S_{[0,t]} (\rho^{(k)}) \le S_{[0,t]} (\rho)$. The corresponding sequence of conformal maps $f^{(k)}_t$ then converges uniformly on compacts to $f_t$ by Lemma~\ref{lem:cont-bij-loewner-transf}, which, by Corollary~\ref{cor:smooth_arg_f_bound}, implies $$\mc D_{\m D} (\arg f_t') \le \liminf_{k \to \infty} \mc D_{\m D} (\arg (f^{(k)}_t)') \le 32\,S_{[0,t]} (\rho^{(k)}) + 4t \le 32\,S_{[0,t]} (\rho) + 4t.$$ We construct the approximation in two steps. We first let $\rho^{(n)}$ be the ``time-averaged'' measure, which is piecewise constant on dyadic time intervals $[jt/ 2^{n}, (j+1)t/2^{n})$ and is defined by $$\rho^{(n)}_s : = \sigma^j : = \frac{2^n}{t} \int_{jt/2^n}^{(j+1)t/2^n} \rho_r \ \mathrm{d} r \in \mc M_1 (S^1), \quad \forall s \in \left[\frac{jt}{2^{n}}, \frac{(j+1)t}{2^{n}}\right).$$ To see that $S_{[0,t]}(\rho^{(n)}) \le S_{[0,t]}(\rho)$, the key observation is that the map $\sigma \mapsto L(\sigma)$ from $\mc M_1 (S^1)$ to $[0, \infty]$ is convex \cite{DonVar1975} (see also \cite[Thm. 3.4]{APW}). We can therefore apply Jensen's inequality: \[S_{[0,t]}(\rho^{(n)}) = \frac{t}{2^n} \sum_{j= 0}^{2^n -1} L\left( \frac{2^n}{t} \int_{jt/2^n}^{(j+1)t/2^n} \rho_r \ \mathrm{d} r \right) \leq \sum_{j= 0}^{2^n -1} \int_{jt/2^n}^{(j+1)t/2^n} L\left( \rho_r \right) \mathrm{d} r = S_{[0,t]}(\rho).
\] It is clear that $\rho^{(n)}$ restricted to $S^1 \times [0,t]$ converges weakly to $\rho$: integrating against a continuous function $u$ on $S^1 \times [0,t]$, which is then uniformly continuous, $\int u \,\mathrm{d}\rho^{(n)}$ converges to $\int u \,\mathrm{d} \rho$. Since $\rho^{(n)}$ might not be strictly positive and smooth, the second step is to approximate each of the $2^n$ measures $\sigma = \sigma^j \in \mc M_1(S^1)$ by $$\mathrm{d} \sigma_{r} (\theta):= \frac{\mathrm{d} \theta }{2\pi}\int_{\xi \in S^1} P_{\m D}(r e^{i\theta} , e^{i \xi}) \mathrm{d} \sigma (\xi)$$ where $r <1$, which has positive and smooth density with respect to $\mathrm{d} \theta$. Now we prove that $L(\sigma_r) \le L(\sigma)$. Let $f \in C^0(S^1)$. Then \begin{align*} \sigma_r (f) &: = \int_{S^1} f(\theta) \mathrm{d} \sigma_r (\theta) = \frac{1}{2\pi} \int_{\theta \in S^1} f(\theta) \int_{\xi \in S^1} P_{\m D} (r e^{i\theta}, e^{i \xi}) \,\mathrm{d} \sigma (\xi) \mathrm{d} \theta \\ &=\frac{1}{2\pi} \int_{\xi \in S^1} \int_{\theta \in S^1} f(\theta) P_{\m D} (r e^{i(\theta -\xi)}, 1) \, \mathrm{d} \theta \,\mathrm{d} \sigma (\xi) \\ & = \frac{1}{2\pi} \int_{\xi \in S^1} \int_{\eta \in S^1} f(\eta +\xi) P_{\m D} (r e^{i\eta}, 1) \,\mathrm{d} \eta \,\mathrm{d} \sigma (\xi) \\ & = \frac{1}{2\pi} \int_{\eta \in S^1} P_{\m D} (r e^{i\eta}, 1) \int_{\xi \in S^1} f(\eta +\xi) \,\mathrm{d} \sigma (\xi) \mathrm{d} \eta\\ & = \frac{1}{2\pi} \int_{\eta \in S^1} P_{\m D} (r, e^{i\eta}) \,\eta_*\sigma (f) \mathrm{d} \eta, \end{align*} where $\eta_* \sigma$ is the push-forward of $\sigma$ under the rotation $\xi \mapsto \xi +\eta$. In particular, $L(\sigma) = L(\eta_* \sigma)$. We obtain $$\sigma_r = \frac{1}{2\pi} \int_{\eta \in S^1} P_{\m D} (r, e^{i\eta}) \eta_* \sigma \, \mathrm{d}\eta.$$ Since $\frac{1}{2\pi} \int_{\eta \in S^1} P_{\m D} (r, e^{i\eta}) \mathrm{d}\eta = 1$, $\sigma_r$ is a probability measure.
Using convexity and Jensen's inequality once again, $$L(\sigma_r) \le \frac{1}{2\pi} \int_{\eta \in S^1} P_{\m D} (r, e^{i\eta}) L( \eta_* \sigma) \mathrm{d}\eta = L(\sigma).$$ Finally, note that since $f$ is continuous, $\eta \mapsto \eta_*\sigma (f)$ is continuous on $S^1$ and equal to $\sigma(f)$ for $\eta = 0$. Therefore, since $\sigma_r(f)$ is the Poisson integral of $\eta \mapsto \eta_*\sigma (f)$ evaluated at $r$, we have that $\lim_{r\to 1}\sigma_r (f) = \sigma(f)$. Hence $\sigma_r$ converges to $\sigma$ weakly and this completes the proof. \end{proof} We would now like to use Lemma~\ref{lem:arg-ft-energy} to conclude that $\partial D_t$ is a Weil-Petersson quasicircle. However, we cannot directly apply Lemma~\ref{thm_TT_equiv_T01} since we do not know \emph{a priori} that $\partial D_t$ is a Jordan curve. In fact, it is not hard to construct an example of a simply connected domain for which the boundary is self-touching, while $\mc D_{\m D}(\log f') < \infty$ where $f$ is the ``one-sided'' conformal map onto the domain. In the present case, however, we can use the fact that $\partial D_t$ arises from Loewner evolution: we will consider the evolution for a small time interval and use estimates on the Schwarzian combined with a result of Ahlfors-Weill. We can then complete the proof using Lemma~\ref{lem:domain_markov}. For a function $f$ holomorphic at $z$ such that $f'(z) \neq 0$, recall that the Schwarzian derivative of $f$ at $z$ is defined by $$\mc Sf(z) = \frac{f'''(z)}{f'(z)} - \frac{3}{2} \left( \frac{f''(z)}{f'(z)} \right)^2 = \left(\frac{f''(z)}{f'(z)}\right)' - \frac{1}{2}\left( \frac{f''(z)}{f'(z)} \right)^2. $$ \begin{lemma}[See {\cite[Lem.\,I.2.1, Lem.\,II.1.3 and Lem.\,II.1.5]{TT06}}] \label{lem:TT_bounded_S} There exists $\delta > 0$ such that if $\mc{D}_{\m D}(\log f') < \delta$, then $f$ is univalent and $f (\m D)$ is a Jordan domain bounded by a Weil-Petersson quasicircle. 
\end{lemma} \begin{proof} By Lemma~\ref{thm_TT_equiv_T01} and the assumption that $\mc{D}_{\m D}(\log f') < \delta < \infty$, it is enough to prove that $f(\m D)$ is a Jordan domain. We will prove more and show that $f(\m D)$ is a quasidisk. By a theorem of Ahlfors-Weill (see \cite[Cor.~5.24]{Pommerenke_boundary}), to show that $f(\m D)$ is a quasidisk it suffices to show that for small enough $\delta$, \begin{equation}\label{jun12.1} \sup_{z \in \m D} (1 - |z|^2)^2 \abs{\mc Sf (z)} < 2. \end{equation} We will estimate the left-hand side of \eqref{jun12.1} in terms of $\mc{D}_{\m D}(\log f')$. The required estimate is a combination of two bounds. First we claim that if $f$ is holomorphic on $\m D$ and $f' \neq 0$, then, $$ \int_{\m D} |\mc S f (z)|^2 (1 - |z|^2)^2 \mathrm{d} A(z) \le \pi \mc{D}_{\m D}(\log f')+ \frac{\pi}{8 } \left(\mc{D}_{\m D}(\log f')\right)^2.$$ Indeed, this follows from the proof of \cite[Lem.\,II.1.5]{TT06} where the bound depends on a constant from \cite[Lem.\,II.1.3]{TT06}. On the other hand, it follows directly from \cite[Lem.\,I.2.1]{TT06} that $$\sup_{z \in \m D} (1 - |z|^2)^2 \abs{\mc Sf (z)} \le \sqrt{\frac{12}{\pi}} \left(\int_{\m D} |\mc S f(z)|^2 (1 - |z|^2)^2 \mathrm{d} A(z)\right)^{1/2}.$$ Combining these bounds we see that \eqref{jun12.1} indeed holds provided $\delta$ is chosen sufficiently small. \end{proof} \begin{prop}\label{prop:WP-QC-final} If $S_+(\rho) <\infty$, then for all $t \ge 0$, $\partial D_t$ is a Weil-Petersson quasicircle. \end{prop} \begin{proof} Let $\delta > 0$ be the small constant from Lemma~\ref{lem:TT_bounded_S}. Pick $t_1 > 0$ such that $32 S_{[0,t_1]} (\rho)+ 4 t_1 < \delta$. For $t \le t_1$, Lemma~\ref{lem:arg-ft-energy} and Lemma~\ref{lem:TT_bounded_S} show that $\partial D_t$ is a Weil-Petersson quasicircle. Now we consider the general case $t \ge 0$. 
Since Lemma~\ref{lem:arg-ft-energy} shows that $\mc D_{\m D} (\arg f_t') <\infty$, it suffices to show that $D_t$ is a Jordan domain to conclude that $\partial D_t$ is Weil-Petersson by Lemma~\ref{thm_TT_equiv_T01}. For this, let $0=t_0 < t_1 < t_2< \ldots$ be a sequence tending to $\infty$ such that $32 S_{[t_j, t_{j+1}]} (\rho) + 4(t_{j+1} - t_j) < \delta$. We show by induction that $f_{t}$ is a homeomorphism from $\ad{\m D}$ onto $\ad D_{t}$ for all $t \le t_j$. We have already proved this in the case $j = 1$. Assume it is true for all $t \le t_j$. Using Lemma~\ref{lem:domain_markov} and the choices of $\delta$ and $|t_{j+1}-t_j|$ we obtain that $g_{t_j} (\partial D_t)$ is a Weil-Petersson quasicircle for each $t_j \le t \le t_{j+1}$, in particular a Jordan curve. Since by assumption $f_{t_j}$ is a homeomorphism of $\ad{\m D}$ onto $\ad D_{t_j}$, $\partial D_t = f_{t_j} \circ g_{t_j} (\partial D_t)$ is also a Jordan curve and this completes the induction. \end{proof} Notice that the next lemma does not assume that $D$ is a Jordan domain. \begin{lemma}\label{lem:derivative-estimate} Suppose $D$ is a simply connected domain containing $0$ and let $f: \m D \to D$ be a conformal map with $f(0)=0$ and assume that $\mc D_{\m D}(\log f') < \infty$. There exists a constant $C < \infty$ depending only on $\mc D_{\m D}(\log f')$ such that \begin{equation}\label{derivative} |f'(r e^{i\theta})| \le C |f'(0)| \exp \sqrt{C\log (1-r)^{-1}} . \end{equation} \end{lemma} \begin{rem} The estimate \eqref{derivative} easily implies that $f$ is continuous on $\overline{\m D}$. \end{rem} \begin{proof} We may assume $f'(0) = 1$. Any $\phi$ that is holomorphic in $\m D$ and such that $\mc{D}_{\m D}(\phi) < \infty$ has non-tangential limits a.e.
on $S^1$, and, writing $\phi$ for that function as well, we have the following weak-type estimate: there exist universal constants $C_1, C_2$ such that $ \left|\{\theta \in S^1 : |\phi| > \lambda\} \right| \le C_1 e^{-C_2\lambda^2/(|\phi(0)|^2+\mc{D}_{\m D}(\phi))}.$ See \cite[Cor.~3.3.2]{primer} for a proof. We apply this estimate with $\phi = \log f'$ which is normalized so that $\log f'(0) = 0$. Hence the upper bound in the weak-type estimate depends only on $\mc{D}_{\m D}(\log f')$ and implies there exist $C_1,C_2$ depending only on $\mc{D}_{\m D}(\log f')$ (but which, in what follows, are allowed to change from line to line) such that \[ \int_0^{2\pi} \exp\left( C_1| \log|f'(e^{i\theta})||^2 \right) \,\mathrm{d}\theta \le C_2. \] Since $\log|f'(z)|$ is harmonic, $\exp\left(C_1|\log|f'(z)||^2\right)$ is subharmonic and it follows that for $0 \le r < 1$ \[ \int_0^{2\pi} \exp \left( C_1| \log|f'(re^{i\theta})||^2 \right) \,\mathrm{d}\theta \le C_2. \] Therefore, if $r_n = 1-2^{-n}$ and $z_{k,n} = r_n e^{i2\pi k/2^n}$, using Koebe's distortion theorem (see, e.g., \cite[Ch. 2.3]{Duren1983}), there is a universal constant $C_3 < \infty$ such that, taking $C_2$ larger if necessary, \[ \frac{1}{2^n} \sum_{k=1}^{2^n} \exp\left( C_1 |\log|f'(z_{k,n})||^2 \right) \le \int_0^{2\pi} \exp\left( C_1 (|\log|f'(r_ne^{i\theta})|| + C_3)^2 \right) \,\mathrm{d}\theta \le C_2. \] Whence, \[ |f'(z_{k,n})| \le \exp \sqrt{ C_1^{-1}\log(C_2 2^n)}. \] Using the distortion theorem again we deduce \[ |f'(r e^{i\theta})| \le C \exp\sqrt{ C \log (1-r)^{-1} }, \] where $C$ depends only on $\mc{D}_{\m D}(\log f')$, as claimed. \end{proof} If $(f_t)_{t\ge 0}$ is the Loewner chain generated by a finite energy measure $\rho$ then by Lemma~\ref{lem:arg-ft-energy} we have $\mc D_{\m D}(\log|f'_t|) \le 32S_+(\rho) + 4T$ for all $t \le T$.
Therefore, by Lemma~\ref{lem:derivative-estimate}, there is a subpower function $\sigma (x) : = C \exp \sqrt {C\log (x)}$ (that is, for every $\varepsilon > 0$, $\lim_{x \to \infty}\sigma(x)/x^{\varepsilon} = 0$) that depends only on $S_+(\rho)$ and $T$, such that \begin{align}\label{derivative2} |f'_t(re^{i\theta})| \le |f'_t(0)| \sigma(1/(1-r)) \le \sigma(1/(1-r)) . \end{align} In the rest of the section we use the conformal parametrization of the leaves, namely $\gamma_t(\theta) := f_t(e^{i \theta})$. \begin{prop}\label{prop:Lipschitz} Suppose $S_+(\rho)<\infty$. Then the function $t \mapsto (\gamma_t: S^1 \to \m C): [0,\infty) \to (C^0, \|\cdot \|_{\infty})$ is continuous. \end{prop} \begin{proof} Throughout $C$ denotes a constant whose value is allowed to change from line to line. Fix any $T < \infty$. Then if $0\le s \le t \le T$, for any $0 \le r < 1$, \begin{align}\label{feb19.0} |\gamma_s(\theta) - \gamma_t(\theta)| & \le |\gamma_s(\theta) - f_s(re^{i\theta})| +|\gamma_t(\theta) - f_t(re^{i\theta})| + |f_s(re^{i\theta}) - f_t(re^{i\theta})|. \end{align} By integrating \eqref{derivative2} we have \begin{equation}\label{feb19.1} |\gamma_s(\theta) - f_s(re^{i\theta})| + |\gamma_t(\theta) - f_t(re^{i\theta})| \le C(1-r)\sigma(1/(1-r)), \end{equation} where $\sigma$ is a subpower function depending only on $S_+(\rho)$ and $T$. Using the Loewner equation and once again \eqref{derivative}, we have \[ |f_s(re^{i \theta}) - f_t(re^{i \theta})| = | \int_s^t zf'_u(z)H_u(z) \mathrm{d} u| \le C \sigma(1/(1-r)) \int_s^t|H_u(z)| \mathrm{d} u, \quad z=re^{i\theta}. \] Since for a.e. 
$t$, $\mathrm{d} \rho_t(\theta) = \nu_t(\theta)^2 \mathrm{d} \theta$, we can estimate using the Cauchy-Schwarz inequality\[ |\nu_t(\theta_1)^2 - \nu_t(\theta_2)^2| \le 2\|\nu_t\|_{\infty} \|\nu_t'\|_{L^2}|\theta_1-\theta_2|^{1/2} \le M_t|\theta_1-\theta_2|^{1/2}, \] where \[M_t := 2( 1/\sqrt{2\pi} + \sqrt{2 \pi} \|\nu_t'\|_{L^2} )\|\nu_t'\|_{L^2}.\] Indeed, since $\nu_t$ is continuous and $\int \nu_t^2 = 1,$ we can assume that $\nu_t(0) < 1/\sqrt{2\pi}$. Then $|\nu_t(\theta) - \nu_t(0)| \le \sqrt{\theta}\|\nu_t'\|_{L^2} \le \sqrt{2 \pi} \|\nu_t'\|_{L^2}$, so $\|\nu_t\|_{\infty} \le 1/\sqrt{2\pi} + \sqrt{2 \pi} \|\nu_t'\|_{L^2}.$ We claim that for a.e. $t$, $\sup_{z \in \overline{\m D}}|H_t(z)| \le C M_t$. Since $\Re H_t(z) = 2\pi P_{\m D}[\rho_t](z)$, we have $|H_t'(r e^{i\theta})| \le C M_t(1-r)^{-1/2}$. (See, e.g., \cite[Thm.\,5.8 and 5.1]{Duren_Hardy}.) So by integration, the claim follows. Consequently $\int_s^t|H_u(z)| \mathrm{d} u \le C \int_{s}^t M_u \mathrm{d} u$. Hence \begin{equation}\label{feb19.2} |f_s(re^{i\theta}) - f_t(re^{i\theta})| \le C \sigma(1/(1-r))\int_{s}^t M_u \mathrm{d} u. \end{equation} Now we choose $r$ so that $1-r = \int_s^tM_u \mathrm{d} u \wedge 1$ and plug in \eqref{feb19.1} and \eqref{feb19.2} into \eqref{feb19.0} to conclude that \[ \sup_{\theta \in [0,2\pi)} |\gamma_s(\theta) - \gamma_t(\theta)| =o(1) \] as $ |t-s| \to 0$. Since $T < \infty$ was arbitrary, this completes the proof. \end{proof} \begin{rem} Under the stronger assumption that $L(\rho_t)$ is uniformly bounded, the proof of Lemma~\ref{lem:derivative-estimate} shows that the mapping $t \mapsto (\gamma_t: S^1 \to \m C): [0,\infty) \to (C^0, \|\cdot \|_{\infty})$ is \emph{weakly Lipschitz continuous}, that is, it admits a modulus of continuity of the form $|\cdot|\sigma(1/|\cdot|)$, where $\sigma$ is a subpower function. \end{rem} We are now ready to prove the main result of this section. 
\begin{cor} \label{cor:S_finite_foliation_WP} If $S_+(\rho) < \infty$, then $\rho$ generates a foliation of $\ad{\m D} \smallsetminus\{0\}$ by Weil-Petersson quasicircles. \end{cor} \begin{proof} Proposition~\ref{prop:WP-QC-final} shows $\gamma_t$ is a Weil-Petersson quasicircle for each $t$ and $t\mapsto \gamma_t$ is continuous in the supremum norm for the conformal parametrization by Proposition~\ref{prop:Lipschitz}. Lemma~\ref{lem:foliates} shows that $\tau(z) < \infty$ for all $z \in \ad{\m D} \smallsetminus \{0\}$ and this completes the proof. \end{proof} The next result is not used in the rest of the paper, but as it is an interesting and immediate consequence of Lemma~\ref{lem:derivative-estimate}, we choose to state it here. \begin{cor} Suppose $D$ is a simply connected domain containing $0$ and let $f: \m D \to D$ be a conformal map with $f(0)=0$ and assume that $\mc D_{\m D}(\log f') < \infty$. Then the conformal parametrization of $\partial D$ is weakly Lipschitz continuous on $S^1$ with subpower function depending only on $\mc{D}_{\m D}(\log f')$. \end{cor} \begin{rem} The condition $\mc D_{\m D} (\log f') <\infty$ allows $f'$ to be unbounded and the conformal parametrization is not Lipschitz in general in this setting, so up to the exact form of the subpower function this modulus of continuity is sharp. \end{rem} \begin{proof} We have already noted that $f$ is continuous on $\overline{\m D}$. By Lemma~\ref{lem:derivative-estimate}, we have for $0 < r< 1$, \begin{align*} |f(e^{i \theta_1}) - f(e^{i \theta_2})| & \le |f(e^{i \theta_1}) - f(r e^{i\theta_1})| + |f(e^{i \theta_2}) - f(r e^{i\theta_2})| + |f(r e^{i\theta_1})- f(r e^{i\theta_2})| \\ & \le 2 \int_r^1\sigma(1/(1-u)) \mathrm{d} u+ \sigma(1/(1-r))|\theta_1 - \theta_2|\\ & \le ((1-r) + |\theta_1-\theta_2| )\tilde \sigma(1/(1-r)), \end{align*} where $\tilde \sigma$ is a subpower function that depends only on $\mc{D}_{\m D}(\log f')$.
Now choose $r=1-|\theta_1-\theta_2|$ and we obtain the desired estimate. \end{proof} \section{Disk energy duality: proof of Theorem~\ref{thm:main}}\label{sec:disk_duality} This section proves Theorem~\ref{thm:main}. We assume that $\rho \in \mathcal{N}_+$ generates a foliation of $\ad{\m D} \smallsetminus \{0\}$ throughout. The proof is carried out in two steps: in Section~\ref{subsec:beta_energy} we assume $S_+ (\rho)<\infty$ and derive energy duality. Then in Section~\ref{subsec:converse} we assume $\mc D_{\m D}(\varphi)<\infty$ and prove that this implies $S_+(\rho)<\infty$. An overview of the argument presented in this section was provided in Section~\ref{sect:core-argument}. \subsection{$S_+(\rho) < \infty$ implies $\mc D_{\m D}(\varphi) = 16 S_+(\rho)$} \label{subsec:beta_energy} For $\rho \in \mathcal{N}_+$, we define (for a.e. $t$) \[ \alpha_t (z) = \Im (z H_t'(z)), \quad z \in \m D, \] where $H_t$ is the Herglotz integral of $\rho_t$. In this section, we assume $S_+(\rho) <\infty$, and write as before $\mathrm{d} \rho_t = \nu_t^2 (\theta )\mathrm{d} \theta$. In this case, we have $$H_t (z) = \int_{0}^{2\pi} \frac{e^{i\theta} +z }{e^{i\theta} - z}\nu_t^2(\theta) \mathrm{d}\theta. $$ \begin{lemma}\label{lem:H2W12} If $H_t' \in \mathcal{H}^1$ then for a.e. $\theta \in S^1$, \begin{equation}\label{eq:im_nu} \Im (e^{i\theta} H_t'(e^{i\theta})) = -4\pi \nu_t(\theta)\nu_t'(\theta), \end{equation} where the left-hand side is understood in terms of radial limits and $\alpha_t = -4\pi P_{\m D}[\nu_t \nu_t']$. In particular, this holds if $L(\rho_t) < \infty$. \end{lemma} \begin{proof} If $H_t' \in \mathcal{H}^1$, then by \cite[Thm.\,5.2]{Duren_Hardy} (see in particular the last paragraph of the proof), $H_t$ is continuous on $\overline{\m D}$ and the boundary function $H_t(e^{i\theta})$ is absolutely continuous on $S^1$. Since $2\pi \nu_t^2(\theta) = \Re H_t(e^{i\theta})$, $4\pi \nu_t (\theta) \nu_t'(\theta) = \partial_\theta \Re H_t(e^{i\theta}) $ exists a.e. 
on $S^1$. Moreover, $H_t'$ has radial limits a.e. on $S^1$ and by \cite[Thm.\,3.11]{Duren_Hardy}, $\partial_\theta H_t(e^{i\theta}) = ie^{i\theta}\lim_{r \to 1} H_t'(re^{i\theta})$ a.e. on $S^1$. This gives the identity \eqref{eq:im_nu}. Since $H_t' \in \mathcal{H}^1$, $\alpha_t (z) = \Im z H_t'(z)$ is the Poisson integral of its boundary values and this gives the second assertion. Finally, if $L(\rho_t) < \infty$ we have $\rho_t' = 2 \nu_t' \nu_t \in L^2(S^1, \mathrm{d} \theta)$ since $\nu_t$ is bounded and $\nu_t' \in L^2(S^1, \mathrm{d} \theta)$. Writing the complex derivative in polar coordinates shows $\Im zH'_t = -\partial_\theta \Re H_t$. We claim $ \Im z H'_t(z) = -2\pi P_{\m D}[\rho'_t](z)$. Indeed, writing $P_r(\theta -s) = P_{\m D}(re^{i(\theta-s)}, 1) = (1-r^2)/|1-re^{i(\theta -s)}|^2$ for the Poisson kernel, using integration by parts \begin{align*} \partial_\theta \Re H_t(z) & = \int_0^{2\pi} \partial_\theta P_r(\theta -s) \nu_t(s)^2\mathrm{d} s \\ & = \int_0^{2\pi} (-\partial_s P_r(\theta -s)) \nu_t(s)^2\mathrm{d} s \\ & = \int_0^{2\pi} P_r(\theta -s) [\nu_t(s)^2]'\mathrm{d} s = 2\pi P_{\m D}[\rho_t'](z). \end{align*} Therefore, since $\rho_t' \in L^2$, we get that $\Im (zH'_t) \in \mathfrak{h}^2$. By \cite[Thm.\,4.1]{Duren_Hardy} this in turn implies $\Re (zH'_t) \in \mathfrak{h}^2$, and we conclude that $zH'_t$, and hence $H'_t$, belongs to $\mathcal{H}^2$. \end{proof} The following lemma holds for all $\rho \in \mc N_+$. \begin{lemma}\label{lem:A-alpha} For all $z\in D_T$, $ \vartheta[g_T](z) = \int_0^T \alpha_t(g_t (z))\,\mathrm{d} t$. \end{lemma} \begin{proof} Since $t \mapsto \vartheta[g_t](z)$ is absolutely continuous on $[0,T]$, we can use the Loewner equation \eqref{eq:ODE} to see that for a.e.
$t$, \begin{align*} \partial_t \vartheta [g_t](z) & = \Im \left(\frac{\partial_t g_t'(z)}{g_t'(z)} - \frac{\partial_t g_t (z)}{g_t(z)} \right) \\ & =\Im \left(\frac{g_t'(z) H_t(g_t(z)) + g_t(z) H_t'(g_t (z)) g_t'(z)}{g_t'(z)} - H_t(g_t (z))\right) \\ & = \Im (g_t (z) H'_t (g_t (z))) = \alpha_t ( g_t (z) ). \end{align*} Since $g_0(z)=z$ we get the claim after integration. \end{proof} We define \begin{equation}\label{eq:winding_integral}\beta (z) = \int_0^{\tau(z)} \alpha_t ( g_t (z) )\, \mathrm{d} t, \qquad z \in \m D. \end{equation} It is not obvious that this quantity is finite a.e. However, part of the conclusion of the next result is that $\beta \in \mc E_0 (\m D)$ and we shall later prove that $\beta$ is the unique $\mc E_0(\m D)$ extension of the winding function $\varphi$. \begin{prop}\label{prop:psi_winding} Suppose $S_+(\rho) < \infty$. Let $u (\theta,t) : = - 2\nu_t'(\theta)/\nu_t(\theta)$ if $\nu_t (\theta) \neq 0$, and $u(\theta, t) := 0$ otherwise. Then $u \in L^2 (2\rho)$ and $\varkappa[u] = \beta$. In particular, $\beta \in \mc E_0(\m D)$ and $\mc D_{\m D} (\beta) = 16 \, S_+(\rho).$ \end{prop} \begin{proof} We verify directly that $u \in L^2 (2\rho)$ and \begin{equation}\label{eq:proof_beta_rho} \norm {u}_{L^2 (2 \rho)}^2 = 2 \int_0^{\infty} \int_{S^1} 1_{\nu_t \neq 0} \left[\frac{2 \nu_t'(\theta)} {\nu_t(\theta)}\right]^2 \,\nu_t^2(\theta)\,\mathrm{d}\theta\mathrm{d} t = 16 \, S_+ (\rho) < \infty. \end{equation} Corollary~\ref{cor:S_finite_foliation_WP} shows that $\rho$ generates a foliation, therefore the Hadamard disintegration isometry of Section~\ref{subsec:hadamard_isometry} applies. Using Proposition~\ref{prop:kappa_formula} and Lemma~\ref{lem:H2W12}, \begin{align*} \varkappa [u] (z)& = 2 \pi \int_0^{\tau(z)} P_{\m D}[u_t \nu_t^2] (g_t(z)) \, \mathrm{d} t = -4 \pi \int_0^{\tau(z)} P_{\m D}[ \nu_t \nu_t'] (g_t(z)) \,\mathrm{d} t \\ &= \int_0^{\tau(z)} \alpha_t ( g_t (z)) \, \mathrm{d} t = \beta(z). 
\end{align*} Moreover, by Theorem~\ref{thm:bi_isometry} and \eqref{eq:proof_beta_rho}, we obtain $16\,S_+ (\rho) = \norm {u}_{L^2 (2 \rho)}^2 = \mc{D}_{\m D} (\varkappa[u]) = \mc D_{\m D} (\beta)$ as claimed. \end{proof} \begin{cor}\label{cor:duality_beta} For $T > 0$, let $\rho^T_t = \rho_t$ for $t \le T$ and let $\rho^T_t$ be the uniform measure for $t > T$. Let $\beta^T$ be the associated function as in \eqref{eq:winding_integral}. Then we have $\vartheta [g_T] = \beta^T$ on $D_T$. In particular, $$\mc D_{D_T} (\vartheta [g_T]) \le \mc D_{\m D} (\beta^T) = 16 \, S_+(\rho^T) = 16 \, S_{[0,T]}(\rho).$$ \end{cor} \begin{proof} We only need to note that for $t \ge T$, $\alpha_t \equiv 0$. Therefore, for $z \in D_T$, $$\beta^T (z) = \int_{0}^{\min\{\tau(z),T\}} \alpha_t (g_t (z))\, \mathrm{d} t = \int_{0}^{T} \alpha_t (g_t (z))\, \mathrm{d} t,$$ since $\tau(z) > T$. Lemma~\ref{lem:A-alpha} implies $\vartheta[g_T] =\beta^T$ on $D_T$. \end{proof} We now show that $\beta$ is the unique extension of the winding function \eqref{eq:def_winding} $\varphi$ in $\mc E_0 (\m D)$. \begin{lemma}\label{lem:beta_varphi} If $S_+ (\rho) < \infty$, then for all $t \ge 0$, \[ \beta|_{\partial D_t} = \varphi|_{\partial D_t} \quad \text{arclength-a.e.}, \] where $\beta$ is as in \eqref{eq:winding_integral} and its trace is taken in the sense of Jonsson-Wallin~\eqref{def:trace}. \end{lemma} In other words, $\beta$ is the unique extension of $\varphi$ in $\mc E_0 (\m D)$ by Proposition~\ref{prop:unique_extension} (and from now on we will not distinguish $\beta$ and $\varphi$). \begin{proof} We will identify functions in $ \mc E_0(\m D)$ with their extension to $W^{1,2} (\m C)$ by $0$ in $\m D^*$. 
By Corollary~\ref{cor:duality_beta}, $\beta^t = \beta$ in $\m C \smallsetminus D_t$ and the Jonsson-Wallin traces satisfy $$\beta|_{\gamma_t} = \beta^t|_{\gamma_t} = \vartheta[g_t]|_{\gamma_t} = \varphi|_{\gamma_t} \quad \text{arclength-a.e.}$$ Here, the first equality is a property of the Jonsson-Wallin trace, see \cite[Lem.\,A.2]{VW1}. The second equality follows from Corollary~\ref{cor:duality_beta}, where we interpret $\vartheta[g_t]|_{\gamma_t}$ as the non-tangential limit from inside $D_t$ using \cite[Lem.\,A.5]{VW1}. The last equality is the definition of $\varphi$. \end{proof} \begin{cor}\label{cor:iota_varphi} If $S_+(\rho)<\infty$, then $\varphi \in \mc E_0 (\m D)$. For $\rho$-a.e. $(\theta, t)$, \begin{equation*} \iota [\varphi] (\theta, t) = -2\nu_t'(\theta)/\nu_t(\theta), \end{equation*} and $ \mathcal{D}_{\m D}(\varphi) = 16 S_+(\rho).$ \end{cor} \begin{proof} The proof is immediate by combining Proposition~\ref{prop:psi_winding} and Lemma~\ref{lem:beta_varphi}. \end{proof} \subsection{$\mc D_{\m D}(\varphi)<\infty$ implies $S_+(\rho)<\infty$}\label{subsec:converse} This section proves the following result. \begin{prop}\label{prop:converse} Suppose $\rho \in \mc N_+$ generates a foliation and assume that the winding function $\varphi : \mathcal T \to \m R$ can be extended to a function in $\mc E_0 (\m D)$ \textnormal{(}also denoted $\varphi$\textnormal{)}. Then $S_+(\rho) <\infty$. \end{prop} Assuming this proposition, we can complete the proof of Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $\rho \in \mc N_+$. If $S_+(\rho) <\infty$, Corollary~\ref{cor:iota_varphi} shows that $\varphi \in \mc E_0 (\m D)$ and $\mc D_{\m D} (\varphi) = 16 \, S_+ (\rho).$ Conversely, Proposition~\ref{prop:converse} shows that if $\varphi \in \mc E_0 (\m D)$, then $S_+ (\rho) < \infty$. Therefore the identity $\mc D_{\m D} (\varphi) = 16 \, S_+ (\rho)$ holds in this case as well.
\end{proof} Towards the proof of Proposition~\ref{prop:converse}, note first that we do not know \emph{a priori} that $\rho_t$ is absolutely continuous with respect to Lebesgue measure, nor that its density (if it exists) is differentiable almost everywhere. We need to show that this is the case. Under the assumptions of Proposition~\ref{prop:converse}, we let as before $\varphi_s^0$ be the zero trace part of the function $\varphi \circ f_s$ and recall that $\alpha_t (z) = \Im (z H_t'(z))$ for $z \in \m D$. Since $\rho$ generates a foliation and since $\varphi \in \mc E_0(\m D)$ by assumption, using Theorem~\ref{thm:bi_isometry} we may consider \[u := \iota [\varphi] \in L^2 (2\rho).\] \begin{lemma}\label{lem:alpha_h} For a.e. $t \ge 0$, we have \begin{equation}\label{eq:alpha_h} \alpha_t (z) = 2 \pi P_{\m D}[u_t \rho_t](z), \qquad u_t(\cdot) := u(\cdot, t). \end{equation} \end{lemma} \begin{proof} Proposition~\ref{prop:kappa_formula} shows that a.e. in $\m D$, \begin{equation}\label{eq:winding_formula_u} \varphi (w) = \varkappa[u](w) = 2 \pi \int_{0}^{\tau(w)} P_{\m D}[u_t \rho_t](g_t(w))\, \mathrm{d} t. \end{equation} Lemma~\ref{lem:zero_trace_varphi} shows that the unique orthogonal decomposition of $\varphi$ with respect to $D_s$ is given by the zero-trace part $ \varphi_s^0 \circ g_s \in \mc E_0 (D_s)$ and the harmonic part $\varphi - \varphi_s^0 \circ g_s$ which equals $\vartheta[g_s]$ in $D_s$. On the other hand, Corollary~\ref{cor:ortho_decomp_formula} shows that this decomposition is also given by $\varkappa[u\mathbf{1}_{S^1 \times [0, s)}]$ and $\varkappa [u \mathbf{1}_{S^1 \times [s,\infty)}]$. Hence, for all $w \in D_s$ (so that $\tau(w) > s$), $$\vartheta [g_s] (w) = 2 \pi \int_0^{s} P_{\m D}[u_t \rho_t](g_t(w)) \, \mathrm{d} t. $$ Taking a derivative in $s$ in the above expression we obtain from Lemma~\ref{lem:A-alpha} that for a fixed $w \in \m D$ and a.e.
$t < \tau(w)$, $ 2 \pi P_{\m D}[u_t \rho_t](g_t(w)) = \alpha_t (g_t (w))$. Indeed, by choosing a dense and countable family $J$ of points in $\m D$, we have for a.e. $t \ge 0$ and all $w \in D_t \cap J$, $ 2\pi P_{\m D}[u_t \rho_t](g_t(w)) = \alpha_t (g_t (w))$. Since both $\alpha_t$ and $P_{\m D}[u_t \rho_t]$ are continuous (actually harmonic) in $\m D$, we therefore obtain that for a.e. $t$, $\alpha_t = 2\pi P_{\m D}[u_t \rho_t]$ in $\m D$. \end{proof} We are now ready to prove the main result of this section. \begin{proof}[Proof of Proposition~\ref{prop:converse}] Since $u = \iota[\varphi] \in L^2 (2\rho)$, for a.e. $t \in \m R_+$, $u_t \in L^2(S^1, 2\rho_t)$. By the Cauchy-Schwarz inequality and the fact that $\rho_t \in \mc M_1 (S^1)$ is a probability measure, we also know that $u_t \in L^1 (S^1,2\rho_t)$. Hence $\alpha_t = 2\pi P_{\m D}[u_t \rho_t] \in \mathfrak{h}^1$ by Lemma~\ref{lem:alpha_h}. This implies $H_t' \in \mathcal{H}^p$ for any $p < 1$. (See \cite[Thm.\,4.2]{Duren_Hardy}, and use that an analytic function is in $\mathcal{H}^p$ if and only if its real and imaginary parts are in $\mathfrak{h}^{p}$.) Using \cite[Thm.\,5.15]{Duren_Hardy} this in turn implies $H_t \in \mathcal{H}^p$ for every $p < \infty$. (More precisely, if $f' \in \mathcal{H}^p$ for some $p<1$, then $f \in \mathcal{H}^q$ with $q=p/(1-p)$.) Therefore, the radial limits of $H_t$ exist a.e. and define a function in $L^p(S^1, \mathrm{d} \theta)$ for all $p<\infty$. It follows that the positive function $\Re H_t/2\pi$ is the Poisson integral of a function in $L^p(S^1, \mathrm{d} \theta)$ for all $p < \infty$, and we denote this function by $\nu_t^2 (\theta)$. (See e.g., Corollary 2 to \cite[Thm.\,3.1]{Duren_Hardy}.) In other words we have shown that the measure $\rho_t$ is absolutely continuous and $\mathrm{d} \rho_t(\theta) = \nu_t^2(\theta) \mathrm{d} \theta$, where $\nu_t^2 \in L^p(S^1, \mathrm{d} \theta)$ for every $p < \infty$.
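To spell out the H\"older step used below (with conjugate exponents $\frac{2}{2-\varepsilon}$ and $\frac{2}{\varepsilon}$): $$\int_{S^1} |u_t \nu_t^2|^{2-\varepsilon} \,\mathrm{d}\theta = \int_{S^1} |u_t\nu_t|^{2-\varepsilon}\, \nu_t^{2-\varepsilon} \,\mathrm{d}\theta \le \left(\int_{S^1} u_t^2 \nu_t^2 \,\mathrm{d}\theta\right)^{\frac{2-\varepsilon}{2}} \left(\int_{S^1} \nu_t^{2(2-\varepsilon)/\varepsilon} \,\mathrm{d}\theta\right)^{\frac{\varepsilon}{2}},$$ where both factors on the right-hand side are finite, the first since $u_t \in L^2(S^1, 2\rho_t)$ and the second since $\nu_t^2 \in L^p(S^1, \mathrm{d} \theta)$ for every $p < \infty$.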
Since $u_t^2 \nu_t^2 \in L^1(S^1, \mathrm{d} \theta)$ and $\nu_t^2 \in L^p(S^1, \mathrm{d} \theta)$ for all $p < \infty$, H\"older's inequality with $p=2/(2-\varepsilon), q=2/\varepsilon$ implies $u_t \nu_t^2 \in L^{2-\varepsilon}(S^1, \mathrm{d} \theta)$ for any $\varepsilon \in (0, 2)$. This in turn implies that the Herglotz integral of $u_t \nu_t^2$ is in $\mathcal{H}^{2-\varepsilon}$ if $\varepsilon \in (0, 1)$. Lemma~\ref{lem:alpha_h} then implies that $H_t' \in \mathcal{H}^{2-\varepsilon}$. It follows that $H_t$ is continuous on $\overline{\m D}$ and absolutely continuous on $S^1$ (see \cite[Thm.\,3.11]{Duren_Hardy}), and consequently so is the density $\theta \mapsto \nu_t(\theta)$; in particular, $\nu_t'(\theta)$ is well-defined for Lebesgue-a.e. $(\theta, t)$. Moreover, by taking radial limits in \eqref{eq:alpha_h}, we obtain using Lemma~\ref{lem:H2W12} and Fubini's theorem that for Lebesgue-a.e. $(\theta, t) \in S^1 \times \m R_+$, $$ u_t (\theta) \nu_t^2(\theta) = \frac{1}{2\pi} \alpha_t(e^{i\theta}) = - 2\nu_t'(\theta) \nu_t (\theta).$$ It follows that Lebesgue-a.e., when $\nu_t (\theta) \neq 0$, $u_t^2(\theta) \nu_t^2(\theta) = 4\nu_t'(\theta)^2,$ and both sides are equal to $0$ otherwise. We conclude the proof by integrating over $S^1 \times \m R_+$, and we obtain $S_+(\rho) <\infty$ since $u = \iota [\varphi] \in L^2(2 \rho)$. \end{proof} \section{Whole-plane energy duality: proof of Theorem~\ref{thm:main0}} \label{sec:whole-plane} In this section we deduce the whole-plane energy duality Theorem~\ref{thm:main0} from the disk energy duality Theorem~\ref{thm:main}. \subsection{Whole-plane Loewner evolution}\label{subsec:whole_plane_Loewner_chain} We now describe the whole-plane Loewner chain.
Similarly as before, we define the space of driving measures $$\mc N := \{\rho\in \mc M (S^1 \times \m R): \rho(S^1\times I) = |I| \text{ for all intervals } I \}.$$ The whole-plane Loewner chain driven by $\rho \in \mc N$, or equivalently by its family of disintegration measures $\m R \to \mc M_1 (S^1): \, t \mapsto \rho_t$, is the unique family of conformal maps $(f_t : \m D \to D_t)_{t \in \m R}$ such that \begin{enumerate}[label= (\roman*), itemsep= -2pt] \item For all $s < t$, $0 \in D_t \subset D_s$. \label{it:whole_monotone} \item For all $t \in \m R$, $f_t (0) = 0$ and $f_t'(0) = e^{-t}$ (in other words, $D_t$ has conformal radius $e^{-t}$ seen from $0$). \label{it:whole_radius} \item For all $s \in \m R$, $( f_t^{(s)} : = f_{s}^{-1} \circ f_t : \m D \to D_{t}^{(s)})_{t \ge s}$ is the Loewner chain driven by $(\rho_t)_{t\ge s}$, which satisfies \eqref{eq:loewner-pde} with the initial condition $f_s^{(s)} (z) = z$, as discussed in Section~\ref{sect:Loewner-Kufarev-Equation}. \label{it:whole_Loewner} \end{enumerate} \begin{rem} If $\rho_t \in \mc M_1 (S^1)$ is the uniform measure for all $t \le 0$, then $f_t (z) = e^{-t} z$ for $t \le 0$ and $(f_t)_{t\ge 0}$ is the Loewner chain driven by $(\rho_t)_{t \ge 0} \in \mc N_+$ as in Section~\ref{sect:Loewner-Kufarev-Equation}. Indeed, one checks directly that $(f_t)_{t\in \m R}$ satisfies the three conditions above. In other words, the Loewner chain in $\m D$ is a special case of the whole-plane Loewner chain. \end{rem} Note that the range $D_{t}^{(s)}$ of $f_t^{(s)}$ has conformal radius $e^{s-t}$ seen from $0$ and the family $(f_t^{(s)})_{t \ge s}$ is uniquely defined for all $s \in \m R$ and satisfies for $t \ge s$ and $z \in \m D$, $$\partial_t f_t^{(s)} (z) = -z f_t^{(s)}{}' (z) H_t(z), \qquad H_t(z) = \int_{S^1} \frac{ e^{i\theta} + z}{ e^{i\theta}-z } \,\mathrm{d} \rho_t (\theta),$$ with the initial condition $ f_s^{(s)} (z) = z$ (see Section~\ref{sect:Loewner-Kufarev-Equation}).
The condition \ref{it:whole_Loewner} is then equivalent to the statement that for all $t \in \m R$ and $z \in \m D$, \begin{equation} \label{eq:whole-plane-radial} \partial_t f_t (z) = -z f'_t (z) H_t(z) \end{equation} as described in the introduction. For the purposes of our proofs later on it is convenient to give a slightly more explicit construction of the family $(f_t)_{t \in \m R}$ and explain why it is uniquely determined by $t \mapsto \rho_t$, even though this statement is well-known. For this, note that once we determine $f_n$, \eqref{eq:whole-plane-radial} gives $f_t$ for all $t \ge n$ by \begin{equation} \label{eq:whole-plane-radial-comp} f_t = f_n \circ f_t^{(n)}. \end{equation} {\bf Existence:} Consider for $- \infty < s \le t$, the conformal map $$F_t^{(s)} (z) : = e^{-s}f_t^{(s)} (z)$$ which maps $\m D$ onto $e^{-s} D^{(s)}_{t}$. Since $F_t^{(s)}{}'(0) = e^{-t}$, $(F_t^{(s)})_{s \in (-\infty, t]}$ is a normal family for any $t$. We extract a sequence $(s_k)$ converging to $-\infty$, such that for all $n \in \m Z$, $F_n^{(s_k)}$ converges uniformly on compacts to a univalent function that we call $F_n$. We construct $(f_t)$ by taking $f_n : = F_n$ and generating the other $f_t$ using \eqref{eq:whole-plane-radial-comp}. We need to verify compatibility, that is, that $F_{n+1} = F_n \circ f_{n+1}^{(n)}$. To see this, notice that for $s < t_1 < t_2$, $$(F_{t_1}^{(s)})^{-1} \circ F_{t_2}^{(s)} = (f_{t_1}^{(s)})^{-1} \circ f_{t_2}^{(s)} = f_{t_2}^{(t_1)} $$ is independent of $s$. The last equality follows from the fact that as a function of $t_2$ on $[t_1,\infty)$, both terms satisfy the same differential equation with same initial condition. Hence, we have $F_{n}^{-1} \circ F_{n+1} = f^{(n)}_{n+1}$ as claimed. {\bf Uniqueness:} Suppose there are two such families $(f_t : \m D \to D_t)_{t \in \m R}$ and $(\tilde f_t : \m D \to \tilde D_t)_{t \in \m R}$, and let $\psi_t : = \tilde f_t \circ f_t^{-1}: D_t \to \tilde D_t$.
Since $(f_t)$ and $(\tilde f_t)$ are driven by the same process of measures, for all $s \le t$, $$f_s^{-1} \circ f_t = \tilde f_s^{-1} \circ \tilde f_t.$$ Then for $z = f_t (w) \in D_t$, $$\psi_s (z) = \tilde f_s \circ f_s^{-1} \circ f_t(w) = \tilde f_s \circ \tilde f_s^{-1} \circ \tilde f_t(w) = \tilde f_t \circ f_t^{-1} (z) = \psi_t(z).$$ That is, $\psi_s|_{D_t} = \psi_t$. Hence, $\psi_t$ extends to a conformal map $\cup_{t \in \m R} D_t \to \cup_{t \in \m R} \tilde D_t = \m C$. Since $\psi_t(0) = 0$ and $\psi_t'(0) = 1$, this shows that $\psi_t$ is the identity map and $D_t = \tilde D_t$, which completes the proof of uniqueness. We remark that for $\rho \in \mc N$, we have $\cup_{t\in \m R} D_t = \m C$. Indeed, $D_{t}$ has conformal radius $e^{-t}$ and therefore contains the centered ball of radius $e^{-t}/4$ by Koebe's $1/4$ theorem. We define for all $z \in \m C$, $$\tau(z) : = \sup\{ t\in \m R : z \in D_t\} \in (-\infty, \infty].$$ Similar to the definition of foliations of $\ad {\m D} \smallsetminus \{0\}$, we say that $\rho \in \mc N$ generates a foliation $(\gamma_t : = \partial D_t)_{t\in \m R}$ of $\m C \smallsetminus \{0\}$ if \begin{enumerate}[itemsep=-2pt] \item For all $t \in \m R$, $\gamma_t$ is a chord-arc Jordan curve. \item It is possible to parametrize each curve $\gamma_t, t \in \m R,$ by $S^1$ so that the mapping $t \mapsto \gamma_t$ is continuous in the supremum norm. \item For all $z \in \m C \smallsetminus \{0\}$, $\tau(z) <\infty$. \end{enumerate} We have the whole-plane version of Lemma~\ref{lem:tau_foliates}: \begin{lemma} Assume that $\rho$ generates a foliation $(\gamma_t)_{t\in \m R}$ of $\m C \smallsetminus \{0\}$. For all $z\neq 0$, we have $z \in \gamma_{\tau(z)}$. In particular, $\bigcup_{t \in \m R} \gamma_t = \m C \smallsetminus \{0\}$.
\end{lemma} Similarly, we associate to a foliation of $\m C \smallsetminus \{0\}$ its \emph{winding function}, defined by $\varphi (0) = 0$ and $$\varphi (z) : = \vartheta[g_{t}] (z), \quad \text{for arclength-a.e. } z \in \gamma_t \text{ and } \forall t \in \m R,$$ where $g_t = f_t^{-1}$. Equivalently, $\varphi(z)$ may be defined as $\vartheta[g_{\tau(z)}] (z)$. We say that $\varphi$ has an extension to $W^{1,2}_{\mathrm{loc}}$ if $\varphi|_{\gamma_t}$ coincides a.e. with the Jonsson-Wallin trace of its extension on $\gamma_t$, for all $t$. \subsection{Proof of Theorem~\ref{thm:WP-leaf} and Theorem~\ref{thm:main0}}\label{subsec:whole_plane} Recall the definition of the Loewner-Kufarev energy in the whole-plane setting: for $\rho \in \mc N$, $$S (\rho) = \int_{\m R} L (\rho_t) \, \mathrm{d} t \quad \text{and} \quad S_{[a,b]} (\rho) = \int_{a}^b L (\rho_t) \,\mathrm{d} t.$$ We use the notation from Section~\ref{subsec:whole_plane_Loewner_chain} and write for $s \le t$, $$g_t^{(s)} : = (f_t^{(s)})^{-1} = g_t \circ f_s, \quad g_t = f_t^{-1}.$$ Then $(g^{(s)}_t)_{t \ge s}$ is the uniformizing Loewner chain driven by $\rho^{(s)} : = (\rho_t)_{t\ge s}$, mapping $D_t^{(s)} = g_s (D_t)$ onto $\m D$. We are now ready to prove Theorem~\ref{thm:WP-leaf}. \begin{proof}[Proof of Theorem~\ref{thm:WP-leaf}] By Conditions~\ref{it:whole_monotone} and \ref{it:whole_Loewner} we have that $$\cap_{t\in \m R} D_t = \cap_{t \in \m R_+} D_t = f_0 (\cap_{t \ge 0} D_t^{(0)}) = \{0\},$$ where we used Lemma~\ref{lem:foliates} and $S_+(\rho) < \infty$. Therefore $\tau(z) <\infty$ for all $z \neq 0$. Now we show that $\partial D_t$ is a Weil-Petersson quasicircle. Note that on $D_t$, $$g_t = (g_t \circ f_s) \circ g_s = g^{(s)}_t \circ g_s, \quad \forall s \le t.$$ Since $S_{[s,\infty)}(\rho^{(s)}) < \infty$, Proposition~\ref{prop:WP-QC-final} shows that $g_s (D_t) = g_s (f_t (\m D)) = f^{(s)}_t (\m D) = D_t^{(s)}$ is bounded by a Weil-Petersson quasicircle.
Moreover, from the proof of Lemma~\ref{lem:foliates}, we know that for all $t \in \m R$ there is $s_0 < t$ such that $D_t^{(s_0)} \subset (1/2) \m D$. As $f_{s_0}: \m D \to D_{s_0}$ is conformal, $\gamma_t = f_{s_0} (\partial D_t^{(s_0)})$ is also a Weil-Petersson quasicircle (e.g. by Lemma~\ref{thm_TT_equiv_T01}). We also obtain the continuity of $t \mapsto \gamma_t$ from the continuity of $t \mapsto \partial D_t^{(s_0)}$ by Corollary~\ref{cor:S_finite_foliation_WP}. \end{proof} Now we prove the whole-plane energy duality. \begin{proof}[Proof of Theorem~\ref{thm:main0}] Assume first that $S(\rho) < \infty$; we show that $\mc D_{\m C} (\varphi) < \infty$. For this, note that the winding function $\varphi^{(s)}$ of the foliation in $\ad{\m D}\smallsetminus\{0\}$ driven by $\rho^{(s)} = (\rho_t)_{t \ge s}$ is given by \begin{align*} \varphi^{(s)} (w) = \vartheta[g^{(s)}_{\tau(w)}] (w) = \vartheta[g_{\tau(z)}](z) - \vartheta[g_s] (z) = \varphi (z) - \vartheta[g_s ](z) \end{align*} where $z = f_s (w) \in D_s$ and we used the chain rule \eqref{eq:theta_chain}. We have \begin{align}\label{eq:proof_whole-plane_duality} \begin{split} \mc D_{D_s} (\varphi)& = \mc D_{D_s} (\varphi^{(s)} \circ g_s) + \mc D_{D_s} (\vartheta[g_s])= \mc D_{\m D} (\varphi^{(s)}) + \mc D_{D_s} (\vartheta[g_s]) \\ & = 16 \,S_{[s,\infty)} (\rho) + \mc D_{\m D} (\vartheta[f_s]). \end{split} \end{align} The first equality follows from orthogonality of $\vartheta[g_s]$ and $\varphi^{(s)} \circ g_s$ in $\mc E_0 (D_s)$, since $\vartheta[g_s]$ is harmonic in $D_s$ and $\varphi^{(s)} \circ g_s \in \mc E_0 (D_s)$. The second equality follows from the conformal invariance of the Dirichlet energy, and the third from Theorem~\ref{thm:main}.
We obtain immediately the lower bound $$\mc D_{\m C} (\varphi) \ge 16 \, S(\rho).$$ For the opposite inequality, since $F_s^{(r)} := e^{-r}f_s^{(r)}$ (see Section~\ref{subsec:whole_plane_Loewner_chain}) converges uniformly on compact subsets of $\m D$ to $f_s$ as $r \to -\infty$, we have that $\vartheta[F_s^{(r)}] (z)= \vartheta[f_s^{(r)}] (z) $ converges uniformly on compact sets to $\vartheta[f_s] (z)$. The lower semicontinuity of the Dirichlet energy then shows that for all compact $K \subset \m D$, $$\mc D_{K} (\vartheta[f_s]) \le \liminf_{r \to -\infty} \mc D_{K} (\vartheta[f_s^{(r)}]).$$ On the other hand, Corollary~\ref{cor:duality_beta} shows that $$\mc D_{K} (\vartheta[f_s^{(r)}]) = \mc D_{f_s^{(r)} (K)} (\vartheta[g_s^{(r)}]) \le 16 \, S_{[r,s]} (\rho),$$ letting $r \to -\infty$ yields $$\mc D_{\m D} (\vartheta[f_s]) \le 16 \int_{-\infty}^s L(\rho_t)\, \mathrm{d} t. $$ Combining this with \eqref{eq:proof_whole-plane_duality} shows that $\mc D_{D_s} (\varphi) \le 16 \,S(\rho)$. Letting $s \to -\infty$ we obtain the upper bound and hence $\mc D_{\m C}(\varphi) = 16 \, S(\rho)$. For the converse, if $\rho$ generates a foliation of $\m C\smallsetminus \{0\}$ with $\mc D_{\m C} (\varphi) < \infty$, Proposition~\ref{prop:converse} and \eqref{eq:proof_whole-plane_duality} imply that $$16\, S_{[s, \infty)} (\rho^{(s)}) = \mc D_{\m D} (\varphi^{(s)}) \le \mc D_{D_s} (\varphi) \le \mc D_{\m D} (\varphi).$$ Letting $s \to -\infty$ we obtain $S(\rho) < \infty$ and this completes the proof. \end{proof} \section{Applications of energy duality}\label{sec:application} In this section we derive several consequences of energy duality, Theorem~\ref{thm:main0}. \subsection{Reversibility of the Loewner-Kufarev energy} Let $(\gamma_t)_{t \in \m R}$ be a foliation generated by $\rho \in \mathcal N$ and let $(D_t)_{t \in \m R}$ be the corresponding family of domains. 
We will consider the evolution of its time-reversal, that is the Loewner chain corresponding to the family $(\tilde D_{s(t)} : = j (\hat{\m{C}} \smallsetminus D_t))_{t \in \m R}$, where $j(z):= 1/z$ and $e^{-s (t)}$ is the conformal radius of $j (\hat{\m{C}} \smallsetminus D_t)$. Let $\tilde f_{s}$ be the conformal map from $\m D$ onto $\tilde D_s$ with $\tilde f_s(0) = 0$ and $\tilde f_s'(0) = e^{-s}$. \begin{lemma}\label{lem:capacity_complement} The function $t \mapsto s(t)$ is a decreasing homeomorphism of $\m R$. \end{lemma} \begin{proof} To show that $t \mapsto s$ is decreasing, let $t_1 < t_2$. Since $D_{t_2} \varsubsetneq D_{t_1}$, we have $\tilde D_{s(t_1)} \varsubsetneq \tilde D_{s(t_2)}$. Schwarz' lemma then shows that $s (t_2) < s(t_1)$. We claim that $s(t) \to \infty$ as $t \to -\infty$. In fact, Koebe's $1/4$-theorem shows that $D_t$ contains the centered disk of radius $e^{-t}/4$, therefore $\tilde D_{s(t)}$ is contained in the centered disk of radius $4 e^{t}$. Schwarz' lemma shows $ s(t) \ge - t - \log 4$ which goes to $\infty$ as $t \to -\infty$. Since the diameter of $D_t$ tends to $0$ as $t \to \infty$ by Lemma~\ref{lem:foliates}, $\tilde D_{s(t)}$ has conformal radius tending to $\infty$. Therefore $s (t) \to -\infty$ as $t \to \infty$. It remains to verify that $s$ is continuous. To see this, note that the continuity of $t \mapsto \gamma_t$ in the supremum norm shows that as $t \to t_0 \in \m R$, $\tilde D_{s(t)}$ converges in the Carath\'eodory kernel sense to $\tilde D_{s(t_0)}$. Therefore, $\tilde f_{s(t)}'(0)$ tends to $\tilde f_{s(t_0)}'(0)$, and equivalently, $s(t)$ tends to $s(t_0)$. \end{proof} Lemma~\ref{lem:capacity_complement} implies that $ (\tilde f_s)_{s\in \m R}$ is a whole-plane Loewner chain as defined in Section~\ref{sec:whole-plane}. Note that in Section~\ref{sec:whole-plane} we took the measure $\rho$ as starting point for the definition whereas we have constructed the monotone family of domains here. 
However, using Pommerenke's theorem as discussed in Section~\ref{sect:Loewner-Kufarev-Equation} the existence of a measure $\tilde \rho$ generating the family follows easily. Energy duality, Theorem~\ref{thm:main0}, now implies the following reversibility of the Loewner-Kufarev energy. \begin{thm} [Energy reversibility]\label{thm:energy_rev}Let $\tilde \rho \in \mc N$ be the measure generating $(\tilde D_s)_{s \in \m R}$. We have $S (\rho) = S(\tilde \rho). $ \end{thm} \begin{proof} It suffices to prove that the winding function $\tilde \varphi$ associated to the foliation $(\partial \tilde D_{s(t)})$ satisfies $\tilde \varphi \circ j = \varphi$. Energy reversibility then follows from Theorem~\ref{thm:main0} and conformal invariance of the Dirichlet energy, which implies $\mc D_{\m C} (\varphi) = \mc D_{\m C} (\tilde \varphi)$. Now let $z$ be a differentiable point of $\gamma_t = \partial D_t$. The winding function $\tilde \varphi$ at $j(z)$ associated to the inverted foliation is geometrically the angle (modulo $2\pi$) from $j (\gamma_t)$ to the circle centered at $0$ passing through $j(z)$. Since $j$ is conformal and the family of circles centered at $0$ is preserved under $j$, we have $\tilde \varphi \circ j (z)= \varphi(z)$ modulo $2\pi$. To determine the $2\pi$ branches from the analytic definition of $\varphi$ and $\tilde \varphi$ via $\vartheta$ and to show that they actually coincide, we can use the equipotentials $f_t (r S^1)$ inside $D_t$ to continuously deform both winding functions to $0$ as $r \to 0$. \end{proof} \subsection{Characterization of Weil-Petersson quasicircles}\label{subsec:jordan_curve} Let $\gamma$ be a Weil-Petersson quasicircle separating $0$ from $\infty$. We will associate to $\gamma$ a particular measure $\rho = \rho^\gamma$ (and foliation) with $S(\rho) < \infty$ and in this way prove Theorem~\ref{thm:main-jordan-curve}. 
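For orientation, we record the simplest example as an elementary sanity check of the normalizations (the verification is immediate). For $\gamma = S^1$, the conformal maps of both complementary components are the identity, the equipotentials are the circles $e^{-t} S^1$, $t \in \m R$, and the winding function vanishes identically since $\vartheta[\operatorname{id}] \equiv 0$. Each $\rho^{S^1}_t$ is the uniform probability measure on $S^1$ and $$16 \, S(\rho^{S^1}) = \mc D_{\m C} (\varphi) = 0 = I^L(S^1),$$ consistent with Theorem~\ref{thm:dual_Jordan_curve} below, whose normalization term $2 \log |f'(0)/h'(\infty)|$ also vanishes in this case.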
We assume for notational simplicity that the bounded component of $\m C \smallsetminus \gamma$ has conformal radius $1$\footnote{This assumption is only made for convenience, so that the curve $\gamma$ corresponds to time-index $0$ in the foliation. All results in this section hold for a general Weil-Petersson quasicircle $\gamma$ separating $0$ from $\infty$.}. Let $f$ (resp. $h$) be the conformal map from $\m D$ (resp. $\m D^*$) to the bounded component $D_0$ (resp. unbounded component $D_0^*$) of $\m C \smallsetminus \gamma$ such that $f$ (resp. $h$) fixes $0$ (resp. $\infty$) and has derivative $f'(0) = 1$ (resp. $h'(\infty) >0$). Consider the foliation $(\gamma_t)_{t \in \m R}$ that consists of $\gamma$ together with the family of equipotentials on both sides of $\gamma$. By equipotential we mean the image of a circle $r S^1$ under $f$ (resp. under $h$); we include all equipotentials corresponding to $r < 1$ (resp. $r > 1$). The parametrization of $(\gamma_t)_{t \in \m R}$ by $t$ is chosen so that the connected component $D_t$ of $\hat{\m{C}} \smallsetminus \gamma_t$ containing $0$ has conformal radius $e^{-t}$. Let $\rho^\gamma \in \mc N$ be the measure associated with $(\gamma_t)_{t \in \m R}$ and let $\varphi$ be the corresponding winding function. Along with the Loewner chain $(f_t: \m D \to D_t )_{t \in \m R}$, we consider also the conformal maps $h_t : \m D^*\to D_t^* $, such that $h_t(\infty) = \infty$ and $h_t '(\infty) > 0$. In particular, $f = f_0$ and $h = h_0$. We also set $g_t : = f_t ^{-1}$ and $k_t : = h_t ^{-1}$ as before. For a conformal map $k$ fixing $\infty$ we define $$\vartheta [k](z) = \arg \frac{z k'(z)}{k(z)}= \int_{\infty}^z \mathrm{d} \left(\arg \frac{k(w) -k(z)}{w - z} + \arg \frac{w}{k(w)}\right), $$ namely, the continuous branch of argument is chosen so that $\vartheta[k](z) \to 0$ as $z \to \infty$. \begin{lemma} \label{lem:harm_equi} For a.e. $t \ge 0$, $\rho_t^\gamma$ is the uniform measure.
Moreover, $\varphi|_{\m C \smallsetminus \gamma}$ is harmonic, and we have $$\varphi|_{D_0} = \vartheta [g_0], \quad \text{and} \quad \varphi|_{D_0^*} = \vartheta [k_0].$$ \end{lemma} \begin{proof} By our definition of equipotentials we have $\gamma_t = f (e^{-t} S^1)$ for $t \ge 0$. The flow $(g_t \circ f)_{t \ge 0}$, driven by $(\rho_t^\gamma)_{t \ge 0}$ equals $(z \mapsto e^t z)$, which shows that $(\rho_t^\gamma)_{t\ge 0}$ is a.e. the uniform measure on $S^1$. The identity $\varphi = \vartheta [g_0]$ on $D_0$ and the harmonicity of $\varphi|_{D_0}$ follow as in Lemma~\ref{lem:harmonic}. Now we consider $D_0^*$. Using the notation of Theorem~\ref{thm:energy_rev}, we have that the winding function in $D_0^*$ is given by $\varphi (z) = \tilde \varphi \circ j (z).$ Since the family of equipotentials is preserved under the inversion $j$, it follows that $\tilde \varphi$ is harmonic in $j (D_0^*)$ and $$\tilde \varphi = \vartheta [j \circ k_0 \circ j] = \vartheta [k_0] \circ j,$$ using the chain rule \eqref{eq:theta_chain} and $\vartheta [j] \equiv 0$. Therefore $\varphi|_{D_0^*} = \vartheta [k_0]$. \end{proof} Recall that the Loewner energy of the curve $\gamma$ is \begin{equation}\label{eq:def_Loewner_energy} I^L(\gamma) = \mc D_{\m D} (\arg f') + \mc D_{\m D^*} (\arg h') + 4 \log |f'(0)/h'(\infty)|. \end{equation} (We have normalized so that $f'(0) = 1$ but we will keep it in the notation since \eqref{eq:def_Loewner_energy} holds more generally.) \begin{thm} \label{thm:dual_Jordan_curve} Let $\gamma$ be a Weil-Petersson quasicircle separating $0$ from $\infty$. Let $\rho^\gamma$ be the measure associated to $\gamma$. Then \[16 \,S(\rho^\gamma) = I^L(\gamma) - 2 \log |f'(0)/h'(\infty)| < \infty.\] Moreover, if $\rho \in \mc N$ generates $\gamma$ as a leaf, then $S(\rho^\gamma) \le S(\rho)$. 
\end{thm} \begin{proof} By Lemma~\ref{lem:harm_equi}, the winding function associated to $\rho^\gamma$ satisfies \begin{align*} \mc D_{\m C} (\varphi) & = \frac{1}{\pi}\int_{D_0} \abs{\nabla \vartheta [g_0]}^2 \mathrm{d} A (z) + \frac{1}{\pi} \int_{D_0^*} \abs{\nabla \vartheta [k_0]}^2 \mathrm{d} A (z) \\ & = \frac{1}{\pi}\int_{\m D} \abs{\nabla \vartheta [f]}^2 \mathrm{d} A (z) + \frac{1}{\pi} \int_{\m D^*} \abs{\nabla \vartheta [h]}^2 \mathrm{d} A (z) \\ & =\frac{1}{\pi} \int_{\m D} \abs{\frac{f''}{f'} - \frac{f'}{f} + \frac{1}{z}}^2 \mathrm{d} A (z) + \frac{1}{\pi} \int_{\m D^*} \abs{\frac{h''}{h'} - \frac{h'}{h} + \frac{1}{z}}^2 \mathrm{d} A (z) \\ & = \frac{1}{\pi} \int_{\m D} \abs{\frac{f''}{f'}}^2 \mathrm{d} A (z) + \frac{1}{\pi} \int_{\m D^*} \abs{\frac{h''}{h'}}^2 \mathrm{d} A (z) - \frac{1}{\pi} \int_{\m D} \abs{\frac{f'}{f} -\frac{1}{z}}^2 \mathrm{d} A (z) \\&\qquad - \frac{1}{\pi} \int_{\m D^*} \abs{\frac{h'}{h} -\frac{1}{z}}^2 \mathrm{d} A (z) \\ & = \frac{1}{\pi} \int_{\m D} \abs{\frac{f''}{f'}}^2 \mathrm{d} A (z) + \frac{1}{\pi} \int_{\m D^*} \abs{\frac{h''}{h'}}^2 \mathrm{d} A (z) + 2 \log |f'(0)/h'(\infty)|\\ & = I^L(\gamma) - 2 \log |f'(0)/h'(\infty)|, \end{align*} where we used \eqref{eq:def_Loewner_energy} and Lemma~\ref{lem:Grunsky} proved just below. Theorem~\ref{thm:main0} then implies the identity. The claim $S(\rho^\gamma) \le S(\rho)$ follows immediately from Lemma~\ref{lem:harm_equi} since equipotentials are generated by the zero-energy measure for $t \ge 0$ and we complete the proof using Theorem~\ref{thm:energy_rev}.
\end{proof} \begin{lemma}\label{lem:Grunsky} We have the identity \begin{align*} \Re & \left[ \int_{\m D} \frac{f''}{f'}\left( \ad{\frac{f'}{f} - \frac{1}{z}}\right)\mathrm{d} A (z) +\int_{\m D^*} \frac{h''}{h'} \left(\ad{\frac{h'}{h} - \frac{1}{z}}\right) \mathrm{d} A (z) \right] \\ & = \int_{\m D} \abs{ \frac{f'}{f } - \frac{1}{z} }^2 \mathrm{d} A (z)+ \int_{\m D^*} \abs{\frac{h'}{h } -\frac{1}{z} }^2 \mathrm{d} A (z) = 2 \pi \log \abs{\frac{ h'(\infty)}{f'(0)}}. \end{align*} \end{lemma} \begin{proof} Consider $\tilde f : = j\circ h \circ j$ and $\tilde h : = j \circ f \circ j$, the conformal maps associated to $j (\gamma)$. We have $\tilde f' (0) = h'(\infty)^{-1}$ and $$\frac{\tilde f''(z)}{\tilde f' (z)} = - \frac{1}{z^2} \left( \frac{h''(1/z)}{h' (1/z)} - \frac{2 h'(1/z)}{h (1/z) } +2z \right)$$ and similarly for $\tilde h$. We compute \begin{align*} I^L(j (\gamma)) =& \frac{1}{\pi}\int_{\m D} \abs{\frac{\tilde f''}{\tilde f'}}^2 \mathrm{d} A (z) + \frac{1}{\pi}\int_{\m D^*} \abs{\frac{\tilde h''}{\tilde h'}}^2 \mathrm{d} A (z) + 4 \log \abs{\frac{\tilde f'(0)}{\tilde h'(\infty)}} \\ = & \frac{1}{\pi}\int_{\m D^*} \abs{ \frac{h''}{h'} - \frac{2 h'}{h } + \frac{2}{z} }^2 \mathrm{d} A (z) + \frac{1}{\pi}\int_{\m D} \abs{ \frac{f''}{f'} - \frac{2 f'}{f } + \frac{2}{z} }^2 \mathrm{d} A (z) + 4 \log \abs{\frac{f'(0)}{h'(\infty)}} \\ = & I^L(\gamma) - \frac{4}{\pi}\int_{\m D^*} \Re \left[\frac{h''}{h'} \left(\ad{\frac{h'}{h} - \frac{1}{z}}\right) \right]\mathrm{d} A (z) - \frac{4}{\pi}\int_{\m D} \Re \left[ \frac{f''}{f'}\left( \ad{\frac{f'}{f} - \frac{1}{z}}\right)\right]\mathrm{d} A (z) \\ & + \frac{4}{\pi} \int_{\m D^*} \abs{\frac{h'}{h } -\frac{1}{z} }^2 \mathrm{d} A (z) + \frac{4}{\pi}\int_{\m D} \abs{ \frac{f'}{f } - \frac{1}{z} }^2 \mathrm{d} A (z). \end{align*} Since the Loewner energy of a Jordan curve is M\"obius invariant, we have $I^L(\gamma) = I^L(j (\gamma))$, and we obtain the first equality.
The second equality follows from Lemma~\ref{lem:Grunsky_inequality} since $\gamma$ has Lebesgue measure zero. \end{proof} Combining Theorem~\ref{thm:WP-leaf} and Theorem~\ref{thm:dual_Jordan_curve}, we obtain a new characterization of Weil-Petersson quasicircles. \begin{cor}\label{cor:WP-characterization} A Jordan curve $\gamma$ separating $0$ from $\infty$ is a Weil-Petersson quasicircle if and only if $\gamma$ can be realized as a leaf in the foliation generated by a measure $\rho$ with $S(\rho) < \infty$. \end{cor} \begin{cor}\label{cor:WP-LE-bound} The Loewner energies of the leaves generated by $\rho$ are uniformly bounded by $16\, S(\rho)$. \end{cor} \begin{proof} Using Theorem~\ref{thm:main0} and Theorem~\ref{thm:dual_Jordan_curve}, $$16 \,S(\rho) \ge 16 \,S(\rho^\gamma) = I^L (\gamma_t) - 2 \log |f_t'(0)| + 2 \log |h_t'(\infty)|,$$ where $f_t, h_t$ are the conformal maps associated to $\gamma_t$. On the other hand, Lemma~\ref{lem:Grunsky_inequality} shows that $$ 2 \log |h_t'(\infty)| - 2 \log |f_t'(0)| \ge 0.$$ Thus, $I^L(\gamma_t) \le 16 \, S(\rho) $ for all $t$ as claimed. \end{proof} \begin{rem} Corollary~\ref{cor:WP-LE-bound} provides a way to generate and simulate Weil-Petersson quasicircles of bounded Loewner energy using a measure with controlled Loewner-Kufarev energy. In fact, we obtain infinitely many such quasicircles for any given measure. \end{rem} \begin{rem} The Loewner energy $I^L(\gamma)$ is invariant under M\"obius transformations, and is known to be a K\"ahler potential for the Weil-Petersson metric defined on a subspace $T_0 (1)$ of the universal Teichm\"uller space, see \cite[Thm.\,II.4.1]{TT06}. On the other hand, $\log |f'(0)/h'(\infty)|$ is only invariant under scaling and rotation which is consistent with the fact that the Loewner-Kufarev equation has two marked points $0$ and $\infty$ on the Riemann sphere. 
We also point out that $\log |f'(0)/h'(\infty)|$ is a K\"ahler potential for the Velling-Kirillov metric on the universal Teichm\"uller curve, a complex fiber bundle over $T_0(1)$, see \cite[Thm.\,I.5.3]{TT06}. \end{rem} \subsection{Complex identity: Proof of Proposition~\ref{prop:complex_id}} \label{sec:complex} Recall that $\gamma$ is compatible with $\varphi \in W^{1,2}_{\textrm{loc}}$ if the winding function of $\gamma$ coincides with the trace $\varphi|_{\gamma}$ arclength-a.e. Since $\gamma$ is by assumption compatible with $\Im \psi$, $$\vartheta[f](z) = -\Im \psi^{h}( f(z) ), \quad \forall z \in \m D,$$ where $\Im \psi^h + \Im \psi^0$ is the orthogonal decomposition of $\Im \psi$ with respect to $\m C \smallsetminus \gamma$ as in \eqref{eq:orthogonal}. Hence we can write \begin{align*} \zeta &= \left( \Re \psi \circ f + \log \abs{\frac{f' (z)z}{f(z)}}\right) + i \left( \Im \psi \circ f + \vartheta[f](z) \right) = u + i \Im \psi^0 \circ f; \\ \xi &= v + i \Im \psi^0 \circ h, \end{align*} where $$u : = \Re \psi \circ f + \log \abs{\frac{f' (z)z}{f(z)}}, \quad v : = \Re \psi \circ h + \log \abs{\frac{h' (z)z}{h(z)}}.$$ Notice that $\log |\frac{f' (z)z}{f(z)}|$ is a harmonic conjugate of $\vartheta[f]$, so they have the same Dirichlet energy. Therefore \begin{align*} \mc D_{\m D} (\zeta) + \mc D_{\m D^*} (\xi) & = \mc D_{\m D}(u) + \mc D_{\m D^*}(v) + \mc D_{\m C} (\Im \psi^0) \\ & = \mc D_{\m C} (\Re \psi) + \mc D_{\m C} (\Im \psi^{h}) + \text{``cross-terms''} + \mc D_{\m C} (\Im \psi^0) \\ & = \mc D_{\m C} (\Re \psi) +\mc D_{\m C} (\Im \psi) + \text{``cross-terms''} \end{align*} where the ``cross-terms'' come from expanding the Dirichlet integrals of $u$ and $v$. In fact, they equal $2$ times \begin{equation}\label{eq:cross-terms} \int_{\m D} \brac{\nabla \Re \psi \circ f, \nabla \log \abs{\frac{f' (z)z}{f(z)}}} \mathrm{d} A(z) + \int_{\m D^*} \brac{\nabla \Re \psi \circ h, \nabla \log \abs{\frac{h' (z)z}{h(z)}}} \mathrm{d} A(z).
\end{equation} It suffices to show that \eqref{eq:cross-terms} vanishes. We will only prove it assuming that $\gamma$ is smooth. The general case of a Weil-Petersson quasicircle can be deduced using an approximation argument following exactly the same proof as \cite[Thm.\,3.1]{VW1}. From Stokes' formula, the first term in \eqref{eq:cross-terms} equals \begin{align*} & \int_{\partial \m D} \Re \psi \circ f (z) \partial_n \log \abs{\frac{f' (z)z}{f(z)}} |\mathrm{d} z| \\ & = \int_{\partial \m D} \Re \psi \circ f (z) \partial_n \log |f' (z)| |\mathrm{d} z| + \Re \left[\int_{\partial \m D} \Re \psi \circ f (z) \partial_n \left(\log \frac{z}{f(z)} \right)|\mathrm{d} z|\right] \\ & = :I_1 + I_2 \end{align*} where $\partial_n$ is the normal derivative in the outward pointing direction. Using the formula $\partial_n \log |f'(z)| = k_{\Omega}\circ f(z)|f'(z)| - k_{\m D} (z) = k_{\Omega}\circ f(z)|f'(z)| - 1 $, where $k_{\Omega}$ is the geodesic curvature of $\partial \Omega$ (see, e.g., \cite[Appx.\,A]{W2}), we obtain \begin{align*} I_1 &= \int_{\partial \Omega} \Re \psi (w) k_{\Omega}(w) |\mathrm{d} w| - \int_{\partial \m D} \Re \psi \circ f (z)|\mathrm{d} z|. \end{align*} Using $z |\mathrm{d} z| = -i \mathrm{d} z$, we have \begin{align*} I_2 &= \Re \int_{\partial \m D} \Re \psi \circ f (z) z \left( - \frac{f'(z)}{f(z)} + \frac{1}{z} \right)|\mathrm{d} z| \\ &= \Re \int_{\partial \m D} i \Re \psi \circ f (z) \frac{f'(z)}{f(z)} \mathrm{d} z +\int_{\partial \m D} \Re \psi \circ f (z) |\mathrm{d} z| \\ & = \Re \int_{\partial \Omega} \frac{i \Re \psi (w)}{w} \mathrm{d} w +\int_{\partial \m D} \Re \psi \circ f (z) |\mathrm{d} z|. \end{align*} The sum of $I_1$, $I_2$ and those integrals coming from the second term of \eqref{eq:cross-terms} vanishes since $k_\Omega (y) = - k_{\Omega^*} (y)$ and the contour integral in $I_2$ winds in the opposite direction for $\Omega$ and $\Omega^*$. This concludes the proof. 
\qed \section{Conformal distortion formula}\label{subsec:variation_LK} The goal of this section is to compute an explicit formula for the change of the Loewner-Kufarev energy under conformal transformation of the foliation and prove Theorem~\ref{thm:conformal-distortion}, which combines Proposition~\ref{prop:conformal-distortion} and Corollary~\ref{cor:E_rho_variation}. We consider the following setup. Let $\rho \in \mc N_+$ be such that $$S_{[0,1]} (\rho) = \int_0^1 L(\rho_t)\,\mathrm{d} t < \infty.$$ We write as before $\rho_t = \nu_t^2 (\theta ) \mathrm{d} \theta$. Suppose $\psi$ is a conformal transformation that maps $K_1$ onto another compact hull $\psi(K_1)=\tilde{K}_1 \subset \ad{\m D}$, and is defined on a neighborhood $U$ of $K_1$ in $\ad {\m D}$ which is mapped onto a neighborhood $\tilde U$ of $\tilde K_1$ in $\ad {\m D}$. Note that we always have $S^1 \subset K_1$ by definition. See Figure~\ref{fig:distortion}. The image family of hulls $(\tilde K_t := \psi (K_t))$ is driven by a measure $t\mapsto \tilde \rho_t$, where $\tilde \rho_t$ is not necessarily a probability measure. Indeed, $\tilde D_t := \m C \smallsetminus \tilde K_t$ may not have conformal radius $e^{-t}$, so the new Loewner chain is not necessarily normalized. \begin{figure} \centering \includegraphics[scale=0.6]{distortion.pdf} \caption{Conformal distortion of a foliation. The mapping $\psi$ is conformal in a neighborhood of the hull $K$ generated by $\rho$, which is mapped to another hull $\tilde K$, generated by a measure $\tilde \rho$. Proposition~\ref{prop:conformal-distortion} computes the difference of the Loewner-Kufarev energies of $\rho$ and $\tilde \rho$.} \label{fig:distortion} \end{figure} Let $(g_t = f_t^{-1})_{t \le 1}$ and $(\tilde g_t = \tilde f_t^{-1})_{t \le 1}$ be the corresponding uniformizing Loewner chains. Let $\psi_t : = \tilde g_t \circ \psi \circ f_t$.
Then $\psi_t$ is a conformal map of $U_t : = g_t (U \smallsetminus K_t)$ onto $\tilde U_t : = \tilde g_t (\tilde U \smallsetminus \tilde K_t)$. By the Schwarz reflection principle, $\psi_t$ extends to a holomorphic function in a neighborhood of $S^1$. For the next statement, recall that $\mc S f $ denotes the Schwarzian of $f$. \begin{lemma}\label{lem:nu_transform} For a.e. $t \in [0,1]$, $\tilde \rho_t \ll \mathrm{d} \theta$. If we write $\tilde \rho_t = \tilde \nu_t^2(\theta) \mathrm{d} \theta$, then \begin{equation}\label{eq:rho_tilde_rho} \tilde \nu_t^2 (\theta_t) = |\psi_t'(e^{i\theta})| \nu_t^2(\theta), \end{equation} where $\theta_t \in [0,2\pi]_{/0\sim 2\pi}$ satisfies $e^{i \theta_t} = \psi_t (e^{i\theta})$. \end{lemma} \begin{proof} Let $\tilde H_t$ be the Herglotz integral of $\tilde \rho_t$. Let $0<r < 1$ and consider for $t \in [0,1]$ the curve $\gamma_t^r = f_t (r S^1)$ passing through $f_t(re^{i\theta})=:w$. We can compute the normal velocity of $\psi \circ \gamma_t^r$ at $\psi(w)$ in two ways. First starting from the velocity at $w$ using that $\psi$ is conformal: this gives $r|\psi'(w)||f'_t(re^{i\theta})| \mathrm{Re} \, H_t(re^{i\theta})$; and second, directly from the Loewner-Kufarev equation driven by $\tilde \rho$, which gives $$|\tilde f'_t(\psi_t(re^{i\theta}))|\mathrm{Re} \,\overline{e^{i\theta}\psi_t'(re^{i\theta})}|\psi'_t(re^{i\theta})|^{-1}\psi_t(re^{i\theta}) \tilde H_t(\psi_t(re^{i\theta})).$$ Indeed, to see why these formulas hold, first note that a unit normal at $w$ is \[ \mathbf{n}(\gamma_t^r(e^{i\theta}))= -e^{i\theta} \frac{f'_t(re^{i\theta})}{|f'_t(re^{i\theta})|}, \quad \theta \in [0, 2\pi). 
\] Using the Loewner equation, we see that the normal velocity with respect to the curve $\gamma_t^r(\theta)$ at time $t$ at the point $w$ is \begin{align*} - \mathrm{Re} \frac{\partial_t f_t(re^{i\theta}) \overline{e^{i\theta} f'_t(re^{i\theta})}}{|f'_t(re^{i\theta})|} & = \mathrm{Re} \frac{re^{i\theta} f_t'(re^{i\theta}) H_t(re^{i\theta}) \overline{e^{i\theta} f'_t(re^{i\theta})}}{|f'_t(re^{i\theta})|} \\ & = r|f'_t(re^{i\theta})| \mathrm{Re} \, H_t(re^{i\theta}). \end{align*} Next, by definition $\psi_t = \tilde g_t \circ \psi \circ f_t$, so $\tilde{f}_t \circ \psi_t = \psi \circ f_t$. Since $w = f_t(re^{i\theta})$ we have $\psi'(w) f'_t(re^{i\theta}) = \tilde{f}_t'(\psi_t(r e^{i\theta})) \psi_t'(re^{i\theta})$ and the normal velocity of the image curve at $\psi(w)$ is \begin{align*} - \Re & \frac{(\partial_t \tilde{f}_t)(\psi_t(re^{i\theta})) \overline{e^{i\theta} \psi'(w) f'_t(re^{i\theta}) }}{|\psi'(w)||f'_t(re^{i\theta})| } \\ & = \Re \frac{ \psi_t(re^{i\theta}) \tilde{f}_t'(\psi_t(re^{i\theta})) \tilde H_t(\psi_t(re^{i\theta})) \overline{e^{i\theta} \psi'(w) f'_t(re^{i\theta}) }}{|\psi'(w)||f'_t(re^{i\theta})| } \\ & = \Re \frac{ \psi_t(re^{i\theta}) \tilde{f}_t'(\psi_t(re^{i\theta})) \tilde H_t(\psi_t(re^{i\theta})) \overline{e^{i\theta} \tilde{f}_t'(\psi_t(r e^{i\theta})) \psi_t'(re^{i\theta}) }}{|\tilde{f}_t'(\psi_t(r e^{i\theta})) \psi_t'(re^{i\theta})| } \\ & = |\tilde{f}_t'(\psi_t(re^{i\theta})) |\Re \frac{ \psi_t(re^{i\theta}) \tilde H_t(\psi_t(re^{i\theta})) \overline{e^{i\theta} \psi_t'(re^{i\theta}) }}{|\psi_t'(re^{i\theta})| } \end{align*} We get \[r|\psi'(w)||f'_t(re^{i\theta})| \mathrm{Re} \, H_t(re^{i\theta}) =|\tilde f'_t(\psi_t(re^{i\theta}))|\mathrm{Re} \left( \,\frac{\overline{e^{i\theta}\psi_t'(re^{i\theta})}}{|\psi'_t(re^{i\theta})|}\psi_t(re^{i\theta}) \tilde H_t(\psi_t(re^{i\theta})) \right).\] Note that $H_t$ is continuous on $\overline{\m D}$ and $\psi_t$ extends to be holomorphic on a neighborhood of $S^1$. 
Moreover, as $r \to 1-$, $\overline{e^{i\theta}\psi_t'(re^{i\theta})}\psi_t(re^{i\theta}) /|\psi'_t(re^{i\theta})|\to 1$. Since $\Re \tilde H_t = 2\pi P_{\m D}[\tilde \rho_t]$, we obtain that $\tilde \rho_t \ll \mathrm{d} \theta$ and \eqref{eq:rho_tilde_rho} by letting $r \to 1-$ and using the definition $\psi_t = (\tilde f_t)^{-1} \circ \psi \circ f_t$ and the chain rule. \end{proof} \begin{prop}\label{prop:conformal-distortion} We have $$L(\tilde \rho_t) - L(\rho_t) = \frac{1}{4} \int_{S^1} e^{2i\theta} \mc S \psi_t (e^{i\theta}) \mathrm{d} \rho_t (\theta) + \frac{1}{8}\left( |\tilde \rho_t| - | \rho_t| \right).$$ \end{prop} Note that by conjugating $\psi_t$ by a M\"obius transformation mapping $\m D$ to $\m H$, one sees that $e^{2i\theta} \mc S \psi_t (e^{i\theta}) \in \m R$. \begin{proof} We use the same notation as in Lemma~\ref{lem:nu_transform}. Differentiating \eqref{eq:rho_tilde_rho} with respect to $\theta$, we obtain using $\partial \theta_t /\partial \theta = |\psi_t'(e^{i\theta})|$ that $$\tilde \nu_t'(\theta_t) \sqrt {\abs{\psi_t'(e^{i\theta})}} = \nu_t'(\theta) + \frac{\partial_\theta \abs{\psi_t'(e^{i\theta})}}{2\abs{\psi_t'(e^{i\theta})}} \nu_t (\theta).$$ Plugging this into the expression for $L (\tilde \rho_t)$, we get \begin{align*} L (\tilde \rho_t) & = \frac{1}{2} \int_{S^1} \tilde \nu_t'(\theta)^2 \,\mathrm{d} \theta = \frac{1}{2} \int_{S^1} \tilde \nu_t' (\psi_t (e^{i\theta}))^2 \abs{\psi_t'(e^{i\theta})} \, \mathrm{d} \theta \\ & = \frac{1}{2} \int_{S^1} \left(\nu_t' + \frac{\partial_\theta\abs{\psi_t'}}{2\abs{\psi_t'}} \nu_t \right)^2 \mathrm{d} \theta \\ & = L (\rho_t) + \frac{1}{2}\int_{S^1} \frac{\partial_\theta |\psi_t'| }{|\psi_t'|} \nu_t' \, \nu_t \, \mathrm{d}\theta + \frac{1}{8} \int_{S^1} \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2 \nu_t^2 \, \mathrm{d} \theta. 
\end{align*} Integrating \begin{align*} \partial_\theta \left[\nu_t^2 \frac{\partial_\theta |\psi_t'|}{ |\psi_t'|}\right] & = 2 \nu_t' \,\nu_t \frac{\partial_\theta |\psi_t'|}{ |\psi_t'|} + \nu_t^2 \left[ \frac{\partial_\theta^2 |\psi_t'|}{ |\psi_t'|} - \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2\right], \end{align*} over $S^1$ against $\mathrm{d} \theta$ gives $0$. It follows that $L(\tilde \rho_t) - L(\rho_t)$ equals \begin{align*} \frac{1}{2}\int_{S^1} \frac{\partial_\theta |\psi_t'| }{|\psi_t'|} \nu_t' \, \nu_t \, \mathrm{d}\theta + \frac{1}{8} \int_{S^1} \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2 \nu_t^2 \, \mathrm{d} \theta & = - \frac{1}{4}\int_{S^1} \nu_t^2\left[\frac{\partial_\theta^2 |\psi_t'|}{|\psi_t'|} - \frac{3}{2} \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2 \right] \, \mathrm{d}\theta. \end{align*} Using $|\psi_t' (z)| = \psi_t ' (z) z /\psi_t(z)$ and $\partial_\theta = iz \partial_z$, we compute \begin{align*} \frac{\partial_\theta |\psi_t' (z)|}{ |\psi_t' (z)| } & = iz \partial_z \log |\psi_t' (z)| = iz \left(\frac{\psi_t''}{\psi_t'} - \frac{\psi_t'}{\psi_t} + \frac{1}{z}\right) \end{align*} and \begin{align*} -\frac{1}{2} \left( \frac{\partial_\theta |\psi_t' (z)|}{ |\psi_t' (z)| }\right)^2 &= \frac{z^2}{2} \left[ \left(\frac{\psi_t''}{\psi_t'}\right)^2 + \left(\frac{\psi_t'}{\psi_t}\right)^2 + \frac{1}{z^2} - 2 \frac{\psi_t''}{\psi_t} - 2 \frac{\psi_t'}{z\psi_t} + 2 \frac{\psi_t''}{z\psi_t'} \right] \\ & = \frac{z^2}{2} \left(\frac{\psi_t''}{\psi_t'}\right)^2 + \frac{z^2}{2}\left(\frac{\psi_t'}{\psi_t}\right)^2 + \frac{1}{2} - z^2 \frac{\psi_t''}{\psi_t} - \frac{z \psi_t'}{\psi_t} + \frac{z\psi_t''}{\psi_t'}. 
\end{align*} Moreover, \begin{align*} \partial_\theta \left(\frac{\partial_\theta |\psi_t' (z)|}{ |\psi_t' (z)| } \right) & = iz \left[ i \left(\frac{\psi_t''}{\psi_t'} - \frac{\psi_t'}{\psi_t} + \frac{1}{z}\right) + iz \left(\left(\frac{\psi_t''}{\psi_t'}\right)' - \left(\frac{\psi_t'}{\psi_t}\right)' - \frac{1}{z^2}\right) \right] \\ & = - z^2 \left(\frac{\psi_t''}{\psi_t'}\right)' - z \left(\frac{\psi_t''}{\psi_t'} - \frac{\psi_t'}{\psi_t} + \frac{1}{z}\right) + z^2 \left(\frac{\psi_t''}{\psi_t} - \frac{(\psi_t')^2}{\psi_t^2}\right) + 1. \end{align*} We obtain \begin{align*} \frac{\partial_\theta^2 |\psi_t'|}{|\psi_t'|} - \frac{3}{2} \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2 & = \partial_\theta \left(\frac{\partial_\theta |\psi_t' |}{ |\psi_t' | } \right) -\frac{1}{2} \left( \frac{\partial_\theta |\psi_t'|}{ |\psi_t'| }\right)^2\\ & = - z^2 \mc S \psi_t - \frac{z^2}{2}\left(\frac{\psi_t'}{\psi_t}\right)^2 +\frac{1}{2} = -z^2 \mc S \psi_t +\frac{1 - |\psi_t'|^2}{2}. \end{align*} Combining these computations, we get \begin{align*} L(\tilde \rho_t) - L(\rho_t) & = - \frac{1}{4} \int_{S^1} \nu_t^2 (\theta)\left[\frac{\partial_\theta^2 |\psi_t'|}{|\psi_t'|} - \frac{3}{2} \left(\frac{\partial_\theta |\psi_t'|}{|\psi_t'|}\right)^2 \right]\mathrm{d}\theta \\ & = \frac{1}{4} \int_{S^1} \nu_t^2 (\theta)\left[e^{2i\theta} \mc S \psi_t +\frac{|\psi_t'|^2-1}{2} \right] \,\mathrm{d}\theta \\ & = \frac{1}{4} \int_{S^1} \nu_t^2 (\theta) e^{2i\theta} \mc S \psi_t (e^{i\theta}) \,\mathrm{d}\theta + \frac{1}{8}\left( \int_{S^1} \tilde \nu_t^2 (\theta) \,\mathrm{d}\theta - \int_{S^1} \nu_t^2 (\theta) \,\mathrm{d}\theta \right), \end{align*} where we used \eqref{eq:rho_tilde_rho} in the last equality and $$\int_{S^1} \nu_t^2 (\theta) |\psi_t'|^2 \,\mathrm{d}\theta = \int_{S^1} \tilde \nu_t ^2 (\theta_t) \, |\psi_t'| \,\mathrm{d}\theta = \int_{S^1} \tilde \nu_t ^2 (\theta) \,\mathrm{d}\theta $$ which completes the proof. 
\end{proof} \begin{cor} \label{cor:E_rho_variation} We have \begin{equation}\label{eq:integrated_distortion} S_{[0,1]} (\tilde \rho) - S_{[0,1]} (\rho) = \frac{1}{4} \int_0^1 \int_{S^1} e^{2i\theta} \mc S \psi_t (e^{i\theta}) \,\mathrm{d} \rho_t (\theta)\, \mathrm{d} t + \frac{1}{8}\left( \log \tilde g_1'(0) - 1 \right). \end{equation} \end{cor} \begin{proof} The formula follows by integrating the formula of Proposition~\ref{prop:conformal-distortion} over $t \in [0,1]$, using $\int_0^1 |\rho_t|\,\mathrm{d} t = 1$ and $\int_0^1 |\tilde \rho_t|\,\mathrm{d} t = \log \tilde g_1 '(0)$. \end{proof} \begin{rem} The Brownian loop measure is a conformally invariant sigma-finite measure on Brownian loops in the plane \cite{LSW_CR_chordal, LW2004loopsoup}. The conformal distortion formula of Theorem~\ref{thm:conformal-distortion} can be interpreted in terms of Brownian loop measures: namely, $S_+(\rho) - S_+(\tilde \rho)$ also equals the difference between the Brownian loop measure of those loops in $\m D$ that intersect both $K_1$ and $\m D \smallsetminus U$ and the measure of those loops in $\m D$ that intersect both $\tilde K_1$ and $\m D\smallsetminus \tilde U$. We will prove this in a forthcoming paper with Lawler. For now we just remark that this interpretation immediately implies that the energy difference depends only on the hulls at time $1$ and not on the foliation, a fact that is not immediately apparent from the distortion formula \eqref{eq:integrated_distortion}. \end{rem} \section{Further comments and open problems} \label{sec:further} We will now indicate further implications and interpretations as well as open problems suggested by our results. \vspace{10pt} \textbf{Random conformal geometry and mating-of-trees} We begin by discussing connections between the results in this paper and ideas in random conformal geometry, see in particular \cite{IG4, MoT}, which served as inspiration for the formulation of our main result, Theorem~\ref{thm:main0}.
We will also speculate on how to formulate stochastic versions of some of our results, but we do not make rigorous statements here. Let $(B_t)_{t\in \m R}$ be the rotation-invariant two-sided Brownian motion on $S^1$ and suppose $\kappa \ge 0$. Whole-plane SLE$_\kappa$ is the random (whole-plane) Loewner chain generated by the measure $\rho(\mathrm{d} \theta \mathrm{d} t) = \delta_{B_{\kappa t}} (\theta) \mathrm{d} t \in \mc N$, where $\delta_{B_{\kappa t}}$ is the Dirac mass at $B_{\kappa t}$ and $\mathrm{d} t$ is Lebesgue measure on $\m R$. When $\kappa \ge 8$, the hull is generated by a space-filling curve growing in $\m C$ from $\infty$ towards $0$, see \cite{Rohde_Schramm}. The Loewner-Kufarev energy is infinite in this case, since a Dirac mass is not an absolutely continuous measure. However, an easy generalization of the main result of \cite{APW} shows that the Loewner-Kufarev energy is the large deviation rate function of whole-plane SLE$_\infty$. Roughly speaking, as $\kappa \to \infty$, \begin{align}\label{eq:heurstic_infty} \mathbb{P} \big\{\text{The complements of a whole-plane } \operatorname{SLE}_{\kappa} & \text{ process stay close to } (D_t)_{t\in \m R}\big\} \nonumber \\ & \approx \exp \left(- \kappa S(\rho)\right) \end{align} where $(D_t)_{t \in \m R}$ is the family of domains of any deterministic whole-plane Loewner chain with driving measure $\rho$. See \cite{APW} for a precise statement in the unit disk setup. On the other hand, the Loewner energy $I^L$ is expressed in terms of Dirichlet energies \eqref{eq:loop_LE_def} and is believed to be the large deviation rate function of the $\operatorname{SLE}_{0+}$ loop: as $\kappa \to 0\!+\!$, \begin{equation} \label{eq:heuristic_0} \mathbb{P} \left\{ \operatorname{SLE}_\kappa \text{ loop stays close to } \gamma \right\} \approx \exp \left(- I^L(\gamma)/\kappa\right) \end{equation} where $\gamma$ is a given deterministic Jordan curve.
See \cite{W1,peltola_wang} for precise statements in chordal settings. Furthermore, it is well-known that the Dirichlet energy is the large deviation rate function for the Gaussian free field, see \cite[3.4.12]{Deuschel-strook}. SLE processes enjoy a duality property with respect to replacing $\kappa$ by $16/\kappa$ \cite{Dub_duality,Zhan_duality,IG1}. Roughly speaking, an $\operatorname{SLE}_{\kappa}$ curve describes locally the outer boundary of an $\operatorname{SLE}_{16/\kappa}$ hull when $\kappa < 4$. The mating-of-trees theorem \cite{MoT} further explores this duality and the interplay with an underlying Liouville quantum gravity field. (See also, e.g., \cite{ang2020conformal} and the references therein for some recent progress in this direction.) An impressionistic picture is as follows. A pair of ``mated'' space-filling trees whose branches are formed by $\operatorname{SLE}_{\kappa}$-like curves is constructed as flowlines of a Gaussian field, and a coupled whole-plane chordal space-filling SLE$_{16/\kappa}$ curve from $\infty$ to $\infty$ traces the interface between the pair of trees. One can speculate that the union of the pair of trees should degenerate to a foliation as $\kappa \to 0\!+$ (and $16/\kappa \to +\infty$), and the mating-of-trees coupling suggests that the large deviation rate functions of the coupled processes should match in this limit. Combining this with the heuristic formulas \eqref{eq:heurstic_infty} and \eqref{eq:heuristic_0} led us to guess Theorem~\ref{thm:main0}, where the factor $16$ is consistent with the SLE $\kappa \leftrightarrow 16/\kappa$ duality. However, the setup of \cite{MoT} uses whole-plane chordal Loewner evolution, so the space-filling SLE there runs from $\infty$ to $\infty$, whereas the whole-plane radial SLE runs from $\infty$ to $0$.
A coupling of radial SLE with the Gaussian free field is described in \cite{IG4}, but we are not aware of results similar to the mating-of-trees theorem in the current literature for the setting we work in. Our results, in particular Theorem~\ref{thm:WP-leaf}, Theorem~\ref{thm:main0}, and Proposition~\ref{prop:complex_id}, provide analytical evidence for a ``radial mating-of-trees'' theorem. Using the dictionary we outlined in \cite[Sec.\,1.3 and 3.4]{VW1}, one can speculate that the following statements should hold: For small $\kappa > 0$, run a space-filling whole-plane SLE$_{16/\kappa}$ on an appropriate ``quantum sphere'' assumed to be independent of the SLE. (A quantum sphere can be described by a Gaussian free field with additional logarithmic singularities and an attached transformation law.) If the SLE process is run up to time $t$, the unvisited part and visited part of the quantum sphere form two independent ``quantum disks'' (which are also defined starting from a Gaussian free field) conformally welded along the frontier of the whole-plane SLE$_{16/\kappa}$, which itself is an SLE$_\kappa$-type loop. The two ``quantum disks'' are each decorated with a radial SLE$_{16/\kappa}$ curve. The two SLE$_{16/\kappa}$ curves are independent conditionally on the position of the tip at time $t$. Varying $t$, we expect the separating SLE$_\kappa$-type loops to form a (fractal) foliation-like family that sweeps out the twice-punctured sphere. This foliation-like process also encodes the whole-plane SLE$_{16/\kappa}$ evolution. The real part of the complex field in Proposition~\ref{prop:complex_id} reflects the metric/measure structure of the quantum sphere, whereas the imaginary part encodes the fractal foliation-like process, hence the trajectory of the space-filling SLE. \vspace{10pt} \textbf{Whole-plane radial SLE reversibility} Let us next comment on the reversibility of the Loewner-Kufarev energy, Theorem~\ref{thm:main_rev}.
An analogous result about the reversibility of the Loewner energy can be explained (and proved) by SLE$_{0+}$ large deviations considerations combined with the fact that chordal SLE is reversible \cite{Zhan_rev} for small $\kappa$, see \cite{W1}. However, it is not known whether whole-plane SLE$_\kappa$ for $\kappa > 8$ is reversible. (For $\kappa \le 8$, reversibility was established in \cite{zhan_rev_whole,IG4}.) Therefore, Theorem~\ref{thm:main_rev} cannot be predicted from the SLE point of view given currently known results, but it does on the other hand suggest that reversibility might hold for large $\kappa$ as well. \vspace{10pt} \textbf{Whole-plane chordal Loewner-Kufarev energy} It is natural to ask about a version of our results in chordal settings. The most natural one is the whole-plane chordal version, where the family of curves all pass through $\infty$ and foliate the plane $\m C$ as $t$ ranges from $-\infty$ to $\infty$, as in the mating-of-trees theorem. When $\kappa \to \infty$, the whole-plane chordal SLE$_\kappa$ Loewner chain converges to the constant identity map for all time $t$. Therefore, renormalization is needed to obtain both a non-trivial limit and a meaningful large deviation result. (This is one reason to work in the radial setup here as well as in \cite{APW}.) One way to proceed is to conformally map the two punctures ($0$ and $\infty$) in our whole-plane (radial) setup to $y$ and $\infty$, and then let $y \to \infty$. The third complex degree of freedom (ranging in a non-compact space) in the choice of conformal automorphism of the Riemann sphere needs to be chosen carefully to obtain a clean statement.
\vspace{10pt} \textbf{Foliation loops in Weil-Petersson Teichm\"uller space} Recall that any quasicircle $\gamma$ separating $0$ from $\infty$ can be identified with an element in universal Teichm\"uller space $T(1) \simeq \operatorname{M\"ob}(S^1)\backslash \operatorname{QS}(S^1)$ via (the equivalence class of) its welding homeomorphism $\phi_\gamma = h^{-1} \circ f|_{S^1}$. The subspace $T_0(1)$ corresponding to Weil-Petersson quasicircles can be endowed with an infinite-dimensional complex Hilbert manifold structure equipped with the Weil-Petersson metric, see \cite{TT06}. Theorem~\ref{thm:WP-leaf} shows that any $\rho$ with $S(\rho) < \infty$ generates a foliation $(\gamma_t)_{t \in \m R}$ of Weil-Petersson quasicircles which can be considered as elements of $T_0(1)$ via their welding homeomorphisms. So the Loewner evolution of $\rho$ generates a dynamical process on $T_0(1)$. Theorem~\ref{thm:main-jordan-curve} shows that there exists such a family (obtained by interpolating by equipotentials) passing through any given element of $T_0(1)$. It is not too hard to show that it corresponds to a continuous loop $t \mapsto [\phi_{\gamma_t}]$ in $T_0(1)$ starting and ending at the origin $[\operatorname{Id}_{S^1}]$. We believe the class of loops in $T_0(1)$ coming from measures with $S(\rho)<\infty$ may be of interest to study. For instance, an interesting question concerns how the length of a loop is related to $S(\rho)$. The example in Section~\ref{sect:examples} shows that one can have $S(\rho) > 0$ while the corresponding loop is trivial, so one can only hope for an upper bound in terms of $S(\rho)$. One can further ask for properties of the minimal energy (equipotential) path to a given element. Another question concerns how transformations on $\rho$ affect a path in $T_0(1)$, and vice versa. We are currently investigating these and related questions.
\vspace{10pt} \textbf{Unitarizing measures on Homeo$(S^1)$} Evolutions of random homeomorphisms of $S^1$ have been studied by Airault, Malliavin and collaborators as part of a program to find certain unitarizing probability measures on the group of homeomorphisms of the unit circle that provide a unitary representation of the Virasoro algebra, see, e.g., \cite{AM,AMT1,AMT2}. See also \cite{AJKS} for an analytic treatment of the conformal welding problem in a similar context involving rough random homeomorphisms. It is shown in \cite{AM} that the Berezin quantization on the homogeneous space $T_0 (1)$ can be carried out given the existence of a unitarizing probability measure satisfying an integration by parts formula. In this quantization scheme, the Hilbert space consists of holomorphic square-integrable sections over the trivial line bundle with respect to the measure and the quantized operators act unitarily on this Hilbert space. Kontsevich and Suhov alluded to potential links between SLEs and unitarizing measures in \cite[Sec. 2.5.2]{Kontsevich_SLE}. This link is supported by the fact that the Loewner energy is the K\"ahler potential (which plays a central role in the Berezin quantization) on $T_0 (1)$ and by recent developments establishing the connection between the Loewner energy and SLEs \cite{W2}. The results in the present paper may provide another angle of attack on this circle of problems from a dynamical point of view. \vspace{10pt} \textbf{``Foliations'' in hyperbolic $3$-space} Since M\"obius transformations of $\hat{\m{C}}$ extend to isometries of the hyperbolic $3$-space $\m H^3$ (whose boundary at $\infty$ is identified with the Riemann sphere $\hat{\m{C}}$) and being a Weil-Petersson quasicircle is a M\"obius invariant property, it is natural to try to relate our foliations by Weil-Petersson quasicircles to objects in $\m H^3$.
In \cite{Anderson}, it is shown that every Jordan curve bounds at least one minimal disk in $\m H^3$, and \cite{bishop-WP} shows that a Jordan curve is a Weil-Petersson quasicircle if and only if any such minimal disk in $\m H^3$ has finite total curvature. For example, when $\gamma$ is a circle, the unique minimal surface is the totally geodesic surface, namely the hemisphere bounded by $\gamma$. Although the minimal disk for a given boundary curve may not be unique in general, \cite[Thm.\,B]{Seppi} and a bound on the quasiconformal constant of the Weil-Petersson quasicircle imply uniqueness when the Loewner energy of $\gamma$ is small enough. Hence, for small enough $S(\rho)$, Theorem~\ref{thm:WP-leaf} and Corollary~\ref{cor:WP-LE-bound} imply that the foliation $(\gamma_t)_{t \in \m R}$ uniquely determines a family of minimal disks of finite total curvature $(\Sigma_t)_{t \in \m R}$ in $\m H^3$, where $\Sigma_t$ is bounded by the leaf $\gamma_t$. We believe that the family $(\Sigma_t)_{t \in \m R}$ forms a smooth foliation of $\m H^3$ in this case. Such families of minimal surfaces seem interesting to study in their own right and, by embedding into a dynamical family, could be useful in the analysis of minimal surfaces in $\m H^3$ bounded by Weil-Petersson quasicircles and in deriving a rigorous AdS$_3/$CFT$_2$ holographic principle. \bibliographystyle{abbrv}
\section{Introduction} Supernova 2011fe was discovered soon after explosion in the nearby galaxy M101 (NGC~5457) by the Palomar Transient Factory \citep{rau09,law09} on 2011 August 24.167 UT (all calendar dates herein are UT) and rapidly classified as a supernova of Type Ia (SN Ia) \citep[][identified initially as PTF11kly]{nugent11a, nugent11b}. This was the brightest SN Ia since SN~1972E in NGC~5253 \citep[see, e.g.,][]{kirshner73}, although SN~1986G remains the nearest SN~Ia \citep[e.g.,][]{phillips87}. The combination of proximity, early discovery, and modern observing resources makes this SN a rare gift that can be studied in unprecedented detail. Over the last two decades, the reliability of SNe~Ia as calibratable standard candles has been securely established. Specifically, the regression of peak brightness in optical magnitudes (corrected for reddening and \emph{a priori} second parameter effects, usually light-curve shape) against recession velocity $v_r$ (the Hubble diagram) in the range $1200\ \rm km \ s^{-1} \leq v_{r} \leq 30,000\ \rm km \ s^{-1} $ has been shown to be almost linear and to have a scatter of only 0.13 mag rms \citep[e.g.,][]{hamuy96,riess96, parodi00, guy05, jha07,conley08,hicken09,kessler09, folatelli10,burns11, mandel09, mandel11}. SNe~Ia are thus largely free of any Malmquist bias, and by virtue of their brightness, are cosmological probes at distances where peculiar motions of galaxies are insignificant compared to the expansion velocity of the cosmic manifold. Their consequent use as probes of cosmic acceleration is now well known \citep[e.g.,][]{riess98,perlmutter99,astier06,woodvasey07,sullivan11}. Photometry of SNe~Ia in the near infra-red (NIR) has shown even greater promise for use as a standard candle (for a recent review of the NIR properties of SNe~Ia, see \citet{phillips11}). 
The effects of extinction are greatly reduced in the NIR, and SNe~Ia appear to have relatively constant peak magnitudes in $J, H,$ and $K_s$ \citep{meikle00, krisciunas04a, krisciunas04b, krisciunas07}. The scatter in the NIR Hubble diagram is $\sim 0.15$ mag \emph{without} the light-curve shape corrections necessary for optical bands \citep{krisciunas04a, woodvasey08, folatelli10}. Motivated by this, we began a Director's discretionary time program to observe SN~2011fe with the WIYN High-Resolution Infrared Camera (WHIRC) at the WIYN 3.5-m telescope\footnote{The WIYN Observatory is a joint facility of the University of Wisconsin, Indiana University, Yale University, and the National Optical Astronomy Observatory.} on Kitt Peak, taking advantage of the instrumentation deployment that allows for the use of the NIR camera even when other instruments are scheduled for a given night. \section{Observations and Data Reduction} We imaged the SN on 34 nights between 2011 Aug 27 and 2011 Oct 26 with WHIRC. The WHIRC camera \citep{meixner10} contains a 2048$^2$ HgCdTe array with a field of view of 3\farcm3 $\times$ 3\farcm3 and a pixel scale of $\sim$0.1\arcsec\ per pixel. We obtained observations in the $H$ band during each visit and observations in the $J$ and $K_s$ bands for most of the visits (see Table \ref{irphot} for details). On photometric nights, a standard star at similar air mass \citep[P133C,][]{persson98} was observed in the same filters. A typical observing sequence consisted of 20 to 30 second exposures in a 5-point cross-shaped dither pattern with $\sim$20\arcsec\ dithers. Most nights, this sequence was executed twice in the $H$ band, with a 5\arcsec\ random offset of the telescope between maps.
Data were reduced in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} as prescribed in the WHIRC Reduction Manual \citep{joyce09}. The raw images were corrected for non-linearity and sky subtracted using a median-filtered sky frame obtained from each 5-point map. The images were flat-fielded with dome flats corrected for the pupil ghost (an inherent feature of WHIRC images resulting from internal reflections from the optical elements) using the IRAF routine mscred.rmpupil. Aperture photometry was performed on each sky-subtracted, flat-fielded image using typical apertures of 3\arcsec\ diameter and a surrounding sky annulus of 0.5\arcsec\ width. On nights where the seeing FWHM was greater than 1.0\arcsec, a 4 or 5\arcsec\ aperture diameter was used. We did not have a template image to subtract the host galaxy light as is commonly done for SNe. The host background is relatively smooth in the region chosen for the sky. In addition, inspection of Two Micron All Sky Survey \citep[2MASS;][]{skrutskie06} images of the region reveals no point source at the location of SN~2011fe to 3$\sigma$ limits of 17.5, 17.4, and 16.7 mag in $J, H,$ and $K_s$, respectively, well below the brightness of the SN itself. All photometry of SN~2011fe was calibrated relative to a local calibrator, the nearby 2MASS source 14031367+5415431. This star lies approximately 80\arcsec\ SW of SN~2011fe, within the same WHIRC field, so the relative photometry should be independent of atmospheric transparency or extinction.
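Since the photometry is tied to a calibrator in the same field, each SN magnitude follows from a flux ratio via the Pogson relation. A minimal sketch of this differential step (the function name and the flux values are illustrative, not our reduction pipeline):

```python
import math

def differential_mag(f_target, f_cal, m_cal):
    """Magnitude of a target from the ratio of its flux to that of a
    calibrator of known magnitude (Pogson relation)."""
    return m_cal - 2.5 * math.log10(f_target / f_cal)

# Illustrative numbers: H-band calibrator at 11.471 mag; a target
# twice as bright comes out ~0.75 mag brighter.
m_sn = differential_mag(2.0, 1.0, 11.471)
print(round(m_sn, 3))  # 10.718
```

Because both fluxes are measured on the same frame, first-order transparency and extinction terms cancel, which is why the relative photometry is insensitive to non-photometric conditions.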
Because of the relatively large (0.024 mag) published uncertainties in the 2MASS calibration, the nearby photometric standard P133C \citep{persson98} was observed on ten photometric nights to calibrate the local 2MASS standard using canonical atmospheric extinction coefficients of 0.08, 0.04, and 0.07 mag/airmass for the $J$, $H$, and $K_s$ filters, respectively. The corrections to the 2MASS magnitudes of the local calibrator relative to the Persson standard were small ($\sim 0.01$ mag), with our final values being $12.008 \pm 0.009$ mag, $11.471 \pm 0.008$ mag, and $11.405 \pm 0.011$ mag in $J$, $H$, and $K_s$, respectively. The flux-calibrated magnitudes of SN~2011fe are presented in Table \ref{irphot} and plotted in Figure \ref{lcfig}. The quoted uncertainties are the rms of the mean of the individual images with the error in the calibration of the standard (which is the dominant source of error) included. In addition, the brightness of the local standard 2MASS 14031367+5415431 was measured relative to another 2MASS star in the field (14025941+5416266) on all nights to ensure against intrinsic variability of the local standard. Over the course of the observations, the rms uncertainty in the differential photometry of these two stars was 0.015 mag, which can be considered an upper limit on any intrinsic variability. If nights judged not to be photometric are excluded, this rms uncertainty decreases to 0.010 mag. \section{Analysis} In order to calibrate SN~2011fe against other SNe~Ia, we compared our photometry with that of \citet{woodvasey08}. \citet{kattner12} have shown that there is a weak relationship between absolute luminosity and decline rate in the $J$ and $H$ bands. Optical photometry of SN~2011fe indicates that it is a ``normal'' SN Ia with a $\Delta m_{15}(B)$ value of $\sim$1.2 \citep{richmond12}, so we will not make any corrections for decline rate (as the correction suggested by \citet{kattner12} is minimal at this decline rate).
\citet{woodvasey08} provide templates of light curves of SNe~Ia in $J, H, $ and $K_s$. We fit our data to the templates using a $\chi^2$ minimization. We restricted the fit to epochs ranging from 10 days before $B$-band maximum light (the earliest points in the templates) to 25 days after $B$-band maximum (when differences in filter bandpasses and spectral features combine to create deviations from the templates). We restricted the fit to 19 days after $B$-band maximum for the $J$ band as it showed larger deviations from the templates. The $J$ filter in WHIRC is significantly different from the $J$ filter used for the \citet{woodvasey08} templates (and the $K_s$ filter has differences as well). Details on the WHIRC filter bandpasses are available from the WHIRC website\footnote{http://www.noao.edu/kpno/manuals/whirc/filters.html} and are shown compared to 2MASS and Carnegie Supernova Program \citep[CSP; ][]{contreras10} filter bandpasses in Figure \ref{filtfig}. The \citet{woodvasey08} templates are defined relative to $B$-band maximum. We did not have optical photometry, so both the scaling in magnitude and the epoch were free parameters in the fit. For all three bandpasses, we derived the same epoch of $B$-band maximum, 2011 September 9.9 $\pm$ 0.2 (MJD 55813.9, consistent with the time derived by other groups; W. Li, private communication). From the \citet{woodvasey08} templates we can derive the maximum in each passband (we missed a measurement of that epoch as a result of poor weather) as well as the magnitude in each band at the time of $B$-band maximum. The values at $B$-band maximum are the fiducial points of the \citet{woodvasey08} templates. For the maximum in each bandpass, we find $J_{\rm max} = 10.51 \pm 0.04$ mag, $H_{\rm max} = 10.75 \pm 0.04$ mag, and $K_{s\rm max} = 10.64 \pm 0.05$ mag. At $B$-band maximum, we find $J_{B_{\rm max}} = 10.62 \pm 0.04$ mag, $H_{B_{\rm max}} = 10.85 \pm 0.04$ mag, and $K_{sB_{\rm max}} = 10.68 \pm 0.05$ mag.
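The two-parameter $\chi^2$ fit described above (an overall magnitude offset plus the epoch of $B$-band maximum) can be sketched as a simple grid search; the quadratic ``template'' and the synthetic data below are stand-ins for illustration, not the \citet{woodvasey08} templates or our photometry:

```python
def template(phase):
    # Stand-in light-curve shape relative to B-band maximum (magnitude
    # offset from the fiducial value at phase 0); a real fit would
    # interpolate the published template.
    return 0.01 * phase ** 2

def fit_offset_and_epoch(times, mags, errs):
    """Grid search over the epoch of B maximum; for each trial epoch the
    best magnitude offset is the inverse-variance-weighted mean residual,
    so only the epoch needs to be scanned."""
    w = [1.0 / e ** 2 for e in errs]
    best = None
    for i in range(-500, 501):
        t0 = 0.01 * i  # trial epoch of B maximum, days
        resid = [m - template(t - t0) for t, m in zip(times, mags)]
        dm = sum(wi * r for wi, r in zip(w, resid)) / sum(w)
        chi2 = sum(wi * (r - dm) ** 2 for wi, r in zip(w, resid))
        if best is None or chi2 < best[0]:
            best = (chi2, t0, dm)
    return best[1], best[2]

# Noiseless synthetic data with true epoch 1.2 d and offset 10.8 mag.
times = [-8, -5, -2, 0, 3, 6, 9]
mags = [template(t - 1.2) + 10.8 for t in times]
t0, dm = fit_offset_and_epoch(times, mags, [0.04] * len(times))
print(round(t0, 2), round(dm, 2))  # 1.2 10.8
```

In practice one would also propagate the photometric errors through the fit; the sketch only shows why the epoch and the magnitude scaling can be solved for together.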
To evaluate the uncertainty for each value, we used the \citet{woodvasey08} light curves that had at least three points within three days of $B$-band maximum to calculate the error of the mean for the template in each band. Templates derived from data taken by the CSP \citep{contreras10} as well as the \citet{woodvasey08} data (Shappee \& Jha, in prep.) show essentially the same structure near maximum brightness where we are performing the fit and result in similar fits. \citet{krisciunas04b} present a third-order polynomial fit for their NIR light curves of SNe~Ia. This curve is not a good match for our points over the nominal range given by \citet{krisciunas04b}, most likely as a result of their dataset having few points before maximum. If we restrict the fit to just the points near maximum, we derive $J_{\rm max} = 10.49 \pm 0.06$ mag, $H_{\rm max} = 10.76 \pm 0.08$ mag, and $K_{s \rm max} = 10.65 \pm 0.08$ mag (uncertainties are dominated by the rms values from \citet{krisciunas04b} fits). All derived magnitudes are consistent with those found using the \citet{woodvasey08} templates. We have not applied any $K$ corrections to our photometry. The redshift of M101 is $0.000804 \pm 0.000007$ \citep{devaucouleurs91}, so any $K$ correction will be minimal. The \citet{sfd98} dust maps imply a foreground extinction of $A_V = 0.028$ mag, and thus extinction values in the $J$, $H$, and $K_s$ bands of less than 0.01 mag. Based on narrow \ion{Na}{1}~D absorption lines in the spectra of SN~2011fe, \citet{nugent11b} derive a host-galaxy extinction of $A_V = 0.04$ mag (\citet{patat11} report a similar result). Again, this implies extinctions in the $H$ and $K_s$ bands of less than 0.01 mag, while the extinction in the $J$ band is $\sim 0.01$ mag. Given these minimal values, we do not apply any correction to our photometry. \section{Discussion} To establish the absolute luminosity calibration of SNe~Ia, we need SNe in nearby galaxies to which distances can be determined by other methods.
There is a rich history of obtaining Cepheid-based distances to such host galaxies, using the \emph{HST} \citep[e.g., ][]{sandage06, gibson00, freedman01, riess09}. A discussion of the details associated with these techniques is beyond the scope of this paper. Rather, we recognize that M101 provides a better platform for the absolute magnitude calibration, because it is nearby and its distance is more readily determined, if not already determined. In addition, the demonstration by \citet{krisciunas04a, krisciunas04b, woodvasey08} that the $H$-band light curves and peak brightnesses of SNe~Ia are independent of second parameter characteristics of individual SNe (in addition, of course, to being almost unaffected by reddening), provides a method that mitigates many of the issues that have plagued the earlier attempts and resulted in controversy. The weak relationship between luminosity and decline rate in the $J$ and $H$ bands found by \citet{kattner12} does add some potential complications for SNe Ia in the infra-red in general, but not for the particular case of SN~2011fe, given its $\Delta m_{15}(B)$ value of 1.2 \citep{richmond12}. Here we use our data for SN~2011fe to discuss the absolute magnitude anchor for the $H$-band calibration of SNe~Ia, by comparing against currently available Cepheid distances to M101. For the purpose of comparing our data to absolute calibrations of SNe Ia in the infra-red, we will focus on the $H$ band, although similar results can be found using the $J$ and $K_s$ bands. As can be seen in Figure \ref{filtfig}, the $H$ filter bandpass is the most similar across the data sets used for absolute calibration (PAIRITEL and CSP) as well as the WHIRC data. In addition, not all derivations of absolute calibrations include the $K_s$ band.
As an example, \citet{woodvasey08} use $H$-band magnitudes at $B_{\rm max}$ and the Hubble diagram (recession velocity vs.\ apparent magnitude) to, in effect, derive that the absolute $H$-band magnitude at $B_{\rm max}$ is: \begin{equation} M( H_{B{\rm max}} ) - 5 {\rm log} (H_{0}/72) = -18.08 \pm 0.15 \end{equation} They then quote $ M( H_{B{\rm max}} ) = -18.08 \pm 0.15 $ mag, by adopting $H_{0} = 72\ \kmsmpc$\footnote{Using only the PAIRITEL subsample of \citet{woodvasey08} to facilitate cross-comparisons with independent samples.}. Our measured value of $H_{B_{\rm max}} = 10.85 \pm 0.04$ mag for SN~2011fe yields a distance modulus $(m-M)_{0} = 28.93 \pm 0.16$ mag if $H_{0}$ is indeed $72\ \kmsmpc$. More precisely, SN~2011fe gives the distance modulus to M101 as: \begin{equation} (m-M) + 5 {\rm log} (H_{0}/72) = 28.93 \pm 0.16\ \rm{mag} \end{equation} Note that the main uncertainty comes from the intrinsic rms of $0.15$ mag in $H$-band absolute calibration as reported by \citet{woodvasey08}, and not from the relatively insignificant uncertainty in the determination of the $H$ magnitude at the epoch of $B_{max}$. Table \ref{mods} lists the various absolute infra-red calibrations for SNe~Ia (all essentially based on a cosmology that assumes $H_0 = 72\ \kmsmpc$) and the distance moduli to M101 derived from these calibrations using our apparent magnitudes for SN~2011fe. Again, in each case the uncertainty is dominated by the absolute calibration, not the photometry of SN~2011fe. There is a wide range in the absolute calibrations, yielding a span of 0.31 mag in distance modulus depending on the specific calibration used. The source of this dispersion is not clear, but may be the result of different filters, corrections to those filters, and assumptions that went into the individual analyses. Note also that these calibrations are not all independent, as many use the same data sets and analysis tools.
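The arithmetic behind equations (1) and (2) is simple enough to spell out; a sketch (the 0.16 mag uncertainty is just our photometric error and the calibration scatter added in quadrature, and the $H_0$ inversion in the last lines anticipates the comparison with external distances to M101):

```python
import math

M_H = -18.08      # Wood-Vasey et al. absolute H mag at B max (H0 = 72)
sigma_M = 0.15    # intrinsic scatter of that calibration
m_H = 10.85       # our measured H mag of SN 2011fe at B max
sigma_m = 0.04

mu = m_H - M_H                            # conditional distance modulus
sigma_mu = math.hypot(sigma_m, sigma_M)   # errors added in quadrature
print(round(mu, 2), round(sigma_mu, 2))   # 28.93 0.16

# Rearranging equation (2): an external (e.g. Cepheid) distance modulus
# for M101 implies a value of H0.
def implied_H0(mu_external, mu_conditional=28.93, H0_ref=72.0):
    return H0_ref * 10 ** ((mu_conditional - mu_external) / 5.0)

print(round(implied_H0(29.13)))  # 66, for the Freedman et al. modulus
```

The same one-line inversion applied to the full span of external moduli quoted later (29.04 to 29.42 mag) gives the kind of $H_0$ range discussed in the text.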
Until the infra-red absolute magnitudes of SNe~Ia are more firmly settled, there will be some question about their cosmological utility. \citet{freedman01} concluded that the distance modulus to M101 from Cepheids is $29.13 \pm 0.11$ mag, where the Cepheid distance scale zero-point rests on an adopted LMC distance modulus ($\mu_{0}$) of 18.50 mag. \citet{saha06} give $29.17 \pm 0.09$ mag from an alternative analysis of the same data and an adopted $\mu_{0} (LMC)$ of 18.54 mag. A more recent comprehensive and completely independent study of Cepheids in M101 yields $29.04 \pm 0.19$ mag \citep{shappee11}, where the Cepheid scale is based on the maser distance to NGC~4258, which is tantamount to $\mu_{0} (LMC) = 18.41$ mag. The differences among these three results thus rest entirely on the adopted zero-point for the respective Cepheid P-L relations used by the three sets of authors. Comparing the conditional ($H_{0} = 72$) $H$-band distance moduli to M101 from Table \ref{mods} to the Cepheid distances and using equation 2 (with the appropriate infra-red calibration), one finds the infra-red distances can accommodate $H_{0}$ values from 64 to 74 $\kmsmpc$. Using $H$-band photometry with NICMOS on \emph{HST}, and an assumed LMC modulus of 18.50 mag, \citet{Macri11} obtained distance moduli of 29.53 and 29.19 mag from Cepheids (relative to LMC Cepheids, without any metallicity dependence modeling) in outer and inner fields of M101, respectively. They concluded that in addition to metallicity differences, photometry errors from blending are a likely contributor to the observed difference (with the inner field distance erring on the side of appearing too close). In addition, there are published distances to M101 using non-Cepheid based methods. Tip of the red giant branch (TRGB) results span the gamut from $(m-M)_{0} = 29.05 \pm 0.14$ mag \citep{shappee11} to $29.34 \pm 0.09$ mag \citep{Rizzi07} and $29.42 \pm 0.11$ mag \citep{Sakai04}. 
\citet{tammann11} adopt the mean of the last two values when deriving $H_{0}$ from the visible light curve of SN~2011fe. Using the planetary nebula luminosity function method, \citet{Feldmeier96} obtained $(m-M)_{0} =29.42 \pm 0.15$ mag. Figure \ref{distfig} graphically demonstrates the distance estimates for M101 discussed herein (using the $H$-band calibration for SN~2011fe). Presented with this range of results, and given the state-of-the-art uncertainties in our understanding of metallicity dependence and its inter-relation with de-reddening procedures, there is no compelling argument that can pinpoint the distance to M101 to better than the likely range from 29.04 to 29.42 mag. For this range of possible moduli and the range of conditional moduli implied by the $H$-band magnitude of SN~2011fe, values of $H_{0}$ from 56 to 76 $\kmsmpc$ cannot be ruled out from this SN alone. It is sobering that M101, which is nearer than any of the SNe~Ia calibrating host galaxies used by \citet{freedman01} or \citet{Riess11}, and for which there are multiple independent distance determinations, has resulting distance moduli that span a range of $\sim$0.4 mag. Other calibrator host-galaxies at distances comparable to Virgo and beyond do not offer such cross-validation to scrutinize the robustness of their derived distances. The source of uncertainty in the distance moduli derived from the magnitudes of SN~2011fe is not just the measurement or the calibration of the SNe~Ia. It is also necessary to resolve the Cepheid and TRGB distance scales and their systematics before a better than 5\% accuracy for $H_{0}$ can be asserted. \section{Conclusions} We have presented $J$, $H$, and $K_s$ light curves of SN~2011fe in M101. The light curves appear to be those of a normal SN Ia.
Our apparent magnitude in the $H$ band at the epoch of $B$-band maximum is $10.85 \pm 0.04$ mag, implying distance moduli to M101 based on various infra-red absolute calibrations that span a range from 28.86 to 29.17 mag. This dispersion is comparable to that for traditional distance measures to M101 (29.04 to 29.42 mag). This is, however, only one object in a class that still exhibits a small, but significant, intrinsic spread in peak magnitudes. From the dispersion in absolute calibrations of SNe~Ia in the infra-red, it is clear that they are not yet fully understood. \acknowledgments We would like to thank the referee, Mark Phillips, for extremely useful comments and suggestions. We would also like to thank the WIYN Observatory for their support of this program. T.M. acknowledges many useful conversations with Chris Burns on the nature of SN light curves in the infra-red. T.M. dedicates this paper to the memory of his friend and colleague, Dr. Weidong Li. {\it Facilities:} \facility{WIYN}
\section{Introduction} Magnetohydrodynamics is the part of the mechanics of continuous media that studies the motion of electrically conducting fluids in the presence of a magnetic field. The fluid motion and the magnetic field interact strongly with each other, so the hydrodynamic and electrodynamic effects are coupled. In $3$D space, the compressible isentropic MHD equations in a domain $\Omega \subset \mathbb{R}^3$ can be written as \begin{equation} \label{eq:1.2j} \begin{cases} \displaystyle H_t-\text{rot}(u\times H)=-\text{rot}\Big(\frac{1}{\sigma}\text{rot}H\Big),\\[6pt] \displaystyle \text{div}H=0,\\[6pt] \displaystyle \rho_t+\text{div}(\rho u)=0,\\[6pt] \displaystyle (\rho u)_t+\text{div}(\rho u\otimes u) +\nabla P=\text{div}\mathbb{T}+\text{rot}H\times H. \end{cases} \end{equation} In this system, $x\in \Omega$ is the spatial coordinate; $t\geq 0$ is the time; $H=(H^1,H^2,H^3)$ is the magnetic field; $0< \sigma\leq \infty$ is the electric conductivity coefficient; $\rho$ is the mass density; $u=(u^1,u^2,u^3)\in \mathbb{R}^3$ is the velocity of the fluid; $P$ is the pressure, satisfying \begin{equation} \label{eq:1.3} P=A \rho^\gamma, \quad A>0,\quad \gamma > 1, \end{equation} where $A$ is a constant and $\gamma$ is the adiabatic index; $\mathbb{T}$ is the viscosity stress tensor: \begin{equation} \label{eq:1.4} \mathbb{T}=2\mu D(u)+\lambda \text{div}u \mathbb{I}_3, \quad D(u)=\frac{\nabla u+(\nabla u)^\top}{2}, \end{equation} where $D(u)$ is the deformation tensor, $\mathbb{I}_3$ is the $3\times 3$ unit matrix, $\mu$ is the shear viscosity coefficient, $\lambda+\frac{2}{3}\mu$ is the bulk viscosity coefficient, and $\mu$ and $\lambda$ are both real constants satisfying \begin{equation} \label{eq:1.5} \mu > 0, \quad 3\lambda+2\mu \geq 0,\end{equation} which ensures the ellipticity of the Lam\'{e} operator (see (\ref{xc})).
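To see the ellipticity claimed above (a standard check, spelled out here for the reader's convenience), note that the Fourier symbol of the Lam\'{e} operator $Lu=-\mu\triangle u-(\lambda+\mu)\nabla \text{div}u$ is
\begin{equation*}
L(\xi)=\mu|\xi|^2\mathbb{I}_3+(\lambda+\mu)\,\xi\otimes\xi,\qquad \xi\in\mathbb{R}^3\setminus\{0\},
\end{equation*}
whose eigenvalues are $\mu|\xi|^2$ (with multiplicity $2$, on $\xi^{\perp}$) and $(\lambda+2\mu)|\xi|^2$ (on $\text{span}\{\xi\}$). Under (\ref{eq:1.5}) we have $\lambda+2\mu\geq \frac{4}{3}\mu>0$, so both eigenvalues are positive and $L$ is strongly elliptic.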
Although the electric field $E$ does not appear in system (\ref{eq:1.2j}), it is indeed induced, according to the relation $$ E=\frac{1}{\sigma}\text{rot}H- u\times H, $$ by the motion of the conducting fluid in the magnetic field. The MHD system (\ref{eq:1.2j}) describes the macroscopic behavior of electrically conducting compressible (isentropic) fluids in a magnetic field. It is reasonable to assume that there is no magnetic diffusion (i.e., $\sigma=+\infty$) when the conducting fluid considered has a very high conductivity, which occurs frequently in many cosmic and geophysical problems. Then we need to consider the following system: \begin{equation} \label{eq:1.2} \begin{cases} \displaystyle H_t-\text{rot}(u\times H)=0,\\[6pt] \displaystyle \text{div}H=0,\\[6pt] \displaystyle \rho_t+\text{div}(\rho u)=0,\\[6pt] \displaystyle (\rho u)_t+\text{div}(\rho u\otimes u) +\nabla P=\text{div}\mathbb{T}+\text{rot}H\times H, \end{cases} \end{equation} which is the so-called viscous and non-resistive MHD system (see \cite{dd1}\cite{dd2}\cite{dd3}\cite{dd4}\cite{dd5}). The aim of this paper is to give a blow-up criterion for strong solutions to the initial boundary value problem (IBVP): system (\ref{eq:1.2}) in a bounded, smooth domain $\Omega \subset \mathbb{R}^3$ with the initial and boundary conditions: \begin{align} \label{fan1}(H,\rho, u)|_{t=0}=(H_0(x), \rho_0(x), u_0(x)),\ x\in \Omega; \quad u|_{\partial {\Omega}}=0.
\end{align} Throughout this paper, we adopt the following simplified notations for the standard homogeneous and inhomogeneous Sobolev spaces: \begin{equation*}\begin{split} &D^{k,r}=\{f\in L^1_{loc}(\Omega): |f|_{D^{k,r}}=|\nabla^kf|_{L^r}<+\infty\},\quad D^k=D^{k,2}, \\[6pt] &D^{1}_0=\{f\in L^6(\Omega): |f|_{D^{1}}=|\nabla f|_{L^2}<\infty \ \text{and}\ f|_{\partial \Omega}=0\},\quad \|(f,g)\|_X=\|f\|_X+\|g\|_X,\\[6pt] &\|f\|_{W^{m,r}}=\|f\|_{W^{m,r}(\Omega)},\quad \|f\|_s=\|f\|_{H^s(\Omega)},\quad |f|_p=\|f\|_{L^p(\Omega)},\\[6pt] & |f|_{D^{k,r}}=\|f\|_{D^{k,r}(\Omega)},\quad |f|_{D^k}=\|f\|_{D^k(\Omega)}, \quad \mathbb{A}: \mathbb{B}=\sum_{ij}a_{ij}b_{ij}. \end{split} \end{equation*} A detailed study of homogeneous Sobolev spaces can be found in \cite{gandi}. As observed in \cite{jishan}, where the existence of a unique local strong solution with initial vacuum to IBVP (\ref{eq:1.2})-(\ref{fan1}) was proved, in order for the IBVP (\ref{eq:1.2})-(\ref{fan1}) with initial vacuum to be well-posed, the lack of a positive lower bound on the initial mass density $\rho_0$ should be compensated by an initial layer compatibility condition on the initial data $(H_0,\rho_0, u_0,P_0)$: \begin{theorem}\cite{jishan}\label{th5} Let the constant $q \in (3,6]$.
If $(H_0, \rho_0, u_0,P_0)$ satisfies \begin{equation}\label{th78} \begin{split} & (H_0, \rho_0, P_0) \in H^1\cap W^{1,q},\ \rho_0\geq 0,\ u_0\in D^1_0\cap D^2, \end{split} \end{equation} and the compatibility condition \begin{equation}\label{th79} \begin{split} Lu_0+\nabla P_0- \text{rot} H_0\times H_0=\sqrt{\rho_0} g_1 \end{split} \end{equation} for some $g_1 \in L^2$, $P_0=A\rho^\gamma_0$, and \begin{equation}\label{xc} \begin{split} Lu_0=-\mu\triangle u_0-(\lambda+\mu)\nabla \text{div} u_0, \end{split} \end{equation} then there exist a time $T_*$ and a unique solution $(H,\rho,u,P)$ to IBVP (\ref{eq:1.2})-(\ref{fan1}) satisfying \begin{equation*}\begin{split} &(H,\rho,P)\in C([0,T_*];H^1\cap W^{1,q}),\ u\in C([0,T_*];D^1_0\cap D^2)\cap L^{2}([0,T_*];D^{2,q}),\\ & u_t\in L^2([0,T_*];D^1_0),\ \sqrt{\rho}u_t\in L^\infty([0,T_*];L^2). \end{split} \end{equation*} \end{theorem} Some analogous existence theorems for unique local strong solutions to the compressible Navier-Stokes equations have been previously established by Cho-Choe-Kim in \cite{CK3}\cite{CK}\cite{guahu}. In $3$D space, Huang-Li-Xin obtained the well-posedness of global classical solutions with small energy but possibly large oscillations and vacuum for the Cauchy problem in \cite{HX1} and the IBVP in \cite{HX2} for the isentropic flow. For the compressible MHD equations, when $0<\sigma<+\infty$, the global smooth solution near the constant state in one-dimensional space was studied in Kawashima-Okada \cite{kawa}; recently, in $3$D space, a similar result to \cite{HX1} has been obtained in Li-Xu-Zhang \cite{mhd}. However, for $\sigma=+\infty$, to the best of our knowledge, there are few results on the global existence of strong solutions with initial vacuum.
The non-global existence in the whole space $\mathbb{R}^3$ has been proved in \cite{olga} for the classical solution to the isentropic MHD equations as follows: \begin{theorem}\cite{olga} \label{olga} Assume that $\gamma\geq \frac{6}{5}$. If the momentum $\int_{\mathbb{R}^3} \rho_0 u_0 \text{d} x\neq 0 $, then there exists no global classical solution to (\ref{eq:1.2})-(\ref{fan1}) with conserved mass, momentum and total energy. \end{theorem} These results motivate us to ask whether the local strong solution to (\ref{eq:1.2})-(\ref{fan1}) may cease to exist globally, and what the key point is that guarantees that the solution obtained in Theorem \ref{th5} becomes a global one. If blow-up happens, we want to know the mechanism of breakdown and the structure of the singularities. A similar question has been studied for the incompressible Euler equations by Beale-Kato-Majda (BKM) in their pioneering work \cite{TBK}, which showed that the time integral of the $L^\infty$-norm of the vorticity $\text{rot} u$ must blow up if the life span of the corresponding strong solution is finite. Later, Ponce \cite{pc} rephrased the BKM criterion in terms of the deformation tensor $D(u)$, and the same result as \cite{pc} has been proved by Huang-Li-Xin \cite{hup} for the compressible isentropic Navier-Stokes equations, which reads: if $0 < \overline{T} < +\infty$ is the maximal existence time of the strong solution, then \begin{equation}\label{kaka1} \lim \sup_{ T \rightarrow \overline{T}} \int_0^T |D( u)|_{ L^\infty(\Omega)}\text{d}t=\infty. \end{equation} Recently, similar blow-up criteria to (\ref{kaka1}) have been obtained for the $3$D compressible isentropic MHD equations in Xu-Zhang \cite{gerui}, which read: \begin{equation}\label{kaka123} \lim \sup_{ T \rightarrow \overline{T}} \int_0^T |\nabla u|_{ L^\infty(\Omega)}\text{d}t=\infty. \end{equation} Some similar results can also be found in Chen-Liu \cite{mingtao} or Lu-Du-Yao \cite{duyi}.
Moreover, for strong solutions with initial vacuum to the $3$D compressible isentropic Navier-Stokes equations, Sun-Wang-Zhang \cite{zif} proved $$ \lim \sup_{ T \rightarrow \overline{T}} |\rho|_{ L^\infty([0,T]\times \Omega)}=\infty, $$ under the physical assumption (\ref{eq:1.5}) and $\lambda <7\mu$. In the following theorem, under the physical assumption (\ref{eq:1.5}) and $3\lambda < 29 \mu$, we show that the $L^\infty$ norms of the magnetic field $H$ and the mass density $\rho$ control the possible blow-up (see \cite{olga}\cite{zx}) of strong solutions, which means that if a solution of the compressible MHD equations is initially regular and loses its regularity at some later time, then the formation of the singularity must be caused by the loss of the upper bound of $H$ or $\rho$ as the critical time approaches. The arguments used in \cite{hup}\cite{zif} cannot be applied to our case directly. The first reason is that we relax the assumption $\lambda <7\mu$ to $3\lambda < 29 \mu$. The second reason is the presence of the magnetic momentum flux density tensor $$ \frac{1}{2}|H|^2I_3-H\otimes H $$ in the momentum equations $(\ref{eq:1.2j})_4$. To deal with this nonlinear term, we need to control the norm $ |\nabla H|_{2}$, which is difficult to bound by $|D(u)|_{L^1(0,T;L^\infty)}$ because of the strong coupling between $u$ and $H$ in the magnetic equations $(\ref{eq:1.2j})_1$, and the lack of a smoothing mechanism for $H$ in the case $\sigma=+\infty$. This is unlike the situation for $(|\rho|_\infty, |\nabla \rho|_{2})$, which can be totally determined by $|\text{div}u|_{L^1(0,T;L^\infty)}$ due to the scalar hyperbolic structure of the continuity equation $(\ref{eq:1.2j})_3$ in \cite{hup}\cite{zif}. So some new arguments need to be introduced to improve the results obtained above for system (\ref{eq:1.2j}).
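For the reader's convenience, we record the componentwise identity behind this tensor form of the Lorentz force (a direct computation using $\text{div}H=0$; see also Lemma \ref{liu6} below):
\begin{equation*}
\big[\text{div}\big(H\otimes H-\tfrac{1}{2}|H|^2I_3\big)\big]^i
=\partial_j\big(H^iH^j\big)-\tfrac{1}{2}\partial_i|H|^2
=(H\cdot\nabla)H^i+H^i\,\text{div}H-\tfrac{1}{2}\partial_i|H|^2,
\end{equation*}
which, since $\text{div}H=0$, coincides with $\big(\text{rot}H\times H\big)^i=(H\cdot\nabla)H^i-\tfrac{1}{2}\partial_i|H|^2$.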
\begin{theorem} \label{th3} Let the viscosity coefficients $(\mu, \lambda)$ satisfy \begin{equation}\label{ass} \mu > 0, \quad 3\lambda+2\mu \geq 0,\quad3\lambda < 29 \mu,\end{equation} and $(H_0,\rho_0,u_0, P_0)$ satisfy (\ref{th78})-(\ref{th79}). If $(H,\rho,u,P)$ is a strong solution to IBVP (\ref{eq:1.2})-(\ref{fan1}) obtained in Theorem \ref{th5}, and $0< \overline{T} <\infty$ is the maximal time of its existence, then \begin{equation}\label{eq:2.91} \lim \sup_{ T \rightarrow \overline{T}}( |\rho|_{L^\infty([0,T]\times \Omega)}+ |H|_{L^\infty([0,T]\times \Omega)})=\infty. \end{equation} \end{theorem} \begin{remark}\label{rrr2} We introduce the main ideas of our proof of Theorem \ref{th3}, some of which are inspired by the arguments used in \cite{hup}\cite{zif}\cite{jiangzhu}. \text{I}) We improve the methods used in \cite{zif}\cite{jiangzhu} to obtain the estimate (\ref{keyq}) under the assumption (\ref{ass}). In order to prove (\ref{keyq}), the restriction $\lambda<7\mu$ plays a key role in the analysis shown in \cite{zif}, and actually, it is only used to get the upper bound of $\int_{\Omega}\rho|u(t)|^r \text{d}x$ for some $r > 3$. However, Wen-Zhu \cite{jiangzhu} obtained the upper bound of $\int_{\Omega}\rho|u(t)|^r \text{d}x$ under the assumption $3\lambda < 29 \mu$, which as a byproduct extends the conclusions obtained in \cite{zif}. Compared with \cite{jiangzhu}, we need to deal with the magnetic term appearing in the momentum equations, and due to the initial vacuum, we obtain the upper bound of $\int_{\Omega}\rho|u(t)|^r \text{d}x$ for $r \in (3, 7/2)$ under the assumption (\ref{ass}).
In order to obtain as weak a restriction on $\mu$ and $\lambda$ as possible, the crucial ingredient for relaxing the additional restriction to $3\lambda < 29\mu$ is the observation (see \cite{jiangzhu}) that \begin{equation}\label{gpkk} \displaystyle |\nabla u|^2=|u|^2\Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\big| \nabla |u|\big|^2, \end{equation} for $ |u|>0$, and thus \begin{equation}\label{ghukk22} \begin{split} \int_{\Omega \cap |u|>0} |u|^{r-2} | \nabla u|^2\text{d}x\geq (1+ \phi(\epsilon_0,\epsilon_1,r) )\int_{\Omega \cap |u|>0} |u|^{r-2} \big| \nabla |u|\big|^2\text{d}x, \end{split} \end{equation} if \begin{equation}\label{ghukk11} \begin{split} &\int_{\Omega \cap |u|>0} |u|^r \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2\text{d}x\geq \phi(\epsilon_0,\epsilon_1,r)\int_{\Omega \cap |u|>0} |u|^{r-2} \big| \nabla |u|\big|^2\text{d}x \end{split} \end{equation} for some positive function $\phi(\epsilon_0,\epsilon_1,r)$ near $r=3$. The details can be seen in Lemma \ref{abs3}. \text{II}) If $ |\rho|_{L^\infty([0,T]; L^{\infty}(\Omega))}$ and $ |H|_{L^\infty([0,T]; L^{\infty}(\Omega))}$ are bounded, we can obtain a high integrability of the velocity $u$, which can be used to control the nonlinear terms (see Lemmas \ref{abs3}-\ref{sk4}). The argument used in \cite{zif} is introduced to control the upper bound of $|\nabla u|_2$, and an important observation shown in Lemma \ref{sk4} is that \begin{equation}\label{magnet}\begin{cases} \quad \mathbb{B}=H_t\otimes H+H\otimes H_t\\[6pt] =(H^iH^k\partial_k u^j+H^jH^k\partial_k u^i-H^i H^j\partial_k u^k)_{(ij)}-\text{div}\big( (H\otimes H)\otimes u \big),\\[6pt] \quad \mathbb{C}=H\cdot H_t=H\cdot (H \cdot \nabla u-u \cdot \nabla H-H\text{div}u)\\[6pt] =\big(H\cdot \nabla u\cdot H-\frac{1}{2}|H|^2\text{div}u\big)-\frac{1}{2}\text{div}(u|H|^2), \end{cases} \end{equation} from which we successfully avoid the difficulty coming from the strong coupling between the magnetic field and the velocity when the magnetic diffusion vanishes.
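As a quick consistency check (our own computation), the formula for $\mathbb{C}$ in (\ref{magnet}) follows directly from the magnetic equation $H_t=H\cdot\nabla u-u\cdot\nabla H-H\,\text{div}u$:
\begin{equation*}
\begin{split}
H\cdot H_t&=H\cdot\nabla u\cdot H-\tfrac{1}{2}u\cdot\nabla|H|^2-|H|^2\text{div}u\\
&=\Big(H\cdot\nabla u\cdot H-\tfrac{1}{2}|H|^2\text{div}u\Big)-\tfrac{1}{2}\Big(u\cdot\nabla|H|^2+|H|^2\text{div}u\Big)\\
&=\Big(H\cdot\nabla u\cdot H-\tfrac{1}{2}|H|^2\text{div}u\Big)-\tfrac{1}{2}\text{div}\big(u|H|^2\big),
\end{split}
\end{equation*}
where we used $H\cdot(u\cdot\nabla H)=\tfrac{1}{2}u\cdot\nabla|H|^2$; the formula for $\mathbb{B}$ is verified in the same way.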
The next difficulty is to control the mass density $\rho$ and the magnetic field $H$, which both satisfy hyperbolic equations. To do this, we need to make sure that the velocity $u$ is bounded in $L^1([0, T]; D^{1,\infty}(\Omega))$. On the other hand, in order to prove $u\in L^1([0, T]; D^{1,\infty}(\Omega))$, we have to obtain some a priori bounds for $\nabla \rho$ and $\nabla H$. Furthermore, the magnetic term in the momentum equations brings extra difficulties. However, using the argument from \cite{hoff1} and the structure of the magnetic equations, in Lemma \ref{ablem:4-1} we show that \begin{equation}\label{mgnetc} \begin{split} \Lambda=& \int_{\Omega} \dot{u} \cdot \big[\text{div}\big(H\otimes H-\frac{1}{2}|H|^2I_3\big)_t+\text{div}\big(\text{div}\big(H\otimes H-\frac{1}{2}|H|^2I_3\big)\otimes u\big)\big]\text{d}x\\ =& \int_{\Omega} \partial_{k}u^k H^i H^j \partial_{j} \dot{u}^i \text{d}x+\int_{\Omega} \Big( -\frac{1}{2}\partial_ju^j |H^k|^2 \partial_{i}\dot{u}^i\Big) \text{d}x. \end{split} \end{equation} Then we obtain the cancellation of the derivatives $(\nabla \rho, \nabla H)$ in our computation, which yields the desired result. \end{remark} The rest of this paper is organized as follows. In Section $2$, we give some important lemmas which will be used frequently in our proof. In Section $3$, we give the proof of the blow-up criterion (\ref{eq:2.91}). \section{Preliminary} In this section, we give some important lemmas which will be used frequently in our proof. The first one collects some Sobolev inequalities: \begin{lemma}\label{gag} For $l\in (3,\infty)$, there exists some generic constant $C> 0$ that may depend on $l$ such that for $f\in D^1_0(\Omega)$, $g\in D^1_0\cap D^2(\Omega)$ and $h \in W^{1,l}(\Omega)$, we have \begin{equation}\label{gaga1} \begin{split} |f|_6\leq C|f|_{D^1_0},\qquad |g|_{\infty}\leq C|g|_{D^1_0\cap D^2}, \qquad |h|_{\infty}\leq C\|h\|_{W^{1,l}}.
\end{split} \end{equation} \end{lemma} Next we consider the following boundary value problem for the Lam\'{e} operator $L$: \begin{equation}\label{tvd} \begin{cases} -\mu \triangle U-(\mu+\lambda)\nabla \text{div} U=F,\quad \text{in}\ \Omega,\\[8pt] U(t,x)=0, \quad \text{on}\quad \partial \Omega, \end{cases} \end{equation} where $U = (U^1, U^2, U^3), \ F = (F^1, F^2, F^3)$. It is well known that under the assumption $(\ref{eq:1.5})$, $(\ref{tvd})_1$ is a strongly elliptic system. If $F \in W^{-1,2}(\Omega)$, then there exists a unique weak solution $U \in D^1_0(\Omega)$. We begin by recalling various estimates for this system in $L^l(\Omega)$ spaces, which can be found in \cite{dd8}. \begin{lemma}\label{tvd1} Let $l \in (1, +\infty)$ and let $U$ be a solution of (\ref{tvd}). There exists a constant $C$ depending only on $\lambda$, $\mu$, $l$ and $\Omega$ such that the following estimates hold: (1) if $F\in L^l(\Omega)$, then we have \begin{equation}\label{tvd2} \|U\|_{W^{2,l}}\leq C|F|_l; \end{equation} (2) if $F \in W^{-1,l}(\Omega)$ (i.e., $F =\text{div}f$ with $f =(f_{ij})_{3\times3}$, $f_{ij} \in L^l(\Omega)$), then we have \begin{equation}\label{tvd3} \| U \|_{W^{1,l}} \leq C|f|_l; \end{equation} (3) if $F = \text{div}f$ with $f_{ij} = \partial_k h^k_{ij}$ and $h^k_{ij} \in W^{1,l}_0(\Omega)$ for $ i,j,k = 1,2,3$, then we have \begin{equation}\label{tvd4} | U |_{l} \leq C|h|_l. \end{equation} \end{lemma} Moreover, we need an endpoint estimate for $L$ in the case $l = \infty$.
Let $BMO(\Omega)$ stand for the John-Nirenberg space of bounded mean oscillation, whose norm is defined by \begin{equation}\label{tvd5} \|f\|_{BMO(\Omega)}=\|f\|_{L^2(\Omega)}+[f]_{[BMO]}, \end{equation} with \begin{equation}\label{tvd6} \begin{cases} \displaystyle [f]_{[BMO]}=\sup_{x\in \Omega,\ r\in (0,d)} \frac{1}{|\Omega_r(x)|}\int_{\Omega_r(x)} |f(y)-f_{\Omega_r(x)}|\text{d}y,\\[10pt] \displaystyle f_{\Omega_r(x)}= \frac{1}{|\Omega_r(x)|} \int_{\Omega_r(x)} f(y)\text{d}y, \end{cases} \end{equation} where $\Omega_{r} (x) = B_r(x)\cap \Omega$, $B_r(x)$ is the ball with center $x$ and radius $r$, and $d$ is the diameter of $\Omega$. $|\Omega_r(x)|$ denotes the Lebesgue measure of $\Omega_r(x)$. Note that \begin{equation}\label{tvd77} \begin{split} [f]_{[BMO]}\leq 2|f|_\infty. \end{split} \end{equation} \begin{lemma}\label{tvd18} If $F = \text{div} f$ with $f = (f_{ij} )_{3\times3}, \ f_{ij} \in L^\infty(\Omega)\cap L^2(\Omega)$, then $\nabla U \in BMO(\Omega)$ and there exists a constant $C$ depending only on $\lambda$, $\mu$ and $\Omega$ such that \begin{equation}\label{tvd8} \begin{split} |\nabla U|_{[BMO]}\leq C(|f|_\infty+|f|_2). \end{split} \end{equation} \end{lemma} Because $\Omega$ is a bounded domain with smooth boundary, the estimate (\ref{tvd8}) can be found in \cite{dd8} in a more general setting. In the next lemma, we give a variant of the Brezis-Wainger inequality \cite{dd7}, which can also be seen in \cite{zif}. \begin{lemma}\cite{dd7}\label{tvd10} Let $\Omega$ be a bounded Lipschitz domain and $f \in W^{ 1,l} (\Omega)$ with $l \in (3, \infty)$. There exists a constant $C$ depending on $l$ and the Lipschitz property of $\Omega$ such that \begin{equation}\label{tvd9} \begin{split} |f|_{L^\infty(\Omega)}\leq& C\big(1+|f|_{BMO(\Omega)} \ln (e+|\nabla f|_l)\big).
\end{split} \end{equation} \end{lemma} Finally, for $(H,u) \in C^1(\Omega)$, we record some formulas based on $\text{div}H=0$: \begin{lemma}\label{liu6} Let $(H, \rho, u,P)$ be the unique strong solution obtained in Theorem \ref{th5} to IBVP (\ref{eq:1.2})--(\ref{fan1}) in $[0,T)\times \Omega$; then we have \begin{equation}\label{zhoumou} \begin{cases} \displaystyle \text{rot}(u\times H)=(H\cdot \nabla)u-(u\cdot \nabla)H-H\text{div}u,\\[8pt] \displaystyle \text{rot}H\times H =\text{div}\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big)=-\frac{1}{2}\nabla |H|^2+H\cdot \nabla H. \end{cases} \end{equation} \end{lemma} \begin{proof} It follows immediately from the following equalities: \begin{equation*} \begin{cases} a\times \text{rot}a=\frac{1}{2}\nabla (| a|^2)-a \cdot \nabla a,\\[8pt] \text{rot}(a\times b)=(b\cdot \nabla)a-(a \cdot \nabla)b+(\text{div}b)a-(\text{div}a)b, \end{cases} \end{equation*} based on the fact that $\text{div}H=0$. \end{proof} \section{Blow-up criterion (\ref{eq:2.91}) for strong solutions} Now we prove (\ref{eq:2.91}). Let $(H, \rho, u,P)$ be the unique strong solution obtained in Theorem \ref{th5} to IBVP (\ref{eq:1.2})--(\ref{fan1}) in $[0,\overline{T})\times \Omega$. Since $P=A\rho^\gamma$, $P$ satisfies \begin{equation}\label{mou9} P_t+u\cdot \nabla P+\gamma P \text{div}u=0, \quad P_0 \in H^2 \cap W^{2,q}. \end{equation} We first give the standard energy estimate: \begin{lemma}\label{s2} \begin{equation*} \begin{split} |\sqrt{\rho}u(t)|^2_{ 2}+|H(t)|^2_2+|P(t)|_1+\int_{0}^{T}|\nabla u(t)|^2_{2}\text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$.
\end{lemma} \begin{proof} We first show that \begin{align} \label{2}\frac{d}{dt}\int_{\Omega} \Big(\frac{1}{2}\rho |u|^2+\frac{P}{\gamma-1}+\frac{1}{2}|H|^2\Big) \text{d}x+\int_{\Omega} \big(\mu |\nabla u|^2+(\lambda+\mu)(\text{div}u)^2\big)\text{d}x=0. \end{align} Actually, (\ref{2}) is classical, and it can be shown by multiplying $(\ref{eq:1.2})_4$ by $u$, $(\ref{eq:1.2})_3$ by $\frac{|u|^2}{2}$ and $(\ref{eq:1.2})_1$ by $H$, then summing them together and integrating the resulting equation over $\Omega$ by parts, where we have used the fact \begin{equation}\label{zhu1} \begin{split} \int_{\Omega} \text{rot}H \times H \cdot u \text{d}x=\int_{\Omega} -\text{rot}(u \times H) \cdot H \text{d}x. \end{split} \end{equation} \end{proof} Next we assume that the opposite of (\ref{eq:2.91}) holds, i.e., \begin{equation}\label{we11*} \begin{split} \lim \sup_{T\rightarrow \overline{T}}\Big( |\rho|_{L^\infty([0,T]\times \Omega)}+ |H|_{L^\infty([0,T]\times \Omega)}\Big)=C_0<\infty. \end{split} \end{equation} Now, based on (\ref{we11*}), we can improve the energy estimate obtained in Lemma \ref{s2}. \begin{lemma}\label{abs3} If (\ref{ass}) holds, then there exists $r\in \big(3,\frac{7}{2}\big)$ such that \begin{equation}\label{keyq} \begin{split} \int_{\Omega}\rho|u(t)|^r \text{d}x\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} \begin{proof} For any $\lambda$ satisfying (\ref{ass}), there must exist a sufficiently small constant $\alpha_\lambda>0$ such that \begin{equation}\label{qudai} 3\lambda <(29-\alpha_\lambda) \mu. \end{equation} So we only need to show that (\ref{keyq}) holds under the assumption (\ref{qudai}).
First, multiplying $ (\ref{eq:1.2})_4$ by $r|u|^{r-2}u$ $(r\geq 3)$ and integrating over $\Omega$, we have \begin{equation}\label{lz1} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\int_{\Omega}H_r\text{d}x\\ =&-r(r-2)(\mu+\lambda)\int_{\Omega} \text{div}u |u|^{r-3}u\cdot \nabla |u|\text{d}x\\ &+\int_{\Omega} r P\text{div }(|u|^{r-2}u)\text{d}x-\int_{\Omega} r\big(H\otimes H-\frac{1}{2}|H|^2I_3\big): \nabla (|u|^{r-2}u)\text{d}x, \end{split} \end{equation} where $$ H_r=r|u|^{r-2}\big(\mu|\nabla u|^2+(\mu+\lambda)|\text{div}u|^2+\mu(r-2)\big| \nabla |u| \big|^2\big). $$ For any given $\epsilon_1\in (0,1)$ and $\epsilon_0\in (0,\frac{1}{4})$, we define a nonnegative function which will be determined in $\textbf{Step}\ 2$ as follows $$ \phi(\epsilon_0,\epsilon_1,r)=\left\{ \begin{array}{llll} \frac{\mu \epsilon_1(r-1)}{3\big(-\frac{(4-\epsilon_0)\mu}{3}-\lambda+\frac{r^2(\lambda+\mu)}{4(r-1)}\big)}, \quad \text{if}\quad \frac{r^2(\mu+\lambda)}{4(r-1)} -\frac{(4-\epsilon_0)\mu}{3}-\lambda>0, \\[12pt] \displaystyle 0,\quad \text{otherwise}. \end{array}\right. $$ $\textbf{Step}\ 1$: we assume that \begin{equation}\label{ghu} \begin{split} &\int_{\Omega \cap |u|>0} |u|^r \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2\text{d}x> \phi(\epsilon_0,\epsilon_1,r)\int_{\Omega \cap |u|>0} |u|^{r-2} \big| \nabla |u|\big|^2\text{d}x. \end{split} \end{equation} A direct calculation gives for $|u|>0$: \begin{equation}\label{popo} \begin{split} |\nabla u|^2=|u|^2\Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\big| \nabla |u|\big|^2, \end{split} \end{equation} which plays an important role in the proof. 
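For completeness, we include the one-line verification of (\ref{popo}) (a direct computation, valid wherever $|u|>0$): since $\partial_k\big(u^i/|u|\big)=\partial_k u^i/|u|-u^i\partial_k|u|/|u|^2$ and $\sum_i u^i\partial_k u^i=|u|\,\partial_k|u|$, we have
\begin{equation*}
|u|^2\Big|\nabla\Big(\frac{u}{|u|}\Big)\Big|^2
=\sum_{i,k}\Big(\partial_k u^i-\frac{u^i}{|u|}\partial_k|u|\Big)^2
=|\nabla u|^2-2\big|\nabla|u|\big|^2+\big|\nabla|u|\big|^2
=|\nabla u|^2-\big|\nabla|u|\big|^2.
\end{equation*}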
By (\ref{lz1}) and Cauchy's inequality, we have \begin{equation}\label{lz3} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\int_{\Omega \cap \{|u|>0\}}H_r\text{d}x\\ =&-r(r-2)(\mu+\lambda)\int_{\Omega \cap \{|u|>0\}} \text{div}u |u|^{\frac{r-2}{2}} |u|^{\frac{r-4}{2}}u\cdot \nabla |u|\text{d}x\\ &+\int_{\Omega} r P\text{div }(|u|^{r-2}u)\text{d}x-\int_{\Omega} r\big(H\otimes H-\frac{1}{2}|H|^2I_3\big): \nabla (|u|^{r-2}u)\text{d}x\\ \leq & r(\mu+\lambda)\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} |\text{div}u|^2\text{d}x +\frac{r(r-2)^2(\mu+\lambda)}{4}\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} \big| \nabla |u| \big|^2\text{d}x\\ &+\int_{\Omega} r P\text{div }(|u|^{r-2}u)\text{d}x-\int_{\Omega} r\big(H\otimes H-\frac{1}{2}|H|^2I_3\big): \nabla (|u|^{r-2}u)\text{d}x. \end{split} \end{equation} By H\"older's inequality, the Gagliardo-Nirenberg inequality and Young's inequality, we have \begin{equation}\label{zhu2s} \begin{split} J_1=&\int_{\Omega} r P\text{div }(|u|^{r-2}u)\text{d}x\\ \leq&Cr(r-1) \Big(\int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}|u|^{r-2}P^2\text{d}x\Big)^{\frac{1}{2}}\\ \leq & Cr(r-1)|P|_{\frac{12r}{4r+4}}\Big( \int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}\big(|u|^{\frac{r}{2}}\big)^6\text{d}x\Big)^{\frac{2(r-2)}{12r}}\\ \leq & Cr(r-1)\Big( \int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}\big( \nabla |u|^{\frac{r}{2}}\big)^2\text{d}x\Big)^{\frac{(r-2)}{2r}}\\ \leq& \frac{1}{2}\mu r \epsilon_0 \int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x+C(\mu, r,\epsilon_0),\\ J_2=&-\int_{\Omega} r\big(H\otimes H-\frac{1}{2}|H|^2I_3\big): \nabla (|u|^{r-2}u)\text{d}x\\ \leq& Cr(r-1)\Big(\int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}|u|^{r-2}|H|^4\text{d}x\Big)^{\frac{1}{2}}\\ \leq &Cr(r-1)|H^2|_{\frac{12r}{4r+4}}\Big( \int_{\Omega}|u|^{r-2}|\nabla
u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}\big(|u|^{\frac{r}{2}}\big)^6\text{d}x\Big)^{\frac{2(r-2)}{12r}}\\ \leq & Cr(r-1)\Big( \int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x\Big)^{\frac{1}{2}}\Big(\int_{\Omega}\big( \nabla |u|^{\frac{r}{2}}\big)^2\text{d}x\Big)^{\frac{(r-2)}{2r}}\\ \leq& \frac{1}{2} \mu r \epsilon_0 \int_{\Omega}|u|^{r-2}|\nabla u|^2\text{d}x+C(\mu, r,\epsilon_0), \end{split} \end{equation} where $\epsilon_0\in (0,\frac{1}{4})$ is independent of $r$. Then combining (\ref{popo})-(\ref{zhu2s}), we quickly have \begin{equation}\label{lz3kk} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\int_{\Omega \cap \{|u|>0\}} \mu r(1-\epsilon_0)|u|^{r-2}|\nabla |u| |^2\text{d}x\\ &+\int_{\Omega \cap \{|u|>0\}} \mu r(1-\epsilon_0)|u|^{r}\Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2\text{d}x+\int_{\Omega \cap \{|u|>0\}} \mu r(r-2)\big| \nabla |u| \big|^2\text{d}x\\ \leq & \frac{r(r-2)^2(\mu+\lambda)}{4}\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} \big| \nabla |u| \big|^2\text{d}x+C(\mu, r,\epsilon_0). \end{split} \end{equation} So according to (\ref{ghu}) and (\ref{lz3kk}), we obtain that \begin{equation}\label{lz4} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+rf(\epsilon_0,\epsilon_1, r)\int_{\Omega \cap \{|u|>0\}}|u|^{r-2}|\nabla |u||^2\text{d}x \leq C(\mu, r,\epsilon_0), \end{split} \end{equation} where \begin{equation}\label{xuchen} \begin{split} f(\epsilon_0,\epsilon_1, r)=\mu (1-\epsilon_0)\phi(\epsilon_0,\epsilon_1,r)+\mu(r-1-\epsilon_0)-\frac{(r-2)^2(\mu+\lambda)}{4}. 
\end{split} \end{equation} \textbf{Subcase} 1: if $3\in \Big\{ r\big| \frac{r^2(\mu+\lambda)}{4(r-1)}-\frac{(4-\epsilon_0)\mu}{3}-\lambda>0\Big\}$, i.e., $(5-8\epsilon_0)\mu<3\lambda$, it is easy to get $$[3,+\infty)\subset \Big\{ r\big| \frac{r^2(\mu+\lambda)}{4(r-1)}-\frac{(4-\epsilon_0)\mu}{3}-\lambda>0\Big\}.$$ Therefore, we have \begin{equation}\label{lzyue} \begin{split} \phi(\epsilon_0,\epsilon_1,r)= \frac{\mu \epsilon_1(r-1)}{3\big(-\frac{(4-\epsilon_0)\mu}{3}-\lambda+\frac{r^2(\lambda+\mu)}{4(r-1)}\big)} \end{split} \end{equation} for any $r\in [3,\infty)$. Substituting (\ref{lzyue}) into (\ref{xuchen}), for $r\in [3,\infty)$, we have \begin{equation}\label{xuchendd} \begin{split} f(\epsilon_0,\epsilon_1, r)=\frac{\mu^2 \epsilon_1(1-\epsilon_0)(r-1)}{3\big(-\frac{(4-\epsilon_0)\mu}{3}-\lambda+\frac{r^2(\lambda+\mu)}{4(r-1)}\big)}+\mu(r-1-\epsilon_0)-\frac{(r-2)^2(\mu+\lambda)}{4}. \end{split} \end{equation} For $(\epsilon_1,r)=(1,3)$, we have \begin{equation}\label{dulan} \begin{split} f(\epsilon_0, 1,3)=&\frac{16\mu^2(1-\epsilon_0)}{3\lambda-(5-8\epsilon_0)\mu}+\mu(2-\epsilon_0)-\frac{\mu+\lambda}{4} =-C_1(\lambda-a_1 \mu)(\lambda-a_2\mu), \end{split} \end{equation} then according to $\frac{(5-8\epsilon_0)\mu}{3}<\lambda$, we have $C_1=\frac{3}{4\big(3\lambda-(5-8\epsilon_0)\mu\big)}>0$ and \begin{equation}\label{dulann} \begin{split} a_1=&\frac{13-10\epsilon_0+2\sqrt{64+\epsilon^2_0-56\epsilon_0}}{3},\\ a_2=&\frac{13-10\epsilon_0-2\sqrt{64+\epsilon^2_0-56\epsilon_0}}{3}<0. \end{split} \end{equation} Then if we want to make sure that $f(\epsilon_0, 1,3)>0$, we have to assume that \begin{equation}\label{dulann1} \begin{split} \frac{(5-8\epsilon_0)\mu}{3}<\lambda< a_1\mu. \end{split} \end{equation} Since $a_1(0)=\frac{29}{3}$ and $a_1'(\epsilon_0)<0$ for $\epsilon_0\in (0,1/4)$, there must exist a sufficiently small $\epsilon_0\in (0,1/4)$ such that $a_1(\epsilon_0)=\frac{29-\alpha_\lambda}{3}$. Since $f(\epsilon_0,\epsilon_1, r)$ is continuous w.r.t.
$(\epsilon_1, r)$ over $[0,1]\times [3,+\infty)$, there exist $ \epsilon_1\in (0,1)$ and $r\in \big(3,\frac{7}{2}\big)$ such that $ f(\epsilon_0,\epsilon_1, r)\geq 0 $, which, together with (\ref{lz4})-(\ref{xuchen}), implies that \begin{equation}\label{lz411} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x \leq C,\quad \text{for}\quad r\in \big(3,7/2 \big). \end{split} \end{equation} \textbf{Subcase} 2: if $3\notin \Big\{ r\Big| \frac{r^2(\mu+\lambda)}{4(r-1)}-\frac{(4-\epsilon_0)\mu}{3}-\lambda>0\Big\}$, i.e., $(5-8\epsilon_0)\mu\geq 3\lambda$. In this case, for $r\in (3,\frac{7}{2})$, it is easy to get \begin{equation}\label{peng} \begin{split} &r\Big[\mu (1-\epsilon_0)\phi(\epsilon_0,\epsilon_1,r)+\mu(r-1-\epsilon_0)-\frac{(r-2)^2(\mu+\lambda)}{4}\Big]\\ >&3\Big(\frac{7}{4}\mu-\frac{9(\mu+\lambda)}{16}\Big)=3\Big(\frac{19\mu}{16}-\frac{9\lambda}{16}\Big) \geq 3\Big(\frac{19\mu}{16}-\frac{3(5-8\epsilon_0)\mu}{16}\Big)>\frac{1}{4}\mu, \end{split} \end{equation} which, together with (\ref{lz4})-(\ref{xuchen}), implies that \begin{equation}\label{lz422} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\frac{1}{4}\mu\int_{\Omega \cap \{|u|>0\}}|u|^{r-2}\big| \nabla |u| \big|^2\text{d}x\leq C,\quad \text{for}\quad r\in \big(3, 7/2 \big). \end{split} \end{equation} $\textbf{Step}\ 2$: we assume that \begin{equation}\label{ghu11} \begin{split} &\int_{\Omega \cap |u|>0} |u|^r \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2\text{d}x\leq \phi(\epsilon_0,\epsilon_1,r)\int_{\Omega \cap |u|>0} |u|^{r-2} \big| \nabla |u|\big|^2\text{d}x. \end{split} \end{equation} A direct calculation gives, for $|u|>0$, \begin{equation}\label{ghu22} \begin{split} \text{div}u=|u|\text{div}\Big(\frac{u}{|u|}\Big)+\frac{u\cdot \nabla |u|}{|u|}.
\end{split} \end{equation} Then combining (\ref{ghu22}) and (\ref{lz3})-(\ref{zhu2s}), we obtain \begin{equation}\label{lz77} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\int_{\Omega \cap \{|u|>0\}}\mu r(1-\epsilon_0)|u|^{r-2}|\nabla u|^2\text{d}x\\ &+\int_{\Omega \cap \{|u|>0\}}r(\lambda+\mu) |u|^{r-2}|\text{div}u|^2\text{d}x+\int_{\Omega \cap \{|u|>0\}}\mu r(r-2)|u|^{r-2}\big| \nabla |u| \big|^2\text{d}x\\ =&-r(r-2)(\mu+\lambda)\int_{\Omega \cap \{|u|>0\}} \Big( |u|^{r-2} u\cdot \nabla |u| \text{div}\Big(\frac{u}{|u|}\Big)+ |u|^{r-4} |u\cdot \nabla |u| |^2\Big)\text{d}x. \end{split} \end{equation} This gives \begin{equation}\label{lz88} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x+\int_{\Omega \cap \{|u|>0\}}r|u|^{r-4}G\text{d}x\leq C(\mu,r,\epsilon_0), \end{split} \end{equation} where \begin{equation}\label{wang1} \begin{split} G=&\mu (1-\epsilon_0)|u|^{2} |\nabla u|^2+(\mu+\lambda)|u|^{2}|\text{div}u|^2+\mu(r-2)|u|^{2}\big| \nabla |u| \big|^2\\ &+(r-2)(\mu+\lambda)|u|^{2} u\cdot \nabla |u|\text{div}\Big(\frac{u}{|u|}\Big)+(r-2)(\mu+\lambda)|u \cdot \nabla |u||^2. \end{split} \end{equation} Now we consider how to ensure that $G\geq 0$.
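Recall that the decomposition (\ref{ghu22}) used above is nothing but the product rule applied to $u=|u|\cdot\frac{u}{|u|}$ on the set $\{|u|>0\}$:
\begin{equation*}
\text{div}u=\text{div}\Big(|u|\,\frac{u}{|u|}\Big)=|u|\,\text{div}\Big(\frac{u}{|u|}\Big)+\frac{u}{|u|}\cdot\nabla|u|.
\end{equation*}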
\begin{equation}\label{wang2mm} \begin{split} G=&\mu (1-\epsilon_0)|u|^{2} \Big( |u|^2\Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\big| \nabla |u|\big|^2\Big)+(\mu+\lambda)|u|^{2}|\Big(|u|\text{div}\Big(\frac{u}{|u|}\Big)+\frac{u\cdot \nabla |u|}{|u|}\Big)^2\\ &+\mu(r-2)|u|^{2}\big| \nabla |u| \big|^2+(r-2)(\mu+\lambda)|u|^{2} u\cdot \nabla |u|\text{div}\Big(\frac{u}{|u|}\Big)\\ &+(r-2)(\mu+\lambda)|u \cdot \nabla |u||^2\\ =&\mu(1-\epsilon_0) |u|^{4} \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\mu(r-1-\epsilon_0)|u|^2\big| \nabla |u| \big|^2+(r-1)(\mu+\lambda)|u \cdot \nabla |u||^2\\ &+r(\mu+\lambda)|u|^{2} u\cdot \nabla |u|\text{div}\Big(\frac{u}{|u|}\Big)+(\mu+\lambda)|u|^{4}\Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2\\ =&\mu(1-\epsilon_0) |u|^{4} \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\mu(r-1-\epsilon_0)|u|^{2}\big| \nabla |u| \big|^2\\ &+(r-1)(\mu+\lambda)\Big(u \cdot \nabla |u|+\frac{r}{2(r-1)}|u|^{2}\Big(\text{div}\frac{u}{|u|}\Big)\Big)^2\\ &+(\mu+\lambda)|u|^{4}\Big(\text{div}\frac{u}{|u|}\Big)^2-\frac{r^2(\mu+\lambda)}{4(r-1)}|u|^{4}\Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2, \end{split} \end{equation} which, combining with the fact $$ \Big|\text{div}\Big(\frac{u}{|u|}\Big)\Big|^2\leq 3\Big|\nabla \Big(\frac{u}{|u|}\Big)\Big|^2, $$ implies that \begin{equation}\label{wang3} \begin{split} G\geq& \mu(1-\epsilon_0) |u|^{4} \Big| \nabla \Big(\frac{u}{|u|}\Big)\Big|^2+\mu(r-1-\epsilon_0)|u|^{2}\big| \nabla |u| \big|^2\\ &+\Big(\mu+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big)|u|^{4}\Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2\\ \geq &\frac{\mu(1-\epsilon_0)}{3} |u|^{4} \Big| \text{div}\Big(\frac{u}{|u|}\Big)\Big|^2+\mu(r-1-\epsilon_0)|u|^{2}\big| \nabla |u| \big|^2\\ &+\Big(\mu+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big)|u|^{4}\Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2\\ \geq &\mu(r-1-\epsilon_0)|u|^{2}\big| \nabla |u| 
\big|^2+\Big(\frac{(4-\epsilon_0)\mu}{3}+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big)|u|^{4}\Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2. \end{split} \end{equation} Thus \begin{equation}\label{wang4} \begin{split} &\int_{\Omega \cap \{|u|>0\}} r|u|^{r-4} G\text{d}x\\ \geq & r \Big(\frac{(4-\epsilon_0)\mu}{3}+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big) \int_{\Omega \cap \{|u|>0\}} |u|^{r} \Big(\text{div}\Big(\frac{u}{|u|}\Big)\Big)^2\text{d}x\\ &+\mu r(r-1-\epsilon_0)\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} \big| \nabla |u| \big|^2\text{d}x\\ \geq & 3r \Big(\frac{(4-\epsilon_0)\mu}{3}+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big)\phi(\epsilon_0,\epsilon_1,r)\int_{\Omega \cap \{|u|>0\}}|u|^{r-2} \big| \nabla |u| \big|^2\text{d}x\\ &+\mu r(r-1-\epsilon_0)\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} \big| \nabla |u| \big|^2\text{d}x\\ \geq & g(\epsilon_0,\epsilon_1,r)\int_{\Omega \cap \{|u|>0\}} |u|^{r-2} \big| \nabla |u| \big|^2\text{d}x, \end{split} \end{equation} where \begin{equation}\label{xuchendd} \begin{split} g(\epsilon_0,\epsilon_1,r)=3r \Big(\frac{(4-\epsilon_0)\mu}{3}+\lambda-\frac{r^2(\mu+\lambda)}{4(r-1)}\Big)\phi(\epsilon_0,\epsilon_1,r)+\mu r(r-1-\epsilon_0). \end{split} \end{equation} Here we need that $\epsilon_0$ is sufficiently small such that $\epsilon_0<(r-1)(1-\epsilon_1)$. Then combining (\ref{lz88}) and (\ref{wang4})-(\ref{xuchendd}), we quickly have \begin{equation}\label{lz4mm} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x \leq C,\quad \text{for some}\quad r\in \big(3,7/2\big). \end{split} \end{equation} So combining (\ref{lz411})-(\ref{lz422}) and (\ref{lz4mm}) for \textbf{Step} 1 and \textbf{Step} 2, we conclude that if $3\lambda <(29-\alpha_\lambda)\mu$, there exists a constant $C>0$ such that \begin{equation}\label{lz4nn} \begin{split} & \frac{d}{dt}\int_{\Omega} \rho |u|^r\text{d}x \leq C,\quad \text{for some}\quad r\in \big(3,7/2\big).
\end{split} \end{equation} \end{proof} Now for each $t\in [0,T)$, we denote $v(t,x)=(-L)^{-1} \text{div} A$ and $$ A=PI_3-\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big), $$ that is, $v$ is the solution of \begin{equation}\label{diyi} \begin{cases} \mu \triangle v+(\lambda+\mu)\nabla \text{div} v=\text{div} A \quad \text{in}\quad \Omega,\\[8pt] v(t,x)=0 \quad \text{on} \quad \partial \Omega. \end{cases} \end{equation} From Lemma \ref{tvd1}, for any $l\in (1,+\infty)$, there exists a constant $C$ independent of $t$ such that \begin{equation}\label{diy2} \begin{cases} |\nabla v(t)|_l\leq C(|\rho(t)|_l+|H(t)|_l),\\[8pt] |\nabla^2 v(t)|_l\leq C(|\nabla \rho(t)|_l+|\nabla H (t)|_l). \end{cases} \end{equation} Now let us introduce an important quantity: $$ w=u-v.$$ As will be seen below, $w$ enjoys better regularity than $u$, provided that $(H,\rho)$ is bounded from above. First, we have \begin{lemma}\label{sk4} \begin{equation*} \begin{split} |\nabla w(t)|^2_{ 2}+\int_0^T( |\nabla^2 w|^2_2+|\sqrt{\rho}w_t|^2_2)\text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} \begin{proof} First, from the momentum equations $(\ref{eq:1.2})_4$, we find that $w$ satisfies \begin{equation}\label{zhu6} \begin{cases} \rho w_t -\mu \triangle w-(\lambda+\mu)\nabla \text{div}w=\rho F,\\ w(t,x)=0 \quad \text{on} \quad [0,T)\times \partial \Omega, \ w(0,x)=w_0(x), \quad \text{in }\ \Omega, \end{cases} \end{equation} with $w_0(x)=u_0(x)-v_0(x)$, where $v_0=v(0,\cdot)$, and \begin{equation*} \begin{split} F=&-u\cdot \nabla u+L^{-1} \text{div} A_t\\ =&-u\cdot \nabla u-L^{-1} \nabla \text{div}(Pu)-(\gamma-1) L^{-1}\nabla (P\text{div}u)\\ &-L^{-1}\text{div}(H_t\otimes H+H\otimes H_t )+L^{-1} \nabla (H\cdot H_t)=\sum_{i=1}^{5} L_i.
\end{split} \end{equation*} Multiplying the equations in (\ref{zhu6}) by $w_t$ and integrating the resulting equation over $\Omega$, from H\"older's inequality, we have \begin{equation}\label{gaibian} \begin{split} &\frac{d}{dt}\int_{\Omega} \big(\mu|\nabla w|^2+(\lambda+\mu)|\text{div}w|^2\big)\text{d}x +\int_{\Omega}\rho |w_t|^2 \text{d}x\\ =&\int_{\Omega} \rho F\cdot w_t \text{d}x \leq C|\sqrt{\rho}F|^2_2+\frac{1}{2}\int_{\Omega}\rho |w_t|^2 \text{d}x, \end{split} \end{equation} which means that \begin{equation}\label{gai11} \begin{split} &\frac{d}{dt}\int_{\Omega} \big(\mu|\nabla w|^2+(\lambda+\mu)|\text{div}w|^2\big)\text{d}x +\int_{\Omega}\rho |w_t|^2 \text{d}x \leq C\sum_{i=1}^{5} |\sqrt{\rho} L_i|^2_2. \end{split} \end{equation} Next we need to consider the terms $|\sqrt{\rho} L_i|_2$ for $i=1,2,...,5$. From Lemma \ref{abs3} and (\ref{diy2}), it follows that \begin{equation}\label{yue5} \begin{split} |\sqrt{\rho}L_1|_2=& |-\sqrt{\rho}u \cdot \nabla u|_2 \leq C |\sqrt{\rho}u |_r |\nabla u|_{\frac{2r}{r-2}}\\ \leq& C\Big( |\nabla w|_{\frac{2r}{r-2}}+|\nabla v|_{\frac{2r}{r-2}}\Big) \leq C(\epsilon) |\nabla w|_2+\epsilon | w|_{D^2}+C, \end{split} \end{equation} where we have used the interpolation inequality $$ |f|_p\leq C(\epsilon)|f|_2+\epsilon |\nabla f|_2,\quad 2\leq p <6. $$ According to Lemmas \ref{s2}-\ref{abs3}, we obtain \begin{equation}\label{kkll} \begin{split} |\sqrt{\rho}L_2|_2=& |-\sqrt{\rho}L^{-1} \nabla \text{div}(Pu)|_2\leq C|Pu|_2\leq C|\sqrt{\rho}u|_2\leq C,\\ |\sqrt{\rho}L_3|_2=& |-(\gamma-1)\sqrt{\rho}L^{-1} \nabla (P\text{div}u)|_2\\ \leq&C |\sqrt{\rho}|_3|L^{-1} \nabla (P\text{div}u)|_6\\ \leq & C |\nabla L^{-1} \nabla (P\text{div}u)|_2\leq C |P\text{div}u |_2\leq C|\nabla u|_2. \end{split} \end{equation} Now we consider the term $\mathbb{B}=(b^{(i,j)})_{(3\times 3)}=H_t\otimes H+H\otimes H_t$. 
Due to Lemma \ref{liu6}, \begin{equation}\label{bang1} H_t=H \cdot \nabla u-u \cdot \nabla H-H\text{div}u, \end{equation} we get \begin{equation}\label{yue66} \begin{split} b^{(i,j)}=&H^j\big(H^k\partial_k u^i-u^k\partial_k H^i-H^i\partial_k u^k\big)\\ &+H^i\big(H^k\partial_k u^j-u^k\partial_k H^j-H^j\partial_k u^k\big)\\ =&H^iH^k\partial_k u^j+H^jH^k\partial_k u^i-H^i H^j\partial_k u^k-\partial_k(H^iH^j u^k), \end{split} \end{equation} which means that \begin{equation}\label{yue67} \begin{split} \mathbb{B}=&(H^iH^k\partial_k u^j+H^jH^k\partial_k u^i-H^i H^j\partial_k u^k)_{(3\times 3)}-\text{div}\big( (H\otimes H) \otimes u\big) =\mathbb{B}_1+\mathbb{B}_2. \end{split} \end{equation} Then we have \begin{equation}\label{kksd} \begin{split} |\sqrt{\rho}L_4|_2=& |\sqrt{\rho}L^{-1}\text{div}(H_t\otimes H+H\otimes H_t )|_2\\ \leq& |\sqrt{\rho}L^{-1} \text{div}\mathbb{B}_1|_2+ |\sqrt{\rho}L^{-1} \text{div}\mathbb{B}_2 |_2\\ \leq&C |\sqrt{\rho}|_3|L^{-1} \text{div}\mathbb{B}_1|_6+|\sqrt{\rho}L^{-1} \text{div}\text{div}\big( (H\otimes H)\otimes u\big) |_2\\ \leq & C |\nabla L^{-1} \text{div}\mathbb{B}_1|_2+C|\nabla u|_2\leq C|\nabla u|_2. \end{split} \end{equation} Similarly, we consider the term $\mathbb{C}=H\cdot H_t$. Due to (\ref{bang1}), we obtain \begin{equation}\label{yue68} \begin{split} \mathbb{C}=& H\cdot (H \cdot \nabla u-u \cdot \nabla H-H\text{div}u)\\ =&\Big(H\cdot \nabla u\cdot H-\frac{1}{2}|H|^2\text{div}u\Big)-\frac{1}{2}\text{div}(u|H|^2) =\mathbb{C}_1+\mathbb{C}_2, \end{split} \end{equation} which, together with the Poincar\'{e} inequality, implies that \begin{equation}\label{kks} \begin{split} |\sqrt{\rho}L_5|_2=& |\sqrt{\rho}L^{-1} \nabla (H\cdot H_t)|_2\\ \leq& |\sqrt{\rho}L^{-1} \nabla \mathbb{C}_1|_2+ |\sqrt{\rho}L^{-1} \nabla \mathbb{C}_2 |_2\\ \leq&C |\sqrt{\rho}|_3|L^{-1} \nabla \mathbb{C}_1|_6+|\sqrt{\rho}L^{-1} \nabla \text{div}(u|H|^2) |_2\\ \leq & C |\nabla L^{-1} \nabla \mathbb{C}_1|_2+C|\nabla u|_2\leq C|\nabla u|_2.
\end{split} \end{equation} Combining (\ref{yue5})-(\ref{kks}), we have \begin{equation}\label{kksk} \begin{split} |\sqrt{\rho}F|^2_2\leq \epsilon|\nabla^2 w|^2_2+C(\epsilon)(1+|\nabla w|^2_2+|\nabla u|^2_2). \end{split} \end{equation} Then from Lemma \ref{tvd1} and (\ref{zhu6}), we have \begin{equation}\label{kksu} \begin{split} |\nabla^2 w|^2_2\leq C(|\rho w_t|^2_2+|\rho F|^2_2)\leq C(|\sqrt{\rho} w_t|^2_2+|\sqrt{\rho} F|^2_2), \end{split} \end{equation} which implies, by taking $\epsilon=\frac{1}{3C}$ in (\ref{kksk}), that \begin{equation}\label{kksw} \begin{split} |\sqrt{\rho}F|^2_2\leq \frac{1}{2}|\sqrt{\rho} w_t|^2_2 +C(1+|\nabla w|^2_2+|\nabla u|^2_2). \end{split} \end{equation} Substituting (\ref{kksw}) into (\ref{gai11}), from Gronwall's inequality, the desired conclusions can be obtained. \end{proof} Finally, according to the estimates obtained in (\ref{diy2}) and Lemmas \ref{abs3}-\ref{sk4}, we get \begin{lemma}\label{sk4ss} \begin{equation*} \begin{split} |\nabla u(t)|^2_{ 2}+\int_0^T |\nabla u|^2_q\text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} Next, we will give higher-order regularity estimates for $w$. This is possible if the initial data $(H_0,\rho_0,u_0, P_0)$ satisfies the compatibility condition (\ref{th79}). First, for a function or vector field (or even a $3 \times 3$ matrix) $f(t,x)$, the material derivative $\dot{f}$ is defined by: $$ \dot{f}=f_t+u\cdot \nabla f=f_t+\text{div}(fu)-f\text{div}u.
$$ \begin{lemma}[\textbf{Lower order estimate of the velocity $u$}]\label{ablem:4-1}\ \\ \begin{equation*} \begin{split} |w(t)|^2_{D^2}+|\sqrt{\rho} \dot{u}(t)|^2_{2} + \int_{0}^{T} |\dot{u}|^2_{D^1}\text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} \begin{proof} We will follow an idea due to Hoff \cite{hoff1}. Applying $\dot{u}[\partial / \partial t+\text{div}(u\cdot)]$ to $(\ref{eq:1.2})_4$ and integrating by parts gives \begin{equation}\label{bzhen4} \begin{split} &\frac{1}{2}\frac{d}{dt}\int_{\Omega}\rho |\dot{u}|^2 \text{d}x\\ =& -\int_{\Omega} \dot{u}\cdot \big(\nabla P_t+\text{div}(\nabla P\otimes u)\big)\text{d}x+\mu\int_{\Omega} \dot{u}\cdot \big(\triangle u_t+\text{div}(\triangle u\otimes u)\big)\text{d}x\\ &+ (\lambda+\mu)\int_{\Omega} \dot{u}\cdot \big(\nabla \text{div}u_t+\text{div}(\nabla \text{div}u\otimes u)\big) \text{d}x\\ &+\int_{\Omega} \dot{u} \cdot \Big(\text{div}\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big)_t+\text{div}\Big(\text{div}\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big)\otimes u\Big)\Big)\text{d}x\\ =&:\sum_{i=6}^{8}L_i+\Lambda.
\end{split} \end{equation} According to Lemmas \ref{s2}-\ref{sk4ss}, H\"older's inequality, Gagliardo-Nirenberg inequality and Young's inequality, we deduce that \begin{equation}\label{zhou6} \begin{split} L_6=& -\int_{\Omega}\big(\dot{u}\cdot \big(\nabla P_t+\text{div}(\nabla P\otimes u)\big)\big)\text{d}x\\ =& \int_{\Omega}\big(\partial_j\dot{u}^j P_t+\partial_k\dot{u}^j\partial_j Pu^k\big)\text{d}x\\ =& \int_{\Omega}\big(-\partial_j \dot{u}^ju^k\partial_k P-\gamma P\text{div}u\partial_j\dot{u}^j +\partial_k\dot{u}^j\partial_j Pu^k\big)\text{d}x\\ =& \int_{\Omega}\big(-\gamma P\text{div}u\partial_j\dot{u}^j +P\partial_k(\partial_j \dot{u}^ju^k)-P\partial_j(\partial_k \dot{u}^ju^k)\big)\text{d}x\\ \leq& C|\nabla \dot{u}|_{2}|\nabla u|_{2} \leq \epsilon |\nabla \dot{u}|^2_{2} +C(\epsilon) |\nabla u|^2_2,\\ L_7=& \int_{\Omega}\mu\big(\dot{u}\cdot \big(\triangle u_t+\text{div}(\triangle u\otimes u)\big)\big)\text{d}x\\ =& -\int_{\Omega}\mu\big(\partial_i\dot{u}^j\partial_i u^j_t+\triangle u^j u\cdot \nabla \dot{u}^j\big)\text{d}x\\ =& -\int_{\Omega}\mu\big(|\nabla \dot{u}|^2- \partial_i \dot{u}^j u^k\partial_k \partial_i u^j- \partial_i \dot{u}^j \partial_i u^k\partial_k u^j+\triangle u^j u\cdot \nabla \dot{u}^j\big)\text{d}x\\ =& -\int_{\Omega}\mu\big(|\nabla \dot{u}|^2- \partial_i \dot{u}^j \partial_ku^k \partial_i u^j- \partial_i \dot{u}^j \partial_i u^k\partial_k u^j-\partial_iu^j \partial_i u^k \partial_k \dot{u}^j\big)\text{d}x\\ \leq &-\frac{\mu}{2}|\nabla \dot{u}|^2_{2}+C|\nabla u|^4_4, \end{split} \end{equation} where $\epsilon>0$ is a sufficiently small constant. Similarly, we have \begin{equation}\label{yueyue} \begin{split} L_8=& (\lambda+\mu)\int_{\Omega} \big(\dot{u}\cdot \big(\nabla \text{div}u_t+\text{div}(\nabla \text{div}u\otimes u\big)\big)\text{d}x \leq -\frac{\mu+\lambda}{2}|\nabla \dot{u}|^2_2 +C |\nabla u|^4_4. 
\end{split} \end{equation} Next we begin to consider the magnetic term $\Lambda$ \begin{equation*} \begin{split} \Lambda=& \int_{\Omega} \dot{u} \cdot \Big(\text{div}\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big)_t+\text{div}\Big(\text{div}\Big(H\otimes H-\frac{1}{2}|H|^2I_3\Big)\otimes u\Big)\Big)\text{d}x\ = \sum_{j=1}^4 \Lambda_j. \end{split} \end{equation*} Via the magnetic equations $(\ref{eq:1.2})_1$ and integrating by parts, we obtain that \begin{equation}\label{yueyue12} \begin{split} \Lambda_1=& \int_{\Omega} \dot{u} \cdot \text{div}\big(H\otimes H\big)_t\text{d}x= \int_{\Omega} \big(H\otimes H\big)_t : \nabla \dot{u} \text{d}x\\ =& \int_{\Omega} \big(H\otimes H_t+H_t\otimes H): \nabla \dot{u} \text{d}x\\ =& \int_{\Omega}H \otimes\big( H\cdot \nabla u-u \cdot \nabla H-H\text{div}u\big): \nabla \dot{u}\text{d}x\\ &+ \int_{\Omega} \big(H \cdot \nabla u-u \cdot \nabla H-H\text{div}u\big)\otimes H: \nabla \dot{u}\text{d}x\\ =& \int_{\Omega}H \otimes\big( H\cdot \nabla u-H\text{div}u\big): \nabla \dot{u}\text{d}x+ \int_{\Omega} \big(H \cdot \nabla u-H\text{div}u\big)\otimes H: \nabla \dot{u}\text{d}x\\ &+\int_{\Omega} \Big(-H\otimes(u \cdot \nabla H)-(u \cdot \nabla H)\otimes H)\Big): \nabla \dot{u}\text{d}x=\Lambda_{11}+\Lambda_{12}+\Lambda_{13},\\ \end{split} \end{equation} \begin{equation}\label{yueyue13} \begin{split} \Lambda_2=& \int_{\Omega} \dot{u} \cdot \text{div}\Big(-\frac{1}{2}|H|^2I_3\Big)_t\text{d}x= \int_{\Omega} \Big(-\frac{1}{2}|H|^2I_3\Big)_t : \nabla \dot{u} \text{d}x\\ =& \int_{\Omega} -(H\cdot H_t I_3): \nabla \dot{u} \text{d}x\\ =& \int_{\Omega}-\big(H \cdot ( H\cdot \nabla u-u \cdot \nabla H-H\text{div}u)I_3\big): \nabla \dot{u}\text{d}x\\ =& \int_{\Omega}-\big(H \cdot\big( H\cdot \nabla u-H\text{div}u\big)I_3\big): \nabla \dot{u}\text{d}x+ \int_{\Omega}\big(H \cdot\big( u \cdot \nabla H\big)I_3\big): \nabla \dot{u}\text{d}x\\ =&\Lambda_{21}+\Lambda_{22},\\ \Lambda_3=& \int_{\Omega} \dot{u} \cdot \text{div}\big(\text{div} \big(H\otimes H\big) 
\otimes u\big) \text{d}x = \int_{\Omega} \text{div} \big(H\otimes H\big) \otimes u: \nabla \dot{u}\text{d}x\\ =& \int_{\Omega} (H\cdot \nabla H)\otimes u: \nabla \dot{u}\text{d}x = \int_{\Omega} H^k \partial_kH^i u^j \partial_j \dot{u}^i\text{d}x\\ =&- \int_{\Omega} H^k H^i \partial_k u^j\partial_j\dot{u}^i\text{d}x - \int_{\Omega} H^k H^i u^j \partial_{kj}\dot{u}^i\text{d}x =\Lambda_{31}+\Lambda_{32},\\ \Lambda_4=& \int_{\Omega} \dot{u} \cdot \text{div}\Big(\text{div}\Big(-\frac{1}{2}|H|^2I_3\Big)\otimes u\Big)\text{d}x\\ =&\int_{\Omega}\text{div}\Big(-\frac{1}{2}|H|^2I_3\Big)\otimes u:\nabla \dot{u} \text{d}x =\int_{\Omega}-H^k\partial_iH^ku^j\partial _j \dot{u}^i \text{d}x\\ =&\frac{1}{2}\int_{\Omega} |H^k|^2 \partial_iu^j\partial _j\dot{u}^i \text{d}x+\frac{1}{2}\int_{\Omega} |H^k|^2 u^j\partial _{ij}\dot{u}^i \text{d}x=\Lambda_{41}+\Lambda_{42}, \end{split} \end{equation} where we have used the fact that $\text{div}H=0$. Now we observe that \begin{equation}\label{yueyue15} \begin{split} \Lambda_{13}+\Lambda_{32} =&\int_{\Omega} \Big(-H\otimes(u \cdot \nabla H)-(u \cdot \nabla H)\otimes H)\Big): \nabla \dot{u}\text{d}x- \int_{\Omega} H^k H^i u^j \partial_{kj}\dot{u}^i\text{d}x\\ =& \int_{\Omega} \Big(- H^i u^k \partial_{k}H^j \partial_{j}\dot{u}^i-u^k \partial_{k}H^i H^j \partial_{j}\dot{u}^i-H^k H^i u^j \partial_{kj}\dot{u}^i\Big)\text{d}x\\ =& \int_{\Omega} \Big(\partial_{k}u^k H^i H^j \partial_{j}\dot{u}^i+u^k H^j\partial_{k}H^i \partial_{j}\dot{u}^i+H^j H^i u^k \partial_{kj}\dot{u}^i\Big)\text{d}x\\ &+ \int_{\Omega} \Big(-u^k H^j\partial_{k}H^i \partial_{j}\dot{u}^i-H^k H^i u^j \partial_{kj}\dot{u}^i\Big)\text{d}x\\ =& \int_{\Omega} \partial_{k}u^k H^i H^j \partial_{j} \dot{u}^i \text{d}x,\\ \Lambda_{22}+\Lambda_{42} =&\int_{\Omega}\big(H \cdot\big( u \cdot \nabla H\big)I_3\big): \nabla \dot{u}\text{d}x+\frac{1}{2}\int_{\Omega} |H^k|^2 u^j\partial _{ij}\dot{u}^i \text{d}x\\ =&\int_{\Omega} \Big( H^ku^l \partial_l H^k \text{div}\dot{u}+ \frac{1}{2} |H^k|^2 
u^j \partial _{ij} \dot{u}^i\Big) \text{d}x\\ =&\int_{\Omega} \Big( -\frac{1}{2}u^j |H^k|^2 \partial_{ij}\dot{u}^i-\frac{1}{2}\partial_ju^j |H^k|^2 \partial_{i}\dot{u}^i+\frac{1}{2} |H^k|^2 u^j \partial _{ij} \dot{u}^i\Big) \text{d}x\\ =&-\frac{1}{2}\int_{\Omega} \partial_ju^j |H^k|^2 \partial_{i}\dot{u}^i \text{d}x, \end{split} \end{equation} which, together with (\ref{yueyue12})-(\ref{yueyue13}), implies that \begin{equation}\label{yueyue18} \begin{split} \Lambda \leq C|H|^2_{\infty} |\nabla u|_2|\nabla \dot{u}|_2\leq \epsilon|\nabla \dot{u}|^2_2+ C(\epsilon)|\nabla u|^2_2. \end{split} \end{equation} Due to the definition of $w$, we know that $w$ satisfies \begin{equation}\label{mou5k} \begin{split} \mu \triangle w+(\lambda+\mu)\nabla \text{div}w=\rho \dot{u}\quad \text{in} \ \Omega, \end{split} \end{equation} with the zero boundary condition. From Lemma \ref{tvd1}, we have \begin{equation}\label{mou5s} |w|_{D^2}\leq C|\rho \dot{u}|_2\leq C|\sqrt{\rho} \dot{u}|_2, \end{equation} which, together with (\ref{bzhen4})-(\ref{yueyue18}) and letting $\epsilon>0$ be sufficiently small, implies that \begin{equation}\label{mou5} \begin{split} &\frac{1}{2}\frac{d}{dt} \int_{\Omega}\rho |\dot{u}|^2 \text{d}x+|\dot{u}|^2_{D^1} \leq C|\nabla u|^4_4+C\\ \leq & C|\nabla u|_2 |\nabla u|^3_6+C\leq C|\nabla u|^2_6(|\nabla w|_6+|\nabla v|_6)+C\\ \leq & C|\nabla u|^2_6(1+|\nabla^2 w|_2)+C \leq C(1+|\nabla u|^2_6)(1+|\sqrt{\rho}\dot{u}|_2). \end{split} \end{equation} Then from Gronwall's inequality, we have \begin{equation}\label{mou999} \begin{split} \int_{\Omega}\rho |\dot{u}|^2(t) \text{d}x+\int_0^t|\dot{u}|^2_{D^1}\text{d}s\leq & C,\quad \text{for} \quad 0\leq t\leq T.
\end{split} \end{equation} \end{proof} According to Lemmas \ref{abs3}-\ref{ablem:4-1} and using equation (\ref{mou5k}) again, we deduce \begin{lemma}\label{sk4zz} \begin{equation*} \begin{split} |\nabla w|_{ L^2([0,T];L^{\infty}(\Omega))}+|\nabla^2 w|_{ L^2([0,T];L^{q}(\Omega))}\leq C, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} Finally, the following lemma gives bounds on $|\nabla \rho|_q$, $|\nabla H|_q$ and $|\nabla^2 u|_q$. \begin{lemma}\label{s7} \begin{equation}\label{zhu54} \begin{split} &\|(\rho,H,P)(t)\|_{W^{1,r}}+|(\rho_t,H_t, P_t)(t)|_r\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation} where $r\in[2,q]$, $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any \ T \in (0,\overline{T}])$. \end{lemma} \begin{proof} In the following estimates we will use (from (\ref{diyi})-(\ref{diy2}) and (\ref{tvd9})-(\ref{tvd99})) \begin{equation}\label{zhu55a} \begin{split} |\nabla^2 v|_q \leq& C(|\nabla \rho |_q+|\nabla H|_q),\\ |\nabla v|_{\infty}\leq& C\big(1+|\nabla v|_{BMO(\Omega)}\ln (e+|\nabla^2v|_q)\big)\\ \leq& C\big(1+(|\rho|_{L^2\cap L^\infty}+|H|_{L^2\cap L^\infty})\ln (e+|\nabla \rho|_q+|\nabla H|_q)\big)\\ \leq& C\big(1+\ln (e+|\nabla \rho|_q+|\nabla H|_q)\big). \end{split} \end{equation} First, applying $\nabla$ to $(\ref{eq:1.2})_3$ and multiplying the resulting equations by $q|\nabla \rho|^{q-2} \nabla \rho$, we have \begin{equation}\label{zhu20cccc} \begin{split} &(|\nabla \rho|^q)_t+\text{div}(|\nabla \rho|^qu)+(q-1)|\nabla \rho|^q\text{div}u\\ =&-q |\nabla \rho|^{q-2}(\nabla \rho)^\top D( u) (\nabla \rho)-q \rho|\nabla \rho|^{q-2} \nabla \rho \cdot \nabla \text{div}u.
\end{split} \end{equation} Then integrating (\ref{zhu20cccc}) over $\Omega$, we immediately obtain \begin{equation}\label{zhu200} \begin{split} \frac{d}{dt}|\nabla \rho|^q_q \leq& C|D( u)|_\infty|\nabla \rho|^q_q+C|\nabla^2 u|_q|\nabla \rho|^{q-1}_q\\ \leq & C(|\nabla w|_\infty+|\nabla v|_\infty) |\nabla \rho|^q_q+C(|\nabla^2 w|_q+|\nabla^2 v|_q)|\nabla \rho|^{q-1}_q. \end{split} \end{equation} Second, applying $\nabla$ to $(\ref{eq:1.2})_1$ and multiplying the resulting equations by $q\nabla H |\nabla H|^{q-2}$, we have \begin{equation}\label{zhu20qs} \begin{split} &(|\nabla H|^q)_t-qA:\nabla H|\nabla H|^{q-2}+q B: \nabla H|\nabla H|^{q-2}+qC : \nabla H|\nabla H|^{q-2}=0, \end{split} \end{equation} where \begin{equation}\label{mou6ll} \begin{split} A=\nabla (H\cdot \nabla u)=&(\partial_j H \cdot \nabla u^i)_{(ij)}+(H\cdot \nabla \partial_j u^i)_{(ij)},\\ B=\nabla (u \cdot \nabla H)=&(\partial_j u\cdot \nabla H^i)_{(ij)}+ (u\cdot \nabla \partial_j H^i)_{(ij)},\\ C=\nabla(H \text{div}u)=&\nabla H \text{div}u+H \otimes \nabla \text{div}u.
\end{split} \end{equation} Then integrating (\ref{zhu20qs}) over $\Omega$, due to \begin{equation}\label{btbt} \begin{split} &\int_{\Omega} A: \nabla H |\nabla H|^{q-2}\text{d}x \leq C|\nabla u|_\infty |\nabla H|^q_q+C|H|_\infty|\nabla H|^{q-1}_q |u|_{D^{2,q}},\\ &\int_{\Omega} B: \nabla H|\nabla H|^{q-2} \text{d}x\\ =&\int_{\Omega} \sum_{i,j,k} \partial_j u^k \partial_k H^i \partial_j H^i|\nabla H|^{q-2} \text{d}x+\int_{\Omega} \sum_{i,j,k} u^k (\partial_{kj} H^i \partial_j H^i)|\nabla H|^{q-2} \text{d}x\\ \leq&C|\nabla u|_\infty |\nabla H|^q_q+\frac{1}{2}\int_{\Omega} \sum_{k=1}^3 u^k\Big(\partial_{k} |\nabla H|^2|\nabla H|^{q-2} \Big)\text{d}x\\ =&C|\nabla u|_\infty |\nabla H|^q_q+\frac{1}{q}\int_{\Omega} \sum_{k=1}^3 u^k \partial_{k} |\nabla H|^{q} \text{d}x \leq C|\nabla u|_\infty |\nabla H|^q_q,\\ &\int_{\Omega} C: \nabla H|\nabla H|^{q-2} \text{d}x \leq C|\nabla u|_\infty |\nabla H|^q_q+C|H|_\infty|\nabla H|^{q-1}_q |u|_{D^{2,q}}, \end{split} \end{equation} we quickly obtain the following estimate: \begin{equation}\label{zhu21qs} \begin{split} \frac{d}{dt}|\nabla H|^q_q \leq& C(|\nabla u|_\infty+1)|\nabla H|^q_q+C | u|_{D^{2,q}}|\nabla H|^{q-1}_q\\ \leq & C(|\nabla w|_\infty+|\nabla v|_\infty) |\nabla H|^q_q+C(|\nabla^2 w|_q+|\nabla^2 v|_q)|\nabla H|^{q-1}_q. \end{split} \end{equation} Then from (\ref{zhu55a}), (\ref{zhu200}) and (\ref{zhu21qs}), we immediately have \begin{equation}\label{zhu2kkh} \begin{split} &\frac{d}{dt}(|\nabla \rho|^q_q+|\nabla H|^q_q)\\ \leq & C(1+|\nabla w|_\infty+|\nabla v|_\infty) (|\nabla \rho|^q_q+|\nabla H|^q_q)+C|\nabla^2 w|_q(|\nabla \rho|^{q-1}_q+|\nabla H|^{q-1}_q)\\ \leq &C\big(1+\|\nabla w\|_{W^{1,q}}+\ln (e+|\nabla \rho|_q+|\nabla H|_q)\big)(|\nabla \rho|^q_q+|\nabla H|^q_q)\\ &+C|\nabla^2 w|_q(|\nabla \rho|^{q-1}_q+|\nabla H|^{q-1}_q).
\end{split} \end{equation} Via (\ref{zhu2kkh}) and the notation $$ f=e+|\nabla \rho|_q+|\nabla H|_q,\quad g=1+\|\nabla w\|_{W^{1,q}}, $$ we quickly have $$ f_t\leq Cgf +Cf\ln f+Cg, $$ which, together with Lemma \ref{sk4zz} and Gronwall's inequality, implies that $$ \ln f(t)\leq C,\quad \text{for} \quad 0\leq t\leq T. $$ This gives the desired estimate for $|\nabla \rho|_q+|\nabla H|_q$, and the upper bound of $|\nabla \rho|_r+|\nabla H|_r$ then follows from H\"older's inequality. Finally, the estimates for $\rho_t$ and $H_t$ can be obtained easily via the relations \begin{equation}\label{ghtkk} \begin{cases} H_t=H \cdot \nabla u-u \cdot \nabla H-H\text{div}u,\\[6pt] \rho_t=-u\cdot \nabla \rho-\rho\text{div} u,\ P_t=-u \cdot \nabla P-\gamma P \text{div}u, \end{cases} \end{equation} and the estimates obtained in Lemmas \ref{s2}-\ref{s7}. \end{proof} According to the estimates obtained in Lemmas \ref{s2}-\ref{s7}, we deduce that \begin{lemma}\label{sk4nn} \begin{equation*} \begin{split} |u(t)|_{D^2}+|\sqrt{\rho}u_t(t)|_{2}+\int_0^T\big(|u_t|^2_{D^1}+|u|^2_{D^{2,q}}\big) \text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T, \end{split} \end{equation*} where $C$ only depends on $C_0$, $\mu$, $\lambda$, $A$, $\gamma$, $\Omega$ and $T$ $(any\ T\in (0,\overline{T}])$. \end{lemma} \begin{proof} Via the momentum equations $(\ref{eq:1.2})_4$, (\ref{diy2}) and Lemma \ref{tvd1}, we have $$ |u|_{D^{2,l}}\leq |w|_{D^{2,l}}+|v|_{D^{2,l}}\leq C(|w|_{D^{2,l}}+|\nabla P|_l+|\nabla H|_l), $$ which, together with Lemma \ref{s7}, implies that $$ |u(t)|_{D^2}+\int_0^T |u|^2_{D^{2,q}}\text{d}t\leq C,\quad \text{for} \quad 0\leq t\leq T. $$ According to Lemmas \ref{abs3} and \ref{s7}, for $r\in (3,7/2)$, we quickly have \begin{equation*} \begin{split} |\sqrt{\rho}u_t|_2\leq C(|\sqrt{\rho}\dot{u}|_2+|\sqrt{\rho} u\cdot \nabla u|_2)\leq C(1+|\rho^{\frac{1}{r}}u|_r |\nabla u|_{\frac{2r}{r-2}}) \leq C.
\end{split} \end{equation*} Similarly, we have $$\int_0^T |u_t|^2_{D^1} \text{d}t \leq C\int_0^T (|\dot{u}|^2_{D^1}+|u\cdot \nabla u|^2_{D^1})\text{d}t \leq C.$$ \end{proof} In fact, in view of the estimates obtained in Lemmas \ref{s2}-\ref{sk4nn}, the functions $(H,\rho,u,P)|_{t=\overline{T}} =\lim_{t\rightarrow \overline{T}}(H,\rho,u,P)$ satisfy the conditions imposed on the initial data $(\ref{th78})$-$(\ref{th79})$. Therefore, we can take $(H,\rho,u,P)|_{t=\overline{T}}$ as the initial data and apply the local existence result, Theorem \ref{th5}, to extend our local strong solution beyond $t=\overline{T}$. This contradicts the assumption on $\overline{T}$. {\bf Acknowledgement:} The research of S. Zhu was supported in part by the National Natural Science Foundation of China under grant 11231006, the Natural Science Foundation of Shanghai under grant 14ZR1423100 and the China Scholarship Council. \bigskip
\section{Introduction} Non-equilibrium physical systems can be divided into two categories: active and externally driven matter. In active matter, energy is injected internally and continuously into the system~\cite{Marchetti13,review3}. Typical examples of active matter include suspensions of microswimmers~\cite{Palacci13,Cates15}, biological tissues \cite{Angelini10,Garcia2015} and sub-cellular materials~\cite{Frey10,Prost07}. On the other hand, in externally driven matter, energy is injected into the system globally or at the boundaries. Some examples of driven matter are sheared colloidal suspensions~\cite{Larson,Poon07,ballauff} and Rayleigh-B\'enard convection in fluids~\cite{Faber}. One important question in statistical physics is to determine whether there is universality between active and externally driven matter~\cite{Marchetti13,review3}. For instance in Refs.~\cite{Elsen16,Elsen17,Takeshi16}, it has been suggested that periodically sheared suspensions can be mapped into a particular class of active matter. As another example, dense bacterial suspensions display collective motion that was described using the tools of fluid turbulence~\cite{turbulence}. We consider {\it growing dense active matter} as a specific type of active system where particles (or mass) are created locally in the system. Some examples include fast growing and densely packed epithelial tissues~\cite{Angelini10} and bacterial colonies~\cite{colony}. In the case of tissues, the cells grow up to a certain size before dividing into two daughter cells. With an open boundary, the dynamics of growing tissues can display some glassy behaviours such as dynamic heterogeneities~\cite{Angelini10}. On the other hand, under confinement, the dynamics becomes arrested after the density or local pressure reaches a certain value, similar to a glass transition~\cite{Hallatschek16,reviewelijah}.
Exploring the role of cell division in dense tissues is relevant to understanding biological processes such as tissue repair and tumor growth \cite{silberzan,silberzan2}. In numerical simulations, the cells are often approximated as soft spheres without changing much of the physics~\cite{Ranft10,Thirumalai18,Hallatschek19}. Similarly, some bacteria such as \emph{N. gonorrhoeae} also have a spherical body which changes into a dumbbell shape during reproduction~\cite{Welker18}, similar to our numerical model below. In densely packed colonies, they display liquid/glass-like structure~\cite{Welker18}, although the dynamics has not been much studied experimentally. On the other hand, some other bacteria such as \emph{E. coli} and \emph{P. aeruginosa} are rod-shaped instead of spherical~\cite{Poon18,Kragh16}. They typically elongate before dividing in the lateral direction. This will introduce additional complexity such as liquid-crystalline local order inside the colony~\cite{tsim,Giomi18,Poon18}, which we do not consider further here. Collectively, bacteria also exist at large densities in the form of surface-attached biofilms \cite{biofilm1}, whose global structure and microscopic dynamics can now be studied experimentally with high resolution imaging techniques \cite{biofilm2}. Biofilms represent another important research area in biological studies. Numerical models devised to study biological systems are often system-specific~\cite{modelbacteria,lardon}. It is therefore difficult to identify if there is a unifying physical principle behind all these different examples of growing active matter. In this paper, we consider a minimal model of growing dense active matter in two dimensions. We consider circular soft repulsive discs which grow and divide in an attempt to capture the competition between two key physical ingredients, namely {\it particle division} that increases the density locally, and {\it steric repulsion}.
Furthermore, we regulate the growth and division rates of the particles such that the density inside the growing material remains constant while the total number of particles increases linearly. This linear growth is consistent with many experiments of tissue growth~\cite{Montel11,Freyer85,Freyer86}. In biofilms, both linear and non-linear scalings, such as exponential or algebraic growth, have been reported~\cite{Dockery01,Allen19}. Other numerical models of tissue growth such as the vertex model are also widely studied to take into account confluency~\cite{Manning16,Henkes17,Levine18}. They are often found to provide very similar dynamics to that of particle-based (or non-confluent) models~\cite{Henkes19}. The aim of our work is to isolate the physics of growing and dividing dense active matter without any additional system-specific interactions such as confluency~\cite{Manning16}, cell-cell adhesion~\cite{Ranft10} or non-sphericity of the particles~\cite{Giomi18}, and removing all other sources of motion such as thermal fluctuations and self-propulsion. In real biological systems, all these factors can of course coexist. It has been argued that cell division can fluidize active tissues because particle division necessarily produces a local rearrangement~\cite{joanny}. So, when a sufficient number of particle divisions has taken place, the entire system has been structurally reorganised, and this allows the material to flow~\cite{joanny,silke}. We argue below that this simple intuition does not account for the dynamic behaviour seen in growing active matter, where the global expansion itself plays a key physical role in the fluidisation of the material. In recent work, it was shown that growing tumors display dynamic behaviour reminiscent of supercooled liquids approaching a glass transition~\cite{Jimenez-Valencia15,Thirumalai18}.
We shall similarly propose an analogy with glassy materials~\cite{Berthier11}, but will provide evidence that growing active matter actually resembles glassy materials that are externally driven at large scale, such as sheared colloidal suspensions~\cite{Poon07,ballauff}. To reach this conclusion we show that the single-particle dynamics should be decomposed into two distinct components. First, particles move radially in the growing material as a response to the global expansion. These affine displacements are slaved to the radial growth of the colony and can overshadow the second, more complex, non-affine component of the displacement. Removing affine displacements is standard practice for sheared glasses~\cite{Falk98}, but was rarely performed in the active matter and biological physics literature until very recently~\cite{Prakash19}. After subtracting the affine particle displacements, we show that the non-affine dynamics of the particles is spatially heterogeneous and displays aging behaviour. In addition, we find that the time correlation functions display compressed exponential decay at short length scales, a signature found in many soft glassy materials~\cite{Cipelletti00,ramos05,bob,ruta}. The mean squared displacements show a crossover from superdiffusive behaviour at short time scales to subdiffusive behaviour at long time scales. Overall, we conclude that this characteristic aging dynamics is not directly caused by the individual local division events, but is instead controlled by the global radial growth rate of the colony, which itself results from particle division. The radial growth rate therefore plays the same role as the global shear rate in sheared dense suspensions~\cite{Poon07,ballauff}. This suggests that both growing active matter and externally driven soft glassy matter are described by the same underlying physics. Our paper is organised as follows. In Sec.~\ref{model} we introduce the model and describe its macroscopic behaviour. 
In Sec.~\ref{aging} we characterise its aging microscopic dynamics. In Sec.~\ref{dynhet} we demonstrate the heterogeneous character of the dynamics and show that it is driven by the global expansion of the material. In Sec.~\ref{1D} we consider a quasi-one-dimensional geometry to show that our results remain robust to a change of geometry. In Sec.~\ref{conclusion} we discuss our results and offer some perspectives for future work. \section{Model and macroscopic behaviour} \label{model} \begin{figure} \begin{centering} \includegraphics[width=1.0\columnwidth]{Fig1.pdf} \par\end{centering} \caption{(a) Model: each particle grows with a rate that depends on the total energy density. When a given diameter $\sigma_i(t)$ reaches the critical size $\sqrt{2}$, particle $i$ divides into two particles with equal diameters $\sigma=1$. (b,c) Snapshots of the growing colony at different times $t$. Highlighted particles have recently divided, within the time interval $[t-10,t]$. (d) The total number of particles $N(t)$ grows linearly with $t$ after a transient regime. (e) Consequently, the global strain rate $\dot\varepsilon(t)=\dot{R}/R$ decays as $1/t$ at large times. \label{fig:model}} \end{figure} \subsection{Model} We consider $N(t)$ circular particles in an infinite two-dimensional space. The number of particles, $N(t)$, is not conserved but grows with time $t$, due to the dynamics which we introduce below. Let us denote by $\mathbf{r}_i(t)$ and $\sigma_i(t)$ the position and diameter of particle $i$ at time $t$, respectively. The diameter of the particles varies from $\sigma$ to $\sqrt{2}\sigma$. 
The interaction between the particles is approximated by a two-body repulsive harmonic potential, \begin{equation} V(r_{ij},\sigma_{ij}) = \begin{cases} \frac{\epsilon}{2} \left(\frac{\sigma_i^2+\sigma_j^2}{2}\right) \left(1- \frac{r_{ij}}{\sigma_{ij}} \right)^2, & \,\, r_{ij}\le\sigma_{ij}, \\ 0, & \,\, r_{ij}>\sigma_{ij}, \label{eq:potential} \end{cases} \end{equation} where $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$ and $\sigma_{ij}=(\sigma_i+\sigma_j)/2$. This potential has been used before in the context of foams~\cite{Durian95}, soft tissues~\cite{Ranft10,Thirumalai18}, and biofilms~\cite{modelbacteria,Farrell13,Allen19}. Note that in some models of tissues~\cite{Ranft10,Thirumalai18}, there is also a short-range attractive potential to mimic cell-cell adhesion, which we do not include here, because it is not needed to maintain the tissue integrity. Another reason is that we aim to discover some generic physical principles in all growing active matter, which also includes bacterial colonies where adhesion is not necessarily present. In Eq.~(\ref{eq:potential}), $\epsilon$ is the interaction strength (in units of energy per unit area). The prefactor $(\sigma_i^2 + \sigma_j^2)/2$ in (\ref{eq:potential}) is chosen so that the total energy, $\frac{1}{2}\sum_{ij}V(r_{ij},\sigma_{ij})$, scales linearly with the total area of the particles $\sum_i\pi\sigma_i^2/4$. We also assume that thermal fluctuations are unimportant, thus, the dynamics of the particles are given by the overdamped equations: \begin{equation} \xi\frac{d\mathbf{r}_i}{dt} = -\sum_{j=1}^{N(t)}\frac{\partial V(r_{ij},\sigma_{ij})}{\partial\mathbf{r}_i}, \label{eqmotion} \end{equation} where $\xi$ is the friction coefficient describing the interaction with the substrate. Together with $\epsilon$, this defines the microscopic time scale for energy dissipation, namely $\tau_d=\xi \sigma^2/\epsilon$. 
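As an illustration, the pair force entering Eq.~(\ref{eqmotion}) follows analytically from the potential of Eq.~(\ref{eq:potential}); a minimal sketch in Python (the function name is ours, not taken from the paper's actual code):

```python
import numpy as np

def pair_force(ri, rj, si, sj, eps=1.0):
    """Repulsive harmonic force on particle i due to particle j,
    i.e. minus the gradient of the pair potential with respect
    to r_i. Returns the zero vector when the particles do not
    overlap (r >= sigma_ij)."""
    sij = 0.5 * (si + sj)                   # interaction range sigma_ij
    dr = np.asarray(ri) - np.asarray(rj)
    r = np.linalg.norm(dr)
    if r >= sij or r == 0.0:                # no overlap -> no force
        return np.zeros(2)
    prefac = eps * 0.5 * (si**2 + sj**2)    # area-scaling prefactor
    # magnitude prefac * (1 - r/sij) / sij, directed from j towards i
    return prefac * (1.0 - r / sij) / sij * dr / r
```

The force is purely repulsive and vanishes continuously at $r_{ij}=\sigma_{ij}$, consistent with the harmonic form of the potential.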
Physically, this represents the typical time for two initially overlapping particles to move away from each other. In all simulations presented below, we shall work in reduced units where $\sigma=\tau_d=1$. The dynamic equations of motion in Eq.~(\ref{eqmotion}) ensure that repulsion due to particle crowding is described in the simplest possible manner. In the absence of growth and division, the system would quickly come to rest in a state without any particle overlap, since the system has open boundaries. To continuously drive the dynamics and make the system active, we now introduce the growth and division dynamics for individual particles. Let us define the total energy density at time $t$: \begin{equation} u(t) = \frac{4}{N\pi\sigma^2}\sum_{i\neq j}V(r_{ij},\sigma_{ij}). \end{equation} We assume that each particle's diameter, $\sigma_i(t)$, increases with a growth rate which depends on $u(t)$: \begin{equation} \frac{d\sigma_i}{dt} = \begin{cases} \alpha_i \left(\frac{u_c-u(t)}{u_c} \right), &\,\, u(t) \le u_c, \\ 0, & \,\, u(t) > u_c, \label{eq:rate} \end{cases} \end{equation} where $\alpha_i$ is a constant which is different for each particle $i$. In practice, it is drawn from a uniform random distribution on $[0,\alpha_{\text{max}}]$. The value of $\alpha_{\text{max}}$ does not affect the main results of this paper, as long as it is chosen to be much smaller than the inverse dissipative timescale $1/\tau_d=1$, in order to remain close to a quasi-static limit with only very small particle overlaps. We choose the characteristic growth rate $\alpha_{\text{max}}=0.01$. For the division we introduce the following rule. When the particle diameter reaches the maximal size $\sqrt{2}$, the particle divides into two particles with equal diameters $\sigma=1$, as illustrated in Fig.~\ref{fig:model}(a). 
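The growth and division rules can be sketched in a few lines of Python. This is an illustrative implementation under our own conventions: the paper does not specify how the two daughters are positioned, so placing them half a radius apart along a random axis is an assumption, as are the function and variable names.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def grow_and_divide(pos, sigma, alpha, u, u_c, dt, rng):
    """One Euler step of the density-regulated growth rule, followed
    by the division rule: a particle reaching diameter sqrt(2) is
    replaced by two unit-diameter particles. The daughters are placed
    along a random axis (our assumption); each daughter draws a fresh
    growth rate from the uniform distribution [0, 0.01]."""
    if u <= u_c:                      # growth stops when u(t) > u_c
        sigma = sigma + dt * alpha * (u_c - u) / u_c
    parents = np.where(sigma >= SQRT2)[0]
    for i in parents:
        theta = rng.uniform(0.0, 2.0 * np.pi)
        offset = 0.5 * np.array([np.cos(theta), np.sin(theta)])
        pos = np.vstack([pos, pos[i] + offset])   # new daughter
        sigma = np.append(sigma, 1.0)
        alpha = np.append(alpha, rng.uniform(0.0, 0.01))
        pos[i] = pos[i] - offset                  # parent becomes daughter
        sigma[i] = 1.0
    return pos, sigma, alpha
```

One such step would be interleaved with the overdamped relaxation of Eq.~(\ref{eqmotion}) in a full simulation loop.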
Snapshots also show that this choice of parameters does not lead to any particular spatial correlation between the diameters of the particles, which are constantly mixed during the growth of the system. In Eq.~(\ref{eq:rate}), $u_c$ is the energy density threshold, such that after some initial transient regime, the global energy density saturates around $u(t)\simeq u_c$. Equivalently, the growth rate (\ref{eq:rate}) can also be defined \emph{via} the isotropic Kirkwood stress, with a corresponding pressure threshold~\cite{Thirumalai18}. Consequently, the number of particles $N(t)$ increases linearly with time (instead of exponentially, as it would for a constant growth rate), except in the initial transient regime at $t\lessapprox3000$, see Fig.~\ref{fig:model}(d). This linear growth is consistent with many experiments on tumour tissue growth~\cite{Montel11,Freyer85,Freyer86}. In a two-rate model~\cite{Radszuweit09}, the particles on the perimeter usually divide at a faster rate than particles in the bulk. In our simplified model, we assume a homogeneous division rate for all particles in the colony. \subsection{Macroscopic behaviour} We start all simulations with a single particle with unit diameter at the origin at time $t=0$. We then let the particle grow, divide, and so on. Figure~\ref{fig:model}(b,c) shows snapshots of this growing colony at two different times $t$. The system remains roughly circular, characterized by the radius $R(t)$. After some initial transient regime $t\lessapprox3000$, the number of particles grows linearly with time. We call this the linear growth regime, see Fig.~\ref{fig:model}(d). In the linear growth regime, the area fraction inside the colony, $\phi_c$, is roughly uniform in space and constant in time. The value of $\phi_c$ depends on the chosen critical energy density $u_c$. For example, for $u_c=0.001$, $\phi_c\simeq0.92$, which is higher than the jamming density $\phi_J\simeq0.84$ below which particles would no longer overlap. 
In the following, we fix $u_c=0.001$, but we have also verified that the results in this paper do not change if one uses instead $u_c=0.0001$ (corresponding to $\phi_c\simeq0.87$). An important physical quantity resulting from the linear growth of the system is the global radial growth rate defined as \begin{equation} \dot{\varepsilon}(t) = \frac{\dot{R}(t)}{R(t)} , \label{eq:varepsilon} \end{equation} where $R(t)$ is the radius of the colony at time $t$, see Fig.~\ref{fig:model}(b). In the linear growth regime, the area fraction inside the colony is roughly constant and $N(t)\propto t$; thus, it follows that $\dot{\varepsilon}(t)\propto1/t$, see Fig.~\ref{fig:model}(e). We shall be mostly concerned with the dynamics of the material in the linear growth regime at large enough times. The expansion of the colony is driven by local division events, and thus the system is a genuine active system: energy injection occurs at the particle scale and no {\it external} driving forces are present, in particular at larger scale. However, we shall argue below that the dynamics of the particles can be described solely by using the global variable $\dot{\varepsilon}(t)$, which can be seen as a global mechanical forcing acting on the colony, induced by the particle activity at small scale. Therefore the dynamics of growing active matter at high density resembles that of other externally driven dense systems~\cite{SGR,BBK} such as sheared dense suspensions, where the global driving force is applied directly at large scale by an operator~\cite{Poon07,ballauff}. Finally, since the growth rate $\dot{\varepsilon}$ decays with time as $1/t$, we must take into account that the effective global driving force depends on time, and thus we must expect the dynamics to display aging phenomena~\cite{agingstruik}. 
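The $1/t$ decay of $\dot{\varepsilon}$ quoted above follows directly from area conservation in the linear growth regime; as a sketch (assuming a roughly stationary diameter distribution, so that $\langle\sigma^2\rangle$ is constant):

```latex
% Area conservation at constant packing fraction phi_c:
\pi R^2(t)\,\phi_c \simeq N(t)\,\frac{\pi\langle\sigma^2\rangle}{4},
\qquad N(t)\propto t
\;\;\Rightarrow\;\;
R(t)\propto\sqrt{t}
\;\;\Rightarrow\;\;
\dot{\varepsilon}(t)=\frac{\dot{R}}{R}=\frac{1}{2t}.
```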
Because the driving force $\dot{\varepsilon}$ decreases with time, we expect the system to become slower at large times, again in full analogy with aging glassy materials. When measuring time correlation functions, it will therefore be useful to introduce notation appropriate to aging materials. We shall measure the dynamics between two times $t_w$ and $t_w + \Delta t$, where $t_w$ is the waiting time since the simulation started, and $\Delta t$ is the time interval over which the dynamics is analysed. In an aging material, time translational invariance is lost, and the dynamics does not depend uniquely on $\Delta t$ but also on $t_w$. It is often found that the scaled variable $\Delta t / t_w^{\mu}$ collapses time correlation functions at large times~\cite{agingstruik,agingSG}. The exponent $\mu \approx 1$ would correspond to simple aging, whereas $\mu < 1$ has been called sub-aging, a behaviour reported in many different types of disordered materials~\cite{agingstruik,agingSG} and model systems~\cite{BB02,BY04}. \section{Microscopic aging dynamics} \label{aging} \subsection{Affine and non-affine displacements} We now investigate the aging microscopic dynamics, focusing mainly on the linear growth regime where $N(t)\propto t$. In this regime, the density inside the colony is roughly constant and particles are created uniformly anywhere inside the colony. Thus, we expect an \emph{affine velocity field} in the radial direction for every particle $i$: \begin{equation} \left(\frac{d\mathbf{r}_i}{dt}\right)^\text{aff} = \frac{1}{2} \frac{\dot{N}(t)}{N(t)} \mathbf{r}_i(t) \simeq \dot{\varepsilon}(t) \mathbf{r}_i(t), \label{eq:affvel} \end{equation} where we also assume that the interface of the colony remains circular (see Fig.~\ref{fig:model}(b,c)). The affine radial velocity of a particle is proportional to the radial growth rate $\dot{\varepsilon}$ at that particular time $t$ and to the distance of the particle from the origin. 
(Note that the colony is centred around the origin of the coordinate system.) Consequently, if we measure the total mean squared displacement (MSD) of the particles from time $t_w$ to $t_w+\Delta t$, these displacements are influenced by the affine growth of the system. In the limit where affine displacements dominate, superdiffusion would be observed at long times even if particles do not actually relax the structure of the system. Indeed, previous numerical simulations~\cite{Thirumalai18} and experimental studies~\cite{Jimenez-Valencia15} reported superdiffusive particle motion. This superdiffusive behaviour in the total MSD is perhaps unsurprising because of the finite average velocity in the radial direction, giving rise to large affine displacements, see Fig.~\ref{fig:non-affine}(a). However, the total MSD is not necessarily ballistic either, because the radial growth rate actually decays as $1/t$ and thus the affine velocity also decays with time. \begin{figure} \begin{centering} \includegraphics[width=1.\columnwidth]{Fig2.pdf} \par\end{centering} \caption{ (a) Left: Total displacements of the particles from time $t_w$ to $t_w+\Delta t$ consist mainly of large affine displacements in the radial direction. Right: After subtracting the affine component, the non-affine displacements of the particles, $\Delta\mathbf{r}_i$, reveal a collective, aging, and heterogeneous dynamics. Here, $t_w=3276.8$ and $\Delta t=3276.8$. (b) Comparison of the total mean squared displacement (MSD) and the non-affine MSD for $t_w=209715.6$. (c) The non-affine MSD is fairly homogeneous but the total MSD increases linearly with the radial distance from the centre. Here, $t_w=209715.6$ and $\Delta t=25.6$. \label{fig:non-affine}} \end{figure} In order to uncover the non-trivial dynamics of this growing dense active matter, we first need to subtract these affine contributions from the total displacements of the particles. 
The \emph{affine displacement} of particle $i$ from time $t_w$ to $t_w+\Delta t$ is the time integral of the affine velocity: \begin{equation} \Delta\mathbf{r}_i^\text{aff} = \int_{t_w}^{t_w+\Delta t} \dot{\varepsilon}(t) \mathbf{r}_i(t)\,dt. \label{eq:aff} \end{equation} We define the \emph{non-affine displacement} of particle $i$ as the difference between the total displacement and the affine displacement: \begin{equation} \Delta\mathbf{r}_i (t_w,\Delta t)= \Delta\mathbf{r}_i^\text{tot} - \Delta\mathbf{r}_i^\text{aff}, \end{equation} where the total displacement of the particle is simply: \begin{equation} \Delta\mathbf{r}_i^\text{tot} (t_w,\Delta t)= \mathbf{r}_i(t_w+\Delta t)-\mathbf{r}_i(t_w). \label{eq:rtot} \end{equation} This decomposition is illustrated in Fig.~\ref{fig:non-affine}(a). Note that during the time interval $[t_w,t_w+\Delta t]$, new particles are created due to division events, but we do not track these new particles. They will of course be tracked if we start a new measurement at a later time $t_w$. In previous literature~\cite{Thirumalai18,Jimenez-Valencia15}, the MSD is computed based on the total particle displacement in Eq.~(\ref{eq:rtot}), $\left<|\Delta\mathbf{r}_i^\text{tot}|^2\right>$, without subtracting the affine components. Instead, we subtract the affine components to define the non-affine MSD as \begin{equation} \Delta^2 r (t_w, \Delta t) = \langle | \Delta \mathbf{r}_i (t_w, \Delta t) |^2 \rangle. \end{equation} In Fig.~\ref{fig:non-affine}(b) we illustrate that the total and non-affine MSD behave similarly at very short time intervals $\Delta t$, but strongly deviate from one another at larger times. Whereas strong superdiffusion is observed in the total MSD, the non-affine MSD displays a non-trivial crossover from superdiffusive behaviour at short times to subdiffusion at longer times. 
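In practice, the affine integral above must be evaluated numerically from the stored trajectory. A minimal sketch in Python (the trapezoidal discretisation and the function name are our choices):

```python
import numpy as np

def nonaffine_displacement(traj, eps_dot, times, i, k0, k1):
    """Non-affine displacement of particle i between times[k0] and
    times[k1]: the total displacement minus the time integral of the
    affine velocity eps_dot(t) * r_i(t), evaluated with the
    trapezoidal rule. traj[k] is the (N, 2) array of positions at
    times[k], and eps_dot[k] the radial growth rate at that time."""
    total = traj[k1][i] - traj[k0][i]
    affine = np.zeros(2)
    for k in range(k0, k1):
        h = times[k + 1] - times[k]
        affine += 0.5 * h * (eps_dot[k] * traj[k][i]
                             + eps_dot[k + 1] * traj[k + 1][i])
    return total - affine
```

As a sanity check, a particle advected purely affinely, $\mathbf{r}_i(t)=\mathbf{r}_0\,e^{\dot\varepsilon_0 t}$ at constant $\dot\varepsilon_0$, has vanishing non-affine displacement up to discretisation error.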
\begin{figure*} \begin{centering} \includegraphics[width=0.9\textwidth]{Fig3.pdf} \par\end{centering} \caption{ (a) Non-affine mean squared displacements as a function of delay time $\Delta t$ and various waiting times $t_w$ display a crossover from superdiffusive to subdiffusive behaviour. (b) The scaled variable $\Delta t /t_w^{\mu}$ with $\mu \approx 0.7$ rescales the MSD onto a mastercurve. (c) Intermediate scattering function as a function of delay time $\Delta t$ and various waiting times $t_w$ for wavevectors $q=1.28$ (solid lines) and $q=81.92$ (dashed lines). (d) Rescaled intermediate scattering functions with the same variable $\Delta t /t_w^{\mu}$. (e) Collapsed $q$-dependent relaxation timescales $\tau(q) / t_w^{\mu}$, displaying a crossover from ballistic $q^{-1}$ at high $q$, to approximately diffusive $q^{-2}$ at low $q$. (f) Compressed exponential decay, $\beta(q)>1$, is observed at all $t_w$ over a broad range of wavevectors. \label{fig:aging}} \end{figure*} In Fig.~\ref{fig:non-affine}(c) we spatially resolve the MSD as a function of the radial position within the circular system (the system is by construction rotationally invariant). As expected from Eq.~(\ref{eq:aff}), we find that the affine component of the displacement increases roughly linearly with the radial distance from the center, such that particles at the boundaries are advected by the radial growth much faster than the ones in the center. This effect should largely explain the strong radial dependence of the total MSD recently reported in Ref.~\cite{thirumalai2019}. By contrast, we find that the non-affine displacement is fairly homogeneous throughout the system, except for a thin particle layer very near the open boundary. In what follows, we will only consider the non-affine displacements of the particles to study the microscopic aging dynamics of the growing active system, and this dynamics will be averaged over the entire system. 
\subsection{Sub-aging dynamics} We start by discussing the aging behaviour of the MSD defined from the non-affine displacements. The numerical data are shown in Fig.~\ref{fig:aging}(a) as a function of delay time $\Delta t$ for different waiting times $t_w$. We first observe that for any given $t_w$ the MSD shows a crossover from superdiffusive behaviour at small delay times and small displacements, to subdiffusive behaviour at long times and large displacements. The superdiffusive regime is characterised by an exponent $\Delta^2 r \sim \Delta t^{\alpha}$ with $\alpha \approx 1.4$, which is intermediate between diffusive ($\alpha=1$) and ballistic ($\alpha=2$). The reason for this intermediate behaviour will become clear in Sec.~\ref{dynhet} below. On the other hand, for large $\Delta t$, the non-affine MSD becomes subdiffusive with an exponent ranging from $\alpha \approx 0.6$ to $\alpha \approx 1$, see Fig.~\ref{fig:aging}(a). Note that in order to determine the asymptotic value of the exponent in the long-time limit $\Delta t / t_w \rightarrow\infty$, much longer simulations would be required. The second observation from the MSD data in Fig.~\ref{fig:aging}(a) is that the curves for different times $t_w$ do not superimpose. Instead, the dynamics becomes slower as the age $t_w$ of the system increases. Therefore, the microscopic dynamics of the growing material is not time-translation invariant and slows down with $t_w$: the system is aging. This is expected, as the growth rate $\dot{\varepsilon}$ (which acts as the driving force) also decreases with time. A third noticeable aspect of the MSD data is the absence of a plateau regime at intermediate timescales. Such a plateau would represent the well-known caging dynamics often observed in glassy materials approaching a glass transition~\cite{Berthier11}. 
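The exponents quoted above can be read off from the data as a local logarithmic slope of the MSD; a minimal sketch (our function name, assuming a smooth, noiseless MSD curve):

```python
import numpy as np

def local_exponent(dt, msd):
    """Effective exponent alpha(dt) = d log(MSD) / d log(dt), used to
    distinguish the superdiffusive (alpha > 1) from the subdiffusive
    (alpha < 1) regimes of the non-affine MSD."""
    return np.gradient(np.log(msd), np.log(dt))
```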
No such plateau can be expected in our simulations, despite the fact that the system is very crowded, because we did not include any thermal fluctuations in the equations of motion, Eq.~(\ref{eqmotion}). In fully athermal systems, typical MSDs indeed do not display any signature of caging dynamics~\cite{atsushi}, and when sheared they also display a similar crossover from superdiffusive to diffusive behaviour~\cite{atsushi,klaus}. We expect that a small amount of temperature in our system would introduce vibrational short-time dynamics. This would reveal caging dynamics, but it could also potentially obscure the short-time superdiffusive behaviour reported in Fig.~\ref{fig:aging}(a). It turns out that we can collapse the MSD for different $t_w$ onto a single universal function by rescaling the time delay $\Delta t$ on the horizontal axis by an algebraic function of $t_w$, introducing the rescaled variable $\Delta t / t_w^{\mu}$, where the exponent $\mu \approx 0.7$ is a sub-aging exponent characterizing the aging dynamics, see Fig.~\ref{fig:aging}(b). Polymers and spin glasses are often characterised by a very similar sub-aging exponent~\cite{agingstruik,agingSG,BB02,BY04}. \subsection{Compressed exponential decay of time correlations} Another way to probe the relaxation dynamics of the system is to measure the self-intermediate scattering function, \begin{equation} F_s(q, t_w, \Delta t) = \left< \frac{1}{N(t_w)}\sum_{i=1}^{N(t_w)} e^{i\mathbf{q}\cdot\Delta\mathbf{r}_i}\right>, \label{eq:Fs} \end{equation} where $\Delta\mathbf{r}_i (t_w,\Delta t)$ is the non-affine displacement of particle $i$ within the time interval $[t_w,t_w+\Delta t]$, and $N(t_w)$ is the total number of particles at the time $t_w$ where we start the measurement. Since the system is isotropic, $F_s$ only depends on the modulus of the wavevector, $q=|\mathbf{q}|$. 
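Numerically, the isotropy of the system can be exploited by averaging Eq.~(\ref{eq:Fs}) over wavevector orientations of fixed modulus $q$; a minimal sketch (the function name and the number of orientations are our choices):

```python
import numpy as np

def self_isf(disp, q, n_angles=64):
    """Self-intermediate scattering function for a set of non-affine
    displacements disp, of shape (N, 2), averaged over n_angles
    wavevector orientations of fixed modulus q. By isotropy, the
    imaginary part averages to zero, so only the real part is kept."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    qvecs = q * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    phases = disp @ qvecs.T              # (N, n_angles) array of q . dr
    return float(np.mean(np.cos(phases)))
```

By construction $F_s=1$ for vanishing displacements, and it decays towards zero once typical displacements exceed $\sim 2\pi/q$.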
Physically, $F_s$ is related to the fraction of particles which have moved a distance larger than $2\pi/q$ during time interval $[t_w,t_w+\Delta t]$. It contains similar information to the MSD, but the dynamics is now resolved in space, because large wavevectors probe the dynamics at short distances, whereas small wavevectors probe large displacements. In addition, the intermediate scattering function is accessible to light scattering experiments, whereas the MSD is more suitable to experiments where particle positions can be resolved, for instance using real space imaging techniques. Figure~\ref{fig:aging}(c) shows $F_s(q,t_w,\Delta t)$ as a function of delay time $\Delta t$ for different waiting times $t_w$ and two very different wavevectors $q=1.28$ (solid lines) and $q=81.92$ (dashed lines). For small $\Delta t$, the displacements are small and thus $F_s \simeq 1$. As $\Delta t$ increases, the configuration becomes less correlated with the initial condition at $\Delta t=0$ and thus $F_s$ decays to zero. Similarly to the MSD, we observe aging behaviour in $F_s(q,\Delta t)$ because $F_s$ decays slower when the waiting time $t_w$ increases. Likewise, $F_s(q,t_w, \Delta t)$ for different waiting times can be collapsed into a single mastercurve by rescaling the delay time using the variable $\Delta t / t_w^{\mu}$, as shown in Fig.~\ref{fig:aging}(d). Remarkably, the same sub-aging exponent as for the MSD, $\mu \approx 0.7$, can be used to collapse the self-intermediate scattering function at all wavevectors. \begin{figure*} \begin{centering} \includegraphics[width=1.\textwidth]{Fig4.pdf} \par\end{centering} \caption{ (a,c) Van Hove particle distribution, Eq.~(\ref{eq:gs}) for $t_w=104858.0$. In (a), there is a clear separation of length scales into $\ell_a$ and $\ell_c$. (b,d) Non-affine displacement fields $\Delta\mathbf{r}_i$ corresponding to (a,c), the field is respectively magnified by a factor of $120$ and $4$ for better visibility. 
Division events are highlighted with dashed circles in (b). (e) Time evolution of the lengthscales $\ell_a$ and $\ell_c$. \label{fig:heterogeneity}} \end{figure*} Empirically, we find that $F_s$ can be well fitted using the following functional form: \begin{equation} F_s(q,t_w, \Delta t) \simeq \exp \left[ - \left( \frac{\Delta t}{\tau}\right)^{\beta} \right], \end{equation} where $\tau(q,t_w)$ is a relaxation timescale and $\beta(q,t_w)$ is called the stretching (when $\beta < 1$) or compression (when $\beta>1$) exponent. Given that the global dynamics slows down with the variable $t_w^\mu$, it is natural to consider the rescaled relaxation times, $\tau(q,t_w) / t_w^{\mu}$, as shown in Fig.~\ref{fig:aging}(e). As expected, this rescaling collapses the data onto a unique $q$-dependent function which describes the dynamics of the system as a function of the probed lengthscale. At large $q$ (short distances), we find that $\tau(q) \approx q^{-1}$, which corresponds to ballistic dynamics (where displacement is proportional to time). This behaviour is only qualitatively consistent with the short-time behaviour displayed by the MSD, which was not fully ballistic but only superdiffusive, with an exponent $\alpha \approx 1.4$ in Fig.~\ref{fig:aging}(a). The reason for these seemingly distinct behaviours will be elucidated in Sec.~\ref{dynhet}. On the other hand, the small-$q$ regime of $\tau(q,t_w)$ appears to be fully diffusive. Finally, we show in Fig.~\ref{fig:aging}(f) the $q$-dependence of the exponent $\beta(q,t_w)$ characterizing the functional form of the time decay of the correlation function. We find that $\beta(q,t_w)$ is a non-monotonic function of the wavevector, being close to unity at both large and small distances, which corresponds to nearly exponential decay. However, at intermediate lengthscales, we find that $\beta>1$, which corresponds to compressed exponential decay. 
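For clean data, the two fit parameters $\tau$ and $\beta$ of the stretched/compressed exponential form above can be extracted by linearising it, $\ln(-\ln F_s)=\beta\ln\Delta t-\beta\ln\tau$; a minimal pure-NumPy sketch (our function name; a nonlinear least-squares fit would be preferable for noisy data):

```python
import numpy as np

def fit_kww(dt, fs):
    """Estimate (tau, beta) of F_s = exp[-(dt/tau)^beta] by a linear
    fit of ln(-ln F_s) versus ln(dt); only points with 0 < F_s < 1
    (strictly) are usable in this representation."""
    mask = (fs > 1e-6) & (fs < 1.0 - 1e-6)
    x = np.log(dt[mask])
    y = np.log(-np.log(fs[mask]))
    beta, intercept = np.polyfit(x, y, 1)   # slope is beta
    tau = np.exp(-intercept / beta)         # intercept is -beta ln(tau)
    return tau, beta
```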
Such compressed exponential decay has often been reported in soft glassy systems~\cite{Cipelletti00,ramos05,bob,ruta,pinaki}, in conjunction with a ballistic regime $\tau(q) \sim q^{-1}$. Therefore, growing active matter represents one more example of a non-equilibrium soft-matter system displaying anomalous aging dynamics. \section{Spatially heterogeneous dynamics} \label{dynhet} \subsection{Coexistence of two typical displacement scales} We noticed above that the MSD and the self-intermediate scattering function give somewhat different indications regarding the ballistic and superdiffusive character of particle motion at short times. Given that both quantities are based on the statistics of single-particle displacements, they can only provide distinct quantitative information if the underlying distribution of particle displacements is strongly non-Gaussian, thereby revealing the existence of strong dynamic heterogeneity~\cite{bookDH,physics}. To investigate this issue further, we measure the distribution of single-particle displacements (also called the van Hove function), \begin{equation} G_s(\Delta x,t_w,\Delta t) = \langle \delta(\Delta x - |\Delta x_i(t_w,\Delta t)|) \rangle, \label{eq:gs} \end{equation} where $\Delta x_i$ is the projection of $\Delta \mathbf{r}_i$ onto the $x$-axis (isotropy implies that we can average over both the $x$ and $y$ directions). We show some representative distributions in Figs.~\ref{fig:heterogeneity}(a,c), whereas Figs.~\ref{fig:heterogeneity}(b,d) show the corresponding non-affine displacement fields $\Delta\mathbf{r}_i$ in real space. The waiting time is fixed at $t_w=104858.0$, while we look at two different delay times, $\Delta t=25.6$ and $\Delta t=1638.4$. For longer time delays, the distributions become nearly Gaussian, and these distributions and the corresponding snapshots are thus not very interesting. 
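In practice, the distribution of Eq.~(\ref{eq:gs}) is estimated as a normalised histogram of the projected non-affine displacements; a minimal sketch (our function name and binning choices):

```python
import numpy as np

def van_hove(disp, bins=200, x_max=1.0):
    """Histogram estimate of the van Hove function: displacements of
    shape (N, 2) are projected onto both axes (isotropy), and the
    distribution of |dx| is returned as a normalised density."""
    dx = np.concatenate([disp[:, 0], disp[:, 1]])
    hist, edges = np.histogram(np.abs(dx), bins=bins,
                               range=(0.0, x_max), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist
```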
As can be seen from Fig.~\ref{fig:heterogeneity}(a), for the short delay time $\Delta t=25.6$, the probability distribution $G_s$ shows a clear separation of length scales between a narrow core of slowly moving particles and large tails of fast-moving particles. The coexistence of (many) slow and (few) fast particles inside a single sample, leading to non-Gaussian van Hove functions, is a hallmark of dynamic heterogeneity in glassy materials~\cite{bookDH}. To quantify the coexistence of slow and fast particles, we find it empirically convenient to fit the two parts of the distribution $G_s(\Delta x, t_w, \Delta t)$ as the sum of two Gaussian distributions characterised by two distinct spatial extensions, $\ell_c (t_w,\Delta t)$ and $\ell_a (t_w,\Delta t)$, as sketched in Fig.~\ref{fig:heterogeneity}(a). Note that in Fig.~\ref{fig:heterogeneity}(a), $G_s$ has both a Gaussian tail and a Gaussian peak. This contrasts slightly with other active/driven disordered materials, which usually have an exponential tail~\cite{Elsen16,Kob}. The larger length scale, $\ell_a$, is associated with irreversible plastic events in the system, which can be directly caused by division events (see the dashed circles in the corresponding displacement field in Fig.~\ref{fig:heterogeneity}(b)), but not always: there are also many plastic events leading to large particle displacements in locations where no division has occurred between $t_w$ and $t_w+\Delta t$. The smaller length scale, $\ell_c$, is instead associated with the collective motion of the particles that respond to the localised plastic events. Similar observations of localised plastic events leading to large-scale collective motion are routinely made in the field of sheared glasses~\cite{bookDH}, where the localised events are called shear transformation zones~\cite{Falk98}, and the collective motion results from the stress redistribution carried by the elastic medium~\cite{picard}. 
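The two-Gaussian fit used to extract $\ell_c$ and $\ell_a$ can be written down explicitly; a minimal sketch of the model function (our names; the relative weight $w$ of the core is a fit parameter alongside the two widths):

```python
import numpy as np

def two_gaussian(dx, w, ell_c, ell_a):
    """Two-Gaussian model of the van Hove function: a narrow core of
    width ell_c, carrying weight w (the many slow particles), plus
    broad Gaussian tails of width ell_a, carrying weight 1 - w (the
    few fast particles). Normalised to unity by construction."""
    g = lambda x, s: np.exp(-x**2 / (2.0 * s**2)) / np.sqrt(2.0 * np.pi * s**2)
    return w * g(dx, ell_c) + (1.0 - w) * g(dx, ell_a)
```

Fitting this form to the measured histogram (e.g. by least squares) yields the two spatial extensions discussed in the text.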
At the longer delay time $\Delta t=1638.4$ (see Fig.~\ref{fig:heterogeneity}(c)), $\ell_c$ and $\ell_a$ approach each other. This is confirmed in Fig.~\ref{fig:heterogeneity}(e), which describes the $\Delta t$ evolution of the two length scales for a fixed $t_w$. Interestingly, we observe that the small length scale $\ell_c$ is ballistic at short delay times, namely $\ell_c \propto \Delta t$. By contrast, the larger length scale $\ell_a$ is roughly constant in this regime, and depends neither on $\Delta t$ nor on $t_w$. Therefore, $\ell_a$ quantifies the typical particle displacement in localised plastic events, and it is always of the order of a fraction of the particle diameter. In the long-$\Delta t$ limit, $\ell_c$ and $\ell_a$ seem to converge, see Fig.~\ref{fig:heterogeneity}(e), and the distinction between large and small displacements becomes irrelevant as the van Hove function becomes Gaussian in the limit $\Delta t\rightarrow\infty$. Dynamic heterogeneity is only a transient phenomenon. The time evolution of the van Hove function illuminates the behaviour found above for the self-intermediate scattering function and the MSD for short particle displacements. Indeed, the coexistence of the two length scales $\ell_c$ and $\ell_a$ suggests that the non-trivial superdiffusion exponent at small $\Delta t$ in the MSD ($\Delta r^2 \sim \Delta t^{1.4}$ in Fig.~\ref{fig:aging}(a)) is explained by the mixing of the two length scales: $\ell_c\propto\Delta t$ and $\ell_a\propto\Delta t^0$ at small $\Delta t$. Instead, the $q$-dependent relaxation time $\tau(q)$ in Fig.~\ref{fig:aging}(e) is mainly dominated by the behaviour of the length scale $\ell_c$, thus explaining its ballistic scaling, $\tau(q)\propto q^{-1}\iff\ell_c\propto\Delta t$. 
This follows from the fact that the measurement of $F_s(q,\Delta t)$ is dominated by the vast majority of the particles belonging to the core of the distribution (described by $\ell_c$), whereas the MSD is significantly influenced by the minority of fast moving particles, which make a large contribution to the MSD. \subsection{The key role of the global expansion} \begin{figure} \begin{centering} \includegraphics[width=1.\columnwidth]{Fig5.pdf} \par\end{centering} \caption{(a) The same van Hove function as in Fig.~\ref{fig:heterogeneity}(a) (reproduced in red), but with division events completely switched off from time $t_w=104858.0$ (blue). (b) Corresponding non-affine displacement field (again magnified by a factor of 120), which is very similar to that of Fig.~\ref{fig:heterogeneity}(b), except for the division events. \label{fig:no-division}} \end{figure} We found that as the size of the growing system increases, the microscopic dynamics of the system slows down. One explanation could be that, since the number of particles in the system increases linearly with time, the division rate per unit area decreases as $t^{-1}$. Taking the view that division events are directly responsible for the fluidisation of the system would thus suggest that the dynamics should indeed slow down with $t$. However, we find that the microscopic particle dynamics is actually not fully, or rather {\it not directly and not locally}, controlled by the division events. Instead, our interpretation is that the superposition of all division events in the system leads to a macroscopic expansion of the tissue, with a rate $\dot{\varepsilon}(t) \sim 1/t$ defined in Eq.~(\ref{eq:varepsilon}). 
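As a simple consistency check of the scaling $\dot{\varepsilon}(t)\sim 1/t$: if the particle number grows linearly in time at roughly constant density, the colony radius grows as $R(t)\propto\sqrt{t}$, so that $\dot{\varepsilon}=\dot{R}/R=1/(2t)$. A minimal numerical sketch (the prefactor $c$ is arbitrary and cancels out):

```python
# Check that R(t) = c * sqrt(t) (linear particle-number growth at constant
# density) implies an expansion rate eps_dot(t) = R'(t)/R(t) = 1/(2t).
def radius(t, c=3.0):
    return c * t ** 0.5

def expansion_rate(t, dt=1e-6):
    # centred finite difference for R'(t), divided by R(t); c cancels out
    rdot = (radius(t + dt) - radius(t - dt)) / (2.0 * dt)
    return rdot / radius(t)

for t in (10.0, 100.0, 1000.0):
    print(t, expansion_rate(t), 1.0 / (2.0 * t))
```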
To establish the key role played by $\dot{\varepsilon}$, we perform an independent simulation of our model where the dynamics proceeds normally up to a given time $t_w=104858.0$, after which we completely switch off all division events, letting the particle diameters grow without bound (such a simulation would of course become fully unphysical at large $\Delta t$, where particles would become very large). The first effect of switching off division is to decrease the amplitude of the contribution of the fast moving particles in the van Hove function, as shown in Fig.~\ref{fig:no-division}(a), because division events no longer contribute to the tails, especially at very short times $\Delta t$. However, if we compare the displacement field in Fig.~\ref{fig:no-division}(b) with that in Fig.~\ref{fig:heterogeneity}(b), we see that the collective dynamics remains essentially the same, except for the few localised events indicated by red circles in Fig.~\ref{fig:heterogeneity}(b), which no longer appear in Fig.~\ref{fig:no-division}(b). Several localised plastic events can nevertheless still be observed. Therefore, our key point is that the non-affine displacement field remains spatially heterogeneous and highly complex even when division events are switched off. In addition, we find that the MSD remains essentially unaffected (i.e. superdiffusive) when division events are switched off at short times, and is only reduced by a small factor. These observations suggest that the collective and non-affine dynamics of the particles stem mostly from the fact that the colony is strained globally in the radial direction as a result of the macroscopic growth of the material. The rate of radial growth $\dot{\varepsilon}$ thus plays the role of a global macroscopic driving force on the dense amorphous colony, which then responds just as sheared glasses do. 
Namely, the forcing induces local plastic events (or shear transformation zones), which then lead to a stress redistribution over larger length scales. \section{Quasi-1D geometry} \label{1D} We showed above that all dynamical properties of a radially growing dense active material can be collapsed onto a single universal function, characterized by a single global parameter $\dot{\varepsilon}(t)$ and a universal exponent $\mu$, in the same way as sheared soft glasses are characterized by their strain rate. To test the generality of this result, we now consider a quasi one-dimensional (1D) geometry, where we assume periodic boundary conditions at $y=0$ and $y=L_y$ and an infinite domain in the $x$-direction (see Fig.~\ref{fig:1D}(a)). This geometry is similar to wound-healing experiments in epithelial tissues~\cite{Martin11-wound-healing}. We fix $L_y=8$ (in simulation units). We initialize a strip of $8$ particles of unit diameter at $x=0$ at time $t=0$, and let the system evolve according to the equations of motion, Eqs.~(\ref{eqmotion}-\ref{eq:rate}), as before. Figure~\ref{fig:1D}(a) shows snapshots of the dense active matter growing laterally in the $x$-direction at two different times. We also define $L(t)$ to be the lateral size of the growing colony at time $t$ (analogous to $R(t)$ in the radial geometry considered above), and the lateral growth rate $\dot{\varepsilon}(t)=\dot{L}/L$. For large enough times $t$, the total number of particles increases linearly with time (see Fig.~\ref{fig:1D}(c)). This corresponds to the linear growth regime, equivalent to Fig.~\ref{fig:model}(d). In this regime, the growth rate again decays as $\dot{\varepsilon}\sim1/t$ (see Fig.~\ref{fig:1D}(d)). To perform the same analysis as before, we must first subtract the affine component of the particles' displacements, which is the $x$-component of the integral in Eq.~(\ref{eq:aff}). 
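For a spatially uniform expansion at rate $\dot{\varepsilon}=\dot{L}/L$, the affine map over a time interval advects each position as $x \mapsto x\,L(t_2)/L(t_1)$, so the affine subtraction reduces to comparing a particle's actual position with its affinely advected one. A minimal sketch of this subtraction (the positions below are illustrative; in the model the affine component is the integral of Eq.~(\ref{eq:aff})):

```python
def non_affine_dx(x1, x2, L1, L2):
    """Non-affine x-displacement of a particle moving from x1 to x2 while
    the colony expands from lateral size L1 to L2 (uniform expansion)."""
    x_affine = x1 * (L2 / L1)   # position predicted by the affine expansion alone
    return x2 - x_affine        # residual, non-affine part

# A particle that merely follows the global expansion has no
# non-affine displacement:
print(non_affine_dx(x1=5.0, x2=6.0, L1=100.0, L2=120.0))   # 0.0
# A particle that additionally hops by 0.5 relative to the expanding frame:
print(non_affine_dx(x1=5.0, x2=6.5, L1=100.0, L2=120.0))   # 0.5
```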
Figure~\ref{fig:1D}(b) shows the typical \emph{non-affine} displacement field of the particles between times $t_w$ and $t_w+\Delta t$. We can then compute the same quantities as before, such as the intermediate scattering function defined in Eq.~(\ref{eq:Fs}), except that we now only consider the $x$-component of the particles' displacements. We find that the intermediate scattering function can again be collapsed onto a near-universal function by rescaling the delay time $\Delta t$ into $\Delta t\cdot\dot{\varepsilon}^\mu$ (see Fig.~\ref{fig:1D}(e,f)). Thus the collapse hypothesis still works in a different geometry; however, we find that the aging exponent depends weakly on the dimensionality: $\mu\simeq0.6$ in quasi-1D, compared to $\mu\simeq0.7$ in the radial geometry. We also find a compressed exponential form of $F_s$, similar to before. \begin{figure} \begin{centering} \includegraphics[width=1.\columnwidth]{Fig6.pdf} \par\end{centering} \caption{ (a) Snapshots of growing dense active matter in the quasi-1D geometry. $L(t)$ is the lateral size of the colony at time $t$. (b) The non-affine displacement field between times $t_w=1638.4$ and $t_w+\Delta t=2048.0$. (c) The total number of particles increases linearly with time $t$ for large $t$. (d) The strain/growth rate, defined as $\dot{\varepsilon}(t)=\dot{L}/L$, decreases as $1/t$. (e,f) The intermediate scattering function $F_s(q_x,t_w,\Delta t)$, which can be collapsed onto a single universal function by rescaling the $x$-axis into $\Delta t/t_w^\mu$, with $\mu\simeq0.6$. $q_x$ is the $x$-component of the wavevector. \label{fig:1D}} \end{figure} \section{Discussion and perspectives} \label{conclusion} \subsection{Physical discussion} Thinking about crowded biological systems using analogies with dense glassy materials is a fruitful area of research~\cite{reviewelijah}. 
For instance, the competition between self-propulsion and particle crowding leads to the emergence of a non-equilibrium class of glass transitions~\cite{BB13}, with their own specific features~\cite{berthier14,elijah}. The role of cell division has also been tackled in this context, with different views. Introducing both cell division and cell death can lead, for instance, to a non-equilibrium steady state where the number of cells is constant on average. This situation has been studied using several approaches~\cite{joanny,silke}, and it was concluded that it leads to visco-elastic behaviour, the short-time solid behaviour giving way to long-time flow controlled by the rate of cell division. Here, we studied a different case where particle division is not compensated by apoptosis, so that the number of particles increases. This again gives rise to a competition between steric repulsion and particle division, but now inside a material that is macroscopically expanding, which we termed {\it growing active matter}. Although activity (and thus energy) is injected only at the particle scale by the division process, we found that the resulting material expansion at the macroscopic scale gives rise to a finite expansion rate $\dot{\varepsilon}$ which is quite homogeneous inside the expanding system. As a result, the system becomes equivalent to a dense disordered assembly of particles submitted to a {\it global} mechanical forcing. Numerous studies of driven amorphous materials have shown that this situation leads to plasticity and flow, taking the form of localised shear transformation zones which then redistribute the stress over large length scales, due to the elasticity. 
As a result, we found that the microscopic dynamics of the system is never arrested but instead shows relaxation over a time scale that is directly controlled by the radial growth rate, since the variable $\Delta t / t_w^\mu \approx \Delta t\cdot\dot{\varepsilon}^\mu$ essentially rescales all time correlation functions measured numerically. Interestingly, such non-linear rescaling with the strain rate $\dot{\varepsilon}$ has been reported in many types of sheared dense suspensions~\cite{Poon07,ballauff}. In the rheological context, dense suspensions are often sheared at constant shear rate $\dot{\gamma}$ in a simple shear geometry, but the specific geometry of the external forcing is expected to be unimportant. Regarding the value of the exponent $\mu$, the simplest estimate would be $\mu = 1$, which corresponds to the rheological behaviour of a simple yield stress material~\cite{review-yield}. Our finding $\mu \approx 0.7$ differs from this simple estimate. We see two possible explanations. First, even if the effective rate of deformation $\dot{\varepsilon}$ is the main control parameter, rheological exponents with $\mu < 1$ are often observed in sheared glasses~\cite{review-yield}. Second, even though particle division does not uniquely drive the relaxation, division events do occur and can speed up the relaxation by some amount, again reducing $\mu$ from unity. An experimental study of a soft gel sedimenting under gravity represents another relevant analogy~\cite{Cipelletti11}. In that case, the sedimenting gel of decreasing height $h(t)$ is compressed with a global compression rate $\dot{\varepsilon} = \dot{h}(t)/h(t)$, and it was also found that time correlation functions inside the material scale with the rescaled variable $\Delta t\cdot\dot{\varepsilon}^\mu$, with an exponent $\mu \approx 1.0$ and features similar to the ones reported above for the expanding active material. 
Our detailed analysis of particle motion clearly reveals the existence of localised plastic events (due both to particle division and to the plasticity induced by the global expansion of the material). The similarity with particle-resolved experiments and computer simulations of sheared amorphous solids is again very striking, supporting our broad conclusion that dense growing active matter and sheared amorphous solids can be described by the same underlying physics. The analogy drawn here between an active system and a globally driven one differs qualitatively from a series of recent studies regarding the effect of self-propulsion on dense particle assemblies~\cite{BB13,berthier14,elijah}, which concluded that self-propulsion leads to glassy dynamics similar in many respects to that of thermally driven particle suspensions, so that the local driving force translates into a kind of non-driven, equilibrium-like relaxation dynamics. A third type of analogy was drawn from studying the effect of `self-deformation' (i.e. volume fluctuations occurring at the particle scale), where the activity was shown to give rise to local yielding events~\cite{Elsen17} that are then able to fluidise the system, but without provoking a large-scale deformation as we found here. It is remarkable that these three approaches, derived from studies where some kind of ``activity'' is added to an otherwise densely packed disordered collection of soft objects (colloids, cells, bacteria), do not lead to a unique phenomenology, but instead to qualitatively different analogies with the behaviour of amorphous systems. We conclude that the physics observed in ``dense active matter'' may depend quite explicitly on the specific type of activity considered~\cite{reviewelijah}. 
\subsection{Perspectives} In conclusion, we have studied a minimal model of growing dense active matter, a class which includes biological tissues, bacterial colonies and biofilms. We have shown that the expanding nature of the material is key to understanding its dynamics. In particular, it is useful to introduce the distinction between the affine deformation due to the growth of the material and the non-affine component, which contains the relevant information on structural relaxation and microscopic dynamics. These dynamics show pronounced dynamic heterogeneity and aging behaviour, and display non-trivial time correlation functions exhibiting ballistic motion at short scales, accompanied by a compressed exponential decay, crossing over to subdiffusive motion at larger scales. We have intentionally used a simplified model for expanding active matter, which contains particle division and steric repulsion as the only ingredients, in order to study their competition independently of any other physical processes that could of course be present in real materials, in particular biological ones. We are well aware that particle adhesion, self-propulsion, thermal fluctuations, and the internal structure or geometry of cells or bacteria may all affect the behaviour described in this work. We are currently exploring how to include some of these ingredients in more elaborate models, as well as collecting literature data for particle-resolved dynamic investigations of expanding biological systems. The study of the quasi-one-dimensional geometry in Sec.~\ref{1D}, added during the revision of this manuscript, is an effort in that direction. Our overall goal is to confirm that the main results of our study can be experimentally relevant. We also hope that our analysis, which borrows the tools developed to study dense amorphous solids, will motivate further experimental studies of the many fascinating examples of growing active matter that biology provides us. \begin{acknowledgments} We thank M. 
Cates, N. M. Oliveira, and D. Thirumalai for useful exchanges about this work. This work was supported by a grant from the Simons Foundation (\#454933, L. Berthier). E. T. is funded in part by the European Research Council under the Horizon 2020 Programme, ERC Grant Agreement No. 740269. \end{acknowledgments}
\section{Introduction} The global organization of complex molecular interactions within and across cells is being disclosed by graph-theoretic approaches~\cite{Barabasi:2004uq,tilee02,Rual:2005kx,Ma22012003}. The cellular networks obtained in this way exhibit universal topological features which are rarely found in random networks, such as broad degree distributions~\cite{Jeong:2000kx} and high modularity~\cite{shenorr02}. Their origins and their implications for cellular and larger-scale functions have thus been of great interest. Diverse network models based on simple mechanisms of adding and removing nodes and links have been proposed~\cite{barabasi99,Vazquez:2003vn,Yamada:2009qe}. Those models capture common aspects, like preferential attachment~\cite{Barabasi:1999ys}, of biological processes such as the duplication, divergence, and recruitment of genes, proteins, and enzymes, and successfully reproduce the empirical features of biological networks, suggesting that the former can be the origin of the latter. Yet it remains to be explored what drives such construction and remodeling of biological networks functioning in living organisms. A population of living organisms finds the typical architecture and function of its cellular networks changing with time. Such changes on long time scales are made by organisms of different traits giving birth to their descendants with different chances, that is, by evolution~\cite{fisher1999genetical,Orr:2005fk}. Therefore, it is desirable to investigate how the generic features of evolution lead to the emergence of the common features of biological networks. Living organisms are required to possess adaptability and stability simultaneously~\cite{wagnerbook}. To survive and give birth to descendants in fluctuating environments, the ability to adjust to a changed environment is essential~\cite{beaumont09}, which leads to, e.g., phenotypic diversity and the advantage of bet-hedging strategies~\cite{beaumont09}. 
At the same time, the ability to maintain a constant structure and perform routine important functions regularly, such as cell division and heart beats, is equally demanded. Therefore, in a given population, the cellular networks supporting higher adaptability and stability are more likely to be inherited, which leads the representative topology and function of the cellular networks to evolve over generations. Here we study how such evolutionary pressure shapes biological networks. We propose a network model in which links are rewired such that both adaptability and stability are enhanced. The dynamics of the network is simply represented by Boolean variables, assigned to each node, regulating one another~\cite{kauffman69}. Boolean networks have been instrumental for studying gene transcriptional regulatory networks~\cite{babu04} and metabolic networks~\cite{Ghim2005401}. This model network is supposed to represent the network structure typical of a population. The evolution of Boolean networks towards enhancing adaptability~\cite{kauffman86, stern99,oikonomou06,stauffer09,Greenbury201048}, stability~\cite{Wagner19961008,bornholdt98,szejka07,Sevim2008323,1367-2630-11-3-033005,Esmaeili2009127,mihaljev09,PhysRevE.81.021908,peixoto12}, or both~\cite{10.1371/journal.pcbi.1002669} has been studied, mostly by applying the genetic algorithm or similar ones to a group of small networks. In particular, model networks which evolve by rewiring links towards local dynamics that is neither active nor inactive have been shown to reproduce the critical global connectivity and many of the universal features of real-world biological networks~\cite{bornholdt00, rohlf07, rohlf08, liu06}, demonstrating the close relation between evolution and the structure of biological networks. However, the evolutionary evaluation and selection are made for each whole organism, not for part of it. 
In the simulated evolution of our model, the adaptability and the stability of the {\it global} dynamical state are evaluated for the wild-type network and its mutant network, which differ from each other by a single link, and the winner of the two becomes the wild-type in the next step. The study of this model leads us to find that sparse and heterogeneous connectivity patterns emerge, consistent with the gene transcriptional regulatory networks and the metabolic networks of diverse species. The scaling behavior of stability with respect to system size suggests that the evolved networks are critical, lying at the boundary between the inflexible ordered phase and the unstable chaotic phase. Our study also shows how the nature of fluctuations and correlations changes through evolution. The extent of perturbation spread, characterizing the system's stability, fluctuates over different realizations of evolution. The fluctuation turns out to scale linearly with the mean in the stationary state of evolution, while square-root scaling holds in the transient period. We argue that this dynamic crossover is rooted in the variation of the combined impacts of the structural fluctuation, driven by evolution, and the internal stochasticity. The scaling of the correlation volume, representing the typical number of nodes correlated with a given node, is another feature of the evolved networks. Our results thus show the universal impacts of biological evolution on the structure and function of biological networks and illuminate the nature of correlations and fluctuations in such evolving systems, as distinguished from randomly-constructed or other artificial systems. The paper is organized as follows. The network evolution model is described in detail in Sec.~\ref{sec:model}. The emergent structural and functional features are presented in Sec.~\ref{sec:evolution}. 
In Sec.~\ref{sec:generalized}, we present a Hamiltonian approach to a generalized model, which includes our model as a limiting case, and show the robustness of the obtained results. The scaling behaviors of the fluctuation of perturbation spread and of the correlation volume are analyzed in Secs.~\ref{sec:fluctuation} and \ref{sec:correlation}, respectively. We summarize and discuss the results of our study in Sec.~\ref{sec:discussion}. \section{Model} \label{sec:model} We consider a network in which the node activities are regulated by one another. The network may represent the transcriptional regulatory network of genes, in which the transcription of a gene is affected by the transcription factors encoded by other genes, or the metabolic network of metabolites and reactions, the concentrations and fluxes of which are correlated. Various cellular functions are based on those elementary regulations. The model network does not represent that of a specific organism but is representative of the cellular networks of a population of organisms, which evolve with time. \REV{In our model the network evolution is made by adding or removing links, representing the establishment of new regulatory inputs or the loss of existing targets, possibly caused by point mutations in the regulatory or coding regions of DNA~\cite{10.1371/journal.pcbi.1002669, babu04}.} To be specific, we consider a network $G$ of $N$ nodes which are assigned Boolean variables $b_{i}=\pm 1$ for $i=1,2,\ldots,N$. $b_i$ represents whether node $i$ is active or inactive in terms of the transcription of the messenger RNA, the flux of the corresponding chemical reaction, or the concentration of the metabolite. The global dynamical state is represented by $\Sigma = \{ b_1, b_2, \ldots, b_N\}$. Initially $L_0$ directed links are randomly wired and the $b_i$'s are set to $1$ or $-1$ randomly. 
A link from node $j$ to node $i$, with the adjacency matrix element $A_{ij}=1$, indicates the regulation of the activity of $i$ by $j$~\cite{babu04,Ghim2005401}. The state $b_{i}(\tau+1)$ of node $i$ at the microscopic time step $\tau+1$ is determined by its regulators at $\tau$ as \begin{equation} b_{i}(\tau+1) = F_{i}(\{ b_{j}(\tau)|A_{ij}=1\}), \label{eq:boolean_evol} \end{equation} where $F_i$ is the time-constant regulation function for node $i$, taking a value $1$ or $-1$ for each of the $2^{k_i}$ states of the $k_i$ regulators, with $k_i = \sum_{j} A_{ij}$. A target state $\Sigma^{\rm (target)} = \{ b_1^{\rm (target)}, b_2^{\rm (target)}, \ldots, b_N^{\rm (target)}\}$ is demanded of the network by the environment, and the distance between $\Sigma$ and $\Sigma^{\rm (target)}$ quantifies the adaptation to the environment. \begin{figure} \begin{center} \includegraphics[width=0.7\columnwidth]{Figure1v2.eps} \caption{(Color online) Evolving network model. (a) A mutant $G'$ is generated by adding or removing a link randomly in the wild-type $G$, here between nodes $i$ and $j$. (b) The transition from $G$ to $G'$ happens if $H^{\rm (target)}_{G'}<H^{\rm (target)}_{G}$ or if $H^{\rm (pert)}_{G'}\leq H^{\rm (pert)}_{G}$ and $H^{\rm (target)}_{G'}=H^{\rm (target)}_{G}$. A new target state ${\Sigma^{\rm (target)}}'$ is generated if $H^{\rm (target)}_{G',t}=0$.} \label{fig:model} \end{center} \end{figure} The dynamical state $\Sigma(\tau)=\{b_1(\tau), b_2(\tau),\ldots, b_N(\tau)\}$ is updated every microscopic time step $\tau$ as in Eq.~(\ref{eq:boolean_evol}). Also the structure of the network $G$, including its adjacency matrix $A$ and the regulation functions $\{F\}$, evolves on a longer time scale as follows. 
At $\tau = t\tau_m$, with $t=0,1,2,\ldots$ the macroscopic time step and $\tau_m$ a time constant, a mutant network $G'$ is generated, which is identical to the wild-type $G$ except that it has one more or one less link, with a different regulation function (see Fig.~\ref{fig:model}). Then we let the dynamical state $\Sigma(\tau)$ evolve on $G$ and $G'$, respectively, for $t\tau_m\leq \tau < (t+1) \tau_m$. Due to their structural difference, the $\Sigma(\tau)$'s may evolve differently although they are set equal initially at $\tau=t\tau_m$. At $\tau = (t+1)\tau_m$, the adaptability and the stability of the time trajectories $\{\Sigma(\tau)|t\tau_m \leq \tau<(t+1)\tau_m\}$ on $G$ and $G'$ are evaluated in terms of the Hamming distances $H^{\rm (target)}_{G,t}, H^{\rm (target)}_{G',t}, H^{\rm (pert)}_{G,t}$, and $H^{\rm (pert)}_{G',t}$, where the first two characterize the adaptation to the environment and the latter two represent the typical extent of perturbation spread. The winner of $G$ and $G'$ is determined in the way detailed below, and then becomes the wild-type $G$ for $(t+1)\tau_m \leq \tau< (t+2) \tau_m$, competing with its mutant. These procedures are repeated for $t=0,1,2,\ldots$. The adaptability of a Boolean network $G$ at time $t$ is quantified by the average Hamming distance between $\Sigma(\tau)$ and a given target state $\Sigma^{\rm (target)}$~\cite{kauffman86, stern99,oikonomou06,stauffer09,Greenbury201048} over a microscopic time interval as \begin{align} H_{G,t}^{\rm (target)} &= {1\over \tau_m - \tau_s} \sum_{\tau = t\tau_m+\tau_s }^{(t+1)\tau_m} H(\Sigma(\tau), \Sigma^{\rm (target)}), \nonumber\\ H(\Sigma, \Sigma^{\rm (target)}) &= {1\over N} \sum_{i=1}^N \left(1-\delta_{b_i, b_i^{\rm (target)}}\right), \label{eq:Htarget} \end{align} where $\delta_{a,b}$ is the Kronecker delta function. 
$\tau_s$ is a microscopic-time constant such that the Hamming distance $H(\Sigma(\tau),\Sigma^{\rm (target)})$ is stationary for $t\tau_m +\tau_s \leq \tau < t\tau_m +\tau_m$. Another constant $\tau_m$ is set to $\tau_m = 2\tau_s$ and is found to range from $38$ to $162$ for $30\leq N\leq 800$ in our simulations. \REV{If smaller values of $\tau_m$ and $\tau_s$ were used, $H_{G,t}^{\rm (target)}$ in Eq.~(\ref{eq:Htarget}) would not represent the adaptability of the network in the stationary state of the Boolean dynamics.} The smaller $H_{G,t}^{\rm (target)}$ is, the closer the dynamical state on $G$ is likely to approach the target state, implying that $G$ is more adaptable to a given environment. We compute $H_{G',t}^{\rm (target)}$ in the same way as in Eq.~(\ref{eq:Htarget}). The stability in performing routine processes is another key requirement of life. Given that local perturbations can spread globally, the ability to suppress such perturbation spread can be a measure of stability~\cite{Wagner19961008,bornholdt98,szejka07,Sevim2008323,1367-2630-11-3-033005,Esmaeili2009127,mihaljev09,PhysRevE.81.021908,peixoto12}. To quantify the stability of $G$ at time $t$, the difference between the original state $\Sigma(\tau)$ and the perturbed state $\Sigma^{\rm (pert)}(\tau)=\{b_1^{\rm (pert)}(\tau), b_2^{\rm (pert)}(\tau),\ldots, b_N^{\rm (pert)}(\tau)\}$ is measured. The perturbed state is obtained by flipping the states of $N/2$ randomly-selected $b$'s in $\Sigma(\tau)$ at $\tau=t\tau_m$ and then letting it evolve on $G$ for $t\tau_m \leq \tau< (t+1)\tau_m$. Then we count the number of perturbed nodes, having $b_i\ne b_i^{\rm (pert)}$, as \begin{equation} H_{G,t}^{\rm (pert)} = {1\over \tau_m - \tau_s} \sum_{\tau = t\tau_m+\tau_s }^{(t+1)\tau_m} H(\Sigma(\tau), \Sigma^{\rm (pert)} (\tau)), \label{eq:Hpert} \end{equation} with the Hamming distance $H(\Sigma, \Sigma^{\rm (pert)})$ defined in Eq.~(\ref{eq:Htarget}). 
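The Boolean dynamics of Eq.~(\ref{eq:boolean_evol}) and the perturbation-spread measure of Eq.~(\ref{eq:Hpert}) can be sketched as follows. This is a minimal illustration, not the actual simulation code: the network size $N=100$, the link number $1.4N$, and the time constants are illustrative choices, and duplicate links are allowed for simplicity.

```python
import random

def make_network(N, L, rng):
    """Random directed network: N nodes, L links (duplicates allowed for
    simplicity), with random Boolean regulation functions as lookup tables."""
    regulators = [[] for _ in range(N)]
    for _ in range(L):
        i, j = rng.randrange(N), rng.randrange(N)   # a link j -> i (A_ij = 1)
        regulators[i].append(j)
    # F_i maps each of the 2^{k_i} regulator configurations to +1 or -1
    F = [[rng.choice((1, -1)) for _ in range(2 ** len(reg))] for reg in regulators]
    return regulators, F

def step(state, regulators, F):
    """One synchronous update: b_i(tau+1) = F_i({b_j(tau) | A_ij = 1})."""
    new = []
    for i, reg in enumerate(regulators):
        idx = 0
        for j in reg:                  # encode the regulator states in binary
            idx = 2 * idx + (1 if state[j] == 1 else 0)
        new.append(F[i][idx])
    return new

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

rng = random.Random(0)
N = 100
regulators, F = make_network(N, int(1.4 * N), rng)
state = [rng.choice((1, -1)) for _ in range(N)]
# perturbed copy: flip N/2 randomly chosen variables
flipped = set(rng.sample(range(N), N // 2))
pert = [-s if i in flipped else s for i, s in enumerate(state)]
# evolve both copies in parallel and average the Hamming distance
# over a late-time window (illustrative tau_s = 40, tau_m = 80)
H = []
for tau in range(80):
    state, pert = step(state, regulators, F), step(pert, regulators, F)
    if tau >= 40:
        H.append(hamming(state, pert))
H_pert = sum(H) / len(H)
print(H_pert)
```

In the full model this measurement is performed for both $G$ and $G'$ at every macroscopic step and enters the selection rule described next.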
$H^{\rm (pert)}_{G,t}$ represents the typical fraction of perturbed nodes; the smaller $H^{\rm (pert)}_{G,t}$ is, the more stable the network $G$ is against dynamical perturbations. The stability of the mutant $G'$ is computed in the same way. We remark that the number of initially flipped variables can be changed over a significant range without changing the main results. The mutant $G'$ becomes the winner (i) if $H_{G',t}^{\rm (target)}<H_{G,t}^{\rm (target)}$ ($G'$ is more adaptable than $G$) or (ii) if $H_{G',t}^{\rm (pert)}< H_{G,t}^{\rm (pert)}$ ($G'$ is more stable than $G$) and $H_{G',t}^{\rm (target)}=H_{G,t}^{\rm (target)}$. If $H_{G',t}^{\rm (target)}=H_{G,t}^{\rm (target)}$ and $H_{G',t}^{\rm (pert)}= H_{G,t}^{\rm (pert)}$, the winner is chosen at random. Examples of the transition from $G$ to $G'$ are depicted in Fig.~\ref{fig:model}. Finally, to model changes in the environment, a new target state ${\Sigma^{\rm (target)}}^{'}$ is generated whenever $H^{\rm (target)}$ of the winner is zero. Our network evolution model therefore represents the co-evolution of the structure and dynamics of a Boolean network on different time scales in a changing environment. \section{Emergent features in structure and function} \label{sec:evolution} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figure2v3.eps} \caption{(Color online) Emergence of a sparse and heterogeneous connectivity pattern. In simulations, the initial number of links $L_0$ is set to $4N$ or $N/2$, giving $\overline{k}_0=L_0/N = 4$ or $0.5$. For each $N$ and $L_0$, we run $\mathcal{N}$ independent simulations, each for $0\leq t\leq T$, where $T$ ranges from $4\times 10^4$ to $5\times 10^6$ and $\mathcal{N}=1000$ for $N\leq 200$, $\mathcal{N}=760$ for $N=400$, and $\mathcal{N}=22$ for $N=800$. $\langle \cdots \rangle$ indicates the ensemble average. (a) The mean connectivity $\langle \overline{k}_t\rangle$ for $N=200$. 
It converges to a constant irrespective of the initial value, evaluated as $\langle \overline{k}_\infty\rangle = (T/4)^{-1}\sum_{t=(3/4)T}^T \langle\overline{k}_t \rangle\simeq 1.4$. The networks at selected times are presented. (b) The $N$-dependence of the stationary-state mean connectivity. $\langle \overline{k}_{\infty}\rangle\simeq 0.53 +0.17 \ln N$ (solid line) fits the model results (circles) reasonably well. The mean connectivity $\overline{k}=L/N$ of the transcriptional regulatory networks of four species (triangles)~\cite{balazsi05, galanvasquez11, balazsi08,sanz11,balaji06} and of the bipartite metabolic networks of 506 species (crosses)~\cite{Karp01012005,pkim14} are shown. The dotted line, given by $\langle \overline{k}_{\infty}\rangle\simeq 1.01 +0.15 \ln N$, fits the data of the metabolic networks, with $N$ the number of reactions and metabolites. (c) The cumulative distributions of the in-degree, $C(k)=\langle N^{-1}\sum_{j=1}^N \theta(k_{j}-k)\rangle$, at $t=0$ (initial state) and $t=4.8\times 10^5$ (stationary state) for $N=200$. The distribution for random networks of $N=200$ nodes and $\langle L\rangle=\langle \overline{k}_\infty\rangle N=1.4N$ links is also shown for comparison. } \label{fig:evol_structure} \end{center} \end{figure} The simulation of the proposed model shows a variety of interesting features of evolving networks. Most of all, we find that the mean connectivity \REV{$\langle \overline{k}_t\rangle=\langle N^{-1}\sum_{i=1}^N k_i \rangle=\langle L_t\rangle/N$, with $k_i = \sum_{j=1}^N A_{ij}$ the in-degree or the number of regulators of node $i$ and } $L_t$ the total number of links at time $t$, converges to a constant $\langle {\overline{k}}_{\infty}\rangle$, which depends only on $N$ regardless of $\overline{k}_0=L_0/N$ [Fig.~\ref{fig:evol_structure} (a)]. 
The mean connectivity has been shown to converge to $\langle \overline{k}_\infty\rangle=2$ in some evolution models~\cite{rohlf07, rohlf08, liu06, PhysRevLett.108.128702,10.1371/journal.pcbi.1002669}, which is the critical point distinguishing the ordered and the chaotic phase in random Boolean networks~\cite{derrida86}. Different values of $\langle \overline{k}_\infty\rangle$ have been reported in other models~\cite{Sevim2008323,1367-2630-11-3-033005}, where $\langle \overline{k}_\infty\rangle>2$, implying a fundamental difference between the evolved networks and random networks. In our model, $\langle \overline{k}_\infty\rangle$ ranges from $1.2$ to $1.7$ for $30\leq N\leq 800$ and the data are fitted by a logarithmic growth with $N$ as $\langle \overline{k}_\infty\rangle\sim 0.53 + 0.17 \ln N$ [See Fig.~\ref{fig:evol_structure} (b)]. This suggests that $\langle \overline{k}_\infty\rangle$ would remain small for $N$ reasonably large, e.g., $\langle \overline{k}_\infty\rangle\simeq 2.88$ for $N=10^6$. Such sparse connectivity is identified in real biological networks~\cite{balazsi05,galanvasquez11,balazsi08, sanz11,balaji06,Karp01012005,pkim14}. The mean connectivities of the transcriptional regulatory networks are between $1$ and $3$ while the number of nodes ranges from hundreds to thousands. The mean connectivities of the metabolic bipartite networks also range between $1$ and $3$. Furthermore, they show logarithmic scaling with $N$ in agreement with our model [See Fig.~\ref{fig:evol_structure} (b)]. The number of regulator nodes \REV{(in-degree)} $k$ is broadly distributed in the evolved network compared with the Poissonian distribution of the random networks as seen in Fig.~\ref{fig:evol_structure} (c). Such broad distributions are universally observed in real-world networks~\cite{albert02,balazsi05,thieffry98, balaji06,tilee02}. 
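The cumulative in-degree distribution $C(k)$ compared in Fig.~\ref{fig:evol_structure} (c) is straightforward to extract from an adjacency matrix; a minimal sketch, with an illustrative toy matrix:

```python
import numpy as np

def cumulative_in_degree(A):
    """C(k) = N^{-1} sum_j theta(k_j - k) for k = 0, 1, ..., with
    theta the Heaviside step function (theta(0) = 1, so C(0) = 1)."""
    k_in = A.sum(axis=1)                      # in-degree of each node
    N = A.shape[0]
    k_max = int(k_in.max())
    return np.array([(k_in >= k).sum() / N for k in range(k_max + 2)])

# Toy 3-node example: in-degrees are 2, 1, 0.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
print(cumulative_in_degree(A))   # C(0)=1, C(1)=2/3, C(2)=1/3, C(3)=0
```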
The cumulative \REV{in-}degree distribution $C(k)=N^{-1} \sum_{i=1}^N \theta(k_i-k)$, with $\theta(x)$ the Heaviside step function, appears to take the form of an exponential function, which is in agreement with the transcriptional regulatory networks of {\it S. cerevisiae}~\cite{balaji06,tilee02}. \REV{This is, however, inconsistent with previous studies of real metabolic networks~\cite{Jeong:2000kx} or of other model networks evolving via node duplication and divergence~\cite{10.1371/journal.pcbi.1002669}, which display power-law degree distributions. It is known that node duplication~\cite{ Vazquez:2003vn} or the preferential attachment of links~\cite{Barabasi:1999ys} may lead to such power-law degree distributions; neither mechanism is present in our model. According to Ref.~\cite{doi:10.1137/070710111}, the functional form of the degree distributions of some real metabolic networks is difficult to determine. In contrast to the broad in-degree distributions, the out-degree $k^{\rm (out)}_i$ in the evolved networks of our model is found to follow the Poisson distribution as in the random networks. It is known that the out-degree distribution is irrelevant to the determination of the dynamical phase (ordered or chaotic) of random Boolean networks~\cite{lee08jpa}. } \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Figure3v2.eps} \caption{(Color online) Time evolution of adaptability and stability. (a) Plot of $(\langle H^{\rm (pert)}_t\rangle,\langle H^{\rm (target)}_t\rangle)$ for $10^2\leq t<6.4\times 10^5$ and $N=200$ with the initial mean connectivity $\overline{k}_0=4$ and $\overline{k}_0=0.5$. \REV{$\mathcal{N}=1000$ simulations are run. } The color varies with the evolution time $t$ and the arrows indicate the direction of increasing time. (Inset) The scaling behavior of the stationary-state Hamming distance $\langle H^{\rm (pert)}_\infty\rangle$ with respect to the number of nodes $N$.
$\langle H^{\rm (pert)}_\infty\rangle\sim N^{-0.7}$ (dashed line) fits the data. (b) Plots of $\langle H^{\rm (pert)}_t\rangle$ versus the mean connectivity $\langle \overline{k}_t\rangle$ for the evolving networks and the random networks of $N=200$. } \label{fig:evol_dynamics} \end{center} \end{figure} As evolution proceeds, the evolving network finds it increasingly easy to approach or reach a given target state. Such adaptability is quickly acquired, as implied by the rapid decrease of $\langle H^{\rm (target)}_t\rangle$ with increasing $t$ [Fig.~\ref{fig:evol_dynamics} (a)]. \REV{We remark that $H^{\rm (target)}_t$ may increase with $t$ even in a single realization of evolution, since the target state, the state demanded by the environment, may change with time. } The extent of perturbation spread $\langle H^{\rm (pert)}_t\rangle$ also decreases rapidly by evolution. Its stationary-state value $\langle H^{\rm (pert)}_\infty\rangle$ shows the following scaling behavior with $N$: \begin{equation} \left\langle H^{\rm (pert)}_\infty\right\rangle\sim N^{-\theta^{\rm (pert)}}, \ \ \theta^{\rm (pert)}\simeq 0.7. \label{eq:Hpertscal} \end{equation} This implies that the evolved networks possess an intermediate level of stability, as the following comparisons show. In random Boolean networks at the critical mean connectivity $\overline{k}_c=2$, the perturbation spread scales similarly to Eq.~(\ref{eq:Hpertscal}) but with a smaller scaling exponent ranging between $1/3$ and $1/2$, depending on the functional form of the in-degree distribution~\cite{lee08jpa}. Therefore, the perturbation spread in those critical random networks is much larger than that in the evolved networks for large $N$. Figure~\ref{fig:evol_dynamics} (b) shows that during the whole period of evolution, the evolving networks have a smaller spread of perturbation than the random networks with the same mean connectivity $\langle \overline{k}\rangle$.
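The perturbation-spread measurement behind $H^{\rm (pert)}$ can be sketched for a plain random Boolean network as follows; this is a simplified stand-in for the model's actual protocol (synchronous updates, fixed in-degree, a single flipped node), with all names and parameters chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_boolean_network(N, k):
    """Each node gets k random regulators and a random Boolean truth table."""
    inputs = np.array([rng.choice(N, size=k, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    # Encode each node's regulator states as an integer index into its table.
    idx = (state[inputs] * 2 ** np.arange(inputs.shape[1])).sum(axis=1)
    return tables[np.arange(len(state)), idx]

def perturbation_spread(N=100, k=2, steps=50):
    """Normalized Hamming distance between a trajectory and its copy with
    one initially flipped node, after `steps` synchronous updates."""
    inputs, tables = random_boolean_network(N, k)
    s1 = rng.integers(0, 2, size=N)
    s2 = s1.copy()
    s2[0] ^= 1                       # flip a single node
    for _ in range(steps):
        s1 = step(s1, inputs, tables)
        s2 = step(s2, inputs, tables)
    return np.mean(s1 != s2)
```

Averaging this quantity over networks and initial conditions gives a crude analogue of $\langle H^{\rm (pert)}\rangle$; at fixed in-degree $k=2$ such random networks sit at the critical point quoted above.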
On the other hand, in a variant of our model, the ``stability-only'' model, in which only the stability of the wild-type and the mutant is evaluated for selection, the perturbation spread scales as $\langle H^{\rm (pert)}_\infty\rangle \sim N^{-1}$ [Fig.~\ref{fig:relax} (b)]. The networks of the original model allow a larger spread of perturbation than those of the stability-only model in order to facilitate adaptation to a fluctuating environment. The mean connectivity $\langle \overline{k}_\infty\rangle$ is also subject to such a balance constraint. As the opposite of the stability-only model, we can consider the ``adaptation-only'' model in which only the adaptability of the wild-type and the mutant is considered. We found that the mean connectivity is much larger than in the original model.~\footnote{We found that the mean connectivity does not even become stationary but keeps increasing with time in some cases.} A large number of links creates more and larger attractors in the state space, which can be helpful for adaptation. In the stability-only model, on the contrary, we find that the mean connectivity is much smaller than that of the original model [Fig.~\ref{fig:relax} (a)], suppressing the transitions between attractors. All these characteristics demonstrate that the structure and dynamics of the evolved networks are at the boundary between the stable and robust phase and the flexible and adaptable phase~\cite{kauffman69}. \section{A generalized model} \label{sec:generalized} \begin{figure} \includegraphics[width=\columnwidth]{Figure4v3.eps} \caption{(Color online) Mean connectivity and stability in the generalized model. The parameter $r$ is related to the relative importance of adaptability with respect to stability as in Eq.~(\ref{eq:rdef}). (a) Plots of the stationary-state mean connectivity $\langle \overline{k}_\infty\rangle$ versus the system size $N$. $\langle \overline{k}_\infty\rangle$ increases slowly with $N$ for all $r>0$ except for the stability-only model.
(b) Plots of the perturbation spread $\langle H^{\rm (pert)}_\infty\rangle$ versus $N$. The scaling behavior $\langle H^{\rm (pert)}_\infty\rangle \sim N^{-\theta^{\rm (pert)}}$ is observed for all the considered cases. (Inset) The scaling exponent $\theta^{\rm (pert)}$ decreases from $1$ to $0.7$ with increasing $r$. } \label{fig:relax} \end{figure} In this section, we recast our model in a Hamiltonian form, which offers a natural generalization allowing us to check the robustness of the obtained results. The evolution trajectory of the model network corresponds to a path in the space of networks $G$. A system of $N$ nodes changes its location in the $G$ space in a stochastic way, as described in Sec.~\ref{sec:model}. Therefore, a generalized evolution model can be introduced by specifying the transition probability $\omega_{G\to G';\Sigma}$ from $G$ to $G'$ for a given dynamical state $\Sigma$~\cite{Wagner19961008,Sevim2008323,peixoto12}. Note that the dynamical state evolves with microscopic time $\tau$ in a deterministic way as long as the network structure $G$ is fixed. Suppose that the transition probabilities satisfy the relation \begin{align} {\omega_{G\to G';\Sigma}\over \omega_{G'\to G;\Sigma}} = \exp & \left(-{H_{G'}^{\rm (target)}-H_{G}^{\rm (target)}\over T^{\rm (target)}} \right.\nonumber\\ &\left. -{{H_{G'}^{\rm (pert)} - H_G^{\rm (pert)} \over T^{\rm (pert)}}}\right), \label{eq:boltzmann} \end{align} where the Hamming distances are computed by Eqs.~(\ref{eq:Htarget}) and (\ref{eq:Hpert}) with $\Sigma(t\tau_m) = \Sigma$ and two temperatures $T^{\rm (target)}$ and $T^{\rm (pert)}$ are introduced. Transitions to the networks with smaller $H^{\rm (target)}$ and $H^{\rm (pert)}$ are preferred to an extent depending on the two temperatures.
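A transition rule satisfying the ratio in Eq.~(\ref{eq:boltzmann}) can be realized Metropolis-style, and in the limit of both temperatures going to zero at a fixed ratio it reduces to comparing a single combined cost; a minimal sketch (function names and calling conventions are ours, for illustration):

```python
import math

def acceptance_probability(dH_target, dH_pert, T_target, T_pert):
    """Metropolis-style acceptance for a mutation G -> G', with
    dH_target = H_target(G') - H_target(G) and dH_pert likewise;
    min(1, exp(-dH_target/T_target - dH_pert/T_pert)) satisfies the
    detailed-balance ratio of the transition probabilities."""
    return min(1.0, math.exp(-dH_target / T_target - dH_pert / T_pert))

def move_allowed(H_pert_new, H_target_new, H_pert_old, H_target_old, r):
    """Zero-temperature limit at a fixed ratio r = T_pert / T_target:
    the mutation is accepted iff the combined cost H_pert + r * H_target
    does not increase."""
    return H_pert_new + r * H_target_new <= H_pert_old + r * H_target_old
```

Moves that lower both Hamming distances are always accepted, while unfavorable moves are suppressed exponentially as either temperature goes to zero.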
Our model corresponds to the limit \begin{equation} T^{\rm (target)}\to 0, \ T^{\rm (pert)}\to 0, \ {\rm and} \ r \equiv {T^{\rm (pert)}\over T^{\rm (target)}}\to \infty, \label{eq:rdef} \end{equation} since the transition from $G$ to $G'$ is made only if $H^{\rm (target)}_{G'}< H^{\rm (target)}_G$ or $H^{\rm (target)}_{G'}=H^{\rm (target)}_G$ and $H^{\rm (pert)}_{G'}\leq H^{\rm (pert)}_G$. When $T^{\rm (target)}>0$ and $T^{\rm (pert)}>0$, a transition to a less adaptable (larger $H^{\rm (target)}$) or less stable (larger $H^{\rm (pert)}$) network can be made with non-zero probability, in contrast to our model. The adaptation-only model corresponds to the limit $T^{\rm (target)}\to 0$ and $T^{\rm (pert)}\to \infty$ and the stability-only model to $T^{\rm (target)}\to \infty$ and $T^{\rm (pert)}\to 0$. With the transition probabilities satisfying Eq.~(\ref{eq:boltzmann}), each network $G$ appears in the stationary state with probability \begin{equation} P_{G;\Sigma} \propto \exp\left(- {H_G^{\rm (target)}\over T^{\rm (target)}} -{H_G^{\rm (pert)}\over T^{\rm (pert)}} \right), \end{equation} with the two Hamming distances playing the role of Hamiltonians coupled with two temperatures. To investigate the robustness of the results obtained in Sec.~\ref{sec:evolution}, we study this generalized model with the temperature ratio $r$ positive, $T^{\rm (pert)}\to 0$, and $T^{\rm (target)}\to 0$. For $r>0$, the transition from $G$ to $G'$ is allowed if and only if $H^{\rm (pert)}_{G'}+ r H^{\rm (target)}_{G'}\leq H^{\rm (pert)}_G + r H^{\rm (target)}_G$. $r$ controls the relative importance of $H^{\rm (target)}$ with respect to $H^{\rm (pert)}$. Simulations show that $\langle \overline{k}_\infty\rangle$ displays similar $N$-dependent behaviors for all $r>0$; it increases slowly with $N$ [See Fig.~\ref{fig:relax}(a)]. On the contrary, in the stability-only model, the mean connectivity decreases with $N$.
This highlights the crucial role of adaptation in shaping the architecture of regulatory networks. Secondly, as shown in Fig.~\ref{fig:relax} (b), $\langle H^{\rm (pert)}_\infty\rangle\sim N^{-\theta^{\rm (pert)}}$ with $\theta^{\rm (pert)} \simeq 0.7$ is observed not only for $r\to\infty$ but also for sufficiently large $r$, in the range $r\gtrsim 10$. For small $r$, roughly $r\lesssim 0.1$, and in the stability-only model, $\langle H^{\rm (pert)}_\infty\rangle\sim N^{-1}$, implying that stronger stability is achieved than for large $r$. The scaling exponent $\theta^{\rm (pert)}$ decreases from $1$ to $0.7$ as $r$ increases in the range $0.1\lesssim r \lesssim 10$. Such robustness of the structural and functional properties for all large $r$ makes our model ($r\to\infty$) appropriate for modeling the evolutionary selection requiring both adaptability and stability. \section{Scaling of fluctuation} \label{sec:fluctuation} As the initial randomly-wired networks evolve, many of their properties change with time, the investigation of which may illuminate the mechanisms of evolution by which living organisms optimize their architecture for acquiring adaptability and stability. Evolution is accompanied by fluctuations. Environments are different for different groups of organisms and vary with time even for a given group. Mutants are generated at random and thus the specific pathway of evolution becomes stochastic. The studied networks therefore display a fluctuation over different realizations of evolution, $\sigma = \sqrt{\langle A^2\rangle - \langle A\rangle^2}$ for each quantity $A$. Among others, here we investigate such ensemble fluctuation of perturbation spread characterizing the system's stability $\sigmapert_t = \sqrt{\left\langle (H^{\rm (pert)}_t)^2\right\rangle - \left\langle H^{\rm (pert)}_t\right\rangle^2}$.
While the evolutionary pressure results in enhancing stability (reducing $\langle H^{\rm (pert)}\rangle$), the fluctuation normalized by the mean, $\sigmapert/\langle H^{\rm (pert)}\rangle$, is stronger, and the whole distribution broader, than in random networks, as shown in Fig.~\ref{fig:fluctuation} (a). Such enhancement of fluctuations helps the evolving network search for the optimal topology under fluctuating environments~\cite{eldar10,kussel05,thattai04,wolf05}. It is observed for a wide range of real-world systems that the standard deviation $\sigma$ and the mean $m$ of a dynamic variable show the scaling relation $\sigma \sim m^\alpha$ with the scaling exponent $\alpha$ reflecting the nature of the dynamical processes: For instance, $\alpha=1/2$ when the relevant variables are uncorrelated and their distributions have finite moments, as in a conventional random walk, while a widely varying external influence may induce correlations strong enough to yield $\alpha\neq 1/2$~\cite{menezes04a,menezes04b,meloni08,eisler08}. Such a scaling relation has been observed for the gene expression level or the protein concentration that fluctuates over cells and time~\cite{nacher05,bareven06}. Also in our model, the mean $\langle H^{\rm (pert)}_t\rangle$ and the fluctuation $\sigma_t$ of perturbation spread at different times $t$ satisfy the scaling relation \begin{equation} \sigmapert_t \sim \left\langle H^{\rm (pert)}_t\right\rangle^\alpha. \label{eq:fluctscal} \end{equation} Interestingly, the scaling exponent $\alpha$ changes with evolution [Fig.~\ref{fig:fluctuation} (b)]; $\alpha=\alpha_{\rm tr}$ with $\alpha_{\rm tr}\simeq 0.5$ for $\overline{k}_0=4$ and $\alpha_{\rm tr}\simeq 0.6$ for $\overline{k}_0=0.5$ during the transient period but $\alpha=\alpha_{\rm st}$ with $\alpha_{\rm st}\simeq 1$ in the stationary state.
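In practice, the exponent $\alpha$ can be estimated by a least-squares fit on log-log data; a sketch with synthetic numbers, made up only to check that the fit recovers a known exponent:

```python
import numpy as np

def scaling_exponent(means, stds):
    """Estimate alpha in sigma ~ m^alpha by linear regression of
    log(sigma) against log(m)."""
    alpha, _ = np.polyfit(np.log(means), np.log(stds), 1)
    return alpha

# Synthetic check: data generated with sigma = 0.3 * m^0.5.
m = np.array([0.01, 0.02, 0.05, 0.1, 0.2])
s = 0.3 * m ** 0.5
print(scaling_exponent(m, s))   # ≈ 0.5
```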
Such a crossover in $\alpha$ is robustly observed for all $N$ and $L_0$ as shown in Fig.~\ref{fig:fluctuation}(c) and \ref{fig:fluctuation}(d). \begin{figure*} \includegraphics[width=2\columnwidth]{Figure5v2.eps} \caption{(Color online) Scaling behaviors of fluctuation of perturbation spread. (a) The normalized fluctuation $\sigma_t/\langle H^{\rm (pert)}_t\rangle$ as a function of the mean connectivity $\langle \overline{k}_t\rangle$ for $N=200$. It is larger than that in the random networks (dashed line). The color varies with the evolution time $t$ and the arrows indicate the direction of increasing time. (Inset) The cumulative distributions of $H^{\rm (pert)}_{G,t}$ in the stationary state ($t>4.8\times 10^5$) compared with those of the random networks of $\langle \bar{k}\rangle=1.4$ (dashed line). (b) Plots of $\sigma_t$ with respect to $\langle H^{\rm (pert)}_t\rangle$ for $\overline{k}_0=4$ and $0.5$ and $N=200$. (Inset) The estimated scaling exponents $\alpha$ in Eq.~(\ref{eq:fluctscal}) as functions of time $t$. (c) Plots of $\sigma_t$ versus the log-binned values of $\langle H^{\rm (pert)}_t\rangle$ in the transient (tr.) and stationary (st.) periods for system sizes $N=50, 100, 200$, and $400$ with $\overline{k}_0=4$. The slopes of the two fitting lines, $0.90$ and $0.54$, are the averages of the estimated exponents $\alpha$ in the stationary and the transient period, respectively. (Inset) Plots of $\alpha$ versus $N$ in the transient and the stationary period. (d) The same plots as (c) with $\overline{k}_0=0.5$. The slopes, $0.91$ and $0.59$, of the two fitting lines are the averages of $\alpha$ for $N=50, 100, 200$, and $400$. (Inset) Plots of $\alpha$ versus $N$ in the transient and the stationary periods. (e) The ratio ${\sigma^{(E)}_t}^2/{\sigma_t}^2$ as a function of time $t$ for $\overline{k}_0=4$ and $0.5$. In the stationary state, ${\sigma_t^{(E)}}^2/{\sigma_t}^2\simeq 0.67$ regardless of the initial mean connectivity or the system size.
(f) The estimated scaling exponents $\alpha$ for the whole, external, and internal fluctuation at each time $t$ for $N=200$. (g) Plots of $\langle H^{\rm (pert)}_t\rangle$ and $\langle \overline{k}_t\rangle$ versus time $t$ in the transient period $0<t<20,000$. Both decrease with little fluctuation. (h) Plots of $\langle H^{\rm (pert)}_t\rangle$ and $\langle \overline{k}_t\rangle$ versus $t$ in the stationary period $480,000<t<500,000$. The larger fluctuation of $\langle H^{\rm (pert)}_t\rangle$ than $\langle \overline{k}_t\rangle$ is seen. } \label{fig:fluctuation} \end{figure*} What is the origin of such dynamic crossover in $\alpha$? It has been shown that the interplay of exogenous and endogenous dynamics may affect the scaling exponent $\alpha$ in systems under the influence of external environments~\cite{elowitz02,swain02,menezes04a,menezes04b,meloni08,eisler08}. In our evolution model, the extent of perturbation spread depends on the initial perturbation and on the network structure. The network structure is the outcome of the specific evolution pathway affected by the changing environment. The location of initial perturbation is determined on a random basis in our model, modeling the stochasticity of the internal microscopic dynamics in real systems. Therefore the perturbation spread can be considered as a function of the internal dynamics component $D$ and the network structure $S$, i.e., $H^{\rm (pert)}(D,S)$. 
Then the fluctuation of $H^{\rm (pert)}$ is represented as $\sigma^2 = \langle \langle {H^{\rm (pert)}}^2 \rangle_D \rangle_S - {\langle \langle H^{\rm (pert)} \rangle_D \rangle_S}^2$, where $\langle \cdots \rangle_D$ and $\langle \cdots \rangle_S$ represent the averages over $D$ and $S$, i.e., $\int dD P(D)\cdots$ and $\int dS P(S) \cdots$, respectively, and is decomposed into the internal and the external fluctuation as~\cite{elowitz02,swain02}: \begin{align} \sigma^2 &= {\sigma^{(I)}}^2 + {\sigma^{(E)}}^2,\nonumber\\ {\sigma^{(I)}}&=\sqrt{\langle \langle {H^{\rm (pert)}}^2\rangle_D\rangle_S - \langle {\langle H^{\rm (pert)}\rangle_D}^2\rangle_S},\nonumber\\ {\sigma^{(E)}}&=\sqrt{\langle {\langle H^{\rm (pert)}\rangle_D}^2\rangle_S - {\langle \langle H^{\rm (pert)} \rangle_D \rangle_S}^2}. \label{eq:sigmas} \end{align} The internal fluctuation ${\sigma^{(I)}}$ denotes the structural average of the internal-dynamics fluctuation of $H^{\rm (pert)}$. On the other hand, the external fluctuation ${\sigma^{(E)}}$ is the structural fluctuation of the internal-dynamics average of $H^{\rm (pert)}$. In simulations, the quantities $\langle \langle \cdots \rangle_D\rangle_S$ are obtained simply by the ensemble averages $\langle \cdots \rangle$. To obtain $\langle {\langle H^{\rm (pert)}\rangle_D}^2\rangle_S$, we use the relation $\langle {\langle H^{\rm (pert)}\rangle_D}^2\rangle_S = \langle H^{\rm (pert,I)} H^{\rm (pert,II)}\rangle$~\cite{elowitz02,swain02}, where $H^{\rm (pert,I)}$ and $H^{\rm (pert,II)}$ are the perturbation spreads from two different initial perturbations on the same network and are computed by Eq.~(\ref{eq:Hpert}) with different perturbed states $\Sigma^{\rm (pert,I)}$ and $\Sigma^{\rm (pert,II)}$ from two initial perturbations.
Inserting $\langle {H^{\rm (pert)}}^2\rangle = (1/2)(\langle {H^{\rm (pert,I)}}^2\rangle + \langle {H^{\rm (pert,II)}}^2\rangle)$ and $\langle {H^{\rm (pert)}}\rangle = (1/2)(\langle {H^{\rm (pert,I)}}\rangle + \langle {H^{\rm (pert,II)}}\rangle)$ in Eq.~(\ref{eq:sigmas}), one finds that the internal fluctuation is represented as ${\sigma^{(I)}}^2 = (1/2) \langle (H^{\rm (pert,I)} - H^{\rm (pert,II)})^2\rangle$ and the external fluctuation is ${\sigma^{(E)}}^2 = \langle (H^{\rm (pert,I)} -\langle H^{\rm (pert,I)}\rangle) (H^{\rm (pert,II)} -\langle H^{\rm (pert,II)}\rangle)\rangle$. The external fluctuation $\sigma^{(E)}_{t}$ is found to be much larger than $\sigma^{(I)}_{t}$ for all $t$ [Fig.~\ref{fig:fluctuation} (e)], implying a wide variation of the network structure, which arises from exploring differentiated pathways of evolution in changing environments. Moreover, the external fluctuation displays a crossover behavior similar to that of $\sigma_t$, that is, $\sigma^{(E)}_t\sim \langle H^{\rm (pert)}_t\rangle^{\alpha^{(E)}}$ with $\alpha^{(E)}$ increasing from $\alpha^{(E)}_{\rm tr}$, a value close to $1/2$, in the transient period to a value $\alpha^{(E)}_{\rm st}\simeq 1$ in the stationary state [Fig.~\ref{fig:fluctuation} (f)]. On the other hand, the internal fluctuation behaves as $\sigma^{(I)}_{t}\sim \langle H^{\rm (pert)}_t\rangle^{\alpha^{(I)}}$ with $\alpha^{(I)}$ remaining close to $1/2$, as in a diffusion process [Fig.~\ref{fig:fluctuation}(f)]. Which of the internal and the external fluctuation is dominant has been investigated for various complex systems~\cite{menezes04a,menezes04b,meloni08,eisler08}. In contrast to the static systems of those previous works, the evolving networks in our model display a dynamic crossover in the fluctuation scaling while the external fluctuation remains dominant throughout.
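These two-replicate estimators translate directly into code; a minimal sketch (the array layout is our choice: the two arrays hold the spreads from two independent initial perturbations on the same realizations):

```python
import numpy as np

def decompose_fluctuation(H1, H2):
    """Internal/external decomposition from paired replicates:
    sigma_int^2 = (1/2) <(H1 - H2)^2>,
    sigma_ext^2 = <(H1 - <H1>)(H2 - <H2>)>."""
    H1 = np.asarray(H1, dtype=float)
    H2 = np.asarray(H2, dtype=float)
    s_int2 = 0.5 * np.mean((H1 - H2) ** 2)
    s_ext2 = np.mean((H1 - H1.mean()) * (H2 - H2.mean()))
    return s_int2, s_ext2
```

As a sanity check, identical replicates give zero internal fluctuation and an external fluctuation equal to the ensemble variance, as expected.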
To decipher the mechanism underlying this phenomenon, we begin by assuming that in the scaling regime the perturbation spread $H^{\rm (pert)}_t$ is small and factorizes as \begin{equation} H^{\rm (pert)}_t \simeq D_t S_t, \end{equation} where $D_t$ and $S_t$ are the components reflecting the dependence of perturbation spread on the location of initial perturbation and on the global network structure, respectively. $D_t$ and $S_t$ are expected to be independent. We assume that their fluctuations scale as $\xi^{(D)}_t= \sqrt{\langle D_t^2\rangle - \langle D_t\rangle^2}\sim \langle D_t\rangle^{\beta^{(D)}}$ and $\xi^{(S)}_t= \sqrt{\langle S_t^2\rangle - \langle S_t\rangle^2}\sim \langle S_t\rangle^{\beta^{(S)}}$ with $\beta^{(D)}$ and $\beta^{(S)}$ time-independent constants. Then the mean of the perturbation spread is given by \begin{equation} \langle H^{\rm (pert)}_t\rangle = \langle D_t\rangle\langle S_t\rangle \label{eq:Hdecomp0} \end{equation} and the internal and the external fluctuation in Eq.~(\ref{eq:sigmas}) are represented as \begin{eqnarray} {\sigma^{(I)}_t} &=& \sqrt{(\langle D_t^2\rangle-\langle D_t\rangle^2)\langle S_t^2\rangle} \sim \langle D_t\rangle^{\beta^{(D)}}\sqrt{\langle S_t^2\rangle}, \nonumber\\ {\sigma^{(E)}_t} &=& \langle D_t\rangle \sqrt{\langle S_t^2\rangle - \langle S_t\rangle^2} \sim \langle D_t \rangle \langle S_t\rangle^{\beta^{(S)}}. \label{eq:fluctdecomp0} \end{eqnarray} Using Eqs.~(\ref{eq:Hdecomp0}) and (\ref{eq:fluctdecomp0}), we can analyze the scaling behaviors of fluctuations as follows. In the transient period before entering the stationary state, the network structure is transformed significantly, so that the structural component $\langle S_t\rangle$ essentially governs the time-dependent behavior of the perturbation spread, yielding \begin{equation} \langle H^{\rm (pert)}_t\rangle \sim \langle S_t\rangle, \ \sigma^{(I)}_t\sim \sqrt{\langle S_t^2\rangle}, \ \sigma^{(E)}_t \sim \langle S_t\rangle^{\beta^{(S)}}.
\end{equation} This is supported by the similarity of the temporal patterns of $\langle H^{\rm (pert)}_t\rangle$ and the mean connectivity $\langle \overline{k}_t\rangle$ in Fig.~\ref{fig:fluctuation} (g). Therefore one can relate the external fluctuation to the mean of perturbation spread as \begin{equation} \sigma^{(E)}_t\sim \langle S_t\rangle^{\beta^{(S)}}\sim \langle H^{\rm (pert)}_t\rangle^{\beta^{(S)}}. \end{equation} Comparing this with the simulation results in Fig.~\ref{fig:fluctuation}(f), we find that $\beta^{(S)}\simeq \alpha^{(E)}_{\rm tr}\simeq 1/2$. That is, $\xi^{(S)}\sim \langle S\rangle^{1/2}$. The estimated value of $\beta^{(S)}$ is also consistent with the simulation result $\alpha_{\rm tr}^{(I)}\simeq 1/2$, since $\sigma^{(I)}_t\sim \sqrt{\langle S_t^2\rangle}\sim \sqrt{\langle S_t\rangle^2+{\rm (const.)} \langle S_t\rangle^{2 \beta^{(S)}}}\sim \langle S_t\rangle^{\beta^{(S)}}$ for $\langle S_t\rangle\ll 1$, which holds because $\langle H^{\rm (pert)}\rangle$ is small in the scaling regime. In the stationary state, the network structure varies little with time: $\langle \overline{k}_t\rangle$ is nearly constant [Fig.~\ref{fig:fluctuation} (h)]. In contrast, $\langle H^{\rm (pert)}_t\rangle$ fluctuates significantly on short time scales. This suggests that the randomly selected locations of the initial perturbation, uncorrelated between different time steps, drive the time-dependent behavior of $\langle H^{\rm (pert)}_t\rangle$. Therefore, from Eqs.~(\ref{eq:Hdecomp0}) and (\ref{eq:fluctdecomp0}), the mean and the fluctuation of perturbation spread are represented as \begin{equation} \langle H^{\rm (pert)}_t\rangle \sim \langle D_t\rangle, \ \sigma^{(I)}_t\sim \langle D_t\rangle^{\beta^{(D)}}, \ \sigma^{(E)}_t \sim \langle D_t\rangle.
\end{equation} Regardless of the value of $\beta^{(D)}$, the external fluctuation is proportional to $\langle H^{\rm (pert)}_t\rangle$, \begin{equation} \sigma^{(E)}_t \sim \langle D_t\rangle \sim \langle H^{\rm (pert)}_t\rangle \end{equation} in agreement with the observation $\alpha^{(E)}_{\rm st} \simeq 1$ in Fig.~\ref{fig:fluctuation} (f). The internal fluctuation is expected to scale as $\sigma^{(I)}_t\sim \langle D_t\rangle^{\beta^{(D)}}\sim \langle H^{\rm (pert)}_t\rangle^{\beta^{(D)}}$, which allows us to find $\beta^{(D)}\simeq\alpha^{(I)}\simeq 1/2$. Therefore $\xi^{(D)}\sim \langle D\rangle^{1/2}$, like $\xi^{(S)}\sim \langle S\rangle^{1/2}$. The above arguments following Eqs.~(\ref{eq:Hdecomp0}) and (\ref{eq:fluctdecomp0}) with $\beta^{(S)}\simeq\beta^{(D)}\simeq 1/2$ illustrate why the internal fluctuation always scales as $\sigma_t^{(I)}\sim \langle H^{\rm (pert)}_t\rangle^{1/2}$ while the external fluctuation shows the dynamic crossover from $\sigma_t^{(E)}\sim \langle H^{\rm (pert)}_t\rangle^{1/2}$ to $\sigma_t^{(E)}\sim \langle H^{\rm (pert)}_t\rangle$. Combined with the observation that the external fluctuation makes the dominant contribution to $\sigma_t$, these arguments explain the crossover in the fluctuation scaling of perturbation spread shown in Fig.~\ref{fig:fluctuation} (b). Our results can be compared with other cases showing a crossover in the fluctuation scaling driven by a change of the dominant fluctuation between $\sigma^{(I)}$ and $\sigma^{(E)}$~\cite{meloni08}; in our model, by contrast, $\sigma^{(E)}$ is always dominant. The time-varying perturbation spread is dominantly governed by the structural component $S_t$ in the transient period and by the internal dynamics component $D_t$ in the stationary state, which underlies the crossover of $\alpha$ from $1/2$ to $1$ in our model.
The rapid and significant changes of the structure of the evolving networks occur only in the transient period, and the internal stochasticity dominates the statistics of stability in the stationary state of evolution. Therefore the nature of fluctuations is fundamentally different between the evolved networks and random or insufficiently evolved ones. \section{Correlation volume} \label{sec:correlation} The evolved networks in our model are more stable than random networks but less stable than the stability-only networks, as shown by the scaling behaviors of $\langle H^{\rm (pert)}_\infty\rangle$ in Sec.~\ref{sec:evolution}. Such a balance between robustness and flexibility can hardly be acquired unless the relevant dynamical variables, the spread of perturbation in our case, are correlated across different sites. For a quantitative analysis, let us consider the local perturbation $h_{i,t}$ at node $i$ and time $t$ defined as \begin{equation} h_{i,t} = {1\over \tau_m - \tau_s} \sum_{\tau=t\tau_m +\tau_s}^{(t+1)\tau_m} \left[1-\delta_{b_i(\tau),b_i^{\rm (pert)}(\tau)}\right], \end{equation} which measures the fraction of microscopic time steps during which the activity of node $i$ differs between the original state $\Sigma$ and the perturbed state $\Sigma^{\rm (pert)}$. Notice that the stability Hamming distance $H^{\rm (pert)}_t$ in Eq.~(\ref{eq:Hpert}) is the spatial average of the local perturbations, $H_t^{\rm (pert)} = N^{-1}\sum_{i=1}^N h_{i,t}$. If node $j$ tends to have a larger perturbation than its average when node $i$ does ($h_{i,t}>\langle h_{i,t}\rangle$), their local perturbations can be considered correlated, meaning that local fluctuations at node $i$ ($j$) are likely to spread to node $j$ ($i$). In that case, we can expect that $\langle (h_{i,t} - \langle h_{i,t}\rangle)(h_{j,t} - \langle h_{j,t}\rangle)\rangle= \langle h_{i,t}h_{j,t}\rangle -\langle h_{i,t}\rangle\langle h_{j,t}\rangle>0$.
Therefore we define the correlation volume as \begin{equation} \mathcal{C}_{t} \equiv {\sum_{i=1}^N \sum_{j\ne i} \left(\langle h_{i,t} h_{j,t} \rangle -\langle h_{i,t} \rangle \langle h_{j,t}\rangle\right)\over \sum_{j=1}^N \left(\langle h_{j,t}^2\rangle - \langle h_{j,t}\rangle^2\right)}, \label{eq:C} \end{equation} which represents how many nodes are correlated with a node in the perturbation-spreading dynamics. For instance, $\mathcal{C}_t=N-1$ if $h_{i,t} = h_{j,t} $ for all $i$ and $j$ (perfect correlation) and $\mathcal{C}_t=0$ if the $h$'s are completely independent of one another such that $\langle h_{i,t}h_{j,t}\rangle=\langle h_{i,t}\rangle \langle h_{j,t}\rangle$. One can find that the variance of the perturbation spread $\sigma_t^2 = \langle {H^{\rm (pert)}_t}^2\rangle - \langle H^{\rm (pert)}_t\rangle^2$ is decomposed into the local variance $\mathcal{S}_t$ and the correlation volume $\mathcal{C}_t$ as \begin{equation} \sigma^2_t = \mathcal{S}_t ( 1 +\mathcal{C}_t), \label{eq:variance} \end{equation} where $\mathcal{S}_t$ is defined in terms of the variance of $h_{i,t}$ as \begin{equation} \mathcal{S}_t \equiv {1\over N^2} \sum_{i=1}^N \left(\langle h_{i,t}^2\rangle - \langle h_{i,t}\rangle^2\right). \end{equation} The decomposition in Eq.~(\ref{eq:variance}) allows us to see that the fluctuation of perturbation spread depends on the magnitude of local fluctuations, $\mathcal{S}_t$, and how far the local fluctuation propagates to the system, characterized by the correlation volume $\mathcal{C}_t$ in Eq.~(\ref{eq:C}). If the $h_{i,t}$'s are independent, the local fluctuation does not spread, as $\mathcal{C}_t=0$, and the whole variance $\sigma_t^2$ is identical to the local variance $\sigma_t^2 = \mathcal{S}_t$. 
On the contrary, if the $h_{i,t}$'s are perfectly correlated, the correlation volume is $N-1$ and the whole variance $\sigma_t^2$ is $N$ times larger than the local variance, $\sigma_t^2 = N\mathcal{S}_t$, indicating that local fluctuations spread to the whole system. \begin{figure} \includegraphics[width=\columnwidth]{Figure6v2.eps} \caption{(Color online) Correlation volume $\mathcal{C}_t$. (a) Plots of $\mathcal{C}_t$ versus time $t$ for $N=200$. (b) The initial correlation volume $\mathcal{C}_0$ at $t=0$ and the stationary-state one $\mathcal{C}_\infty$ averaged over the stationary period ($t>4.8\times 10^5$) are plotted as functions of the system size $N$ for $\overline{k}_0=4$ (upper) and for $\overline{k}_0=0.5$ (lower). $\mathcal{C}_\infty$ scales with $N$ as $\mathcal{C}_\infty \sim N^{0.43}$ or $N^{0.42}$ while $\mathcal{C}_0$ does not increase with $N$. } \label{fig:corrvol} \end{figure} In Fig.~\ref{fig:corrvol} (a), the correlation volume is shown to be larger in the stationary state than in the initial state. The correlation volume averaged over the stationary period, $\mathcal{C}_\infty$, is about $10$ while that in the initial state, $\mathcal{C}_0$, ranges between $2$ and $3$ for $N=200$. The dependence of $\mathcal{C}_t$ on the system size $N$ also differs between the initial and the stationary states: the correlation volume in the stationary state increases with $N$ as \begin{equation} \mathcal{C}_\infty \sim N^\zeta \ {\rm with} \ \zeta \simeq 0.4 \label{eq:cscaling} \end{equation} while the correlation volume of the initial network $\mathcal{C}_0$ does not increase with $N$ [Fig.~\ref{fig:corrvol} (b)]. Such a scaling behavior is not seen in the whole fluctuation $\sigma_t^2$ even in the evolved networks. Therefore, the scaling behavior of the correlation volume in Eq.~(\ref{eq:cscaling}) can be another hallmark of the evolved systems and can be related to the system's capacity to be stable and adaptable simultaneously.
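Given an ensemble of local-perturbation vectors, Eq.~(\ref{eq:C}) amounts to a ratio of covariance sums; a minimal sketch (the array layout and the perfectly correlated toy example are ours):

```python
import numpy as np

def correlation_volume(h):
    """Eq. (C) from an ensemble: h has shape (realizations, N), with
    h[r, i] the local perturbation of node i in realization r.
    Returns sum_{i != j} Cov(h_i, h_j) / sum_j Var(h_j)
    (assumes at least one node has nonzero variance)."""
    cov = np.cov(h, rowvar=False, bias=True)   # N x N covariance matrix
    off_diagonal = cov.sum() - np.trace(cov)   # sum over pairs i != j
    return off_diagonal / np.trace(cov)

# Perfectly correlated local perturbations give C = N - 1.
x = np.array([0.0, 1.0, 0.5, 0.25])
h = np.column_stack([x, x, x])                 # N = 3 identical columns
print(correlation_volume(h))                   # 2.0
```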
\section{Summary and Discussion} \label{sec:discussion} In this work we have introduced and extensively investigated the characteristic properties of an adaptive network model capturing the generic features of biological evolution. In reality, evolutionary selection acts on a population of heterogeneous living organisms, as adopted by the genetic algorithm, but here we considered a simplified model, where a single network, representing the network structure typical of a population of organisms, adds or removes a link depending on whether that change improves its fitness or not. The fitness of a network is evaluated in terms of its adaptability to a changing environment and its stability against perturbations in the dynamical state, two requirements that appear mutually contradictory but are essential for every living organism. Despite this simplification, the model network reproduces many of the universal network characteristics of evolving organisms, including the sparsity and scaling of the mean connectivity, broad degree distributions, and stability stronger than that of random Boolean networks but weaker than that of networks evolved towards stability only, implying the simultaneous support of adaptability and robustness. Fluctuations and correlations display characteristic scaling behaviors in the stationary state of evolution, in contrast to those in the transient period or in the initial random-network state. The evolutionary pressure drives the regulatory networks towards becoming highly stable by exploiting different pathways from realization to realization in the rugged fitness landscape, which results in large fluctuations. The presence of two distinct components in the perturbation-spread dynamics, related to the different network structure depending on the evolution pathway and to the location of the random initial perturbation, respectively, is shown to bring about the dynamic crossover in the fluctuation scaling. Such evolution generates large correlations as well.
The proposed model is simple and generic, allowing us to understand the evolutionary origin of the universal features of diverse biological networks. It illuminates the nature of dynamic fluctuations and correlations in evolving networks that are continuously influenced by changing environments. The ensemble of such evolving networks can be formulated by a Hamiltonian approach with a time-varying external environment, which opens a way to study biological evolution from the viewpoint of statistical mechanics. Given the increasing importance of the capacity to manipulate biological systems, natural or synthetic, our understanding of biological fluctuations can be particularly useful. Strong interaction with the environment, like natural selection in evolution, is common to diverse complex systems, and thus the theoretical framework presented here for dealing with multiple components of dynamics can be of potential use in substantiating the theory of complex systems. \acknowledgments This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIP) (No. 2013R1A2A2A01068845).
\section{Introduction}\label{sec:intro} \noindent The dissipative Kerr soliton (DKS), a self-reinforcing wave packet that maintains its shape while circulating around a microresonator, arises from a double balance between nonlinearity and dispersion, as well as between parametric gain and cavity loss \cite{doi:10.1126/science.aad4811, kippenberg2018dissipative}. Owing to their unprecedented compactness, low noise, high power efficiency, and broad spectral bandwidth, soliton Kerr combs (microcombs) have attracted considerable research interest and have been extensively studied for spectroscopy \cite{doi:10.1126/science.aah6516}, communications \cite{marin2017microresonator}, frequency synthesizers \cite{spencer2018optical}, optical clocks \cite{drake2019terahertz}, microwave photonics \cite{liu2020photonic} and sensing applications \cite{yao2021}. Over the past several years, through substantial exploration of the fundamental physics and of microresonator fabrication, researchers have realized Kerr solitons on a growing number of platforms, including ultra-high \textit{Q} MgF$_2$ \cite{herr2014temporal}, silica \cite{yi2015soliton}, and monolithic integrated platforms such as Si$_3$N$_4$ \cite{doi:10.1126/science.aad4811, joshi2016thermally, wang2016intracavity, ye2021integrated}, LiNbO$_3$ \cite{he2019self}, AlGaAs \cite{moille2020dissipative} and Ta$_2$O$_5$ \cite{jung2021tantala}, as well as the wide-bandgap semiconductors AlN \cite{weng2021directly, liu2021aluminum}, SiC \cite{guidry2021quantum} and GaN \cite{https://doi.org/10.1002/lpor.202100071}. Photonic integration of the pump laser and passive resonators offers the possibility of achieving chip-scale operation, but there are significant challenges to overcome before the widespread deployment of these soliton comb systems. One key challenge comes from the thermo-optic instability in the microresonator when the pump enters the red-detuned regime required for soliton formation.
To stably access the soliton state, a number of experimental techniques have been developed, including rapid laser frequency scanning \cite{herr2014temporal, liu2021aluminum, jung2021tantala}, careful pump power manipulation \cite{doi:10.1126/science.aad4811, yi2015soliton}, and microheater thermal tuning \cite{joshi2016thermally, ji2021exploiting}. With an extra radio-frequency (RF) generator, modulator, or microheaters, these schemes can bring the short-lived soliton to a steady state. Self-injection locking (SIL) \cite{stern2018battery, raja2019electrically} has been proposed and exploited for turnkey soliton generation \cite{shen2020integrated} and even a remarkable octave-spanning soliton microcomb \cite{briles2021hybrid}, by directly coupling a laser chip to a passive microresonator. This approach enables a miniaturized frequency comb source but demands challenging photonic integration and great caution in controlling the back-reflected Rayleigh scattering. Moreover, the reduced accessible detuning in SIL limits the spectral bandwidth and the intensity of dispersive waves (DWs) \cite{briles2021hybrid}, as well as the soliton existence range (SER) and total comb power \cite{voloshin2021dynamics}, which are serious issues for precision metrology and timing. Recently, dual-pumping of two resonances (2P2R) has been applied to mitigate the thermal effects, leading to the deterministic generation and switching of DKSs \cite{https://doi.org/10.1002/lpor.202100071, zhang2019sub, zhou2019soliton, lu2019deterministic}. However, it drastically increases the system complexity and cost due to the use of an additional laser, amplifier, polarization controller, and fiber circulator \cite{https://doi.org/10.1002/lpor.202100071, zhou2019soliton, lu2019deterministic}. Another dual-pumping scheme activating a single resonance (2P1R) was also proposed, where the auxiliary pump is one modulation sideband away from the main pump \cite{wildi2019thermally, nishimoto2022thermal}.
\begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{Figure1.jpg} \caption{\textbf{(a)} Schematic of different thermal compensation schemes for Kerr soliton generation. FPC: fiber polarization controller; EDFA: erbium-doped fiber amplifier; Cir: circulator. \textbf{(b)} Intracavity power versus the pump detuning for the 1P2R system. The power coupled into the auxiliary mode rises suddenly due to the blue shift of the resonances during soliton formation, which can, in turn, remedy the intracavity power change and thus prevent soliton collapse. Blue and red shading indicates the pump detuning position relative to the cavity resonance.} \label{fig1} \end{figure} A simple and cost-effective octave-spanning Kerr soliton generator is urgently needed to bring microcombs towards practical applications. Here, we present straightforward access to octave-spanning DKSs by injecting a single pump into two close resonances (dual-mode) with the same polarization, which we call the 1P2R-1P scheme. The modes on the blue and red sides are used for parametric processes and thermal compensation, respectively. Figure \ref{fig1} compares different thermally accessible soliton systems and sketches the 1P2R mechanism. In contrast to dual-pumping, our method requires a much simpler setup once the front-end design is properly managed. The idea used in this paper was proposed by our group and successfully applied to octave-spanning DKS generation in an AlN microring resonator (MRR) \cite{weng2021directly}. A dual-mode scheme with mixed polarization (1P2R-2P) was also found to help the soliton stabilization process \cite{li2017stably}, but it requires careful adjustment of the polarization. However, the real potential of the 1P2R scheme remains to be investigated, owing to the rigorous design and fabrication requirements for ensuring that the pump and auxiliary modes are in close proximity.
This paper presents the generation of octave-spanning Kerr solitons with improved performance through careful design of the microresonators, and we also discuss the feasibility of fabrication with high yield. The resulting single-soliton features a 17-GHz-wide soliton existence range (SER) and a 200-THz-wide spectral bandwidth. The SER denotes the effective detuning range over which a soliton state is maintained during pump wavelength tuning \cite{herr2014temporal}. Moreover, using the same resonance, octave-spanning soliton crystals at the telecommunication C-band are also demonstrated. Similar soliton behavior is also observed in multiple chips, thereby illustrating its universal nature. The presented results provide a solid strategy for broadband DKS generation, which is transferable to alternative materials with a tailored repetition rate (\textit{f}$_{rep}$). From an application perspective, the 1P2R-1P scheme paves the way towards reliable, dynamic, low-cost, and easy-to-operate soliton microcomb sources. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figure2.jpg} \caption{\textbf{(a)} Simulated resonant wavelengths for the microrings with a radius of 23.25 $\mu$m, a thickness of 800 nm, and various RWs. \textbf{(b)} Measured transmission spectra of seven adjacent MRRs with various RWs. The RW change step is 20 nm. The circle denotes the target dual-mode enabling soliton generation with the 1P2R-1P scheme. \textbf{(c)} Simulated integrated dispersion profiles. Inset: zoom-in view of the dual-mode. \textbf{(d)} Measured MI comb spectra at a 400 mW on-chip power for varying RW. Dashed lines indicate the mode interaction positions.} \label{fig2} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figure3.jpg} \caption{Experimental soliton results of MRR1. \textbf{(a)} Collected powers of all output (green) and the pump alone (yellow), as well as their difference (red) at \textit{P}$_{in}$=150 mW.
\textbf{(b)} Microcomb evolution map. \textbf{(c)} Optical spectra of (i) MI comb, (ii) single-soliton, (iii) 2-SC and (iv) 3-SC, measured by connecting the output fiber directly to the OSA. Top inset: RF noise of the MI comb and single-soliton. The photodiode (PD) noise floor is overlapped by that of the single-soliton. Lower insets depict the soliton distributions. \textbf{(d)} Single-soliton spectrum at low frequency. \textbf{(e)} Measured autocorrelation (AC) traces of various soliton states.} \label{fig3} \end{figure*} \section{Design of dispersion-engineered dual-mode microresonators}\label{sec:design} Figure \ref{fig2} shows the design and resonance characteristics of the dual-mode MRRs. The 800-nm-thick Si$_3$N$_4$ MRRs used in this work were fabricated by Ligentec through photonic Damascene processes \cite{liu2021high}. To build a reliable layout design, we first conducted simulations with the finite element method. For the MRRs with a fixed radius of 23.25 $\mu$m and varied ring widths (RWs), the simulated resonant wavelengths are plotted in Fig. \ref{fig2}(a). As the RW increases, an obvious mode redshift is observed, and the slopes d$\lambda$/d\textit{RW} are 0.012 and 0.094 for the TE$_{00}$ and TE$_{10}$ modes, respectively. Consequently, a 1 nm RW variation will lead to a $\sim$0.08 nm adjustment in the mode separation $\Delta\lambda$ ($\lambda_{10}$-$\lambda_{00}$), i.e., a 100 nm RW variation will lead to a $\sim$8 nm $\Delta\lambda$ change, which is almost one free spectral range (FSR). Specifically, the TE$_{00}$ mode with an angular number (\textit{m}) of 164 and the TE$_{10}$ mode (\textit{m}=151) have a minimum $\Delta\lambda$ of 0.09 nm at $\sim$1563 nm when RW=1.68 $\mu$m. Then the two modes coincide again at $\sim$1572 nm with a separation of 0.12 nm when RW=1.78 $\mu$m. Figure \ref{fig2}(b) displays the measured transmission spectra of seven neighbouring MRRs with RWs discretely increasing from 1.68 to 1.80 $\mu$m.
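The lithographic tuning argument above reduces to a two-line calculation. The sketch below uses the slopes read from Fig. \ref{fig2}(a) and converts the $\sim$1012 GHz TE$_{00}$ FSR to wavelength assuming $\lambda\approx 1565$ nm (an assumed round value near the dual-mode):

```python
# Slopes of resonant wavelength vs ring width from the simulation (nm per nm)
slope_te00, slope_te10 = 0.012, 0.094

# Change of the mode separation for 1 nm and 100 nm ring-width variations
d_sep_1nm = (slope_te10 - slope_te00) * 1.0      # ~0.08 nm
d_sep_100nm = (slope_te10 - slope_te00) * 100.0  # ~8 nm

# One FSR expressed in wavelength: d_lambda = lambda^2 * FSR / c
c = 2.998e8                           # speed of light, m/s
lam = 1565e-9                         # assumed wavelength, m
fsr_nm = lam**2 * 1012e9 / c * 1e9    # ~8.3 nm

assert abs(d_sep_1nm - 0.082) < 1e-9
assert abs(d_sep_100nm - fsr_nm) < 0.5  # ~8.2 nm vs ~8.3 nm: almost one FSR
```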
As the RW increases, both modes redshift (denoted by arrows) at rates almost identical to the simulations, illustrating that a 20 nm variation in the MRR dimension is achievable even with 248 nm DUV lithography. The target dual-mode in MRR1 (denoted by a circle) has a $\Delta\lambda$ of 0.084 nm, i.e., $\sim$10.5 GHz, and is enlarged as an inset of Fig. \ref{fig2}(c). The TE$_{00}$ and TE$_{10}$ modes have an FSR of $\sim$1012 and $\sim$979 GHz, respectively. Their statistical \textit{Q} factors are 1.1×10$^6$ and 4.2×10$^5$, respectively. The dual-mode position shifts to $\sim$1566.3 nm when RW=1.78 $\mu$m, but the microcombs cannot be tuned into the soliton regime due to a relatively large $\Delta\lambda$ of 0.262 nm. Overall, the experimental results are consistent with simulations, suggesting that dual-mode resonators in the C-band can be reliably attained by tailoring the MRR dimensions. Near-zero anomalous dispersion is crucial for broadband microcomb generation. Figure \ref{fig2}(c) presents the calculated integrated dispersion (D$_{int}$) \cite{doi:10.1126/science.aad4811} of the TE$_{00}$ mode family. All the MRRs can support dual-DW emission at the frequencies where D$_{int}$ equals zero. As the RW increases, the low-frequency DW position exhibits a small blueshift, while the high-frequency DW drops dramatically from 311 to 254 THz. The simulated second-order dispersion D$_{2}$/2$\pi$ is 37.2 and 19 MHz when RW is 1.68 and 1.78 $\mu$m, respectively. Figure \ref{fig2}(d) summarizes the measured modulation instability (MI) microcombs when pumping the MRRs with an on-chip power of \textit{P}$_{in}$=400 mW. All spectra exceed an octave span thanks to the dual-DW. The groups of dense lines below 130 THz result from the 2nd-order diffraction of the optical spectrum analyzer (OSA), and are thus artifacts. The peculiar comb lines with enhanced or reduced power (denoted by dotted lines) are caused by mode interactions \cite{ramelow2014strong}.
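Throughout the paper, mode separations quoted in nm are converted to frequency via $\Delta f = c\,\Delta\lambda/\lambda^2$. A quick check for the MRR1 dual-mode, with the values quoted above, gives a result consistent with the quoted $\sim$10.5 GHz:

```python
c = 2.99792458e8   # speed of light, m/s
lam = 1563e-9      # dual-mode wavelength, m
dlam = 0.084e-9    # measured mode separation, m

# First-order wavelength-to-frequency conversion: df = c * dlam / lam^2
df = c * dlam / lam**2   # ~1.03e10 Hz, i.e. close to the ~10.5 GHz quoted
assert 1.0e10 < df < 1.06e10
```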
The wider MRRs tend to have flatter spectral envelopes and stronger comb lines near the DWs because of the lower 2nd-order dispersion. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figure4.jpg} \caption{\textbf{(a)}-\textbf{(c)} Soliton results of MRR1. \textbf{(a)} Transmitted power excluding the pump when \textit{P}$_{in}$=200 mW. \textbf{(b)} The soliton operation wavelength range and corresponding soliton numbers at different \textit{P}$_{in}$. \textbf{(c)} Typical TSM spectra and the fitting. TSM spectra obtained from \textbf{(d)} MRR2 and \textbf{(e)} MRR3 when \textit{P}$_{in}$ is around 220 and 320 mW, respectively.} \label{fig4} \end{figure*} \section{Experimental results}\label{sec:resul} \noindent\textbf{Octave-spanning single-soliton and soliton crystals.} In this section, we present the octave-spanning soliton results using the first 1.68-$\mu$m-wide device, MRR1, discussed in Fig. \ref{fig2}. All transmission curves and optical spectra presented in this work are obtained with a forward pump tuning speed of 1 nm/s, which allows for adiabatic tuning within the cavity. The experimental setup is similar to the one reported in \cite{weng2021directly} and can be found in the \textbf{supplementary material}. Figure \ref{fig3}(a) shows the simultaneously measured transmission of all output and of the pump alone, as well as their difference, when \textit{P}$_{in}$=150 mW. Striking steps related to single-soliton formation are observed, with a width of 0.08 nm ($\sim$10 GHz), which is close to $\Delta\lambda$. By stopping the pump at different wavelengths, we map the spectral evolution of the microcomb in Fig. \ref{fig3}(b), which conclusively confirms the aforementioned wide SER. Two dashed lines (i) and (ii) indicate the MI comb and single-soliton states, whose spectra are plotted in Figs. \ref{fig3}(c)-(i) and -(ii). The spectrum ranges from 136 to 240 THz in the MI state but is conspicuously widened to 125-322 THz for the single-soliton.
The soliton microcomb covers 1.5 octaves and represents the state of the art \cite{li2017stably, pfeiffer2017octave, briles2018interlocking}. The dual-DW at frequencies of 132 and 311 THz agree very well with the simulation results. The transition from the MI to the soliton state can also be verified by the drastic reduction of the RF intensity noise, as shown in the inset of Fig. \ref{fig3}(c). For comparison, we also pump the TE$_{00}$ mode at 1549.1 nm, which is far from the auxiliary mode. Only an MI comb ranging from 136 to 240 THz appears as the final state when \textit{P}$_{in}$=150 mW (see \textbf{supplementary material}). These results illustrate that the dual-mode scheme can also decrease the pump power required to reach the soliton state. The carrier-envelope offset frequency (\textit{f}$_{ceo}$) is an important parameter for microcombs in metrology and timekeeping applications, and can be detected via the \textit{f}-2\textit{f} self-referencing technique \cite{del2016phase}. A near-zero \textit{f}$_{ceo}$ is ideal for electronic detection and phase-locking \cite{briles2018interlocking}. Figure \ref{fig3}(d) shows the single-soliton spectrum at low frequency, where the circles and crosses indicate the 1st-order (i.e., real) and 2nd-order comb lines, respectively. The latter have a spacing of half an FSR and their intensity increases as the frequency decreases. The \textit{f}$_{ceo}$ can be calculated via \begin{equation} f_{ceo}=2\times(f_n-f_{2n}/2) \end{equation} \noindent where \textit{f}$_{n}$ and \textit{f}$_{2n}$/2 are the frequencies of adjacent 1st-order and 2nd-order comb lines. Consequently, an \textit{f}$_{ceo}$ of $\sim$200 GHz is extracted. In contrast to conventional \textit{f}-2\textit{f} beatnote detection via frequency doubling, the accuracy of our calculation is only at the 1 GHz level, limited by the OSA resolution.
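Equation (1) follows from the comb-line grid $f_\mu = f_{ceo} + \mu f_{rep}$: the 2nd-order OSA ghost of the line at $f_{2n}$ appears at $f_{2n}/2$, so the spacing between adjacent real and ghost lines equals $f_{ceo}/2$. The sketch below illustrates this with assumed round numbers (the $\sim$1 THz \textit{f}$_{rep}$ and $\sim$200 GHz \textit{f}$_{ceo}$ quoted above; the mode index is arbitrary):

```python
f_rep = 1.0e12    # repetition rate, Hz (~1 THz, assumed round number)
f_ceo = 200e9     # carrier-envelope offset, Hz (assumed, matching the text)
n = 125           # mode index of a 1st-order line near 125 THz (illustrative)

f_n = f_ceo + n * f_rep          # real comb line
f_2n = f_ceo + 2 * n * f_rep     # line near 250 THz; its OSA ghost sits at f_2n/2

f_ceo_est = 2 * (f_n - f_2n / 2) # Eq. (1): recovers f_ceo from the line spacing
assert abs(f_ceo_est - f_ceo) < 1.0
```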
Nevertheless, this simple measurement could help to achieve a low \textit{f}$_{ceo}$ by further optimizing the microring dimensions. The soliton crystal (SC) is an extraordinary state with regularly distributed soliton pulses and enhanced comb lines spaced by multiples of the cavity FSR \cite{cole2017soliton, wang2018robust}. For example, an \textit{N}-SC exhibits comb lines separated by \textit{N}×FSR. Such SCs are typically formed in the presence of avoided mode crossings \cite{karpov2019dynamics, weng2021near}. In our scheme, a weak mode coupling occurs between the two neighboring resonances (see \textbf{supplementary material}) and results in the observation of 2-SC and 3-SC when the power is adjusted slightly, as shown in Figs. \ref{fig3}(c)-(iii) and -(iv), respectively. Both spectra exceed an octave-spanning range (127-270 THz) and exhibit stronger comb lines near the pump. To our knowledge, this is the first report of octave-spanning dissipative SCs centered on the C-band. For the 3-SC, there are 8 and 15 comb lines with powers greater than 1 mW and 100 $\mu$W, respectively. The in-waveguide comb powers of the single-soliton, 2-SC, and 3-SC are estimated to be 11.5, 22.8, and 31 mW, corresponding to conversion efficiencies (CEs) of 7.7$\%$, 11.4$\%$, and 13.5$\%$. Compared with conventional single DKSs (a few percent CE) \cite{bao2014nonlinear}, the CE of the SC is greatly enhanced, and we believe it can be further improved by refining the external coupling rate \cite{jang2021conversion}. Figure \ref{fig3}(e) shows the experimental pulse traces measured with an autocorrelator, where the periods of the single-soliton, 2-SC and 3-SC are $\sim$1, $\sim$0.5 and $\sim$0.33 ps, inversely proportional to the \textit{f}$_{rep}$ of $\sim$1, $\sim$2 and $\sim$3 THz. With sech-squared fitting, the pulse width of the single-soliton is deduced to be 35 fs from the autocorrelation trace, while an 18 fs width results from the spectrum by assuming a transform-limited pulse.
This discrepancy can be mainly attributed to the phase variation across the pulse spectrum, which could be determined more precisely through a characterization technique such as frequency-resolved optical gating (FROG). In the \textbf{supplementary material}, we also present the 3-SC and 4-SC from other dual-mode devices. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figure5.jpg} \caption{\textbf{(a)}-\textbf{(d)} Measurement results with MRR2. \textbf{(a)} Dual-mode transmission spectra at different temperatures. \textbf{(b)} Dependence of the soliton behavior on the pump power and temperature. \textbf{(c)} Transmission spectra at high pump power with available soliton steps. \textbf{(d)} Single-soliton spectra obtained at 310, 320, and 330 K. \textbf{(e)} Single-soliton spectrum achieved with MRR3 when \textit{T}=330 K and \textit{P}$_{in}$=290 mW. Dual-mode transmission (left inset) and two adjacent lines at low frequency (right inset).} \label{fig5} \end{figure*} \noindent\textbf{Versatile two-soliton microcombs with 1P2R-1P.} By repeatedly scanning the laser over the dual-mode in MRR1 at 200 mW, four typical transmission spectra (excluding the pump) are observed and recorded, as shown in Fig. \ref{fig4}(a). Besides the single-soliton, we also obtain multi-solitons with soliton numbers (\textit{N}) of 2 and 3, as well as switching from \textit{N}=3 to \textit{N}=2. In particular, the two-soliton microcomb (TSM) has an SER of 0.12 nm ($\sim$15 GHz). We note that these curves do not represent the real comb power, since part of the pump power, even below the parametric threshold, is absorbed by the auxiliary TE$_{10}$ mode. Despite this, the soliton number is easily identified through the OSA and power-meter measurements. Figure \ref{fig4}(b) is a diagram showing the dependence of the soliton number and effective detuning range on \textit{P}$_{in}$, which indicates that the single-soliton can be deterministically generated at 140 and 150 mW.
As \textit{P}$_{in}$ increases, both the access probability and the SER of the single-soliton decrease, while multi-solitons appear with higher probability. The TSM attained at 180 mW has a 0.15-nm-wide ($\sim$18.8 GHz) SER, nearly twice $\Delta\lambda$. The 1P2R-1P scheme is therefore shown to allow the creation of two-soliton states with ease. Figure \ref{fig4}(c) shows several spectra of the TSMs when \textit{P}$_{in}$ is around 200 mW. These soliton combs can be reproduced easily and sustained for long periods without noise. The relative azimuthal angles of 37.6°, 60.5°, 120°, 172.4°, 175.1°, and 178.2° are retrieved by fitting the spectral envelope with \begin{equation} S^{(2)}(\mu)=S^{(1)}(\mu)\times[2+2\cos(\mu\psi)] \end{equation} \noindent where $\psi$ is the relative azimuthal angle between the two pulses, $\mu$ is the comb mode index relative to the pump position, and \textit{S}$^{(1)}$($\mu$) is the spectrum of a single-soliton following a sech-squared shape fitted from the experimental data \cite{doi:10.1126/science.aad4811}. The TSMs with $\psi$ of 37.6°, 175.1° and 178.2° are reproducible in the other two 1.68-$\mu$m-wide devices (i.e., MRR2 and MRR3), as shown in Figs. \ref{fig4}(d) and \ref{fig4}(e), respectively. All resonators are over-coupled at the pump wavelength, while MRR1, MRR2, and MRR3 have coupling gaps of 650, 650, and 550 nm, respectively. Thus, for MRR3, the pump power required to access solitons is higher and the powers of individual lines are stronger at both high and low frequencies. These TSMs generally have a higher CE than the single-soliton, especially for the $\psi$=178.2° case, which has a CE beyond 10$\%$. Such diverse soliton states with improved CE are of interest in applications such as optical arbitrary waveform generation \cite{jiang2007optical} and microwave photonic filters (with larger resonators) \cite{xu2019high}.
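As an illustration of the envelope fitting in Eq. (2), the sketch below builds a synthetic two-soliton spectrum from an assumed sech$^2$ single-soliton envelope and recovers $\psi$ by a least-squares grid search. The envelope width and the noiseless data are arbitrary assumptions for demonstration, not values fitted from our measurements:

```python
import numpy as np

mu = np.arange(-100, 101)                 # comb index relative to the pump
S1 = 1.0 / np.cosh(mu / 40.0) ** 2        # assumed sech^2 single-soliton envelope

psi_true = np.deg2rad(120.0)              # relative azimuthal angle of the pulses
S2 = S1 * (2 + 2 * np.cos(mu * psi_true)) # two-soliton envelope, Eq. (2)

# Recover psi: cos(mu*psi) is even and 2*pi-periodic in psi, so [0, pi] suffices
psis = np.linspace(0.0, np.pi, 18001)
err = [np.sum((S1 * (2 + 2 * np.cos(mu * p)) - S2) ** 2) for p in psis]
psi_fit = psis[int(np.argmin(err))]

assert abs(np.rad2deg(psi_fit) - 120.0) < 0.05
```

With measured spectra, the same search is simply run on the logarithm of the comb-line powers after fitting $S^{(1)}$ first.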
The results of three-soliton microcombs are shown in the \textbf{supplementary material}. \\ \noindent\textbf{Thermal control for separation tuning and single-soliton generation.} All the above results are obtained by maintaining the substrate temperature at 290 K (17 °C) with the aid of a thermoelectric cooler (TEC). However, only multi-solitons can be triggered in MRR2 and MRR3, which possess relatively large $\Delta\lambda$ values of 0.125 and 0.127 nm. Next, we show the effective control of the mode separation and soliton state by changing the TEC temperature \textit{T}. Figure \ref{fig5}(a) shows the dual-mode transmission spectra of MRR2 at different temperatures. As the temperature increases, a thermally induced redshift of the resonant wavelengths is observed with a d$\lambda$/d\textit{T} of $\sim$0.02 nm/K, corresponding to a thermo-optic coefficient of $\sim$2.3$\times$10$^{-5}$/K, which is consistent with the result reported in \cite{arbabi2013measurements}. The $\Delta\lambda$ declines from 0.125 to 0.086 nm when \textit{T} increases from 290 to 350 K, indicating that the resonant wavelength of the TE$_{00}$ mode is more sensitive to the temperature variation. We also note that the coupling between the two modes is strengthened as they approach each other, leading to an increase in the extinction ratio of the TE$_{10}$ mode. Figure \ref{fig5}(b) depicts the relation between on-chip power and soliton number at various temperatures. Only TSMs can be reached at temperatures of 290 and 300 K, while single-solitons arise when \textit{T}$\geq$310 K. When 310$\leq$\textit{T}$\leq$330 K, low power triggers the single-soliton only, whereas TSMs tend to form at high power, similar to the trend shown in Fig. \ref{fig4}(b).
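The quoted thermo-optic coefficient follows from the resonance-shift relation $d\lambda/dT \approx (\lambda/n)\,dn/dT$. A back-of-the-envelope check is sketched below; the modal index $n \approx 1.8$ is our assumption for the TE$_{00}$ mode of the 800-nm-thick waveguide, not a value given in the text:

```python
lam = 1566.0        # resonance wavelength, nm
dlam_dT = 0.02      # measured redshift, nm/K
n_mode = 1.8        # ASSUMED modal index for the Si3N4 TE00 mode

# Invert d(lambda)/dT = (lambda / n) * dn/dT for the thermo-optic coefficient
dn_dT = n_mode * dlam_dT / lam   # ~2.3e-5 per K, matching the quoted value
assert abs(dn_dT - 2.3e-5) < 1.5e-6
```

Note that the measured shift also contains a small thermal-expansion contribution, so this inversion gives an effective coefficient only.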
\begin{table*} \centering \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \caption{Comparison of octave-spanning DKSs.} \label{table1} \setlength{\tabcolsep}{1mm}{} \begin{tabular}{lcccccccc} \toprule \tabincell{c}{Material} & \tabincell{c}{\textit{Q}-factor\\(million)} & \tabincell{c}{On-chip \\power (mW)} & \tabincell{c}{Spectral range\\(THz)} & \tabincell{c}{\textit{f}$_{rep}$\\(THz)} & \tabincell{c}{SER\\(GHz)} & Accessing method \\ Si$_3$N$_4$\cite{li2017stably} & 2 (\textit{Q}$_{int}$) & 120$\pm$15 & 129-275 & $\sim$1 & $\sim$1.5 & \tabincell{c}{Adiabatic pump sweeping (-100 GHz/s)\\ with 1P2R-2P scheme} \\ Si$_3$N$_4$\cite{pfeiffer2017octave} & $\sim$1(\textit{Q}$_{load}$) & 455 & 130-280 & $\sim$1 & - & \tabincell{c}{Forward sweeping and backward tuning} \\ Si$_3$N$_4$\cite{briles2018interlocking} & - & 200 & 130-310 & $\sim$1 & - & Fast pump sweeping with frequency shifter \\ Si$_3$N$_4$\cite{briles2021hybrid} & 2.7(\textit{Q}$_{int}$) & 40 & 140-280 & $\sim$1 & - & \tabincell{c}{Self-injection locking} \\ AlN\cite{weng2021directly} & 1.4(\textit{Q}$_{int}$) & $\sim$335 & 130-273 & $\sim$0.37 & $\sim$10.4 & \tabincell{c}{Adiabatic pump sweeping (-125 GHz/s)\\ with 1P2R-1P scheme} \\ AlN\cite{liu2021aluminum} & 1.6(\textit{Q}$_{int}$) & $\sim$390 & 130-295 & $\sim$0.43 & - & \tabincell{c}{Fast pump sweeping with \\ single-sideband modulator} \\ LiNbO$_3$\cite{he2021octave} & 1.15(\textit{Q}$_{load}$) & $\sim$600 & 125-268 & $\sim$0.2 & $\sim$0.2 & \tabincell{c}{Self-start (photorefractive effect)} \\\tabincell{c}{\textbf{Si$_3$N$_4$}\\\textbf {(this work)}} & $\sim$1.1(\textit{Q}$_{int}$) & \tabincell{c}{140\\ (200, 230)} & \tabincell{c}{125-320\\ (127-270)} & \tabincell{c}{$\sim$1\\ ($\sim$2, $\sim$3)} & $\sim$17 & \tabincell{c}{Adiabatic pump sweeping (-125 GHz/s)\\ with 1P2R-1P scheme} \\ \end{tabular} \end{table*} Figure \ref{fig5}(c) shows examples of transmission spectra measured at high power, where the relevant soliton number is 
labelled at the top. Specifically, a TSM with a notable SER of $\sim$23 GHz is observed when \textit{T}=300 K and \textit{P}$_{in}$=190 mW. At 320 K, soliton switching from \textit{N}=2 to \textit{N}=1 is observed when \textit{P}$_{in}$=170 mW. Deterministic access to the single-soliton state is also realized at 320 K, accompanied by a maximum SER of $\sim$17 GHz, slightly wider than the 0.105-nm-wide $\Delta\lambda$ ($\sim$13 GHz). The single-soliton spectra acquired at 310, 320, and 330 K are plotted in Fig. \ref{fig5}(d) with pump wavelengths of 1566.550, 1566.750, and 1566.950 nm, respectively. The spectra have similar profiles and range from 125 to 320 THz, well beyond an octave span. As with the results from MRR1, dual-DW at frequencies of 130 and 312 THz are observed. The inset exhibits two adjacent 1st-order and 2nd-order comb lines near 125.5 THz. At 310, 320, and 330 K, the \textit{f}$_{ceo}$ is calculated to be $\sim$105, $\sim$108 and $\sim$109 GHz, respectively, which is about half of the \textit{f}$_{ceo}$ shown in Fig. \ref{fig3}(d) for MRR1. By setting the temperature of MRR3 at 330 K and tuning the pump wavelength to 1566.885 nm, the single-soliton can be stably accessed [see Fig. \ref{fig5}(e)], with a profile similar to those in Fig. \ref{fig3}(c)-(ii) (MRR1) and Fig. \ref{fig5}(d) (MRR2). In this case, the $\Delta\lambda$, SER and \textit{f}$_{ceo}$ are $\sim$0.1 nm ($\sim$12.5 GHz), $\sim$11 GHz and $\sim$100 GHz, respectively. It should be mentioned that MRR2 and MRR3, with almost identical mode separation and \textit{f}$_{ceo}$, are on the same chip, which strongly suggests good fabrication uniformity. These results indicate that temperature control is crucial for the deterministic creation of single-solitons. \section{Discussion} \noindent The results demonstrate that the 1P2R-1P scheme is applicable to our 23-$\mu$m-radius Si$_3$N$_4$ MRRs with a proper mode separation (e.g., 10-13 GHz).
The SER of the single-soliton is generally comparable to $\Delta\lambda$, while the multi-soliton window extends to almost twice $\Delta\lambda$. Table \ref{table1} compares the reported octave-spanning DKSs realized on various platforms. Clearly, the SER is much expanded with an auxiliary resonance in our 1P2R-1P scheme. More importantly, the proposed strategy enables access to the soliton state via straightforward pump frequency control instead of rapid frequency scanning or complicated control. The present octave-spanning single-solitons are generated in Si$_3$N$_4$ MRRs with \textit{Q} of $\sim$1.1 million, but ongoing experiments suggest that a 2.7 million \textit{Q} will reduce the required on-chip power to \textless 40 mW \cite{briles2021hybrid}, paving the way towards a miniaturized soliton system integrated with a laser diode. We also demonstrate octave-spanning soliton crystal generation. We believe that the proposed 1P2R-1P approach for deterministic access to DKSs could have profound significance for the microcomb field if a design can be reproduced in fabrication with a reasonable yield. Several solutions can be adopted to further control the mode separation and improve the yield. First, according to Fig. \ref{fig2}, a fine (a few nm) ring-dimension scan is necessary when designing the layout to ensure the accuracy of the relative variation in fabrication. Considering the reliability, uniformity, and ability to fabricate a high density of MRRs in a commercial foundry, a reasonable variation in MRR dimensions can provide more samples featuring both the desired dual-mode and a low \textit{f}$_{ceo}$. Second, post-fabrication processing such as etching \cite{moille2021tailoring} can be used to tune the resonance characteristics and the mode spacing. Finally, as investigated with MRR2 and MRR3, temperature control can effectively modify the mode separation and change the soliton state.
The control can be implemented by a substrate TEC or surface microheaters, which have been demonstrated for the thermal tuning of \textit{f}$_{rep}$ and \textit{f}$_{ceo}$ \cite{xue2016thermal}. In practice, we can also tune the pump wavelength or change the laser source if the actual dual-mode region deviates from the designed position. \section{Conclusions} \noindent In summary, we demonstrate access to octave-spanning single-soliton states, soliton crystals, and multi-solitons in dual-mode microresonators via simple slow pump tuning. In addition to rich soliton states, the conventionally inaccessible soliton step is now stabilized, accompanied by an expanded detuning range. Compared with the results achieved by the 2P2R method using an independent auxiliary laser \cite{zhou2019soliton}, the SER of 17 GHz demonstrated here is enhanced by two orders of magnitude. Such a broad soliton existence window will greatly enhance the potential for microcomb use in applications such as parallel FMCW LiDAR \cite{riemensberger2020massively}. The dependence of the soliton behavior on pump power and temperature is also explored. The proposed straightforward and low-cost 1P2R-1P soliton generation system is universally feasible for appropriately designed microresonators. We demonstrated this first in an AlN MRR and expanded on it in the present study; it should also be applicable to other soliton microcomb platforms such as LiNbO$_3$ and SiC. It is foreseeable that, by using a passive dual-mode microresonator with an upgraded \textit{Q} of $\sim$5$\times$10$^6$ \cite{ji2021methods}, a photonic integrated octave-spanning coherent microcomb source, driven by a laser diode with a power of about 100 mW but without an amplifier or optical feedback, will be delivered soon.
We also note that a recent work \cite{lei2022thermal} shows the positive effects of 1P2R-2P in improving the timing jitter and effective linewidth of the soliton microcomb lines. We believe such a simple system and the versatile soliton states will not only accelerate the realization of commercial, portable, and affordable soliton microcomb sources but also contribute to the extension of their applications. \\ \noindent\textbf{Supplementary material} See the \textbf{supplementary material} for additional information. \\ \noindent\textbf{Acknowledgements} This project is supported by the Science Foundation Ireland (Grant No. 17/NSFC/4918) and the National Natural Science Foundation of China (Grant No. 61861136001). \\ \noindent\textbf{Competing interests} The authors have no conflicts of interest to disclose. \\ \noindent\textbf{Data availability} The data in this manuscript can be obtained from the corresponding author. \bibliographystyle{pisikabst}
\section{Introduction} Graphical calculi enable reasoning about quantum computation in an intuitive yet rigorous way. Calculi based on string diagrams are more flexible than circuit-style languages, allowing the description of states and measurement projections as well as unitary operations in one unified framework. Their rigour is ensured by the category-theoretical underpinnings \cite{SelingerCPM}. The best-known of these graphical languages is the ZX-calculus, which was first introduced 10 years ago \cite{CD1,CD2}. It is built around the interactions of two complementary bases, the computational basis and the Hadamard basis, which are graphically represented by so-called \emph{spiders}. A related formalism is the ZW-calculus \cite{hadzihasanovic2017thesis}, which is built around the interactions of generators related to the two different types of three-qubit entangled states: GHZ states and $W$ states. Here, we introduce a new graphical language called the \textit{ZH-calculus}, which roughly follows this analogy with multipartite entanglement: \begin{center} \textit{ZX-calculus} : \textit{ZH-calculus} :: \textit{graph states} : \textit{hypergraph states} \end{center} Graph states are the basic resource for the one-way model of measurement-based quantum computation~\cite{MBQC2}, and have been studied extensively using the ZX-calculus~\cite{CD2,DP1,DP2,RossMBQC}. Hypergraph states were introduced in~\cite{rossi2013hypergraph} as a generalisation of graph states, and have recently gathered some interest due, for example, to the role they play in quantum search algorithms~\cite{HyperGrover}, exponentially-growing Bell violations~\cite{gachechiladze2016extreme}, and universal measurement-based quantum computation~\cite{HyperSPTO}. Like the ZX- and ZW-calculi, the ZH-calculus includes a family of ``Z-spiders'' associated with the computational basis. 
However, its point of departure is the inclusion of ``H-boxes'', which are $n$-ary generalisations of the Hadamard gate satisfying a variation of the spider fusion law, much like the one satisfied by $W$-spiders in the ZW-calculus.\footnote{Despite satisfying a similar variation of the spider fusion rule, this generalisation of the Hadamard node is different from that employed in the original version of the Y-calculus \cite[Definition 2 of Version 1]{jeandel2018y-calculus}.} Whereas Hadamard gates are used to introduce edges between 2 vertices in a graph state, H-boxes can introduce hyperedges between $n$ vertices in a hypergraph state. Seen from another perspective, H-boxes are closely related to both $n$-ary AND gates in classical logic and to the Hadamard-CCZ family of quantum circuits. As a result, Boolean circuits can be encoded in the ZH-calculus with low constant overhead. In particular, the linear maps corresponding to classical AND and NOT gates can be depicted as follows in the ZH-calculus: \ctikzfig{logic} While the unitary NOT gate has a simple expression in the ZX-calculus, a typical encoding of an AND gate requires $25$ basic generators and non-Clifford phases (cf.~\cite{CKbook}, \S12.1.3.1). Similarly, multiply-controlled phase gates also have very succinct representations, indicating that the ZH-calculus may be useful for analysing Hadamard-CCZ circuits (a.k.a. Hadamard-Toffoli circuits~\cite{ShiToffoli,aharonov2003hadamardtoffoli}, cf. forthcoming~\cite{Niel2018} for the connection to ZH), as well as diagonal operators at any level of the Clifford hierarchy~\cite{DiagHierarchy}. Our main theorem is that the ZH-calculus is complete with respect to its standard interpretation as matrices. That is, if two ZH-diagrams describe the same matrix, then they are provably equal using the rules of the ZH-calculus. Perhaps one of the most appealing features of the calculus is the simplicity of this completeness proof.
The core of the proof (section~\ref{s:completeness}) fits on 4 pages, where only especially intricate lemmas---which appear in Appendix~\ref{sec:disconnect}---were done within the proof assistant Quantomatic~\cite{quanto-cade}. This is due to two main factors. The first is the extensive use of \textit{!-box notation}~\cite{kissinger2014pattern}, which gives an elegant way to reason about diagrams which have arbitrarily-large fan-out-type behaviour. The second is a unique normal form for the ZH-calculus, which expresses any matrix as a Schur product -- i.e.\ entrywise product -- of elementary matrices with the property that all but one entry of each matrix is 1. This multiplicative construction contrasts with the additive construction of the normal form in the ZW-calculus \cite{hadzihasanovic2017thesis}, which arises as a sum of elementary matrices with the property that all but one entry of each matrix is 0. For example the normal form of the diagram corresponding to the matrix $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$ is effectively constructed as follows, where the left-hand side represents the approach in the ZH-calculus with $*$ denoting entrywise multiplication, and the right-hand side represents the approach in the ZW-calculus: \[ \begin{pmatrix}a&1\\1&1\end{pmatrix} * \begin{pmatrix}1&b\\1&1\end{pmatrix} * \begin{pmatrix}1&1\\c&1\end{pmatrix} * \begin{pmatrix}1&1\\1&d\end{pmatrix} = \begin{pmatrix}a&b\\c&d\end{pmatrix} = \begin{pmatrix}a&0\\0&0\end{pmatrix} + \begin{pmatrix}0&b\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\c&0\end{pmatrix} + \begin{pmatrix}0&0\\0&d\end{pmatrix}. \] Unlike the completeness proofs for universal versions of the ZX-calculus \cite{LoriaCompleteness,OxfordCompleteness}, which make use of the ZW-completeness proof via suitable translations between the two respective languages, our proofs of soundness, completeness, and universality are self-contained and don't rely on encoding into another calculus. 
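The multiplicative $2\times 2$ construction above is easy to machine-check; a quick numeric sketch (pure Python; the Schur helper and variable names are ours):

```python
import random

# Entrywise (Schur) product of the four "all-entries-1-except-one" factors
# from the text reproduces an arbitrary 2x2 matrix [[a, b], [c, d]].
def schur(A, B):
    return [[x * y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

a, b, c, d = (random.random() for _ in range(4))
factors = [[[a, 1], [1, 1]],
           [[1, b], [1, 1]],
           [[1, 1], [c, 1]],
           [[1, 1], [1, d]]]
M = [[1, 1], [1, 1]]
for F in factors:
    M = schur(M, F)
print(M == [[a, b], [c, d]])  # True (multiplying a float by 1 is exact)
```

Each entry of the result picks up exactly one non-unit factor, which is why the equality is exact even in floating point.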
The paper is structured as follows. The generators and relations of the ZH-calculus are introduced in Section~\ref{s:ZH-dfn} and the calculus is proved to be universal and sound. The completeness proof is given in Section~\ref{s:completeness}. In section~\ref{s:applications} we survey two potential applications and comment on future work. Omitted proofs and a link to the Quantomatic project used to prove Lemmas~\ref{lem:disconnect-4} and \ref{lem:big-disconnect} are given in the appendix. \section{Definition of the ZH-calculus} \label{s:ZH-dfn} The ZH-calculus is a graphical language expressing operations as \emph{string diagrams}. These are diagrams consisting of dots or boxes, connected by wires. Wires are also allowed to have one or two ``dangling'' ends, which are not connected to a dot or box: these represent inputs of the diagram if oriented towards the bottom, outputs of the diagram if oriented to the top. \subsection{The generators and their interpretation} \label{s:ZX-translation} The diagrams of the ZH-calculus are generated by \emph{Z-spiders}, which are represented as white dots, and \emph{H-boxes}, which are represented as white boxes labelled with a complex number $a$. These generators are interpreted as follows, where $\intf{\cdot}$ denotes the map from diagrams to matrices. \[ \intf{\tikzfig{Z-spider}} := \ket{0}^{\otimes n}\bra{0}^{\otimes m} + \ket{1}^{\otimes n}\bra{1}^{\otimes m} \qquad\qquad \intf{\tikzfig{H-spider}} := \sum a^{i_1\ldots i_m j_1\ldots j_n} \ket{j_1\ldots j_n}\bra{i_1\ldots i_m} \] The sum in the second equation is over all $i_1,\ldots, i_m, j_1,\ldots, j_n\in\{0,1\}$, i.e.\ an H-box represents a matrix all but one of whose entries are equal to 1. A label of $-1$ is usually left out and the box is then drawn smaller, e.g.\ $\dotunit{small hadamard}:=\hadastate{-1}$. 
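The interpretation above pins down every H-box entry: all entries are 1 except the one indexed by the all-ones bitstrings, which is $a$. A minimal tabulation (the helper name is ours; note the 1-in/1-out box labelled $-1$ is $\sqrt{2}$ times the usual Hadamard gate):

```python
def h_box(m, n, a):
    """Matrix (2^n rows, 2^m columns) of an H-box with m inputs and n outputs,
    labelled a: the entry at the all-ones row and column is a, all others are 1."""
    M = [[1] * 2**m for _ in range(2**n)]
    M[2**n - 1][2**m - 1] = a
    return M

print(h_box(1, 1, -1))  # [[1, 1], [1, -1]] -- sqrt(2) times the Hadamard matrix
print(h_box(2, 1, -1))  # 2x4 matrix where only the <1|..|11> entry is -1
print(h_box(0, 0, 7))   # [[7]] -- a 0-legged H-box is just the scalar a
```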
Straight and curved wires have the following interpretations: \[ \intf{\;|\;} := \ketbra{0}{0}+\ketbra{1}{1} \qquad\qquad\qquad \intf{\tikzfig{wire-cup}} := \ket{00}+\ket{11} \qquad\qquad\qquad \intf{\tikzfig{wire-cap}} := \bra{00}+\bra{11}. \] The juxtaposition of two diagrams is interpreted as the tensor product of the corresponding matrices and the sequential composition of two diagrams is interpreted as the matrix product of the corresponding matrices: \[ \intf{\gendiagram{$D_1$}\;\gendiagram{$D_2$}} := \intf{\gendiagram{$D_1$}}\otimes\intf{\gendiagram{$D_2$}} \qquad\qquad \intf{\tikzfig{sequential-composition}} := \intf{\gendiagram{$D_2$}}\circ\intf{\gendiagram{$D_1$}} \] The statements of the relations of the ZH-calculus will be simplified by introducing two derived generators, called \emph{grey spiders} and NOT, respectively. \begin{equation}\label{eq:grey-spider} \tikzfig{X-spider-dfn} \end{equation}\par\noindent \begin{equation}\label{eq:X-dfn} \tikzfig{negate-dfn} \end{equation}\par\noindent With these definitions, \dotmult{gray dot}\ acts on computational basis states as XOR and \greyphase{\neg} acts as NOT: \[ \intf{\dotmult{gray dot}} = \ketbra{0}{00}+\ketbra{0}{11}+\ketbra{1}{01}+\ketbra{1}{10} \qquad\qquad\qquad \intf{\greyphase{\neg}}=\ketbra{0}{1}+\ketbra{1}{0}. \] There is an evident encoding of the generators of the ZX-calculus into ZH given by the following translation: \[ \tikzfig{green-spider} \qquad\qquad \tikzfig{Hadamard} \qquad\qquad \tikzfig{red-spider} \] Since it is well-known that the ZX-calculus is universal for representing arbitrary linear maps $\mathbb C^{2^m} \to \mathbb C^{2^n}$, the following is immediate: \begin{proposition} Any linear map $\mathbb C^{2^m} \to \mathbb C^{2^n}$ can be expressed using the generators of the ZH-calculus. \end{proposition} We will also give a normal form in Section~\ref{s:completeness} which directly implies universality of the ZH-calculus, without going via ZX. 
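The stated basis-state actions of the derived generators, and of the introduction's AND map, correspond to concrete matrices that are easy to sanity-check; a small sketch (the helper and the big-endian column encoding are ours):

```python
from itertools import product

# Matrices (rows = output basis states, columns = input basis states |xy>,
# column index 2x + y) of the basis-state actions stated in the text:
# grey multiplication acts as XOR, grey-phase-neg as NOT, and the
# introduction's AND map as AND.
def gate2to1(f):
    M = [[0] * 4 for _ in range(2)]
    for x, y in product((0, 1), repeat=2):
        M[f(x, y)][2 * x + y] = 1
    return M

XOR = gate2to1(lambda x, y: x ^ y)
AND = gate2to1(lambda x, y: x & y)
NOT = [[0, 1], [1, 0]]

print(XOR)  # [[1, 0, 0, 1], [0, 1, 1, 0]]
print(AND)  # [[1, 1, 1, 0], [0, 0, 0, 1]]
```

The XOR matrix matches the sum of ket-bras displayed above term by term.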
\begin{figure} \centering \begin{tabular}{ccccc} (ZS1) & \tikzfig{Z-spider-rule} & \qquad & (HS1) & \tikzfig{H-spider-rule} \\ &&&& \\ (ZS2) & \tikzfig{Z-special} & & (HS2) & \tikzfig{H-identity} \\ &&&& \\ (BA1) & \tikzfig{ZX-bialgebra} & & (BA2) & \tikzfig{ZH-bialgebra} \\ &&&& \\ (M) & \tikzfig{multiply-rule} & & (U) & \tikzfig{unit-rule} \\ &&&& \\ (A) & \tikzfig{average-rule} & & (I) & \tikzfig{intro-rule} \\ &&&& \\ (O) & \tikzfig{ortho-rule} & & & \end{tabular} \caption{The rules of the ZH-calculus. Throughout, $m,n$ are nonnegative integers and $a,b$ are arbitrary complex numbers. The right-hand sides of both \textit{bialgebra} rules (BA1) and (BA2) are complete bipartite graphs on $(m+n)$ vertices, with an additional input or output for each vertex. The horizontal edges in equation (O) are well-defined because only the topology matters and we do not need to distinguish between inputs and outputs of generators. n.b. the rules (M), (A), (U), (I), and (O) are pronounced \textit{multiply}, \textit{average}, \textit{unit}, \textit{intro}, and \textit{ortho}, respectively.} \label{fig:ZH-rules} \end{figure} \subsection{The relations, and soundness}\label{sec:relations} The rules of the ZH-calculus are given in Figure~\ref{fig:ZH-rules}. We furthermore add one meta rule, often stated as ``only topology matters''. That is, two diagrams are considered equivalent if one can be deformed into the other. Furthermore, the Z-spiders and H-boxes are assumed to be \emph{symmetric} and \emph{undirected}: i.e.\ two inputs or outputs of the same generator can be swapped without changing the interpretation, and an input can be ``bent'' around to become an output, or conversely. Graphically: \ctikzfig{generator-symmetries} \medskip \begin{proposition} The ZH-calculus is sound. \end{proposition} \begin{proof} It is straightforward to check the symmetry properties for each generator and all of the rules in Figure~\ref{fig:ZH-rules} by concrete calculation. 
Soundness of the meta rule ``only the topology matters'' follows by considering the string diagrams as morphisms in a compact closed category~\cite{SelingerCPM}. \end{proof} \subsection{!-box notation}\label{sec:bang-boxes} Many of the calculations in this paper are greatly simplified by the use of \textit{!-box notation}~\cite{kissinger2014pattern}. A !-box (pronounced ``bang box'') in a string diagram represents a part of the diagram that is able to fan out arbitrarily. That is, the contents of a !-box, along with any wires into or out of the !-box, can be copied $n$ times for any non-negative integer $n$. For example, the !-box diagram below represents the following family of (concrete) string diagrams, one for each $n$: \[ \tikzfig{bang-box-example} \quad \longleftrightarrow \quad \left\{ \ \ \tikzfig{bang-box-example0}\ \ ,\quad \ \ \tikzfig{bang-box-example1}\ \ ,\quad \ \ \tikzfig{bang-box-example2}\ \ ,\quad \ \ \tikzfig{bang-box-example3}\ \ ,\quad \ \ \ldots\ \ \right\} \] All of the resulting string diagrams are well-defined because all of our generators can have arbitrary arities. We can also use !-boxes in diagram equations, as long as each !-box on the LHS has a corresponding !-box on the RHS, and the inputs/outputs in each !-box match. Such a rule represents a family of equations where each \textit{pair} of corresponding !-boxes is replicated $n$ times: \[ \tikzfig{unit-bangboxed} \quad \longleftrightarrow \quad \left\{ \ \ \tikzfig{unit-bb0}\ \ ,\quad \ \ \tikzfig{unit-bb1}\ \ ,\quad \ \ \tikzfig{unit-bb2}\ \ ,\quad \ \ \ldots\ \ \right\} \] Note the dashed box on the right-hand side of the first equation denotes an empty diagram. 
With this notation, the definition of grey spiders \eqref{eq:grey-spider} becomes \begin{equation}\label{eq:grey-spider-dfn} \tikzfig{X-spider-dfn-bb} \end{equation}\par\noindent Additionally, the rules (ZS1), (HS1), (BA1), and (BA2) from Figure~\ref{fig:ZH-rules} become \[ \text{(ZS1)}\quad \tikzfig{Z-spider-rule-bb} \qquad \text{(HS1)}\quad \tikzfig{H-spider-rule-bb} \qquad \text{(BA1)}\quad \tikzfig{ZX-bialgebra-bb} \qquad \text{(BA2)}\quad \tikzfig{ZH-bialgebra-bb} \] Using the rules in this form makes it straightforward to prove !-box generalisations of the rules (M), (U), (A), and (I). \begin{lemma}\label{lem:bb-rules} The ZH-calculus satisfies the following rules: \[ \text{(M!)}\;\; \tikzfig{multiply-rule-bb} \qquad \text{(U!)}\;\; \tikzfig{unit-bangboxed} \qquad \text{(A!)}\;\; \tikzfig{avg-lemma} \qquad \text{(I!)}\;\; \tikzfig{intro-rule-bangboxed} \] \end{lemma} \noindent This lemma is proved in Appendix~\ref{sec:bang-rules}. At this point, it is worth highlighting the special cases of (M!) and (U!) where the !-box is expanded $0$ times: \[ \tikzfig{scalar-mult} \qquad\qquad\qquad\qquad\qquad\qquad \tikzfig{scalar-rule} \] These rules enable us to multiply scalars at will, and in particular to eliminate scalars by multiplying by the inverse. Henceforth, we will use this fact without further comment in our proofs. In this paper, we use a mild, but very useful, extension of the usual !-box notation, which allows !-boxes to be indexed by the elements of a finite set. For example, indexing over the finite set $\mathbb B^2 := \{ 00, 01, 10, 11 \}$, we can write expressions such as: \[ \tikzfig{indexed-example} \ \ :=\ \ \ \tikzfig{index-example-rhs} \] This extends to equations in the obvious way: \[ \left(\ \tikzfig{index-example-rule}\ \right) \ \ := \ \ \left( \ \tikzfig{index-example-rule-inst}\ \right) \] where we require corresponding !-boxes on the LHS and RHS to be indexed by the \textit{same} finite set.
Note that inputs and outputs of a copy associated with the index $x \in X$ on the LHS are matched with inputs and outputs of the \textit{same} copy on the RHS. We recover the behaviour of normal, un-labelled !-boxes by interpreting a !-box without a label as being indexed by an \textit{arbitrary} finite set, e.g. \[ \tikzfig{Z-spider-rule-bb} \qquad \longleftrightarrow \qquad \tikzfig{Z-spider-rule-bb-index} \quad \textrm{(for any finite sets $X$ and $Y$)} \] \section{Completeness} \label{s:completeness} We show that the ZH-calculus is complete by demonstrating the existence of a unique normal form for ZH-diagrams. It is first worth noting that, because we can turn inputs into outputs arbitrarily (cf. the beginning of section~\ref{sec:relations}), it suffices to consider diagrams which have only outputs. We call these \textit{states}. Concretely, these are interpreted as column vectors (i.e. kets). For states $\psi,\phi$, let $\psi * \phi$ be the \textit{Schur product} of $\psi$ and $\phi$ obtained by plugging the $i$-th output of $\psi$ and $\phi$ into \dotmult{white dot}, for each $i$: \ctikzfig{schur} It follows from (ZS1) that $*$ is associative and commutative, so we can write $k$-fold Schur products $\psi_1 * \psi_2 * \ldots * \psi_k$ without ambiguity. For any finite set $J$ with $|J| = k$, let $\prod_{j\in J} \psi_j$ be the $k$-fold Schur product. Let $\mathbb B^n$ be the set of all $n$-bitstrings. For any $\vec{b} := b_1\ldots b_n \in \mathbb B^n$, define the \textit{indexing map} $\iota_{\vec{b}}$ as follows: \begin{equation}\label{eq:iota-dfn} \iota_{\vec{b}} \; = \; \tikzfig{indexing-box} \; = \; \left(\greyphase{\neg}\right)^{1 - b_1} \ldots \left(\greyphase{\neg}\right)^{1 - b_n}. 
\end{equation} Then normal forms are given by the following $2^n$-fold Schur products: \begin{equation}\label{eq:nf-formula} \prod_{\vec{b} \in \mathbb B^n} \big( \iota_{\vec{b}} \circ H_n(a_{\vec{b}}) \big) \end{equation} where $H_n(a_{\vec{b}})$ is the arity-$n$ H-box (considered as a state) labelled by an arbitrary complex number $a_{\vec{b}}$. A normal form diagram can be seen as a collection of $n$ spiders, fanning out to $2^n$ H-boxes, each with a distinct configuration of NOTs corresponding to the $2^n$ bitstrings in $\mathbb B^n$. Diagrammatically, normal forms are: \[ \tikzfig{nf-bbox}\ \ :=\ \ \tikzfig{nf-picture} \] \begin{theorem}\label{thm:nf-unique} Normal forms are unique. In particular: \begin{equation}\label{eq:nf-concrete} \intf{ \, \prod_{\vec{b} \in \mathbb B^n} \big( \iota_{\vec{b}} \circ H_n(a_{\vec{b}}) \big) } = \sum_{\vec{b} \in \mathbb B^n} a_{\vec{b}} \ket{\vec{b}}. \end{equation} \end{theorem} \begin{proof} The map $\iota_{\vec b}$ is a permutation that acts on computational basis elements as $\ket{\vec c} \mapsto \ket{\vec c \oplus \vec b \oplus \vec 1}$. In particular, it sends the basis element $\ket{\vec 1}$ to $\ket{\vec b}$. Hence $\iota_{\vec b} \circ H_n(a_{\vec b})$ is a vector with $a_{\vec b}$ in the $\vec b$-th component and $1$ everywhere else. The Schur product of all such vectors indeed gives the RHS of~\eqref{eq:nf-concrete}. \end{proof} Since equation~\eqref{eq:nf-concrete} gives us a means of constructing any vector in $\mathbb C^{2^n}$, Theorem~\ref{thm:nf-unique} can also be seen as a proof of universality of the ZH-calculus, independent of the encoding into ZX we gave in Section~\ref{s:ZX-translation}.
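The uniqueness formula can be checked numerically for small $n$: each factor $\iota_{\vec b}\circ H_n(a_{\vec b})$ is a vector with $a_{\vec b}$ in position $\vec b$ and $1$ elsewhere, and their Schur product assembles an arbitrary state. A sketch for $n=2$ (pure Python; names are ours):

```python
def normal_form_state(coeffs):
    """Schur product of the 2^n vectors interpreting iota_b . H_n(a_b):
    each factor has coeffs[b] at position b and 1 everywhere else."""
    n = len(next(iter(coeffs)))
    state = [1.0] * 2**n
    for b, a in coeffs.items():
        idx = int(b, 2)
        state = [s * (a if i == idx else 1.0) for i, s in enumerate(state)]
    return state

# An arbitrary (unnormalised) 2-qubit state assembled multiplicatively:
coeffs = {'00': 2.0, '01': -1.0, '10': 0.5, '11': 3.0}
print(normal_form_state(coeffs))  # [2.0, -1.0, 0.5, 3.0]
```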
We now prove three lemmas which will assist in manipulating normal forms: \begin{lemma}\label{lem:X-copy} The NOT operator copies through white spiders: \ctikzfig{X-copy} \end{lemma} \begin{proof} Starting from the left-hand side, \[ \tikzfig{X-copy-proof} \qedhere \] \end{proof} \begin{lemma}\label{lem:iota-copy} The $\iota_{\vec{b}}$ operator copies through white spiders, i.e.\ for any $\vec{b}\in\mathbb B^n$: \ctikzfig{iota-copy} \end{lemma} \begin{proof} This follows immediately from Lemma~\ref{lem:X-copy} via the definition of $\iota_{\vec{b}}$ \eqref{eq:iota-dfn}. \end{proof} \begin{lemma}\label{lem:convolution-iota} The ZH-calculus enables the computation of the Schur product of two maps of the form $\iota_{\vec{b}}\circ H_n(x)$ and $\iota_{\vec{b}}\circ H_n(y)$ for any $\vec{b}\in\mathbb B^n$ and $x,y\in\mathbb C$: \ctikzfig{convolution-iota} \end{lemma} \begin{proof} Apply Lemma~\ref{lem:iota-copy}, followed by (M!). \end{proof} We will now show that normal form diagrams, when combined in various ways, can also be put into normal form. Let \tikzfig{nf} denote an arbitrary normal-form diagram. It is straightforward to see that permuting the outputs of a normal-form diagram merely interchanges the bits in the coefficients $a_{\vec b}$. Hence, normal forms are preserved under permutations of outputs. Furthermore: \begin{proposition}\label{prop:extension} A diagram consisting of a normal form diagram juxtaposed with \dotunit{white dot}\ can be brought into normal form using the rules of the ZH-calculus: \ctikzfig{extension} \end{proposition} \begin{proof} Starting from the left-hand side, which we expand using the indexed !-box notation, \ctikzfig{extension-proof} The last diagram is a normal form diagram with $n+1$ outputs, i.e.\ the desired result. \end{proof} \begin{proposition}\label{prop:convolution} The Schur product of two normal form diagrams can be brought into normal form using the rules of the ZH-calculus.
\ctikzfig{convolution-nf} \end{proposition} \begin{proof} This follows from (ZS1) and Lemma~\ref{lem:convolution-iota}. \end{proof} \begin{corollary}\label{cor:tensor-product} The tensor product of two normal form diagrams can be brought into normal form using the rules of the ZH-calculus. \end{corollary} \begin{proof} A tensor product can be expressed as \ctikzfig{tensor-product} The diagram NF$_1$ and the leftmost $m$ copies of \dotunit{white dot}\ can be combined into one normal-form diagram with $(n+m)$ outputs by successive applications of Proposition~\ref{prop:extension}. Similarly, the rightmost $n$ copies of \dotunit{white dot}\ and NF$_2$ can be combined into one normal-form diagram with $(n+m)$ outputs. The desired result then follows by Proposition~\ref{prop:convolution}. \end{proof} \begin{remark}\label{rem:scalar-juxtaposition} Note that a single scalar H-box is a normal form diagram. Corollary~\ref{cor:tensor-product} thus implies that a diagram consisting of a normal form diagram juxtaposed with a scalar H-box can be brought into normal form. In the following proofs, we will therefore ignore scalars for simplicity: they can be added back in and then incorporated into the normal form without problems. \end{remark} We are now ready to prove the most difficult case, which is contraction. The majority of the work goes into proving Lemma~\ref{lem:big-disconnect}, which we call the Disconnect Lemma. It uses the (O) rule to disconnect the $2^n$-legged $\dotonly{white dot}\xspace$-spider arising from a contraction of a normal form into $2^{n-1}$ separate cups. It was proven with the help of the graphical proof assistant Quantomatic. Details and full proof are given in Appendix~\ref{sec:disconnect}.
\begin{proposition}\label{prop:contraction} The diagram resulting from applying \dotcounit{white dot}\ to an output of a normal form diagram can be brought into normal form: \ctikzfig{whitecounit-nf} \end{proposition} \begin{proof} Starting from an arbitrary normal form, with a \dotcounit{white dot} plugged into the rightmost output, we have: \[ \scalebox{0.8}{\tikzfig{contraction-thm-pf}} \] Then, we can apply Lemma~\ref{lem:big-disconnect}: \[ \scalebox{0.8}{\tikzfig{contraction-thm-pf2}} \] The final diagram is in normal form, which completes the proof. \end{proof} Our strategy will now be to show that any diagram can be decomposed into H-boxes, combined via the operations of extension, convolution, and contraction. This will give us a completeness proof, thanks to the following lemma. \begin{lemma}\label{lem:H-box-nf} Any H-box can be brought into normal form using the rules of the ZH-calculus. \end{lemma} \begin{proof} The matrix of an H-box $H_n(a)$ has 1's in every entry but the very last one. Hence, to bring an H-box into normal form, we just need to introduce `dummy' 1's for every other matrix entry. We demonstrate the principle using a binary H-box, but the argument is analogous for any other arity: \[ \tikzfig{H-nf-example} \qedhere \] \end{proof} To simplify the decomposition of diagrams into H-boxes, we prove a few corollaries. \begin{corollary}\label{cor:cup-nf} The diagram of a single cup can be brought into normal form: \[ \tikzfig{cup-nf} \] \end{corollary} \begin{proof} We can rewrite the cup as a pair of H-boxes using (HS2). This can then be written in terms of extension, convolution, and contraction as follows: \ctikzfig{binary-Z-decomposition} Hence, we can apply Lemma~\ref{lem:H-box-nf} and Propositions \ref{prop:extension}, \ref{prop:convolution}, and \ref{prop:contraction} to get a normal form.
\end{proof} \begin{corollary}\label{cor:whitemult-nf} The diagram resulting from applying \dotmult{white dot}\ to a pair of outputs of a normal form diagram can be brought into normal form. \begin{equation}\label{eq:whitemult-nf} \tikzfig{whitemult-nf} \end{equation} \end{corollary} \begin{proof} Applying a \dotmult{white dot}\ to a pair of outputs has the same result as convolving with a cup, then contracting one of the outputs. That is, we can decompose \eqref{eq:whitemult-nf} as follows: \ctikzfig{whitemult-decomp} then apply Corollary \ref{cor:cup-nf} and Propositions \ref{prop:extension}, \ref{prop:convolution}, and \ref{prop:contraction}. \end{proof} \begin{corollary}\label{cor:cap-nf} Applying a cap to a normal form diagram results in another normal form diagram: \ctikzfig{cap-nf} \end{corollary} \begin{proof} Since the cap can be decomposed as $\dotcounit{white dot} \circ \dotmult{white dot}$, the result follows immediately from Corollary~\ref{cor:whitemult-nf} and Proposition~\ref{prop:contraction}. \end{proof} Thanks to Corollaries~\ref{cor:tensor-product} and \ref{cor:cap-nf}, we are able to turn any diagram of normal forms into a normal form. It only remains to show that the generators of the ZH-calculus can themselves be made into normal forms. We have already shown the result for H-boxes, so we only need the following. \begin{lemma}\label{lem:Z-spider-nf} Any Z-spider can be brought into normal form using the rules of the ZH-calculus. \end{lemma} \begin{proof} We can turn \dotunit{white dot}{} into an H-box using (U) and then bring it into normal form via Lemma~\ref{lem:H-box-nf}. By (ZS1), $\dotonly{white dot}\xspace = \tikzfig{dot-nf}$, which can be brought into normal form using (U), Lemma~\ref{lem:H-box-nf}, and Corollaries~\ref{cor:tensor-product} and \ref{cor:cap-nf}. This covers the cases of Z-spiders with 0 or 1 incident wires. 
We can decompose any Z-spider with $n\geq 2$ incident wires as a tensor product of $(n-1)$ cups, with each cup \dotmult{white dot}-ed with its neighbours: \ctikzfig{n-ary-Z-decomposition} If $n=2$, no \dotmult{white dot} are needed and the equality is by (ZS2) instead of (ZS1). In either case, the diagram can be brought into normal form by applying Corollaries~\ref{cor:tensor-product}, \ref{cor:cup-nf}, and \ref{cor:whitemult-nf}. \end{proof} \begin{theorem} The ZH-calculus is complete: for any ZH diagrams $D_1$ and $D_2$, if $\llbracket D_1 \rrbracket = \llbracket D_2 \rrbracket$ then $D_1$ is convertible into $D_2$ using the rules of the ZH-calculus. \end{theorem} \begin{proof} By Theorem~\ref{thm:nf-unique}, it suffices to show that any ZH diagram can be brought into normal form. Lemmas~\ref{lem:H-box-nf} and \ref{lem:Z-spider-nf} suffice to turn any generator into normal form. Corollary~\ref{cor:tensor-product} lets us turn any tensor product of generators into a normal form and Corollary~\ref{cor:cap-nf} lets us normalise any arbitrary wiring. \end{proof} \section{Applications and future work}\label{s:applications} We will now briefly survey some of the potential applications for the ZH-calculus. We begin with the simple observation that $n$-ary H-boxes let us generalise the usual string diagrammatic description of the controlled-Z gate (as in e.g. the ZX-calculus) to an $n$-controlled-Z gate: \ctikzfig{n-controlled-Z} Using the decomposition of controlled-Z gates above, a representation of graph states as ZX-diagrams was given in~\cite{DP1}, which in turn gave a fully diagrammatic derivation of the local complementation law for graph states~\cite{DP1} and a new procedure for extracting circuits from certain computations in the one-way model~\cite{DP2}.
Passing from $\wedge Z$ to $\wedge^n Z$ gives an analogous representation for \textit{hypergraph states}: \[ \tikzfig{gs-graph-s} \qquad\textrm{\Large $\leadsto$}\qquad \tikzfig{gs-hypergraph} \] Indeed this was the original motivation for considering H-boxes of arbitrary arity. Using a method analogous to proofs in Appendix~\ref{sec:bang-rules}, we can routinely introduce !-boxes to known rules involving graph states (e.g. local complementation and feed-forward rules) to generalise them to hypergraph states. For example, introducing !-boxes to the local complementation rule enables complementing hyperedges of arbitrary arity overlapping on a single vertex: \[ \scalebox{0.8}{\tikzfig{lc1}} \ \ = \ \ \scalebox{0.8}{\tikzfig{lc2}} \qquad\textrm{\Large $\leadsto$}\qquad \scalebox{0.8}{\tikzfig{lc-bb1}} \ \ = \ \ \scalebox{0.8}{\tikzfig{lc-bb2}} \] This potentially gives a powerful new language and set of techniques for working with hypergraph states. Exploring these techniques, and the relationship to known rules for manipulating hypergraph states is a topic of future work. In another direction, if we consider diagrams whose H-boxes are labelled by a fixed root of unity $\omega := \exp(i \pi/2^m)$, we obtain an encoding for unitary gates described by arbitrary \textit{phase polynomials}~\cite{moscamatroid}, i.e. gates of the form $U_{\phi} \ket{\vec b} = \omega^{\phi(\vec b)} \ket{\vec b}$ for some polynomial $\phi(\vec b)$ over $n$ boolean variables. These have a simple graphical representation, where Z-spiders represent variables and $\omega$-labelled H-boxes represent terms in the phase polynomial. For example: \[ \tikzfig{phase-poly} \qquad\qquad \textrm{where}\qquad \phi(\vec b) = {\color{purple} b_1 b_2} + {\color{purple} b_1 b_2 b_3} + {\color{purple} b_3 b_4} \] One can then straightforwardly show basic properties of these unitaries (e.g.~composition, commutation, and replacement of non-linear AND terms by linear XOR terms) using the rules of the ZH-calculus. 
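The diagonal action $U_{\phi}\ket{\vec b} = \omega^{\phi(\vec b)}\ket{\vec b}$ is easy to tabulate directly; a sketch for the example polynomial above (the function name and the encoding of monomials as index sets are ours):

```python
import cmath

def phase_poly_unitary_diag(n, terms, m):
    """Diagonal of U_phi acting as U_phi|b> = omega^{phi(b)}|b>, where
    omega = exp(i*pi/2^m) and phi is a sum of monomials, each given as a
    set of (0-indexed) variable positions."""
    omega = cmath.exp(1j * cmath.pi / 2**m)
    diag = []
    for x in range(2**n):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]  # b1 is most significant
        phi = sum(all(bits[i] for i in t) for t in terms)   # each monomial is 0 or 1
        diag.append(omega**phi)
    return diag

# phi(b) = b1 b2 + b1 b2 b3 + b3 b4 on 4 qubits, with m = 2 (the Clifford+T regime):
d = phase_poly_unitary_diag(4, [{0, 1}, {0, 1, 2}, {2, 3}], 2)
# entry for b = 1111: phi = 3, so the phase is omega^3 = exp(3i*pi/4)
```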
The phase polynomial formalism for $m = 2$ has been used extensively in studying optimisation problems for Clifford+T circuits~\cite{MeetInMiddle,moscamatroid,AmyMoscaReedMuller,campbelltcount}, and it was recently shown that all diagonal gates in the Clifford hierarchy are of the form $U_\phi$, where the level of the hierarchy depends on $m$ and the degree of $\phi$~\cite{DiagHierarchy}. Gaining access to this phase polynomial structure diagrammatically could therefore yield new methods for quantum circuit optimisation and/or fault tolerant computation through automated diagram rewriting in tools like Quantomatic. \bigskip \noindent \textbf{Acknowledgements.} The authors would like to thank Simon Perdrix and Mariami Gachechiladze for the fruitful conversations in which the foundations of the ZH-calculus were developed. We are also grateful to Niel de Beaudrap for interesting discussions about applications of the ZH-calculus and to Sal Wolffs for careful reading of our proofs (and pointing out a major omission in Corollary~\ref{cor:whitemult-nf}). The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no.\ 334828 (Backens) and 320571 (Kissinger). The paper reflects only the authors' views and not the views of the ERC or the European Commission. The European Union is not liable for any use that may be made of the information contained therein. \bigskip \bibliographystyle{eptcs}
\section{ Introduction} \bigskip Let $k$ be a field. For an algebraic $k$-variety $\mathbb X$ we write $X(k):=\mathbb X (k)$. To simplify notations we often write $X$ instead of $X(k)$. In particular we write $V:=\mathbb V (k)$ when $\mathbb V$ is a vector space and write $k^N$ for $\mathbb A ^N(k)$. For a $k$-vector space $\mathbb V$ we denote by $\mathcal P_d(\mathbb V)$ the algebraic variety of polynomials on $\mathbb V$ of degree $\leq d$ and by $\mathcal P_d(V) $ the set of polynomial functions $P:V\to k$ of degree $\leq d$. We always assume that $d<|k|$, so the restriction map $\mathcal P_d(\mathbb V)(k)\to \mathcal P_d(V)$ is a bijection. For a family $\bar P=\{ P_i\}$ of polynomials on $\mathbb V$ we denote by $\mathbb X _{\bar P}\subset \mathbb V$ the subscheme defined by the ideal generated by $\{ P_i\} $ and by $X_ {\bar P} $ the set $ \mathbb X _{\bar P} (k)\subset V$. We will not distinguish between the set of affine $k$-subspaces of $\mathbb V$ and the set of affine subspaces of $V$, since for an affine $k$-subspace $\mathbb W \subset \mathbb V$ the map $\mathbb W \to \mathbb W (k)$ is a bijection. In the introduction we consider only the case of hypersurfaces $\mathbb X \subset \mathbb V$ and provide an informal outline of the main results. Precise definitions appear in the next section. \begin{definition}Let $P$ be a polynomial of degree $d$ on a $k$-vector space $V$. \begin{enumerate} \item We denote by $\tilde P :V^d\to k$ the multilinear symmetric form associated with $P$, defined by $\tilde P(h_1, \ldots, h_d) : = \Delta_{h_1} \ldots \Delta_{h_d} P$, where $\Delta_hP(x) = P(x+h)-P(x)$. \item The {\em rank} $r(P)$ is the minimal number $r$ such that $P$ can be written in the form $P=\sum _{i=1}^rQ_iR_i$, where $Q_i,R_i$ are polynomials on $V$ of degrees $<d$. \item We define the {\em non-classical rank (nc-rank)} $r_{nc}(P)$ to be the rank of $\tilde P$.
\item A polynomial $P$ is {\it $m$-universal} if for any polynomial $Q\in \mathcal P_d(k^m)$ of degree $d$ there exists an affine map $\phi :k^m\to V$ such that $Q=P\circ \phi$. \item We denote by $\mathbb X _P\subset \mathbb V$ the hypersurface defined by the equation $P(v)=0$ and by $\mathbb X _P^{\operatorname{sing}}$ the singular locus of $\mathbb X _P$. \item $s(P):= \operatorname{codim} _{\mathbb X _P}( \mathbb X _P^{\operatorname{sing}}) $. \end{enumerate} \end{definition} \begin{remark}\label{pd} \begin{enumerate} \item If char$(k) >d$ then $r(P) \le r_{nc}(P)$. \item In low characteristic it can happen that $P$ is of high rank while $\tilde P$ is of low rank; for example, in characteristic $2$ the polynomial $P(x) = \sum_{1 \le i<j<k <l\le n} x_ix_jx_kx_l$ is of rank $\sim n$ but of nc-rank $3$, see \cite{gt1}, \cite{lms}, \cite{tz-1}. \end{enumerate} \end{remark} \begin{remark}\label{uniform-remark} Let $\mathcal F _Q$ be the set of all affine maps $\phi :k^m\to V$ such that $Q=P\circ \phi$. We later show that in the case when $r_{nc}(P) \gg1$, the size of the set $\mathcal F _Q$ is essentially independent of the choice of $Q\in \mathcal P_d(k^m)$. \end{remark} We now turn to the applications. \subsubsection{Sections of high rank varieties} \begin{theorem} [Acc]\label{Acc} There exists $C=C(d)$ with the following property. For any field $k$ which is either algebraically closed or finite, any $k$-vector space $V$ and any polynomial $P\in \mathcal P_d(V)$ of nc-rank $>Cm^C$ the following hold: \begin{enumerate} \item The polynomial $P$ is $m$-universal. \item Let $\text{Aff} _m(\mathbb V) $ be the variety of affine maps $\phi :\mathbb A ^m\to \mathbb V$ and $\tilde \kappa _P : \text{Aff} _m(\mathbb V) \to \mathcal P_d(\mathbb A ^m) $ be the algebraic map defined by $\tilde \kappa _{P} (\phi):= P \circ \phi$. Then all fibers of $\tilde \kappa _P$ are algebraic varieties of the same dimension.
\item The map $\tilde \kappa _P : \text{Aff} _m(\mathbb V) \to \mathcal P_d(\mathbb A ^m) $ is flat. \end{enumerate} \end{theorem} \begin{remark} \begin{enumerate} \item It is easy to see that $s(P)\leq 2r(P)$. \item In the case when $k$ is an algebraically closed field of characteristic $0$ it is shown in \cite{S} that $r(P)\leq d!\,s(P)$. \item We can show the existence of $\epsilon =\epsilon (d) >0$ such that $r_{nc}(P)\leq \epsilon s^\epsilon(P)$ for polynomials of degree $d$. \end{enumerate} \end{remark} \begin{corollary}There exists $C=C(d)$ such that for any field $k$ which is either algebraically closed or finite and any $k$-vector space $V$, every polynomial $P\in \mathcal P_d(V)$ with $s(\tilde P)\geq Cm^C$ is $m$-universal. \end{corollary} \begin{question} Does there exist $\delta =\delta (d)$ such that $r(P)\leq \delta s(P)^\delta $? \end{question} \begin{remark}The analogous result holds for a system $\bar P$ of polynomials. \end{remark} \subsubsection{A strengthening of the main Theorem from \cite{Jan}. } In \cite{Jan} the authors show that any non-trivial Zariski-closed condition on tensors that is functorial in the underlying vector space implies bounded rank. We show that the condition of being Zariski-closed can be omitted. \begin{theorem}\label{Jan} Let $k$ be an algebraically closed field, let $\mathcal C$ be the category of finite-dimensional affine $k$-vector spaces with morphisms being affine maps, let $\mathcal F _d$ be the contravariant endofunctor on $\mathcal C$ given by $$\mathcal F _d(V)=\{\text{Polynomials on $V$ of degree $\leq d$} \},$$ and let $\mathcal G \subset \mathcal F _d$ be a proper subfunctor. Then there exists $r$ such that $r_{nc}(P)\leq r$ for any finite-dimensional $k$-vector space $V$ and $P\in \mathcal G (V)$. \end{theorem} \subsubsection{Extending weakly polynomial functions} Let $k$ be a field, $V$ a $k$-vector space and $X$ a subset of $V $.
A function $f:X\to k$ is {\em weakly polynomial} of degree $\leq a$ if the restriction of $f$ to any affine subspace $L$ of $V$ contained in $X$ is a polynomial of degree $\leq a$. \begin{theorem}\label{ext} Let $k$ be a field which is either algebraically closed, or finite with $|k|>ad$, and let $X\subset V$ be a hypersurface defined by a polynomial of degree $d$ of sufficiently high nc-rank. Then any $k$-valued weakly polynomial function of degree $\leq a$ on $X$ is a restriction of a polynomial $F$ on $V$ of degree $\leq a$. \end{theorem} \begin{remark}The main difficulty in the proof of Theorem \ref{ext} is the non-uniqueness of $F$ in the case when $a>d$. \end{remark} \subsubsection{Nullstellensatz over $\mathbb F _q$.} We prove the following variant of the Nullstellensatz for polynomials over $\mathbb F_q$. \begin{theorem}\label{N} There exists $r(d)$ such that for any finite field $k=\mathbb F _q$, any $k$-vector space $V$ and any $k$-polynomial $P$ of degree $d$ and nc-rank larger than $r(d)$ the following holds: any polynomial $R$ of degree $<q/d$ vanishing at all points $x\in X_P$ is divisible by $P$. \end{theorem} \begin{remark} \leavevmode \begin{enumerate} \item This result is a strengthening of Proposition 9.2 from \cite {gt1} in two ways: \begin{enumerate} \item We show the independence of $r(d)$ of the degree of $R$, and \item The paper \cite{bl} shows the existence of polynomials $Q_i$ of bounded degrees such that $R(x)=\sum _{i=1}^cQ_i(x)P_i(x)$ for all $x\in \mathbb X (k)$, but does not show that $R$ is contained in the ideal generated by $\{ P_i\}$. \end{enumerate} \item We outline the ideas of the proofs of Theorems \ref{ext} and \ref{N} at the beginning of the corresponding sections. \item The quantitative bounds on the rank in all proofs depend {\em only} on the bounds in Theorem \ref{bias-rank-1}. We conjectured that the bound on $r$ in Theorem \ref{bias-rank-1} depends polynomially on $s$.
This conjecture was proved in \cite{Mi} (and in a slightly weaker form in \cite{janzer}). \end{enumerate} \end{remark} \section{The formulation of results.} We start with a series of definitions. \begin{definition}[Rank]\label{rank} Let $k$ be a field and $V$ a $k$-vector space. \begin{enumerate} \item A tensor of degree $d$ on $V$ is a multilinear map $T:V^d\to k$. \item Let $P:V \to k$ be a polynomial of degree $d$. We define the {\em rank} $r(P)$ as the minimal number $r$ such that $P$ can be written as a sum $P=\sum _{j=1}^rQ_jR_j$ where $Q_j,R_j$ are polynomials of degrees $<\deg(P)$ defined over $k$ \footnote{This notion of rank is also known as {\em Schmidt-rank} in the analytic number theory literature or {\em strength} in the algebraic geometry literature.}. We define the rank of a degree $1$ polynomial to be infinite, and the rank of the zero polynomial to be $0$. \item For a polynomial $P:V \to k$ of degree $d$ we define $\tilde P(h_1, \ldots, h_d) : = \Delta_{h_1} \ldots \Delta_{h_d} P: V^d \to k$, where $\Delta_hP(x) = P(x+h)-P(x)$. \item We define the {\em non-classical rank (nc-rank)} $r_{nc}(P)$ to be the rank of $\tilde P$. \item For a $d$-tensor $P : V^d \to k$ we define the {\em partition rank (p-rank)} $pr(P)$ as the minimal number $r$ such that $P$ can be written as a sum $P(x_1, \ldots, x_d)=\sum _{i=1}^{r} Q_i((x_l)_{l \in J_i})R_i((x_l)_{l \in J^c_i})$, where $\emptyset \neq J_i \subsetneq [1,d]$ and $Q_i,R_i$ are tensors of degrees $<d$. \item For any sequence $\bar d=(d_1,\dots ,d_c)$ we denote by $\mathcal P _{\bar d}(V)$ the space of families $\bar P=(P_i)_{ 1\leq i\leq c} $ of polynomials such that $\deg P_i \le d_i$. We define the {\em rank} $r(\bar P)$ as the minimal rank of the polynomials $\sum _{i=1}^c a_iP_i$, $\bar a\in k^c \setminus \{0\}$. We denote $d:= \max d_i$. Define $r_{nc}(\bar P)$ and $pr(\bar P)$ similarly.
\item For a family $\bar P=(P_i)_{1\leq i\leq c}$ of polynomials on $\mathbb V$ we define the subscheme $\mathbb X _{\bar P}\subset \mathbb V$ by the system of equations $P_i(v)=0$, $1\leq i\leq c$, and define the rank of $\mathbb X _{\bar P}$ to be the rank of $\bar P$. \end{enumerate} \end{definition} \begin{remark} The rank $r(\bar P)$ depends only on the linear span $L(\bar P)$ of $(P_i)$. \end{remark} The following statement is immediate. \begin{claim}\label{ord}Let $\bar P=( P_i)_{i=1}^c$ be a family of polynomials with $r(\bar P)\neq 0$. Then \begin{enumerate} \item $\dim(L(\bar P))=c$. \item There exists a basis $Q_i^j$ of $L(\bar P)$ such that \begin{enumerate} \item $\deg (Q^j_i)=j$. \item For any $j$ the family $\bar Q^j=(Q_i^j)$ is of rank $\geq r(\bar P)$. \end{enumerate} \end{enumerate} \end{claim} \begin{remark} Since the subscheme $\mathbb X _{\bar P} $ depends only on the space $L(\bar P)$, we can (and will) assume that all the families $\bar P$ we consider satisfy the conclusions of Claim \ref{ord}. We may also assume that $\bar P$ contains no affine polynomials, since we can initially restrict to the zero locus of the affine polynomials. \end{remark} \begin{lemma}\label{rank-p-rank} Let $P$ be a $d$-tensor. Then $r(P)\le pr(P) \le 4^dr(P)$. \end{lemma} \begin{proof} Let $P$ be a $d$-tensor. It is clear that $r(P)\le pr(P)$. We show that $pr(P) \le 4^dr(P)$. Let $k$ be a field with $|k|\geq d+2$ and write $G=(k^\star)^d$. For any $\bar j =( j_1,\dots ,j_d)$ with $j_i\leq d$ we denote by $\chi _{\bar j}:G\to k^\star$ the character $\bar t\to \prod _{i=1}^dt_i^{j_i}$, where $\bar t =(t_1,\dots,t_d)$. Let $V_i$, $1\leq i\leq d$, be $k$-vector spaces and $\mathcal P$ the space of polynomials $P: V_1\times \ldots \times V_d \to k$ of degree $\leq d$. Let $J$ be the set of maps $\bar j : [1,d] \to [0,d] $.
For any $\bar j\in J$ we denote by $\mathcal P _{\bar j}$ the subspace of polynomials $P: V_1\times \ldots \times V_d\to k$ such that $$P(t_1v_1, \dots ,t_dv_d)= \prod _{i=1}^dt_i^{j_i} P(v_1, \dots ,v_d).$$ Since $|G|$ is prime to char$(k)$ and $|k|\geq d+2$, we have a direct sum decomposition $ \mathcal P =\oplus _{\bar j\in J} \mathcal P _{\bar j} $. We denote by $p_{\bar j} : \mathcal P \to \mathcal P _{\bar j} $ the projection, and write $\bar 1=(1,\dots ,1)\in J$. For any $a,b$ with $a+b=d$ we denote by $J_{a,b}\subset J\times J$ the set of pairs $(\bar j_a, \bar j_b)$ such that $\text{supp} ( \bar j_a)\cap \text{supp} ( \bar j_b)=\emptyset$, $\deg ( \bar j_a)=a$ and $\deg ( \bar j_b)=b$. For any polynomials $Q,R \in \mathcal P $ of degrees $a,b$ with $a+b=d$ we write $P(Q,R)=\sum _{ (j_1,j_2)\in J_{a,b}} p_{ j_1}(Q) p_{ j_2}(R)$. We have that $p_{\bar 1}(QR)= P(Q,R) $. Since at most $4^d$ pairs $(\bar j_1,\bar j_2)$ contribute, each product $QR$ yields at most $4^d$ terms of partition rank one, and therefore $pr(P)\leq 4^dr(P)$. \end{proof} \begin{definition}\label{Af} \leavevmode \begin{enumerate} \item For $m\geq 1$ and a $k$-vector space $V$, we denote by $\text{Aff} _m(\mathbb V)$ the algebraic variety of affine maps $\phi :\mathbb A ^m\to \mathbb V$ and write $\text{Aff} _m(V) := \text{Aff} _m(\mathbb V) (k)$. \item We define an algebraic morphism $\tilde \kappa _{\bar P}: \text{Aff} _m(\mathbb V) \to \mathcal P_{\bar d}(\mathbb A ^m)$ by $\tilde \kappa _{\bar P} (\phi):= \bar P \circ \phi$, and denote by $\kappa _{\bar P}$ the corresponding map $\text{Aff} _m(V) \to \mathcal P_{\bar d}(k ^m) $. \end{enumerate} \end{definition} \begin{definition}\label{un} A map $\kappa :M\to N$ between finite sets is {\em $\epsilon$-uniform}, where $\epsilon >0$, if for all $n\in N$ we have $$\left||N||\kappa ^{-1}(n) |-|M| \right|\leq \epsilon |M|.$$ \end{definition} \begin{remark} In this paper we say that a bound $r(\bar d,m,t)$ is {\it effective} if \begin{enumerate} \item For fixed $d:=\max _id_i$, the bound is polynomial in $m, t, c$.
\item The dependence on $d$ is doubly exponential. \end{enumerate} The effective lower bounds for $r(\bar d,m,t)$ follow from Conjecture \ref{conj-bias}, proven in \cite{Mi}. \end{remark} \begin{theorem}\label{A1} For any sequence $\bar d$ and any $m, t\geq 1$, there exists an effective bound $r(\bar d,m,t)$ such that for any finite field $k=\mathbb F _q$, any $k$-vector space $V$ and any family $\bar P\in \mathcal P_{\bar d}(V)$ of nc-rank $\geq r(\bar d,m,t)$, the map $\kappa _{\bar P}$ is $q^{-t}$-uniform. \end{theorem} To formulate Remark \ref{uniform-remark} precisely we introduce a number of additional definitions. \begin{definition}\label{b1} Fix an affine hyperplane $W\subset V$. We denote by ${\mathbb Z} _{\bar P}$ the variety of affine $m$-dimensional subspaces $L \subset \mathbb X _{\bar P}\cap W$ and by $\mathbb Y _{\bar P}\subset {\mathbb Z} _{\bar P} $ the subvariety consisting of those $L \subset \mathbb X _{\bar P}\cap W$ for which there is no $(m+1)$-dimensional affine subspace $M\subset \mathbb X _{\bar P}$, $M\not \subset W$, containing $L$. \end{definition} \begin{theorem}\label{B} For any $\bar d$ and any $m,t\geq 1$, there exists an effective bound $\tilde r(m,t, \bar d)$ with the following property. For any finite field $k=\mathbb F _q$, any $k$-vector space $V$ and any family $\bar P\in \mathcal P_{\bar d}(V)$ of nc-rank $\geq \tilde r(m,t, \bar d)$, we have that $\frac{|Y_{\bar P}|}{|Z_{\bar P}|}\leq q^{-t}$. \end{theorem} \begin{remark} The condition $r(\bar P)\gg1$ does not imply that $Y _{\bar P} =\emptyset$: consider the case $V=k^n$, $W=\{w \in V: w_n=0\}$, $m=1$, $P=\sum _{i=1}^ {n-1}x_i^d+x_n$ and $L=ke_1$. \end{remark} \subsection{Applications} In this subsection we provide precise formulations of Theorems \ref{Acc}, \ref{ext} and \ref{N}. \subsubsection{The surjectivity over algebraically closed fields} We start with a formalization of Theorem \ref{Acc}.
\begin{theorem} [Effective Stillman conjecture] \label{AC} For any sequence $\bar d$ and any $m,t\geq 1$ there exists an effective bound $r(m,t, \bar d)$ with the following property. For any algebraically closed field $k$, any $k$-vector space $V$ and any family $\bar P\in \mathcal P_{\bar d}(V)$ of nc-rank $\geq r(m,t, \bar d)$ the following hold: \begin{enumerate} \item The map $\kappa _{\bar P}$ is surjective. \item All fibers of the morphism $\tilde \kappa _{\bar P}$ are of the same dimension. \item The morphism $\tilde \kappa _{\bar P}$ is flat. \item $\bar P\subset k[V^\vee]$ is a regular sequence. \end{enumerate} \end{theorem} \begin{remark}\leavevmode \begin{enumerate} \item Parts (1) and (2) follow from Theorem \ref{A1}. \item Part (3) follows from parts (1), (2) and Theorem 23.1 in [22]. Part (4) is an immediate consequence of part (3). \item As shown in \cite{ah}, part (4) of the theorem implies the validity of an effective Stillman conjecture. \end{enumerate} \end{remark} In the formulation of the next result we use the notation introduced in Definition \ref{b1}. \begin{theorem}\label{BC}For any $\bar d$ and any $m,t\geq 1$, there exists an effective bound $\tilde r(m,t, \bar d)$ with the following property. For any algebraically closed field $k$, any $k$-vector space $V$ and any family $\bar P\in \mathcal P_{\bar d}(V)$ of nc-rank $\geq \tilde r(m,t, \bar d)$, we have that $\dim ({\mathbb Z} _{\bar P})-\dim(\mathbb Y _{\bar P})\geq t$. \end{theorem} \subsubsection{Extending weakly polynomial functions} \begin{definition}\label{weak-def}\leavevmode \begin{enumerate} \item Let $V$ be a $k$-vector space and $X\subset V$. We say that a function $f:X \to k$ is {\it weakly polynomial} of degree $\leq a$ if the restriction $f_{|L}$ to any affine subspace $L \subset X$ is a polynomial of degree $\leq a$. \item $X$ satisfies $\star ^a$ if any weakly polynomial function of degree $\leq a$ on $X$ is a restriction of a polynomial function of degree $\leq a$ on $V$.
\end{enumerate} \end{definition} Below is a formalization of Theorem \ref{ext} from the introduction. \begin{theorem}\label{w}For any $\bar d$ and $a\geq 1$ there exists an effective bound $r(\bar d,a)$ such that for any field $k$ which is either finite with $|k|>ad$ or algebraically closed, any $k$-vector space $V$ and any family $\bar P\in \mathcal P _{\bar d}(V)$ of nc-rank $\geq r(\bar d,a) $, the subset $X_{\bar P}\subset V$ has the property $\star ^a$. \end{theorem} \subsubsection{Nullstellensatz over $\mathbb F _q$} Below is a formalization of Theorem \ref{N} from the introduction. \begin{theorem}\label{main-null-int} For any sequence $\bar d$ there exists an effective bound $r(\bar d)>0$ such that the following holds. Let $k=\mathbb F _q$ be a finite field, let $V$ be a $k$-vector space and let $\bar P=\{ P_i\}$ be a family of $k$-polynomials of degrees $\leq d_i$ on $V$ of nc-rank greater than $r(\bar d)$. Then any polynomial $R$ of degree $ <q/\tilde d$, where $\tilde d:=\prod _{i=1}^cd_i$, such that $R(x)=0$ for each $x\in \mathbb X _{\bar P}(k)$, belongs to the ideal $J(\bar P):=(P_1,\dots ,P_c)$. \end{theorem} \subsection{Acknowledgement} We thank U. Hrushovski for his help with simplifying the proof of Theorem \ref{main-null-int}. \section{Analysis} In the main part of this section we prove Theorems \ref{A1} and \ref{B} using results on the equidistribution of high rank families of polynomials; these results are based on techniques from additive combinatorics. At the end of the section we apply them to prove Theorems \ref{AC}, \ref{BC} and \ref{Jan}. \subsection{ Equidistribution of high rank families of polynomials} The most basic result is the following proposition on the equidistribution of high rank families of polynomials: \begin{proposition}\label{size} For any sequence $\bar d =(d_1,\dots ,d_c)$ and any $s\geq 1$ there exists an effective bound $r(\bar d,s)$ with the following property.
For any finite field $k=\mathbb F _q$, any $k$-vector space $V$ and any family $\bar P\in \mathcal P_{\bar d}(V)$ of nc-rank $\geq r(\bar d, s)$, the map $\bar P:V\to k^c$ is $q^{-s}$-uniform. \end{proposition} The main ingredient of the proof is the relation between the bias of exponential sums and algebraic rank. Let $k$ be a finite field, char$(k)=p$, $|k|=q$. Let $V$ be a vector space over $k$. We denote $e_q(x) = e^{2 \pi i \psi(x)/p}$, where $\psi:k \to \mathbb F_p$ is the trace function. Then $e_q$ is a nontrivial additive character. Let $P :V \to k$ be a polynomial of degree $d$. Recall that $\tilde P(h_1, \ldots, h_d) = \Delta_{h_1} \ldots \Delta_{h_d}P$ is the multilinear form on $V^d$ associated with $P$, and we can write \[ \tilde P(h_1, \ldots, h_d)= \sum_{\omega \in \{0,1\}^d}( -1)^{d-|\omega|}P(x+\omega \cdot \bar h); \qquad |\omega| = \sum_{i=1}^d \omega_i, \quad \omega \cdot \bar h = \sum_{i=1}^d \omega_ih_i. \] We denote by ${\mathbb E}_{x \in S} f(x)$ the average $|S|^{-1}\sum_{x \in S} f(x)$. \begin{definition}[Gowers norms \cite{gowers}]\label{uniform} For a function $g: V \to {\mathbb C}$ we define the norm $\|g\|_{U_d} $ by \[\|g\|^{2^d}_{U_d} = {\mathbb E}_{x,v_1, \ldots v_d\in V} \prod_{\omega \in \{0,1\}^d} g^{\omega}(x+\omega \cdot \bar v),\] where $g^{\omega}=g$ if $|\omega|$ is even and $g^{\omega}=\bar g$ otherwise. \end{definition} \begin{definition}[Analytic rank] The analytic rank of a polynomial $P:V \to k$ of degree $d$ is defined by arank$(P)=-\log_q \|e_q(P)\|_{U_d}$. \end{definition} The key analytic tool in this paper is the following theorem relating bias and rank. \begin{theorem}[Bias-rank]\label{bias-rank-1} \leavevmode \begin{enumerate} \item Let $s,d>0$. There exists $r=r(s,d)$ such that for any finite field $k$ of size $q$, any vector space $V$ over $k$ and any polynomial $P:V \to k$ of degree $d$ the following holds.
If $P$ is of nc-rank $>r$ then \[ \|e_q(P)\|^{2^d}_{U_d} = |{\mathbb E}_{h_1, \ldots, h_d \in V}e_q(\tilde P(h_1, \ldots, h_d))| < q^{-s}. \] \item Let $r,d>0$. For any finite field $k$ of size $q$, any vector space $V$ over $k$ and any polynomial $P:V \to k$ of degree $d$, if \[ |{\mathbb E}_{h_1, \ldots, h_d} e_q( \tilde P(h_1, \ldots, h_d))| <q^{-r}, \] then $P$ is of partition rank $>r$. \end{enumerate} \end{theorem} \begin{proof} Part (1) was proved for the partition rank in increasing generality in \cite{gt1,kl,bl}. The most general version of the first part can be found in the survey \cite{hhl} (Theorem 8.0.1). Part (2) was observed in \cite{kz-approx}, \cite{lovett-rank}. The Theorem now follows from Lemma \ref{rank-p-rank}. \end{proof} \begin{remark}\label{conj-bias} The dependence of $r$ on $s$ in (1) is polynomial, and doubly exponential in $d$. This was shown for $d=2,3$ in \cite{s-h}, for $d=4$ in \cite{lampert} and in full generality in \cite{Mi} (a similar bound, but with a weak dependence on $|k|$, was proved independently in \cite{janzer}). \end{remark} \begin{remark}\label{gcs} By repeated applications of Cauchy-Schwarz we have that \[ |{\mathbb E}_{x \in V} e_q(P(x))| \le \|e_q(P)\|_{U_d}. \] \end{remark} \begin{remark}\label{norm-bias-rank} \leavevmode \begin{enumerate} \item We use Claim \ref{pd} in our proof of Theorems \ref{A1} and \ref{B}. \item\label{subspace-rank} Let $P:V\to k$ be a polynomial of degree $d$ and rank $R$, and let $W\subset V$ be a subspace of codimension $s$. Then the rank of $P_{|W}$ is $\geq R-s.$ \begin{proof} We may assume that $\mathbb V=\mathbb A ^n$ and that $\mathbb W \subset \mathbb V$ consists of the vectors $(x_i),1\leq i\leq n$, such that $x_i=0$ for $i\leq s$. If the rank of $P_{|W}$ is $< R-s $ we can find polynomials $Q_j,R_j$ of degrees $<d$, $1\le j\leq r$, for some $r< R-s$, such that $S_{|W} \equiv 0$ where $ S :=P-\sum _{j=1}^rQ_jR_j$.
Since $S_{|W} \equiv 0$ there exist polynomials $T_i$, $1\leq i\leq s$, of degrees $<d$ such that $S =\sum_{i=1}^sx_iT_i$. But then $P=\sum _{j=1}^rQ_jR_j+\sum_{i=1}^sx_iT_i$ is of rank $\le r+s< R$, a contradiction. \end{proof} \end{enumerate} \end{remark} Now we can prove Proposition \ref{size}. \begin{proof} The number of points on $X_{\bar P}^{\bar b} = \{x: \bar P(x)=\bar b\}$ is given by \[ q^{-c}\sum_{\bar a \in k^c}\sum_{x \in V} e_q\left(\sum_{i=1}^c a_i(P_i(x)-b_i) \right). \] By Theorem \ref{bias-rank-1} and Remark \ref{gcs}, for any $s>0$ we can choose $r$ so that for any $\bar a \ne 0$ we have \[ \left|\sum_{x \in V} e_q\left(\sum_{i=1}^c a_i(P_i(x)-b_i)\right) \right| < q^{-s}|V|. \] \end{proof} \subsection{Proof of Theorem \ref{A1}} Let $\epsilon>0$. We say that a property holds for $\epsilon$-a.e. $s \in S$ if it holds for all but $\epsilon|S|$ of the elements in $S$. \begin{theorem}\label{need} For any $\bar d$ there exists $C=C(\bar d)$ such that for any $s,m>0$ there exists an effective bound $r=r(s,\bar d)$ such that if $\bar P=(P_i)_{ 1 \le i \le c}$ is a collection of polynomials on $V=k^n$ with $\deg P_i =d_i$ and the nc-rank of $\bar P$ is $> r$ then: \begin{enumerate} \item For any collection of polynomials $\bar R=(R_i)_{ 1 \le i \le c}$, with $R_i:k^m \to k$ of degree $d_i$, there exists an affine map $w:k^m \to k^n$ such that $\bar P(w(x))=\bar R(x)$. Furthermore, if we denote by $n_{\bar R}$ the number of such affine maps, then for any $\bar R_1, \bar R_2$ as above $|1-n_{\bar R_1}/n_{\bar R_2}| <q^{-s+m^C}$. \item If $\bar P$ is homogeneous, then for any homogeneous collection $\bar R=(R_i)_{ 1 \le i \le c}$, $R_i:k^m \to k$ of degree $d_i$, there exists a linear map $w:k^m \to k^n$ such that $\bar P(w(x))=\bar R(x)$. Furthermore, if we denote by $n_{\bar R}$ the number of such linear maps, then for any $\bar R_1, \bar R_2$ as above $|1-n_{\bar R_1}/n_{\bar R_2}| <q^{-s+m^C}$.
\end{enumerate} \end{theorem} \begin{proof} We provide two proofs of the Theorem: one which is valid only in the case where char$(k)>d$, but is more algebraic in nature, and another which relies on multiple applications of Cauchy-Schwarz. \\ We start with the proof in the case where char$(k)>d$. Since the proof for general $d$ involves many indices, we first prove the case when $c=1$ and $d=2$ so as to make the argument clear. We are given $P(t)=\sum_{1 \le i \le j \le n}a_{ij}t_it_j +\sum_{1 \le i \le n}a_{i}t_i +a$ of rank $r$. Note that for any linear form $l(t)=\sum_{i=1}^n c_it_i$ we have that $P(t)+l(t)$ is of rank $\ge r$. Denote $w(x) = (w^1(x)+s^1, \ldots, w^n(x)+s^n)$, where the $w^i$ are linear forms. We can write \[\begin{aligned} P(w(x)) &= \sum_{1 \le i \le j \le n} a_{ij}(w^i(x)+s^i)(w^j(x)+s^j)+ \sum_{1 \le i \le n} a_{i}(w^i(x)+s^i)+a \\ &= \sum_{1 \le i \le j \le n} a_{ij}\sum_{k,l=1}^m w^i_{k}w^j_{l}x_kx_l+\sum_{1 \le i \le n} a_{i}\sum_{k=1}^m w^i_{k}x_k \\ &+ \sum_{1 \le i \le j \le n} \sum_{k=1}^m a_{ij}(s^iw^j_k+s^jw^i_k)x_k+ \sum_{1 \le i \le j \le n} a_{ij}s^is^j + \sum_{1 \le i \le n} a_{i}s^i +a, \end{aligned}\] which we can write as \[\begin{aligned} &\sum_{1 \le k < l \le m} \sum_{1 \le i \le j \le n} a_{ij}(w^i_{k}w^j_{l} + w^i_{l}w^j_{k}) x_kx_l + \sum_{1 \le l \le m} \sum_{1 \le i \le j \le n} a_{ij}w^i_{l}w^j_{l}x^2_l \\ & + \sum_{k=1}^m \left[\sum_{1 \le i \le n} a_{i}\ w^i_{k} + \sum_{1 \le i \le j \le n} a_{ij}(s^iw^j_k+s^jw^i_k)\right]x_k + \sum_{1 \le i \le j \le n} a_{ij}s^is^j + \sum_{1 \le i \le n} a_{i}s^i +a. \end{aligned}\] Our aim is to show that the collection of coefficients of all monomials in the variables $x_j, \ 1 \le j \le m$, is of high rank (as polynomials in $w^i, s^i$): \begin{claim}\label{ind} The collection below is of rank $\ge r-1$.
\[\begin{aligned} &\left\{ \sum_{1 \le i \le j \le n} a_{ij}(w^i_{k}w^j_{l} + w^i_{l}w^j_{k})\right\}_{1 \le k < l \le m} \bigcup \ \left\{ \sum_{1 \le i \le j \le n} a_{ij}(s^iw^j_k+s^jw^i_k)+ \sum_{1 \le i \le n} a_{i} w^i_{k}\right\}_{1 \le k \le m} \\ & \bigcup \left\{\sum_{1 \le i \le j \le n} a_{ij}s^is^j +\sum_{1 \le i \le n} a_{i} s^i \right\} \ \bigcup \ \left\{ \sum_{1 \le i \le j \le n} a_{ij}w^i_{l}w^j_{l}\right\}_{1 \le l \le m} \end{aligned}\] \end{claim} \begin{proof} We need to show that any non-trivial linear combination \[\begin{aligned} &\sum_{1 \le k < l \le m} b_{kl} \left[\sum_{1 \le i \le j \le n} a_{ij}(w^i_{k}w^j_{l} + w^i_{l}w^j_{k})\right] +\sum_{1 \le l \le m} b_{ll} \left[\sum_{1 \le i \le j \le n} a_{ij}w^i_{l}w^j_{l} \right]\\ &+ \sum_{1 \le k \le m} c_k \left[\sum_{1 \le i \le j \le n} a_{ij}(s^iw^j_k+s^jw^i_k)+ \sum_{1 \le i \le n} a_{i} w^i_{k}\right] + d\left[\sum_{1 \le i \le j \le n} a_{ij}s^is^j +\sum_{1 \le i \le n} a_{i} s^i\right] \end{aligned}\] is of rank $\ge r-1$. Suppose $b_{11} \ne 0$. Then we can write the above as \[ b_{11}P(w_1) + l_{w_2, \ldots, w_m,s}(w_1) \] where $w_j=(w_j^1, \ldots, w_j^n)$, and $l_{w_2, \ldots, w_m,s}$ is affine in $w_1$, so as a polynomial in $w_1$ this is of rank $\ge r$ and thus also of rank $\ge r$ as a polynomial in $w$. Similarly in the case where $b_{ll} \ne 0$, for some $1 \le l \le m$. Suppose $b_{12} \ne 0$. We can write the above as \[ (*) \quad b_{12}Q(w_1, w_2) + l^1_{w_3, \ldots, w_m,s}(w_1) + l^2_{w_3, \ldots, w_m,s}(w_2) \] where $Q:V^2 \to k$, and $Q(t,t)=2P(t)$, and $l^i_{w_3, \ldots, w_m,s}:V\to k$ are affine maps. Thus restricted to the subspace $W$ where $w_1=w_2$ we get that $(*)$ is of rank $\ge r$ and thus of rank $ \ge r-1$ on $W$. Similarly if $b_{kl} \ne 0$ for some $k< l$. A similar analysis for the cases when $c_k$ or $d$ are not zero yields the desired result.
\\ \end{proof} If $P$ is homogeneous, $P(t)=\sum_{1 \le i \le j \le n}a_{ij}t_it_j$ of rank $r$, and $w: k^m \to k^n$ is a linear map, we can write \[\begin{aligned} P(w(x)) &= \sum_{1 \le i \le j \le n} a_{ij}w^i(x)w^j(x) = \sum_{1 \le i \le j \le n} a_{ij}\sum_{k,l=1}^m w^i_{k}w^j_{l}x_kx_l \end{aligned}\] which we can write as \[\begin{aligned} &\sum_{1 \le k < l \le m} \sum_{1 \le i \le j \le n} a_{ij}(w^i_{k}w^j_{l} + w^i_{l}w^j_{k}) x_kx_l + \sum_{1 \le l \le m} \sum_{1 \le i \le j \le n} a_{ij}w^i_{l}w^j_{l}x^2_l \\ \end{aligned}\] By Claim \ref{ind} the collection of coefficients of all monomials in the variables $x_j, \ 1 \le j \le m,$ is also of rank $\ge r-1$. \\ For $d>2$ we perform a similar computation. To simplify the notation we carry it out in the case where $P$ and $R$ are homogeneous; the non-homogeneous case is similar. In this case it suffices to use linear maps. Denote $\mathcal I_n=\{I=(i_1, \ldots, i_d): 1 \le i_1 \le \ldots \le i_d \le n\}$, and for $t \in k^n$ denote $t_I = \prod_{j=1}^dt_{i_j}$. Let $P$ be a degree $d$ polynomial $P(t)=\sum_{I \in \mathcal I_n} a_{I} t_{I}$ on $k^n$ of rank $r$.\\ Let $w:k^m \to k^n$ be a linear map. We can write \[ P(w(x)) = \sum_{I \in \mathcal I_n} a_{I} \left(\prod_{j=1}^d w^{i_j}(x)\right) = \sum_{I \in \mathcal I_n} a_{I}\sum_{l_1, \ldots, l_d=1}^m \left(\prod_{j=1}^d w^{i_j}_{l_j} \right) x_{l_1} \dots x_{l_d}. \] We rewrite this as \[ \sum_{l_1, \ldots, l_d=1}^m \sum_{I \in \mathcal I_n} a_{I} \left(\prod_{j=1}^d w^{i_j}_{l_j} \right) x_{l_1} \dots x_{l_d} = \sum_{l \in \mathcal I_m } \left(\sum_{I \in \mathcal I_n} a_{I} \sum_{\sigma \in S(l)} \left(\prod_{j=1}^d w^{i_j}_{\sigma_j} \right)\right)x_{l_1} \dots x_{l_d}\] where $S(l)$ is the set of permutations of $l_1, \dots, l_d$.
\\ Once again, our aim is to show that the collection of coefficients for all possible monomials in the variables $x_j, \ 1 \le j \le m$, is of rank $\ge r-d+1$: \begin{claim} The collection below is of rank $\ge r-d+1$: \[ \left\{ \sum_{I \in \mathcal I_n} a_{I} \sum_{\sigma \in S(l)} \prod_{j=1}^d w^{i_j}_{\sigma_j} \right\}_{l \in \mathcal I_m} \] \end{claim} \begin{proof} Consider a linear combination of the above collection with coefficient $c_l \ne 0$ for some $l \in \mathcal I_m$. Consider $$Q(w_{l_1}, \ldots, w_{l_d})= \sum_{I \in \mathcal I_n} a_{I} \sum_{\sigma \in S(l)} \prod_{j=1}^d w^{i_j}_{\sigma_j}.$$ Setting $\Delta_wP(y) = P(y+w)-P(y)$, we have $Q(w_{l_1}, \ldots, w_{l_d}) = \Delta_{w_{l_1}} \ldots \Delta_{w_{l_d}}P(y)$. Now we can write the given linear combination as $c_lQ(w_{l_1}, \ldots, w_{l_d})+T(w_{l_1}, \ldots, w_{l_d})$ where $T$ is a function of lower degree in $w_{l_1}, \ldots, w_{l_d}$ (it also depends on the $w_{t}$ for $t \ne l_i$, for $1 \le i \le d$). Since $(\operatorname{char}(k), d)=1$, by Remark \ref{norm-bias-rank}(2) we have that the rank of $Q$ is the same as that of $P$, and thus the rank of the collection is the same as the rank of $P$. \end{proof} When $c>1$, we are given $P_s(t)=\sum_{I \in \mathcal I_{d_s}(n)}a^s_{I}t_I$, $1 \le s \le c$, of rank $>r$, where $\mathcal I_{d_s}(n)$ is the set of ordered tuples $I=(i_1, \ldots, i_{d_s})$ with $1 \le i_1 \le \ldots \le i_{d_s} \le n$, and $t_I= t_{i_1} \ldots t_{i_{d_s}}$. Note that for any polynomials $l_s(t)$ of degrees $<d_s$ we have that $\{P_s(t)+l_s(t)\}$ is also of rank $>r$. We can write \[ P_s(w(x)) = \sum_{I \in \mathcal I_{d_s}(n)} a^s_{I}w^I(x) = \sum_{I \in \mathcal I_{d_s}(n)} a^s_{I}\sum_{l_1, \ldots, l_{d_s}=1}^m w^{i_1}_{l_1} \ldots w^{i_{d_s}}_{l_{d_s}}x_{l_1} \ldots x_{l_{d_s}}, \] where $w^I = \prod_{i \in I} w^{i}$.
For $( l_1, \ldots ,l_{d_s}) \in \mathcal I_{d_s}(m)$ the term $x_{l_1} \ldots x_{l_{d_s}}$ has as coefficient \[ \sum_{\sigma \in S_{d_s}} \sum_{I \in \mathcal I_{d_s}(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{d_s}}_{l_{\sigma(d_s)}}. \] We wish to show that the collection \[ \left \{\sum_{\sigma \in S_{d_s}} \sum_{I \in \mathcal I_{d_s}(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{d_s}}_{l_{\sigma(d_s)}} \right\}_{ 1 \le s \le c, ( l_1, \ldots ,l_{d_s}) \in \mathcal I_{d_s}(m)} \] is of rank $>r$. Write $[1,c]=\bigcup_{f=2}^d C_f$ where $C_f=\{s: d_s=f\}$. We need to show that for any $f=2, \ldots, d$ if $B=(b^s_{ (l_1, \ldots, l_{d_s})})_{s \in C_f, (l_1, \ldots, l_{d_s}) \in \mathcal I_{d_s}(m)}$ is not $\bar 0$, then \[ \sum_{s\in C_f} \sum_{(l_1, \ldots, l_{f})} b^s_{ (l_1, \ldots, l_{f})}\sum_{\sigma \in S_{f}} \sum_{I \in \mathcal I_{f}(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{f}}_{l_{\sigma(f)}} \] is of rank $>r$. Suppose $(b^s_{ (l_1, \ldots, l_{f})})_{s\in C_f} \ne \bar 0$ for some $(l_1, \ldots, l_{f})$. Then restricted to the subspace $w_{l_1} = \ldots = w_{l_{f}}$ we can write the above as \[ \sum_{s \in C_f} b^s_{ (l_1, \ldots, l_{f})}(f!) P_s(w_{l_1}) + R(w) \] where $w_j=(w_j^1, \ldots, w_j^n)$, and $R(w)$ is of lower degree in $w_{l_1}$, so as a polynomial in $w_{l_1}$ this is of rank $>r$ and thus also of rank $>r$ as a polynomial in $w$. \\ Now (1), (2) follow from Proposition \ref{size}, since $| \mathcal I_m| = m^{C(\bar d)}$. \\ \ \\ We give an alternative proof that is valid in {\em all characteristics}.
Following the computations above it suffices to show the following Claim: \begin{claim}\label{nc-version} For any $t>0$ there exists $r=r(t, \bar d)$ such that if the nc-rank of $\bar P$ is $>r$ then for any polynomial $Q$ of degree $2\le f\le d$ that is a non-trivial combination of the polynomials in the collection \[ \left \{\sum_{\sigma \in S_f} \sum_{I \in \mathcal I_f(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{f}}_{l_{\sigma(f)}} \right\}_{ s \in C_f, ( l_1, \ldots ,l_{f}) \in \mathcal I_f(m)} \] we have that $| {\mathbb E} e_q(Q)|<q^{-t}$. \end{claim} \begin{proof} Let $Q$ be of degree $f$, and write \[ Q=\sum_{(l_1, \ldots, l_{f})}\sum_{s\in C_f} b^s_{ (l_1, \ldots, l_{f})}\sum_{\sigma \in S_f} \sum_{I \in \mathcal I_f(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{f}}_{l_{\sigma(f)}} \] where $\bar b_{ (l_1, \ldots, l_{f})}= (b^s_{ (l_1, \ldots, l_{f})})_{s \in C_f} \ne 0$ for some $(l_1, \ldots, l_{f}) \in \mathcal I_f(m)$. Observe that any $(l_1, \ldots, l_{f})$ determines a unique set of variables $w_{l_1}, \ldots, w_{l_f}$, thus after $f$ applications of the Cauchy-Schwarz inequality we can isolate this collection and arrive at the multilinear form associated with differentiating \[ \sum_{s\in C_f} b^s_{ (l_1, \ldots, l_{f})}\sum_{\sigma \in S_f} \sum_{I \in \mathcal I_f(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{f}}_{l_{\sigma(f)}} \] with respect to the associated set of variables. Namely we have \[ \| e_q(Q)\|_{U_1} \le \Big \| e_q\Big(\sum_{s\in C_f} b^s_{ (l_1, \ldots, l_{f})}\sum_{\sigma \in S_f} \sum_{I \in \mathcal I_f(n)} a^s_{I}w^{i_1}_{l_{\sigma(1)}} \ldots w^{i_{f}}_{l_{\sigma(f)}}\Big) \Big \|_{U_f} \] But the $2^f$-th power of the latter is equal to \[ {\mathbb E}_{x, w_{l_1}, \ldots, w_{ l_f}}e_q( \Delta_{w_{l_1}} \ldots \Delta_{w_{l_f}} \bar b_{ (l_1, \ldots, l_{f})} \cdot \bar P^f(x)), \] where $\bar P^f = (P_s)_{s \in C_f}$. Now by Theorem \ref{bias-rank-1} we can choose $r$ such that if $\bar P$ is of rank $>r$ then the above is $<q^{-2^ft}$, and hence $| {\mathbb E} e_q(Q)|<q^{-t}$.
\end{proof} This completes the proof of Theorem \ref{need}. \end{proof} \subsection{Proof of Theorem \ref{B}} In this subsection we prove Theorem \ref{B}. Let $V$ be a vector space and $l:V\to k$ be a non-constant linear function. For any subset $I$ of $k$ we denote $\mathbb W _I=\{ v \in \mathbb V |l(v)\in I\}$ so that $\mathbb W _b =\mathbb W _{\{ b\} }$, for $b\in k$. For $\mathbb X \subset \mathbb V$ we write $\mathbb X _I=\mathbb X \cap \mathbb W_I$. Theorem \ref{B} follows from the following proposition: \begin{proposition}\label{line-plane} Fix $\bar d =\{ d_i\} $ and $m,s>0$. Let $d:=\max _id_i $. There exists an effective bound $r=r(\bar d, s,m)$ such that for any finite field $k$, any $k$-vector space $\mathbb V$ and $ \bar P\in \mathcal P _{\bar d}(\mathbb V)$ of nc-rank $>r$ the following holds. For any $b\in k$ and $q^{-s}$-almost any affine $m$-dimensional subspace $L \subset X_b$ there exists an $(m+1)$-dimensional affine subspace $M \subset X$ containing $L$ such that $M \cap X_0 \ne \emptyset$. \end{proposition} \begin{proof} We fix $d$ and define $d'= \min(d+1,q)$. Let $M_0=\{ a_0,\dots ,a_{d}\}\subset k$ be a subset of $d'$ distinct points. To simplify notations we assume that $a_0=0$. \begin{claim} Let $Q(x)$ be a polynomial of degree $\leq d$ such that $Q_{|M_0}\equiv 0$. Then $Q(a)=0$ for all $a\in k$. \end{claim} \begin{proof} Since any polynomial of degree $\leq d$ in one variable vanishing at $d+1$ points is equal to 0, the claim is true if $q\geq d+1$. On the other hand if $d\geq q$ then there is nothing to prove. \end{proof} Let $J(d)$ be the subset of $[0,d]^{m+1}$ of tuples $t=(t_1, \ldots, t_{m+1} )$ such that $0 \le t_{m+1} \le \ldots \le t_1$. Let $T^{m+1}:=\{ a_{t} = (a_{t_1}, \ldots, a_{t_{m+1}}): t \in J(d)\}\subset k^{m+1}$. \begin{claim}\label{reduce} Let $Q(x_1, \ldots, x_{m+1})$ be a polynomial of degree $\leq d$ such that $Q_{|T ^{m+1}}\equiv 0$. Then $Q=0$.
\end{claim} \begin{proof} Our proof is by induction on $m$ and, for a fixed $m$, by induction on $d$. Consider first the case when $m=1$. We will write $x,y$ instead of $x_1,x_2$. We have $T^2=\{ (a_{t_1},a_{t_2}): 0\leq t_2\le t_1 \leq d\}$. We prove the claim by induction on $d$. Let $Q=\sum _{a,b}q_{a,b} x^ay^b$, with $a+b\leq d$. The restriction of $Q$ to the line $\{ y=0\}$ is equal to $Q^0(x)=\sum _{a\leq d}q_{a,0}x^a$. Since $Q^0$ vanishes on $M_0$ we see that $Q^0=0$. Hence $Q(x,y)=yQ'(x,y)$ and, by the inductive assumption, we have $Q'\equiv 0$. Assume now the validity of the Claim for polynomials in $m$ variables and for polynomials in $m+1$ variables of degree $\leq d-1$. Let $Q(x_1, \ldots, x_{m+1})$ be a polynomial of degree $\leq d$ such that $Q_{|T^{m+1}}\equiv 0$. Let $R$ be the restriction of $Q$ to the subspace of points $(x_1,\dots ,x_{m+1})$ such that $ x_{m+1} =0$. Since $R_{|T^m} \equiv 0 $ it follows from the inductive assumption that $R \equiv 0$ and therefore $Q=Q'x_{m+1}$ where $\deg(Q')\le d-1$. Since $Q'_{|T^{m+1}}\equiv 0$ we see from the inductive assumption that $Q'\equiv 0$. \end{proof} Denote by $I(d)$ the set of indices \[ I(d)= \left\{ t:=(t_1, \ldots, t_{m+1}) \in J(d): 1 \le t_{m+1} \right\}. \] An affine $m$-dimensional subspace in $X_b$ is parametrized as $\{x+\sum_{i=1}^m s_iy_i; \ s_i \in k\}$, such that for all $2 \le e \le d$, all $P^e_j \in \bar P^e$ we have \[ (*) \quad P^e_j\left(x+\sum_{i=1}^m s_iy_i\right)=0,\ l(x)=b, \ l(y_i)=0. \] Let $Y$ be the set of $(x,\bar y)$ satisfying $(*)$. We need to show that for almost every $(x, \bar y) \in Y$ we can find $z$ such that for all $2 \le e \le d$, all $P^e_j \in \bar P^e$ we have \[ \forall s,s_1, \ldots, s_m \in k, \ \quad P^e_j\left(x+\sum_{i=1}^m s_iy_i+sz\right)=0, \ l(z)=-b, \] or alternatively \[ \forall s,s_1, \ldots, s_m \in k, \quad P^e_j\left(x+\sum_{i=1}^m s_iy_i+sz\right)=0, \ l\left(x+\sum_{i=1}^m s_iy_i+sz\right)=(1-s)b.
\] By Claim \ref{reduce}, since $P^e_j$ is of degree $e$, we can reduce this system to \[ P^e_j\left(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z\right)=0, \ l\left(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z\right)=(1-a_{t_{m+1}})b, \quad t \in I(e). \] Denote $I = \sum_{e \in [d]} |I(e)|$. Fix $(x,\bar y) \in Y$ and estimate the number of solutions $A(x ,\bar y)$ to the above system of equations, which is given by \[\begin{aligned} & q^{-2I} \sum_{z} \sum_{e \in [d], \bar c^e_{t^e}: t^e \in I(e), h_t : t \in I(d)}\\ &e_q\Big( \sum_{e \in [d]}\sum_{t^e} \bar c^e_{ t^e} \cdot \bar P^e \Big(x+\sum_{i=1}^m a_{t^e_i}y_i+a_{t^e_{m+1}}z\Big) + \sum_t h_{t}\Big( l \Big(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z \Big)+(a_{t_{m+1}}-1)b \Big)\Big). \end{aligned}\] Suppose $\bar c_{t^e}^e= 0$ for all $ t^e$, but $\bar h \ne 0$, and recall that $l(x)=b$, $l(y_i)=0$. We get \[\begin{aligned} &\sum_ze_q\left(\sum_{t} h_{t} \left(l\left(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z\right)+(a_{t_{m+1}}-1)b\right)\right) \\ &= \sum_z e_q\left(\sum_{t} h_{t} (a_{t_{m+1}}l(z)+a_{t_{m+1}}b)\right). \end{aligned}\] Now if $\sum_{t} h_{t} a_{t_{m+1}}l(z) \not \equiv 0$ then the sum is $0$. Otherwise also $\sum_{t} h_{t} a_{t_{m+1}}b=0$ so that the sum is $|V|$. \ \\ Now suppose $\bar c_{t_0}^e \ne 0$ for some $t_0$, and let $e$ be the largest degree for which this holds. Let \[ T(x,y,z)= \sum_{t^e} \bar c^e_{ t^e} \cdot \bar P^e \Big(x+\sum_{i=1}^m a_{t^e_i}y_i+a_{t^e_{m+1}}z\Big) +Q(x,y,z) \] where $Q$ is of degree $<e$. We estimate \[ B_{t_0}={\mathbb E}_{x,\bar y \in V}\left|{\mathbb E}_ze_q(T(x,y,z))\right|^2. \] \begin{lemma}\label{complexity} For any functions $f_t : V \to {\mathbb C}$, $\|f_t\|_{\infty} \le 1$, $t \in I(d)$, and any $t_0 \in I(d)$, we have \[ \left|{\mathbb E}_{x,\bar y,z,z'}\prod _{t \in I(d)} f_{t}\big(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z\big) \bar f_{t}\big(x+\sum_{i=1}^m a_{t_i}y_i+a_{t_{m+1}}z+ a_{t_{m+1}}z'\big)\right| \le \|f_{t_0}\|_{U_{d}}.
\] \end{lemma} \begin{proof} To simplify the notation we prove this in the case $m=1$. Without loss of generality $a_1=1$ (make a change of variables $y \to a_1^{-1}y,z \to a_1^{-1}z )$. We prove this by induction on $d$. When $d=1$, $I(d)=\{(1,1)\}$, and the claim in this case follows from the following computation: \[ \big|{\mathbb E}_{x,y,z,z'}f_{1,1}(x+y+z)\bar f_{1,1}(x+y+z+z')\big| = \big|{\mathbb E}_{z}f_{1,1}(z)\big|^2 = \|f_{1,1}\|^2_{U_1} \le \|f_{1,1}\|_{U_1}. \]\ Assume $d>1$. We can write the average as \[\begin{aligned} &{\mathbb E}_{x,y,z,z'} \prod_{(i,j) \in I(d-1)} f_{i,j}(x+a_iy+a_jz) \bar f_{i,j}(x+a_iy+a_jz+a_jz')\\ & \qquad \qquad \prod_{1\le j \le d} f_{d,j}(x+a_dy+a_jz) \bar f_{d,j}(x+a_dy+a_jz+a_jz'). \\ \end{aligned}\] Shifting $x$ by $a_dy$ we get \[\begin{aligned} &{\mathbb E}_{x,y,z,z'} \prod_{(i,j) \in I(d-1)} f_{i,j}(x+(a_i-a_d)y+a_jz) \bar f_{i,j}(x+(a_i-a_d)y+a_jz+a_jz')\\ & \qquad \qquad \prod_{1\le j \le d} f_{d,j}(x+a_jz) \bar f_{d,j}(x+a_jz+a_jz'). \\ \end{aligned}\] Applying the Cauchy-Schwarz inequality we can bound the above as \[\begin{aligned} &\big[{\mathbb E}_{x,y,y',z,z'} \prod_{(i,j) \in I(d-1)} f_{i,j}(x+(a_i-a_d)y+a_jz) \bar f_{i,j}(x+(a_i-a_d)y+a_jz+a_jz')\\ & \qquad \qquad \prod_{(i,j) \in I(d-1)}\bar f_{i,j}(x+(a_i-a_d)y+(a_i-a_d)y'+a_jz) \\ & \qquad \qquad \qquad \qquad f_{i,j}(x+(a_i-a_d)y+(a_i-a_d)y'+a_jz+a_jz')\big]^{1/2}.\\ \end{aligned}\] Shifting $x$ by $a_dy$ and rearranging we get \[\begin{aligned} &\big[{\mathbb E}_{x,y,y',z,z'} \prod_{(i,j) \in I(d-1)} f_{i,j}(x+a_iy+a_jz) \bar f_{i,j}(x+a_iy+(a_i-a_d)y'+a_jz)\\ &\prod_{(i,j) \in I(d-1)}\bar f_{i,j}(x+a_iy+a_jz+a_jz') f_{i,j}(x+a_iy+(a_i-a_d)y'+a_jz+a_jz')\big]^{1/2}.\\ \end{aligned}\] Now if we denote \[ g_{i,j, y'}(x) =f_{i,j}(x) \bar f_{i,j}(x+(a_i-a_d)y'), \] then by the induction hypothesis we get that the above is bounded by \[ \big[{\mathbb E}_{y'}\|g_{i,j, y'}\|_{U_{d-1}}\big]^{1/2} \le \|f_{i,j}\|_{U_d} \] for any $(i,j) \in I(d-1)$.
\\ We do a similar computation for $(i,j) \in I(d)\setminus (I(d-1)\cup \{(d,1)\})$, splitting \[\begin{aligned} &{\mathbb E}_{x,y,z,z'}\prod_{(i,j) \in I(d-1)} f_{i+1,j+1}(x+a_{i+1}y+a_{j+1}z) \bar f_{i+1,j+1}(x+a_{i+1}y+a_{j+1}z+a_{j+1}z')\\ & \qquad \qquad \prod_{1\le j \le d} f_{j,1}(x+a_jy+z) \bar f_{j,1}(x+a_jy+z+z'), \\ \end{aligned}\] and shifting $x$ by $z$ to get \[\begin{aligned} &{\mathbb E}_{x,y,z,z'}\prod_{(i,j) \in I(d-1)} f_{i+1,j+1}(x-z+a_{i+1}y+a_{j+1}z) \bar f_{i+1,j+1}(x-z+a_{i+1}y+a_{j+1}z+a_{j+1}z')\\ & \qquad \qquad \prod_{1\le j \le d} f_{j,1}(x+a_jy) \bar f_{j,1}(x+a_jy+z'). \\ \end{aligned}\] The only term left uncovered is $f_{d,1}$, so we split \[\begin{aligned} &{\mathbb E}_{x,y,z,z'} \prod_{(i,j) \in I(d-1)} f_{i+1,j}(x+a_{i+1}y+a_{j}z) \bar f_{i+1,j}(x+a_{i+1}y+a_{j}z+a_{j}z')\\ & \qquad \qquad \prod_{1\le i \le d} f_{i,i}(x+a_{i}y+a_iz) \bar f_{i, i}(x+a_{i}y+a_iz+a_iz'). \\ \end{aligned}\] We make the change of variable $z \to z-y$ to get \[\begin{aligned} &{\mathbb E}_{x,y,z,z'} \prod_{(i,j) \in I(d-1)} f_{i+1,j}(x+a_{i+1}y+a_{j}(z-y)) \bar f_{i+1,j}(x+a_{i+1}y+a_{j}(z-y)+a_{j}z')\\ & \qquad \qquad \prod_{1\le i \le d} f_{i,i}(x+a_iz) \bar f_{i, i}(x+a_iz+a_iz'). \\ \end{aligned}\] Observe that the argument of $f_{d,1}$ is $(x+(a_{d}-a_1)y+a_{1}z)$, so that the coefficient of $y$ is not zero. Now proceed as in previous cases. \end{proof} By Lemma \ref{complexity} we obtain that $B_{t_0}$ is bounded by $\|e_q( \bar c^e_{ t_0^e} \cdot \bar P^e)\|_{U_e}$. By Theorem \ref{bias-rank-1} there exists an effective bound $r=r(s, d)$, such that if $\bar P$ is of rank $>r$ then $\|e_q( \bar c^e_{ t_0^e} \cdot \bar P^e)\|_{U_e}<q^{-s}$. It follows that we can choose $r$ so that for $q^{-s}$-almost all $(x,\bar y) \in Y$, the number of solutions $A(x, \bar y)$ is bounded below by $|V|q^{-4I}$. \end{proof} \subsection{Proof of Theorems \ref{AC} and \ref{BC}} We fix $m,d$ and $c$.
As follows from Theorem \ref{A1}, there exists an effective bound $r=r(m,d,c)$ such that for any finite field $\mathbb F _q$, any $\mathbb F _q$-vector space $\mathbb V$ the map $\tilde \kappa _{\bar P}(\mathbb F _q)$ is surjective for any family $\bar P=(P_i)$ of polynomials $P_i\in \mathcal P _d(\mathbb V)$ such that $r_{nc}(\bar P)\geq r$. We now consider the case when $k$ is an algebraically closed field. \subsubsection{The surjectivity of $\tilde \kappa _P(k)$} We fix $\mathbb V =\mathbb A ^n$ and consider $\mathcal P _d(\mathbb V)^c$ as a scheme defined over ${\mathbb Z}$. Let $T$ be the set of sequences $(a_i,b_i),1\leq i\leq r$, such that $0\leq a_i,b_i <d$ and $a_i+b_i \leq d$. For any $t=\{(a_i,b_i)\} \in T$ we denote by $\nu _t:\oplus _{i=1}^r \mathcal P _{a_i}(\mathbb V) \otimes \mathcal P _{b_i}(\mathbb V) \to \mathcal P _d(\mathbb V) $ the linear map given by $$\nu _t(\{ Q_i\otimes R_i\})=\sum _{i=1}^r Q_i R_i.$$ Let ${\mathbb Z}_T$ be the union of the images of the maps $\nu _t$, $t\in T$. Let $\mathbb Y \subset \mathcal P _d(\mathbb V)^c$ be the constructible subset of families $\bar P=\{ P_i\}$ such that ${\mathbb Z}_T \cap L_{\bar P}=\{0\}$ where $L_{\bar P} \subset \mathcal P _d(\mathbb V) $ is the span of $( P_i)$. So $\mathbb Y (k)\subset \mathcal P _d(\mathbb V)(k)$ consists of families $\bar P$ of polynomials of rank $>r$. We define ${\mathbb R} \subset \mathbb Y$ as the constructible subset of $\bar P\in \mathbb Y$ such that the morphism $\tilde \kappa _{\bar P}$ is not surjective. Our goal is to show that ${\mathbb R} =\emptyset$. We first consider the case when $k= \bar{\mathbb F} _p $ is the algebraic closure of $\mathbb F _p$. \begin{claim}\label{p} ${\mathbb R} (\bar{\mathbb F} _p)=\emptyset$. \end{claim} \begin{proof} Assume that ${\mathbb R} (\bar{\mathbb F} _p)\neq \emptyset$. Then there exists $\bar P\in \mathbb Y (\bar{\mathbb F} _p)$ such that the map $\tilde \kappa _{\bar P}(\bar{\mathbb F} _p)$ is not surjective.
Then there exists $Q\in \mathcal P _{\bar d}(\bar{\mathbb F} _p)$ which is not in the image of $\tilde \kappa _ {\bar P}(\bar{\mathbb F} _p).$ By definition there exists $l\geq 1$ such that $\bar P\in \mathbb Y (\mathbb F _q)$ and $Q\in \mathcal P _{\bar d}( \mathbb F _q)$, $q=p^l$. But as follows from Theorem \ref{A1} there exists $\phi \in \text{Aff}_m(\mathbb F _q)$ such that $Q=\tilde \kappa _ {\bar P}(\phi)$. But the existence of such an affine map $\phi$ contradicts the assumption that $Q$ is not in the image of $\tilde \kappa _ {\bar P}(\bar{\mathbb F} _p).$ \end{proof} \begin{corollary}\label{kappa}\leavevmode \begin{enumerate} \item The map $\tilde \kappa _P(k)$ is surjective for any algebraically closed field $k$ of positive characteristic and a polynomial $P\in \mathcal P _d(\mathbb V)$ of nc-rank $>r(m,d)$. \item The map $\tilde \kappa _P(k)$ is surjective for any algebraically closed field $k$ of characteristic $0$ and a polynomial $P\in \mathcal P _d(\mathbb V)$ of nc-rank $>r(m,d)$. \end{enumerate} \end{corollary} \begin{proof} The part $(1)$ follows from the completeness of the theory $ACF_p$ of algebraically closed fields of a fixed characteristic $p$. To prove the part $(2)$ one chooses a non-trivial ultrafilter $\mathcal U$ on the set of primes and considers the $\mathcal U$-ultraproduct of theories $ACF_p$. Let $l$ be the $\mathcal U$-ultraproduct of fields $\bar{\mathbb F} _p$. Let ACF be the theory of algebraically closed fields and $\alpha$ be the formula in ACF expressing the surjectivity of the map $\kappa _P$. As follows from Claim \ref{p}, $\alpha$ holds for algebraic closures of fields $\mathbb F _p$. We fix now $m,d,c$ and for any $n\geq 1$ define by $\alpha _n$ the following formula in $ACF$: for any family $\bar P=(P_i)$, $P_i\in k[x_1,\dots ,x_n]$, such that $r_{nc}(\bar P)\geq r(m,d,c)$ the map $\kappa _{\bar P}:\text{Aff} _m(\mathbb A ^n)\to (\mathcal P _d(\mathbb A ^m))^c$ is surjective.
By the Theorem of {\L}o\'s applied to the formula $\alpha _n$ we see that the map $\kappa _{\bar P}(l)$ is surjective for any family $\bar P\in \mathcal P _d(\mathbb V)$, $\dim(\mathbb V)=n$, of nc-rank $>r$. Since the theory $ACF_0$ of algebraically closed fields of characteristic $0$ is complete and $n\geq 1$ is arbitrary the corollary is proved. \end{proof} \subsubsection{The computation of the dimensions of fibers of $\tilde \kappa (k)$} Let $\operatorname{Hom}_{\operatorname{af}}(\mathbb W ,\mathbb V)$ be the variety of affine maps from $\mathbb W$ to $\mathbb V$ and let $\mathbb T \subset \mathbb Y$ be the subscheme of polynomials $P$ such that there exists $Q\in \mathcal P _d(\mathbb A ^m)$ such that $\dim (\kappa _P^{-1}(Q))\neq \dim (\operatorname{Hom}_{\operatorname{af}}(\mathbb W ,\mathbb V))-\dim(\mathcal P _d(\mathbb W))$. We want to prove that $\mathbb T =\emptyset$. The same arguments as before show that it is sufficient to prove that $$\dim (\kappa _P^{-1}(Q))= \dim (\operatorname{Hom}_{\operatorname{af}}(\mathbb W ,\mathbb V))-\dim(\mathcal P _d(\mathbb W))$$ for all finite fields $k=\mathbb F _q$ and $Q\in \mathcal P _d(\mathbb A ^m)(k)$. Let $w:= \dim (\operatorname{Hom}_{\operatorname{af}}(\mathbb W ,\mathbb V))-\dim(\mathcal P _d(\mathbb W))$. As follows from \cite{LW}, there exist constants $A(n,d), l>0$ such that $\big| |\kappa _P^{-1}(Q)(\mathbb F _{q^m})|-q^w \big| \leq A(n,d) q^{w-1/2}$ for any $q=p^{lm}$. This implies that $$\dim (\kappa _P^{-1}(Q))=\lim _{m\to \infty} \frac {\log _q (| \kappa _P^{-1}(Q)(k_m) |)}{lm},$$ where $k_m=\mathbb F _{q^{ml}}$ is the extension of degree $ml$. Now the equality $$\dim (\kappa _P^{-1}(Q))=w$$ follows from Theorem \ref{A1}. Part (3) of Theorem \ref{AC} follows now from Theorem 23.1 in \cite{M}, and part (4) follows from part (3). The derivation of Theorem \ref{BC} from Theorem \ref{B} is completely analogous.
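For concreteness, and assuming that $\mathcal P _d(\mathbb W)$ denotes the space of all polynomials of degree $\le d$ on $\mathbb W$, the constant $w$ above is explicit: with $n=\dim(\mathbb V)$ and $m=\dim(\mathbb W)$, an affine map $\mathbb W \to \mathbb V$ is determined by $n(m+1)$ coefficients, while the number of monomials of degree $\le d$ in $m$ variables is $\binom{m+d}{d}$, so \[ w = n(m+1) - \binom{m+d}{d}. \]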
\subsection{Proof of Theorem \ref{Jan}} {\em Proof of Theorem \ref{Jan}.} Let $\mathcal G$ be a subfunctor of $\mathcal F _d$ such that the ranks $r(P)$, $P\in \mathcal G (W)$, are not bounded above. We want to show that $\mathcal G (W)=\mathcal F _d(W)$ for any finite-dimensional $k$-vector space $W$. Let $m=\dim(W)$ and choose a polynomial $P\in \mathcal G (V)$, where $V$ is a $k$-vector space such that $r_{nc}(P)\geq r(m,d)$, where $r(m,d)$ is as in the Corollary \ref{kappa}. Then for any polynomial $Q$ on $W$ of degree $d$, there exists an affine map $\phi :W\to V$ such that $Q=\phi^\star (P)$. We see that $\mathcal G (W)=\mathcal F _d(W)$. \qed \section{Extending weakly polynomial functions from high rank varieties} \subsection{Introduction} We fix $d,a,c\geq 1$ and a field $k$ such that $|k|>ad$ and such that there exists a root of unity $\beta \in k$ of order $m>ad$. A field is {\em admissible} if it satisfies these conditions. \begin{definition}\label{weak-def-1} Let $V$ be a $k$-vector space, and let $X\subset V$. We say that a function $f:X \to k$ is {\it weakly polynomial} of degree $\leq a$ if the restrictions $f_{|L}$ to affine subspaces $L \subset X$ are polynomials of degree $\leq a$. \end{definition} \begin{remark} If $|k|>a$ it suffices to check this on $2$-dimensional subspaces (see \cite{KR}, Theorem 1). Namely, a function is {\it weakly polynomial} of degree $\leq a$ if its restriction $f_{|L}$ to any $2$-dimensional affine subspace $L \subset X$ is a polynomial of degree $\leq a$. \end{remark} The goal of this section is to show that, in the case when $k$ is an admissible field which is either finite or algebraically closed, any weakly polynomial function $f$ of degree $\leq a$ on a subvariety $X \subset V$ of a sufficiently high rank extends to a polynomial $F$ of degree $\leq a$ on $V$. The main difficulty lies in the case $a\geq d$, when an extension $F$ of $f$ is not unique.
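The non-uniqueness in the case $a\geq d$ can be seen directly: if $X=\{v\in V : P(v)=0\}$ is a hypersurface of degree $d$ and $F$ is one extension of $f$ of degree $\leq a$, then for any polynomial $Q$ of degree $\leq a-d$ the polynomial \[ F' = F + QP \] also has degree $\leq a$ and agrees with $F$, and hence with $f$, on $X$.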
To state our result properly we introduce the following definition: \begin{definition} An algebraic $k$-subvariety $\mathbb X \subset \mathbb V$ satisfies the condition $\star ^k_{a}$ if any weakly polynomial function of degree $\leq a$ on $X$ is a restriction of a polynomial function of degree $\leq a$ on $V$. \end{definition} The following example demonstrates the existence of cubic hypersurfaces $\mathbb X \subset \mathbb A^2$ which do not have the property $\star^k_{1}$ for any field $k \neq \mathbb F _2 $. \begin{example} Let $V=k^2$, $Q=xy(x-y)$. Then $X=X_0\cup X_1\cup X_2$ where $X_0=\{ v\in V|x=0\}, X_1=\{ v\in V|y=0\} , X_2=\{ v\in V|x=y\} $. The function $f:X\to k$ such that $f(x,0)=f(0,y)=0,f(x,x)=x$ is weakly linear, but one cannot extend it to a linear function on $V$: any function of degree $\leq 1$ vanishing on $X_0\cup X_1$ vanishes identically, and so cannot agree with $f$ on $X_2$. \end{example} The main result of this section is that high rank hypersurfaces over admissible fields satisfy $\star^k_a$. \begin{theorem}\label{main} There exists an $r=r(a,d)$ such that for any admissible field $k$ which is either finite or algebraically closed, any hypersurface $\mathbb X$ of degree $d$ and nc-rank $\geq r$ in a $k$-vector space satisfies $\star^k_a$. \end{theorem} The result extends without difficulty to complete intersections $\mathbb X \subset \mathbb V$ of bounded degree and codimension, and high rank (see Definition \ref{rank}). \begin{theorem}\label{main1} For any $c>0$, there exists an effective bound $r=r(a,d,c)$ such that for any admissible field $k$, which is either finite or algebraically closed, a $k$-vector space $\mathbb V$, any subvariety $\mathbb X \subset \mathbb V$ of codimension $c$, degree $d$ and nc-rank $\geq r$ satisfies $\star ^k_a$. \end{theorem} Our proof of Theorem \ref{main1} consists of two steps.
We first construct, for any $d$, hypersurfaces $\mathbb X _n\subset \mathbb V _n$ over ${\mathbb Z}$ of degree $d$ and arbitrarily high rank such that for any admissible field $k$ and any $c$ the subset $\mathbb X_n (k)^c\subset \mathbb V_n (k)^c$ satisfies the conclusion of Theorem \ref{main1}. This result is purely algebraic. In the second step we show how to derive the general case of Theorem \ref{main1} from this special case. \begin{remark} The case $a <d$ was studied in \cite{kz-uniform}. The case $a=d=2$ was studied in \cite{kz}, and a bilinear version of it was studied in \cite{GM}, where it was applied as part of a quantitative proof for the inverse theorem for the $U_4$-norms over finite fields. We expect the results in this paper to have similar applications to a quantitative proof for the inverse theorem for the higher Gowers uniformity norms, for which at the moment only a non-quantitative proof using ergodic-theoretic methods exists \cite{btz, tz, tz-1}. \end{remark} \subsection{Construction of an explicit collection of subvarieties} Let $\mathbb W:=\mathbb A ^d, \mathbb V_n:=\mathbb W ^n$, and $P_n:\mathbb V_n \to \mathbb A$ be given by $P_n(w_1,\dots ,w_n)= \sum _{i=1}^n \mu(w_i)$, where $\mu:\mathbb W \to \mathbb A$ is the product $\mu(x^1, \dots ,x^d):= \prod _{j=1}^dx^j$. Let $\mathbb X _n\subset \mathbb V _n$ be the hypersurface defined by the equation $P_n(v)=0$. \begin{theorem}\label{const} \leavevmode \begin{enumerate}\item There exists $\epsilon>0$ such that the nc-rank $r_{nc}(P_n)\geq n^{\epsilon}$. \item For any admissible field $k$ and any $c\geq1$ the subvariety $(\mathbb X _n) ^c \subset \mathbb V _n^c$ has the property $\star ^k_a$. \end{enumerate} \end{theorem} \begin{remark}To simplify notations we present the proof only in the case when $c=1$. The proof in the general case is completely analogous.
\end{remark} \subsection{Proof of Theorem \ref{const}} \subsubsection{Proof of the part (1) of Theorem \ref{const}} In this subsection we prove the part $(1)$ of Theorem \ref{const}. \begin{proof} First we note that for a non-trivial character $\psi$ on $k$ we have \[ |{\mathbb E}_{w \in W} \psi (\mu(w))| = t <1. \] Now we observe that \[ \tilde \mu (u_1, \ldots, u_d)|_{\{u^l_j=0,\ l \ne j\}} = u^1_1u^2_2\cdots u^d_d \] Denote \[ U=\{((u_1)_1, \ldots, (u_n)_1,\ldots, (u_1)_d, \ldots, (u_n)_d) \in V^d: (u_i)^l_j=0, l \ne j, i \in [n]\} \] Then restricted to $U$ we have \[ \tilde P_n((u_1)_1, \ldots, (u_n)_1,\ldots, (u_1)_d, \ldots, (u_n)_d)|_U= \sum_{i=1}^n \mu((u_i)^1_1, (u_i)^2_2, \ldots , (u_i)^d_d), \] so that \[ \big|{\mathbb E}_{u \in V^d} \psi( \tilde P_n(u))\big| \le \big|{\mathbb E}_{u \in U} \psi( \tilde P_n(u))\big| = |{\mathbb E}_{w \in W} \psi (\mu(w))|^{n} = t^{n}. \] It follows from Theorem \ref{bias-rank-1} that $ \tilde P_n$ is of rank $> n^{\epsilon}$ for some $\epsilon >0$. \end{proof} \begin{definition}\label{X} \begin{enumerate} \item For any set $X$ we denote by $k[X]$ the space of $k$-valued functions on $X$. \item For a subset $X$ of a vector space $V$, we denote by $\mathcal P _a^w(X) \subset k[X] $ the subspace of weakly polynomial functions of degree $\leq a$. \item We denote by $\mathcal P _a(X) \subset \mathcal P _a^w(X) $ the subspace of functions $f:X\to k$ which are restrictions of polynomial functions on $V$ of degree $\leq a$. \item As before we define $\mathbb W=\mathbb A^d, \mathbb V _n:=\mathbb W ^n $ and denote by $\mu$ the product map $\mu:\mathbb W \to \mathbb A$ given by \[ \mu (a^1,\dots ,a^d)= \prod _{s=1}^d a^s. \] We write elements of $V_n$ in the form $$v= (w_1,\dots ,w_n), \ w_i\in W, \ 1\leq i\leq n.$$ \end{enumerate} \end{definition} It is clear that the part (2) of Theorem \ref{const} is equivalent to the following statement.
\begin{theorem}\label{equality} Let $k$ be an admissible field. Then $\mathcal P _a^w(X_n) = \mathcal P _a(X_n) $. \end{theorem} We fix $n$ and write $\mathbb X$ instead of $\mathbb X _n$, and $\mathbb V$ instead of $\mathbb V _n$. The proof of the part (2) of Theorem \ref{const} uses the existence of a large group of symmetries of $X$, the existence of a linear subspace $L\subset V$ of large dimension and the existence of the subgroup $\Delta \subset k^\star , \Delta \cong {\mathbb Z} /m{\mathbb Z}$ for $m>ad$. \begin{proof}We start the proof of Theorem \ref{equality} with the following result. \begin{claim}\label{many} Let $Q$ be a polynomial of degree $\leq ad$ on $k^N$ such that $Q_{|\Delta ^N}\equiv 0.$ Then $Q=0$. \end{claim} \begin{proof} The proof is by induction on $N$. If $N=1$ then $Q=Q(x)$ is a polynomial such that $Q(\delta) =0$ for $\delta \in \Delta$. Since $|\Delta| > ad$ we see that $Q=0$. Assume that the result is known for $N'=N-1$. Let $Q$ be a polynomial of degree $\leq ad$ on $k^N$ such that $Q_{|\Delta ^N}\equiv 0.$ By induction we see that $Q(\delta ,x_2,\dots ,x_N)\equiv 0$ for all $\delta \in \Delta$. Then for any $x_2,\dots ,x_N $ the polynomial $x\mapsto Q(x,x_2,\dots ,x_N)$ vanishes at all $\delta \in \Delta$, and therefore $Q(x,x_2,\dots ,x_N) =0$ for all $x\in k$. \end{proof} \begin{definition}\label{ga}\leavevmode \begin{enumerate} \item $\Gamma :=(S_d)^n$. The group $\Gamma$ acts naturally on $X$. \item $L:=\{(c_1, \ldots, c_n)\in k^n |\sum_{i=1}^nc_i=0\}.$ \item $L_\Delta=(\Delta) ^n\cap L\subset k^n$. \item For $c\in k$ we write $w(c):=(c,1,\dots ,1)\in W$. \item $\kappa :L\hookrightarrow X\subset V$ is the linear map given by $$\kappa (c_1, \ldots, c_n) := (w(c_1), \ldots, w(c_n))$$ and write $\kappa _\gamma :=\gamma \circ \kappa ,\gamma \in \Gamma$. \item For any function $f:X\to k, \gamma \in \Gamma$ define a function $h_{\gamma ,f} :L\to k$ by $h_{\gamma ,f} :=f\circ \kappa _\gamma$.
\item $T_1:= \{ (u_1,\dots ,u_d)\in (\Delta)^d|\prod _{j=1}^du _j=1\}$ and $T:=T_1^n$. \item We denote by $\zeta _i:T_1\hookrightarrow T,1\leq i\leq n$ the embedding onto the $i$-th component. \item For any $j,j'$, $1\leq j\neq j'\leq d$ we denote by $\phi _{j,j'}: \Delta \to T_1 $ the morphism such that $\phi _{j,j'}(u)= ( x_l(u), 1\leq l\leq d ) $ where $x_j(u)=u, x_{j'}(u)=u^{-1} $ and $x_l(u) =1$ for $l\neq j,j'$. \item We denote by $\Theta _1$ the group of homomorphisms $\chi :T_1\to k^\star$. \item $ \Theta =( \Theta _1)^n$. \item For $\chi \in \Theta _1 , j,j', 1\leq j\neq j'\leq d $ we define a homomorphism $\chi _{j,j'}: \Delta \to k^\star $ by $\chi _{j,j'}:=\chi \circ \phi _{j,j'} $. Since $\Delta \cong {\mathbb Z} /m{\mathbb Z}$ there exists a unique $\alpha _{j,j'}(\chi )\in (-m/2,m/2]$ such that $\chi _{j,j'}(u)= u^{\alpha _{j,j'} (\chi )}$ for any $u\in \Delta$. \item $\Theta _1^{adm}:=\{ \chi \in \Theta _1 : | \alpha _{j,j'}(\chi ) |\leq a\}$. \item $\Theta_1^{adm,+}:= \{ \chi \in \Theta_1^{adm} : \alpha _{j,j'} (\chi)\geq 0, j<j' \}$. \item Let $\Theta^{adm,+}:= (\Theta _1 ^{adm,+})^n$ and $\Theta^{adm}:=(\Theta _1 ^{adm})^n$. \item For any $k$-vector space $R$, a representation $\pi : T \to \operatorname{Aut} (R)$ and $\theta \in \Theta $ we define $$R^\theta =\{ r\in R|\pi(t)r=\theta (t)r, \ t\in T\}.$$ \end{enumerate} \end{definition} \begin{remark} Since $|T|$ is prime to $\operatorname{char}(k)$, Maschke's theorem implies the direct sum decomposition $R=\oplus _{\theta \in \Theta }R^\theta$. \end{remark} \begin{claim}\label{pol} For any $f\in \mathcal P _a^w(X) , \gamma \in \Gamma $ the function $h_{\gamma ,f} $ is a polynomial of degree $\leq a$. \end{claim} \begin{proof} Since $f\in \mathcal P _a^w(X) $ we have $h_{\gamma ,f}\in \mathcal P _a^w(L) $. Since $L$ is a linear space we see that $h_{\gamma ,f} $ is a polynomial of degree $\leq a$.
\end{proof} \begin{claim}\label{plus}\leavevmode \begin{enumerate} \item The subset $\Theta^{adm}$ of $\Theta$ is $\Gamma$-invariant. \item For any $\theta \in \Theta^{adm} $ there exists $\gamma \in \Gamma$ such that $\theta \circ \gamma \in \Theta^{adm,+}. $ \end{enumerate} \end{claim} \begin{proof}Clear. \end{proof} \begin{definition} We denote by $\mathcal P _a^{\bar w}(X)$ the space of functions $f$ such that $h_{\gamma ,f} $ is a polynomial of degree $\leq a$ on $L$ for all $\gamma \in \Gamma $. \end{definition} The group $T$ acts naturally on $X$, and hence on the spaces $\mathcal P ^{ w}_a(X)$, $\mathcal P ^{\bar w}_a(X)$ and $\mathcal P _a(X)$, and we have direct sum decompositions \[\mathcal P ^ { w}_a(X) =\oplus _{\theta \in \Theta} \mathcal P ^ { w}_a(X) ^\theta\] and \[\mathcal P _a(X) =\oplus _{\theta \in \Theta} \mathcal P _a(X) ^\theta. \] \begin{lemma}\label{almost} Let $\mathbb Z\subset \mathbb V$ be a homogeneous $k$-subvariety of degree $d$ and let $f:Z\to k$ be a polynomial function of degree $\leq ad$ which is a weakly polynomial function on $Z$ of degree $\leq a$. Then it is the restriction of a polynomial function on $V(k)$ of degree $\leq a$. \end{lemma} \begin{proof} Lemma \ref{almost} follows inductively from the following claim. \begin{claim}\label{we} Let $f:Z\to k$ be a polynomial function of degree $\leq a$ which is a weakly polynomial of degree $<a$. Then $f$ is a polynomial of degree $<a$. \end{claim} \begin{proof} We can write $f$ as a sum $f=Q+f'$ where $\deg (f')<a$ and $Q$ is homogeneous of degree $a$. Since $f$ is weakly polynomial of degree $< a$ the function $Q$ is also weakly polynomial of degree $<a$. It is sufficient to show that $Q\equiv 0$. Choose $z\in Z$. To show that $Q(z)=0$ consider the function $g$ on $k,g(t)=Q(tz)$. Since $Z$ is homogeneous $tz \in Z$. Since $Q$ is homogeneous of degree $a$ we have $g(t)=ct^a$. On the other hand, since $Q$ is weakly polynomial of degree $<a$ we see that $g(t)$ is a polynomial of degree $<a $. 
Since $a<q$ we see that $g\equiv 0$. So $Q(z)=g(1)=0$. \end{proof} \end{proof} \begin{corollary} Let $f:X\to k$ be a weakly polynomial of degree $<a$ on $X$ which is the restriction of a polynomial function of degree $\leq ad$ on $V$. Then $f$ is the restriction of a polynomial function of degree $\leq a$ on $V$. \end{corollary} It is clear that for the proof of Theorem \ref{equality} it suffices to show that $\mathcal P^{w}_a(X) ^\theta =\mathcal P _a(X) ^\theta$ for any $\theta \in \Theta$. This equality now follows immediately from the following statement. Fix $ \theta \in \Theta $. \begin{proposition}\label{Id} \leavevmode For any weakly polynomial function $f:X\to k$ of degree $\leq a$ satisfying the equation $f(tx)=\theta (t)f(x), t\in T ,x\in X$ there exists a polynomial $P$ on $V$ of degree $\leq ad$ such that $f=P_{|X}$. \end{proposition} We start the proof of Proposition \ref{Id} with a set of definitions. Let $f:X\to k$ be a function such that $f(tx)=\theta (t)f(x), t\in T ,x\in X,$ and such that $h_{\gamma ,f} $ are polynomial functions on $L$ of degree $\leq a$ for all $\gamma \in \Gamma$. \begin{definition}\leavevmode \begin{enumerate} \item We write $h,h_\gamma :L \to k$ instead of $h_{\text{Id},f}$ and $h_{\gamma ,f}$. \item $\nu :V \to L$ is the map given by $\nu (w_1,\dots ,w_n):=(\mu (w_1),\dots , \mu (w_n))$. \item $W^0:=\{ w= (a^1,\dots ,a^d)\in W|a^i\in \Delta $ for $i\geq 2\}\subset W$. \item $X^0:=(W^0)^n\cap X$. \item For $ w=(a^1,\dots ,a^d) \in W$ we define $I( w):=\{ i,1\leq i\leq d|a^i\not \in \Delta \}$ and write $z(w):=\max (|I(w)|-1,0)$. \item For $x=(w_1, \ldots, w_n)$ we write $z(x)=\sum _jz( w_j)$. \item $Y_s:=\{ x\in X|z(x)\leq s\}, s\geq 0$. \end{enumerate} \end{definition} \begin{claim}\label{tr}\leavevmode \begin{enumerate} \item $Y_0=\bigcup _{\gamma \in \Gamma}\gamma (X^0)$. \item For any $x\in X^0$ there exists a unique $t(x)\in T$ such that $x=t(x)\kappa (\nu (x))$. \item $f(x)=\theta (t(x))f(\kappa (\nu (x)))$ for any $x\in X^0$. 
\item For any $\gamma \in \Gamma$, $l\in L,$ we have $\nu (\gamma (l))=l$ where $\gamma : X\to X$ is as in Definition \ref{ga}. \end{enumerate} \end{claim} \begin{proof}Clear. \end{proof} \begin{lemma}\label{s1} Let $f$ be a function on $X$ satisfying the conditions of Proposition \ref{Id} and such that $f_{|Y_0} \equiv 0$. Then $f\equiv 0$. \end{lemma} \begin{proof} It is clear that it is sufficient to prove the following statement. \begin{claim} Let $f$ be a function on $X$ satisfying the conditions of Proposition \ref{Id} and such that $f_{|Y_s} \equiv 0 ,s\geq 0$. Then $f_{|Y_{s+1}} \equiv 0$. \end{claim} \begin{proof} We want to show that $f(x)=0$ for all $x=(w_j)\in Y_{s+1}$. Since the restriction of $f$ to any line is a polynomial of degree $\leq ad$ it is clear that for a proof of the equality $f(x)=0$ it is sufficient to prove the following geometric statement. \begin{claim}\label{x} There exists a line $N\subset X$ containing $x$ and such that $|N\cap Y_s|>ad$. \end{claim} \begin{proof} Let $x=(w_j), w_j =(x^1_j, \dots ,x^d_j),1\leq j \leq n$. We start with the following observation. \begin{claim} For any $x\in X\setminus Y_0, x=\{ x_j^i\}$ there exists $j_0$ such that either there exist $(i_0,i_1), 1\leq i_0\neq i_1\leq d $ such that $ x_{j_0}^{i_0}=0, x_{j_0}^{i_1} \not \in \Delta$ or $\prod _ix^i_{j_0}\neq 0$ and $x_{j_0}^{i_0}\not \in \Delta$ for some $i_0, 1\leq i_0\leq d $. \end{claim} \begin{proof}Clear. \end{proof} Consider first the case when $x^{i_0}_{j_0}=0$ for some pair $(i_0,j_0), 1\leq i_0\leq d, 1\leq j_0 \leq n $ and $x^{i_1}_{j_0}\not \in \Delta$ for some $i_1\neq i_0, 1\leq i_1\leq d $. Let $\alpha :k\to X$ be the map given by $ \alpha(c)=(x^i_j(c))$ where $ x^i_j(c) = x^i_j$ if $(i,j)\neq (i_1,j_0)$ and $x^{i_1}_{j_0}(c)=c$. By construction $x\in N :=\operatorname{Im}(\alpha) $ and $\alpha (c)\in Y_s$ for $c\in \Delta$. Since $|\Delta| =m>ad $ we see that the line $N$ satisfies the conditions of Claim \ref{x}. 
So we may assume the existence of $j_0$ such that $\prod _ix^i_{j_0}\neq 0$ and $x_{j_0}^{i_0}\not \in \Delta$ for some $i_0, 1\leq i_0\leq d $. To simplify notations we may and will assume that $j_0=i_0=1$. Since $x\in X$ there exists $j_1, 2\leq j_1\leq n,$ such that $\prod _ix^i_{j_1}\neq 0$. We may assume that $j_1=2$. It is clear that either $x_2^i\in \Delta$ for all $i,1\leq i\leq d$ or there exists $i_1, 1\leq i_1\leq d $ such that $x_2^{i_1}\not \in \Delta$ in which case we may and will assume that $i_1=1$. Let $a:=\prod _{i=2}^dx_1^i, b :=\prod _{i=2}^dx_2^i $ and $\beta :k\to X$ be the map given by $\beta (c):= (x^i_j(c))$ where $x^1_1(c)=-bc, \ x^1_2(c)=ac-b^{-1}\sum _{j=3}^n\prod _{i=1}^dx^i_j$ and $ x^i_j(c) = x^i_j$ otherwise. Let $N=\operatorname{Im}(\beta)$. Then $x\in N$ and $|N\cap Y_s|>ad$. \end{proof} \end{proof} \end{proof} \begin{lemma}\label{zero} $\mathcal P_a^ {\bar w}(X)^\theta =\{0\}$ for any $\theta \not \in \Theta^{adm}$. \end{lemma} \begin{proof} As follows from Lemma \ref{s1} it is sufficient to show that $h_{\gamma ,f}\equiv 0$ for any $\theta \not \in \Theta^{adm},\gamma \in \Gamma$ and $f\in \mathcal P_a^ {\bar w}(X)^\theta $. Since $\theta \not \in \Theta^{adm}$ there exist $i,j,j',1\leq i\leq n, 1\leq j,j'\leq d,$ such that $|\alpha _{j,j'}(\chi _i)|> a$. Choose $s\in S_d$ such that $s(j)=1,s(j')=2$, and denote by $\tilde s\in \Gamma$ the image of $s$ under the embedding $S_d\hookrightarrow \Gamma$ as the $i$-th factor. After replacing $f$ by $f\circ \tilde s$ and $\theta$ by $\theta \circ \tilde s$ we may assume that $|\alpha _{1,2}(\chi _i)| > a$. The functions $h_{\gamma}$ and $h_{\gamma \circ s}$ are weakly polynomial functions of degrees $\leq a$ on the linear space $L$. 
Therefore $h_{\gamma}$ and $h_{\gamma \circ s}$ are polynomial functions of degrees $\leq a$. For any $l\in L$ such that $l_i\in \Delta$ we have $h_{\gamma \circ s}(l)=l_i^{\alpha _{1,2}(\chi _i)}h_{\gamma}(l)$. Since $|\alpha _{1,2}(\chi _i)|>a$ and $|\alpha _{1,2}(\chi _i)|\le m/2$, this is only possible if $h_{\gamma}=0$. \end{proof} \begin{corollary} As follows from Claim \ref{plus} it is sufficient to prove Proposition \ref{Id} for $\theta \in \Theta ^{adm,+}$. \end{corollary} \begin{definition} Let $P:V\to k$ be the polynomial given by $$P(v)=\prod _{i=1}^n \prod _{j=1}^d(x_i^j)^{\alpha _{1,j}(\chi _i)}h(\nu (v)), \quad v=( x_i^j).$$ \end{definition} \begin{lemma}$\deg(P)\leq ad$. \end{lemma} \begin{proof}Let $b=\deg(h)$. It is sufficient to show that for any sequence $\bar e=(e(i))_{ 1\leq i\leq n}$, $e(i)\in [1,d]$, we have $\sum _{i=1}^n \alpha _{1,e(i)}(\chi _i) +b\leq a$. Suppose there exists $\bar e$ such that $\sum _{i=1}^n \alpha _{1,e(i)}(\chi _i) +b>a$. Since $\theta \in \Theta ^{adm}$, there exists a subset $I$ of $[1,n]$ such that $a<\sum _{i\in I} \alpha _{1,e(i)}(\chi _i) +b\leq 2a$. Let $ \gamma =(\sigma _i)_{1\leq i\leq n}$, $\sigma _i \in S_d,$ be such that $e(i)=\sigma _i(1)$ for $i\in I$ and $\sigma _i=Id$ if $i\not \in I$. Consider $h_{\gamma}:= \kappa _\gamma ^\star (f)$. On the one hand it is a polynomial of degree $\leq a$ on $L$. On the other hand, $h_\gamma (l)=h(l)\prod _{i\in I}l_i^{ \alpha _{1,e(i)}(\chi _i) }$. The inequalities $a<\sum _{i\in I} \alpha _{1,e(i)}(\chi _i) +b\leq 2a$ imply that $h\equiv 0$. \end{proof} By construction $P_{|L}\equiv f _{|L}$. Let $\bar f:=f-P$. Then $\bar f$ is a weakly polynomial function of degree $\leq ad$ on $X$ vanishing on $L$ such that $\bar f(tx)=\theta (t)\bar f(x)$ for $t\in T, \ x\in X$. As follows from Claim \ref{tr} we have $\bar f_{|X^0}\equiv 0$. It is clear that for the proof of Proposition \ref{Id} it is sufficient to show that $\bar f(x)=0$ for all $x\in X$. 
By Lemma \ref{s1} it suffices to prove the following lemma: \begin{lemma}\label{van}$\bar f_{|Y_0} \equiv 0$. \end{lemma} \begin{proof} Since $Y_0=\Gamma X^0$ and $X^0=T\kappa (L)$, it suffices to show that $\bar f_{|\kappa _\gamma (L)} \equiv 0$ for all $\gamma \in \Gamma $. Let $h_\gamma :L\to k$ be given by $h_\gamma (l)=\bar f(\kappa _\gamma (l))$. We have to show that $h_\gamma \equiv 0$. Since $h_\gamma$ is a polynomial of degree $\leq ad$ it follows from Claim \ref{many} that it is sufficient to show that the restriction of $h_\gamma$ to $L_\Delta$ vanishes. But for any $l\in L_\Delta$ we have $\kappa _\gamma (l)=t\kappa (l')$ for some $t\in T$ and $l'\in L$. Since $\bar f(tx)=\theta (t)\bar f(x)$ and $\bar f_{|\kappa (L)}\equiv 0$ we see that $h_\gamma (l)=0$. \end{proof} This completes the proof of Theorem \ref{equality}. \end{proof} \subsection{Proof of Theorem \ref{main1}} \begin{proposition}\label{p-ext} There exists an effective bound $r=r(\bar d,a)$ such that the following holds: if $k$ is an admissible field which is either finite or algebraically closed, $W\subset V$ is a hyperplane and $r_{nc}(\bar P)\geq r$, then any weakly polynomial function on $X_{\bar P}$ of degree $\leq a$ vanishing on $X_{\bar P}\cap W$ is the restriction of a polynomial on $V$ of degree $\leq a$. \end{proposition} As an immediate corollary we obtain: \begin{corollary} \label{extension1} There exists an effective bound $r=r(\bar d,a)$ such that the following holds for all admissible fields $k$ which are either finite or algebraically closed. Let $\mathbb X= \{v\in \mathbb V|P_i(v)=0\}\subset \mathbb V$ be a subvariety of degree $\leq d$ and $\mathbb W \subset \mathbb V$ an affine subspace such that the nc-rank of $\bar P_{|\mathbb W}$ is $\geq r$. Let $f$ be a weakly polynomial function on $X$ of degree $\leq a$ such that $f _{|X\cap W}$ extends to a polynomial on $W$ of degree $\leq a$. Then there exists an extension $F$ of $f$ to a polynomial on $V$ of degree $\leq a$. \end{corollary} \begin{proof} Consider first the case when $W\subset V$ is a hyperplane. 
By assumption there exists a polynomial $R$ of degree $\leq a$ on $W$ extending the restriction $f_{|X\cap W}$. Choose a linear projection $s:V\to W $ and define $f':X\to k$ by $f'(x)=f(x)-R(s(x))$. Then $f'$ is a weakly polynomial function on $X$ of degree $\leq a$ such that $f' _{|X\cap W}\equiv 0$. As follows from Proposition \ref{p-ext} there exists an extension of $f'$ to a polynomial $F'$ on $V$ of degree $\leq a$. But then $F:=F'+R\circ s$ is an extension of $f$ to a polynomial on $V$ of degree $\leq a$. In the case when the codimension of $W$ is $>1$ we choose a flag $$\mathcal F =\{W_0=W\subset W_1\subset \dots \subset W_{\dim(V)-\dim(W)}=V\}, \dim (W_i)=\dim(W)+i,$$ and extend $f$ by induction on $i,1\leq i\leq \dim(V)-\dim(W), $ to a polynomial $F$ on $V$. \end{proof} \begin{remark} The choice of $F$ depends on a choice of flag $\mathcal F$ and on choices of projections used in the inductive arguments. \end{remark} \subsection{Proof of Proposition \ref{p-ext}}\label{jan-extension} The key tool in our proof of this proposition is a testing result from \cite{kz-uniform} which roughly says that any weakly polynomial function of degree $a$ on $X$ that is ``almost'' weakly polynomial of degree $<a$, namely a polynomial of degree $<a$ on almost all affine subspaces, is weakly polynomial of degree $<a$. This part does not require $X$ to be of high rank. We use the assumption that $X$ is of high rank to show (see Theorem \ref{B}) that almost any isotropic affine plane is contained in an isotropic three-dimensional affine subspace that is not contained in $l^{-1}\{0\}$. \\ We start by stating the testing result from \cite{kz-uniform}. In \cite{KR} (Theorem 1) the following description of polynomials of degree $\le a$ is given: \begin{proposition}\label{kau-ron} Let $P:V \to k$. Then $P$ is a polynomial of degree $\le a$ if and only if the restriction of $P$ to any affine subspace of dimension $l=\lceil \frac{a+1}{q-q/p}\rceil$ is a polynomial of degree $\le a$. 
\end{proposition} Note that when $a<q$ then $l\le 2$. \\ In \cite{KR} the above criterion is used for polynomial testing over general finite fields. In \cite{kz-uniform} (Corollary 1.13) it is shown how the arguments in \cite{KR} can be adapted to polynomial testing within a subvariety $X \subset V$ (high rank is not required). \begin{proposition}[Subspace splining on $X$]\label{testing-lines}For any $a,d, c>0$ there exists an $A=A(d,c,a) > 0$, depending polynomially on $c,d$ and exponentially on $a$, such that the following holds. Let $X \subset V(k)$ be a complete intersection of degree $d$ and codimension $c$. Then any weakly polynomial function $f$ of degree $a$ such that the restriction of $f$ to $q^{-A}$-a.e. $l$-dimensional affine subspace, $l=\lceil \frac{a}{q-q/p}\rceil$, is a polynomial of degree $<a$, is weakly polynomial of degree $<a$. \end{proposition} \begin{proof}[Proof of Proposition \ref{p-ext}] Let $V$ be a vector space and $l:V\to k$ be a non-constant affine function. For any subset $I$ of $k$ we denote $\mathbb W _I=\{ v \in \mathbb V |l(v)\in I\}$, so that $\mathbb W _b =\mathbb W _{\{ b\} }$, for $b\in k$. For a hypersurface $\mathbb X \subset \mathbb V$ we write $\mathbb X _I=\mathbb X \cap \mathbb W_I$. \begin{lemma}\label{l} For any finite subset $S\subset k$, any weakly polynomial function $f$ of degree $a$ on $X$ such that $f_{|X_S} \equiv 0$, and any $b\in k\setminus S$, there exists a polynomial $Q$ of degree $\leq a$ on $V$ such that $Q_{|X_S}\equiv 0$ and $(Q-f)_{|X_b} \equiv 0$. \end{lemma} \begin{proof} We start with the following result. \begin{claim}Under the assumptions of Lemma \ref{l}, the restriction $f_{|X_b}$ is a weakly polynomial function of degree $\leq a-|S|$. \end{claim} \begin{proof} Since $|k|>a$ it suffices to show that for any plane $L\subset X_b$ the restriction $f_{|L}$ is a polynomial of degree $\leq a-|S|$. We first consider the case when the field $k$ is finite. 
Since by part (4) of Theorem \ref{AC} the variety $\mathbb X$ is a complete intersection, it follows from Proposition \ref{testing-lines} that there is a constant $A= A(d,a)$ such that it suffices to check that the restriction of $f$ to $q^{-A}$-almost any affine plane $L\subset X_b$ is a polynomial of degree $\leq a-|S|$. As follows from Proposition \ref{line-plane}, for any $s>0$ there is an $r=r(d,s)$ such that if $X$ is of nc-rank $>r$ then for $q^{-s}$-almost any affine plane $L\subset X_b$ there exists an affine $3$-dim subspace $M\subset X$ containing $L$ and such that $M\cap X_0\neq \emptyset$. Then $M\cap X_t\neq \emptyset$ for any $t \in k$. Since $f$ is a weakly polynomial function of degree $a$, its restriction to $M$ is a polynomial $R$ of degree $\leq a$. Since the restriction of $R$ to $l^{-1}(S) \cap M$ is identically zero, we see that $R=R'\prod _{s\in S}(l-s).$ Since $l|_L\equiv b$, we see that the restriction $f|_L$ is equal to $R'$, which is a polynomial of degree $\leq a-|S|$. \end{proof} Now we show that this claim implies Lemma \ref{l}. Indeed, assume that $f_{|X_b}$ is a weakly polynomial function of degree $\le a-|S|$. It follows from the inductive assumption on $a$, that there exists a polynomial $Q'$ of degree $\leq a-|S|$ on $V$, such that $f_{|X_b}=Q'_{|X_b}.$ Let $Q:=\frac {Q'\prod _{s\in S}(l-s)}{\prod _{s\in S}(b-s) }$. Then $(f-Q)_{|X_{S \cup \{b\}}}\equiv 0.$ For algebraically closed fields we follow the same argument replacing Proposition \ref{testing-lines} with Proposition \ref{asplining}, and Proposition \ref{line-plane} with Theorem \ref{BC}. \end{proof} Proposition \ref{p-ext} follows from Lemma \ref{l} by induction. \\ \end{proof} Now we can prove Theorem \ref{main}: \begin{proof}[Proof of Theorem \ref{main} assuming Theorems \ref{const} and \ref{Jan}, and Corollary \ref{extension1}] Let $\tilde r$ be from Corollary \ref{extension1} and $r=r(a, \bar d):=\rho (\dim (W),d)$ from Theorem \ref{Jan}. 
As follows from Theorem \ref{const} the subvarieties $\mathbb X _n$ are of nc-rank $\geq \tilde r$ for $n\geq d\tilde r$. Let $\mathbb X \subset \mathbb V$ be a subvariety of nc-rank $\geq r$. By Theorem \ref{Jan} there exists a linear map $\phi :\mathbb W \to \mathbb V$ such that $\mathbb X _n=\{ w\in \mathbb W|\phi (w)\in \mathbb X\}$. Since $\mathbb X _n$ satisfies $\star^k_a$, Corollary \ref{extension1} implies that $\mathbb X$ satisfies $\star ^k_a$. \end{proof} \section{Nullstellensatz} Let $k$ be a field and $V$ be a finite dimensional $k$-vector space. We denote by $\mathbb V$ the corresponding $k$-scheme, and by $\mathcal P (V)$ the algebra of polynomial functions on $\mathbb V$ defined over $k$. For a finite collection $\bar P = (P_1,\ldots,P_c)$ of polynomials on $\mathbb V$ we denote by $J(\bar P)$ the ideal in $ \mathcal P (V) $ generated by these polynomials, and by $\mathbb X _{\bar P}$ the subscheme of $\mathbb V$ defined by this ideal. Given a polynomial $R \in \mathcal P(\mathbb V)$, we would like to find out whether it belongs to the ideal $J(\bar P)$. It is clear that the following condition is necessary for the inclusion $R\in J(\bar P)$. \medskip (N) $R(x) = 0$ for all $k$-points $x \in X_{\bar P}(k) $. \medskip \begin{proposition}[Nullstellensatz]\label{Null} Suppose that the field $k$ is algebraically closed and the scheme $\mathbb X _{\bar P}$ is reduced. Then any polynomial $R$ satisfying the condition $(N)$ lies in $J(\bar P)$. \end{proposition} We will show that the analogous result holds for $k=\mathbb F _q$ if $\mathbb X _{\bar P}$ is of high $nc$-rank. From now on we fix a degree vector $\bar d = (d_1,\ldots,d_c)$ and write $D:=\prod _{i=1}^cd_i$. We denote by $\mathcal P _{\bar d}(\mathbb V)$ the space of $\bar d$-families of polynomials $\bar P = (P_i)_{i=1}^c$ on $V$ such that $\operatorname{deg}(P_i) \leq d_i$. 
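The following standard example (recorded here only for illustration; it is not used in the proofs) shows why, over a finite field, some bound on the degree of $R$ in terms of $q$ is unavoidable.

\begin{remark} Let $k=\mathbb F _q$, let $\mathbb V =\mathbb A ^1$ and let $\bar P$ be the empty family, so that $J(\bar P)=\{ 0\}$ and $X_{\bar P}(k)=k$. The polynomial $R(x)=x^q-x$ vanishes at every $k$-point, so it satisfies the condition $(N)$, but $R\neq 0$ and therefore $R\not \in J(\bar P)$. Since $\deg (R)=q$, the assumption $q>aD$ of Theorem \ref{main-null} below rules out this example. \end{remark}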
\begin{theorem}\label{main-null} There exists an effective bound $r(\bar d)>0$ such that for any finite field $k=\mathbb F _q$ with $q>aD$, any family $\bar P$ of degrees $\bar d$ and nc-rank $\geq r(\bar d) $ the following holds. Any polynomial $Q$ on $V$ of degree $a$ vanishing on $X_{\bar P}$ belongs to the ideal $J(\bar P)$. \end{theorem} \begin{proof} Our proof is based on the following {\it rough bound} (see \cite{hr}). \begin{lemma}\label{rb} Let $\bar P=\{ P_i\}_{i=1}^c \subset \mathbb F _q[x_1,\dots ,x_n]$ be a family of polynomials of degrees $d_i,1\leq i\leq c$ such that the variety $\mathbb Y :=\mathbb X _{\bar P} \subset \mathbb A ^n$ is of dimension $n-c$. Then $|\mathbb Y (\mathbb F _q)|\leq q^{n-c}D$ where $D:=\prod _{i=1}^cd_i$. \end{lemma} For convenience we reproduce the proof of this result. \begin{proof} Let $F$ be the algebraic closure of $\mathbb F _q$. Then $\mathbb Y (\mathbb F _q)$ is the intersection of $\mathbb Y$ with the hypersurfaces $Y_j, 1\leq j\leq n$ defined by the equations $h_j( x_1,\dots ,x_n)=0$ where $h_j( x_1,\dots ,x_n)= x_j ^q-x_j$. Let $H_1,\ldots,H_{n-c}$ be generic linear combinations of the $h_j$ with algebraically independent coefficients from a transcendental extension $F'$ of $F$ and ${\mathbb Z} _1,...,{\mathbb Z} _{n-c}\subset \mathbb A ^n$ be the corresponding hypersurfaces. We intersect $\mathbb Y$ successively with ${\mathbb Z} _1,{\mathbb Z} _2,\dots ,{\mathbb Z} _{n-c}$. Inductively we see that for each $j \leq n-c$, each component $C$ of the intersection $\mathbb Y \cap {\mathbb Z} _1 \cap \dots \cap {\mathbb Z} _j$ has dimension $n-c-j$. Indeed, passing from $j$ to $j+1$ for $j<n-c$ we have $\dim(C)=n-c-j>0$. So not all the functions $h_j$ vanish on $C$. Hence by the genericity of the choice of the linear combinations $\{H_j\}$ we see that $H_{j+1}$ does not vanish on $C$ and therefore ${\mathbb Z} _{j+1}\cap C$ is of pure dimension $n-c-j-1$. 
Thus the intersection $\mathbb Y \cap {\mathbb Z} _1 \cap \dots \cap {\mathbb Z} _{n-c}$ has dimension $0$. By Bezout's theorem we see that $|\mathbb Y \cap {\mathbb Z} _1 \cap \dots \cap {\mathbb Z} _{n-c}|\leq q^{n-c}D$. Since $\mathbb Y (\mathbb F _q)=\mathbb Y \cap Y _1 \cap \dots \cap Y _n\subset \mathbb Y \cap {\mathbb Z} _1 \cap \dots \cap {\mathbb Z} _{n-c} $ we see that $|\mathbb Y (\mathbb F _q)| \leq q^{n-c} D$. \end{proof} Now we can finish the proof of Theorem \ref{main-null}. Let $R \in \mathbb F _q[x_1,\dots ,x_n] $ be a polynomial of degree $a$ vanishing on the set $\mathbb X _{\bar P}(\mathbb F _q)$. Suppose that $R$ does not lie in the ideal generated by the $P_i$, $1\leq i\leq c$. Let $\mathbb Y :=\mathbb X _{\bar P}\cap \{ R=0\}$. Since $R$ vanishes on $\mathbb X _{\bar P}(\mathbb F _q)$ we have $\mathbb Y (\mathbb F _q) = \mathbb X _{\bar P}(\mathbb F _q) $. Since $\mathbb X _{\bar P} $ is irreducible we have $\dim(\mathbb Y)=n-c-1$. As follows from Lemma \ref{rb} we have the upper bound $|\mathbb Y (\mathbb F _q)| \leq aD q^{n-c-1}$. On the other hand, as follows from Theorem \ref{uniform} there exists an effective bound $r(\bar d)>0$ such that the condition $r_{nc}(\bar P)\geq r(\bar d) $ implies the inequality $|\mathbb X _{\bar P}(\mathbb F _q)|> q^{n-c}\frac{q-1}{q}$, which contradicts the assumption $q>aD$. \end{proof}
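To see what the rough bound amounts to in the simplest situation, we record the case $c=1$ (a standard observation, stated here only for orientation).

\begin{remark} For $c=1$ Lemma \ref{rb} says that a hypersurface $\mathbb Y \subset \mathbb A ^n$ of degree $d$ satisfies $|\mathbb Y (\mathbb F _q)|\leq dq^{n-1}$, which is the bound given by the Schwartz--Zippel lemma. The proof of Theorem \ref{main-null} plays this upper bound, applied to the family $(P_1,\dots ,P_c,R)$ of degrees $(d_1,\dots ,d_c,a)$, against the lower bound $|\mathbb X _{\bar P}(\mathbb F _q)|>q^{n-c-1}(q-1)$ provided by Theorem \ref{uniform} for families of high nc-rank: if $q>aD$ then $q-1\geq aD$ and the two bounds are incompatible. \end{remark}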
\section{Introduction} The microscopic world is filled with examples of rigid structures that interact with each other as they move through fluids. In the biological context, these can range from very dense systems such as bacterial swarms \cite{Darnton2010}, where steric interactions are important, to regularly-spaced arrays of cilia, which can be coupled both hydrodynamically (through the fluid) \cite{Brumley2014} and elastically (through the cell membrane) \cite{Wan2016,Kanso2021}, down to dilute suspensions of planktonic bacteria and algae \cite{Ishikawa2009}, where only hydrodynamic interactions prevail. Outside biology, hydrodynamic interactions are important in the dynamics of sedimentation and the {\color{black} rheology of suspensions \cite{Shaqfeh1990,Mackaplow1996,Guazelli2011,Shelley2019}}, as well as the collective behaviour of synthetic active particles \cite{Ramaswamy2010,Marchetti2013}. For artificial devices such as diffusio- or electrophoretic swimmers, one must also consider long-range chemical interactions in addition to the hydrodynamics \cite{Sharifi2016,Varma2018,Varma2019,Saha2019}. Hydrodynamic interactions (HIs) represent a particular interest for research because, due to their long-range nature, they can give rise to collective behaviour in systems with a large number of active, self-propelled particles \cite{Vicsek2012,Elgeti2015}. A popular approach for studying active matter is to coarse-grain the system and postulate phenomenological equations based on symmetries, but it remains important to capture the microscopic origin of interactions between the particles. Therefore, the study of HIs between a small number of suspended bodies is the necessary link between understanding the dynamics of a single body in an unbounded fluid and that of a large collection thereof. On a microscopic length scale, the physics of the fluid is dominated by viscous dissipation, and inertia is negligible most of the time. 
Therefore, the interaction of micro-swimmers is usually a low Reynolds number problem, governed by the Stokes equations. Naturally, HIs are important in biology across all Reynolds numbers. For instance, they influence predator-prey interactions and sexual reproduction in small marine organisms such as copepods, which operate at low to intermediate Reynolds number \cite{Arezoo2016}. HIs are also very important in schools of fish (usually high Reynolds number), where they give rise to stable swimming formations and affect endurance and propulsive efficiency \cite{Weihs1973,Dai2018,Pan2020}. At intermediate and high Reynolds number, however, the problem of HIs is usually approached with experimental and computational tools. In contrast, in the low Reynolds number limit, the linearity of the Stokes equations allows for exact analytical solutions if the geometry is simple enough, e.g.~the interaction between two rigid spheres. For rigid spheres at low Reynolds number, exact analytical solutions were found for the flow field around two spheres of arbitrary size but specified orientation \cite{Jeffery1915,StimsonJeffery1926,Goddard2020}, as well as around two identical spheres with arbitrary orientation \cite{Goldman1966,Wakiya1967}. These exact solutions are possible either by exploiting a cylindrical symmetry in the problem \cite{Jeffery1915,StimsonJeffery1926}, or by using a bispherical coordinate system \cite{Goddard2020,Goldman1966,Wakiya1967}. These classical analytical results were later confirmed by computational studies \cite{Dabros1985,Kim1985,YoonKim1987}. In addition to the exact solutions, there are also approximate analytical solutions for the interaction of two spheres sufficiently far apart \cite{Felderhof1977,Cichocki1988}. These solutions are expressed as series expansions in inverse powers of the distance between the spheres, and have the advantage of circumventing bispherical coordinates. 
For more than two spheres, the interactions become more complicated, but researchers have studied this problem experimentally \cite{Jayaweera1964} and numerically \cite{Cichocki1994}, and have also made analytical progress in the form of a far-field theory \cite{Hocking1964}. For shapes more complex than a sphere, it is often necessary to approach the modelling problem with computational tools. In the biological context, full boundary-element method (BEM) simulations have been carried out to study the HIs between micromachines with spiral tails \cite{Nasseri1997}, uniflagellar bacteria swimming side by side \cite{Ishikawa2007}, and spherical colonies of algae swimming near boundaries \cite{Ishikawa2020}. Other computational studies have considered the interactions between more abstract types of swimmers such as dumbbell-type \cite{Gyrya2010} or squirmer-type pushers and pullers \cite{Goetze2010,Molina2013}. One important question to consider when talking about HIs between microorganisms is whether there is any net attraction or repulsion between the swimmers, and if they settle into stable swimming patterns. These questions are also motivated by experimental observations of swimming bacteria and volvocine algae \cite{Liao2007,Drescher2009}. In this study we focus on HIs between slender filaments at low Reynolds number, in order to tackle the interactions between swimming appendages such as cilia and flagella, rather than entire microorganisms. If HIs between microorganisms are important for the stability of swimming patterns in groups of swimmers, then the HIs between swimming appendages are essential to single-cell behaviour. 
This includes questions such as the speed and state of flagellar synchronisation \cite{Kim2004b,Reigh2012,Reigh2013,Brumley2014,Chakrabarti2019,Man2020}, the emergence of swimming gaits \cite{Wan2016} and metachronal waves \cite{Joanny2007mcw,Elgeti2013}, and the propulsive capacity of an organism with multiple appendages \cite{Elgeti2013,Nguyen2018}. Much previous work in this area is computational \cite{Kim2004b,Reigh2012,Reigh2013,Chakrabarti2019,Man2020,Nguyen2018,Elgeti2013}, but there has also been some analytical work on the HIs between nearby slender filaments \cite{Man2016}, as well as experimental work on HIs between the beating cilia of live algae \cite{Brumley2014}, and between rotating helices in macro-scale models of bacterial flagella \cite{Kim2003,Kim2004a}. After spheres, the next shapes that can be tackled analytically are slender filaments. This is because we now have well-developed theories for modelling the flows generated by moving filaments using a distribution of force singularities along the centreline of the slender body. One very successful analytical method is resistive-force theory (RFT) \cite{Hancock1953,Gray1955,Lighthill1996_helical}, which describes the anisotropic drag on a slender filament by a linear and local relationship between the force and velocity distributions along the centreline. Since it neglects non-local interactions along the filament, RFT is quantitatively accurate only for exponentially slender filaments, but it usually reproduces the qualitative features of the flow and it is analytically tractable, which leads to a deeper physical understanding. For more accurate quantitative results, one can use slender-body theory (SBT), which takes into account both local and non-local hydrodynamic effects \cite{Cox1970,Lighthill1976,Johnson1980}. While RFT is logarithmically correct, the errors in SBT are algebraically small. 
In this investigation we apply the theoretical techniques commonly used for single filaments (RFT and SBT) to describe the HIs between two slender filaments {\color{black} separated by a distance, $d$, greater than the contour length of the filaments, $L$}. In a similar way to previous studies on spheres \cite{Felderhof1977,Cichocki1988}, we express the force distribution along each filament as a series expansion in inverse powers of {\color{black}$d/L>1$}. This uses principles from the method of reflections, where some contributions in the expansion correspond to hydrodynamic effects that have reflected back and forth between the filaments a number of times. {\color{black} The method of scattering has previously been employed in the theoretical study of suspensions of rods \cite{Shaqfeh1990,Mackaplow1996}, but these studies focus on the bulk rheology of a suspension of passive fibres, whereas our current purpose is to derive analytical expressions for the specific HIs between two active slender filaments. Furthermore, the present study can handle helical and other shapes of filaments, while the aforementioned work was limited to straight rods.} Our final analytical results pertain specifically to rigid filaments, whose motion can be encapsulated in one mathematical object -- the resistance matrix. For multiple filaments, it is the extended resistance matrix (see also Ref.~\cite{Cichocki1988}) that relates the full dynamics (forces and torques on all the filaments) to the full kinematics (the linear and angular velocities of all the filaments). {\color{black} We expand our solution for the extended resistance matrix up to and including second-order corrections in $L/d<1$. This is motivated by our subsequent application to rotating helical pumps, where the net attraction or repulsion between the helices is only noticeable at second order. It is also at second order that the power of slender-filament methods like RFT and SBT comes into play. 
The first-order contribution of HIs is the same for slender filaments as it is for spheres or any rigid object that exerts a net force on the fluid. At second order, however, we have contributions not only from the flow that is reflected between the objects (which is the same for spheres), but also from expanding the shape of the filament centreline about its centre.} The paper is structured around three central parts -- the derivation, validation, and application of the theory for HIs between slender filaments at low Reynolds number. In Section \ref{sec:model} we derive analytical expressions for the extended resistance matrix of two arbitrarily-shaped rigid slender filaments, written as a series expansion up to second-order corrections in inverse distance. {\color{black} We then evaluate the coefficients in this series using both RFT and SBT, and in Section \ref{sec:validation} we validate the asymptotic theory against numerical simulations based on SBT}. Finally, in Section \ref{sec:application}, we apply both theory and simulations to the case of two helical pumps rotating side by side in an infinite fluid. We perform a thorough investigation of the forces and torques exerted by the helical pumps, and derive analytical expressions that capture the qualitative effects of HIs with varying distance and phase difference between the helices. Based on our understanding of pairwise HIs between helical pumps, we then provide a perspective on the HIs within a circular array of helical pumps, and we conclude this study in Section \ref{sec:discussion} by discussing our results in a wider context. \section{Asymptotic model for hydrodynamic interactions} \label{sec:model} In this section, we consider the HIs between two rigid slender filaments {\color{black} separated by a distance, $d$, greater than their contour length, $L$}. 
We quantify the dynamics of the interacting filaments through an extended resistance matrix, for which we derive a series expansion solution up to second-order corrections in {\color{black} $L/d<1$}. \subsection{Geometrical setup} \begin{figure} \landscapetrim{17cm}{9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_0.pdf} \caption{Geometrical setup of the problem. (a) Two rigid filaments of dimensionless contour length $L=2$ interact with each other hydrodynamically as they move through a viscous fluid. Our asymptotic theory is valid {\color{black} for sufficiently large inter-filament separation, $d > L$, and in the limit of small filament thickness, $\epsilon \ll 1$}. We identify three useful coordinate systems: the laboratory frame (green), the interaction frame for a pair of filaments (blue), and the body frame for an individual filament (black). (b) Parameters describing the geometry of a helical filament, which we will use for the validation and application of our asymptotic theory.} \label{fig:setup} \end{figure} We begin by sketching the setup of our hydrodynamic problem and introducing the mathematical notation. In Fig.~\ref{fig:setup} (a) we illustrate the different coordinate systems used in this paper. First, there is the laboratory frame $\{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\}$ in usual Cartesian coordinates. Then there is a body frame $\{\mathbf{e}_1^{(k)},\mathbf{e}_2^{(k)},\mathbf{e}_3^{(k)}\}$ for each filament, labelled by $k$. 
Relative to the laboratory frame, we define the body frame vectors for a filament with orientation $\mathbf{p} = (\phi,\theta,\chi)$ to be \begin{eqnarray} \mathbf{e}_1 &=& \cos\chi \left[ \cos\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) -\sin\theta \mathbf{e}_z \right] + \sin\chi \left[ -\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y \right], \label{eq:bodyframe-A} \\ \mathbf{e}_2 &=& -\sin\chi \left[ \cos\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) -\sin\theta \mathbf{e}_z \right] + \cos\chi \left[ -\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y \right] , \\ \mathbf{e}_3 &=& \sin\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) + \cos\theta \mathbf{e}_z. \label{eq:bodyframe-Z} \end{eqnarray} Working outwards through the transformations applied to the laboratory frame vectors $\{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\}$, we see that the body frame $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ is obtained by a rotation through angle $\phi$ around the vertical, $\mathbf{e}_z$, then a tilting by angle $\theta$ away from the vertical (i.e.~a rotation through angle $\theta$ around $-\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y$), and finally a rotation by angle $\chi$ around the axis $\mathbf{e}_3$. Relative to the body frame, we write the position of the centreline and the unit tangent along an arbitrarily-shaped filament $k$ as \begin{eqnarray} \mathbf{r}_k(s) &=& x^{(k)}_1(s) \mathbf{e}_1^{(k)} + x^{(k)}_2(s) \mathbf{e}_2^{(k)} + x^{(k)}_3(s)\mathbf{e}_3^{(k)}, \\ \hat{\mathbf{t}}_k(s) &=& \frac{\partial x^{(k)}_1}{\partial s} \mathbf{e}_1^{(k)} + \frac{\partial x^{(k)}_2}{\partial s} \mathbf{e}_2^{(k)} + \frac{\partial x^{(k)}_3}{\partial s}\mathbf{e}_3^{(k)}, \label{eq:filament-arbitrary} \end{eqnarray} where $s$ is the arc length along the filament. 
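As a consistency sketch of this Euler-angle construction (the angles below are arbitrary test values), the matrix $\mathbf{Q}$ whose columns are $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ should be a proper rotation:

```python
import numpy as np

def body_frame(phi, theta, chi):
    """Unit vectors e1, e2, e3 of the body frame for orientation
    p = (phi, theta, chi): rotate by phi about e_z, tilt by theta away
    from the vertical, then spin by chi about e3."""
    a = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
    b = np.array([-np.sin(phi), np.cos(phi), 0.0])
    e1 = np.cos(chi) * a + np.sin(chi) * b
    e2 = -np.sin(chi) * a + np.cos(chi) * b
    e3 = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    return e1, e2, e3

# Q has the body-frame vectors as columns; it should be orthogonal with det +1.
e1, e2, e3 = body_frame(0.3, 1.1, -0.7)
Q = np.column_stack([e1, e2, e3])
```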
Finally there is a frame of interaction, $\{\mathbf{e}_x^{(j \to k)},\mathbf{e}_y^{(j \to k)},\mathbf{e}_z^{(j \to k)}\}$, defined for every pair of filaments $j$ and $k$ such that the unit vector $\mathbf{e}_x^{(j \to k)}$ points from the origin of the body frame of filament $j$ to that of filament $k$. This frame is useful for discussing interactions between three filaments or more, where there could be multiple pairwise interaction frames distinct from the absolute laboratory frame. However, in our discussion of interactions between two filaments, we may assume without loss of generality that the interaction frame is identical to the laboratory frame. Our asymptotic theory is written in terms of dimensionless quantities. We measure lengths in units of $\tilde{L}/2$ and viscosity in units of $\tilde{\mu}$, where $\tilde{L}$ is the integrated length of the filament and $\tilde{\mu}$ is the viscosity of the medium. This is equivalent to taking $L=2$ and $\mu=1$ in dimensionless terms. In these units, the cross-sectional radius of the filament, $\epsilon$, and the centre-to-centre distance between the filaments, $d$, must satisfy {\color{black}$\epsilon \ll 1 < d$} in order for our theory to hold. We also note that, in our notation, the arc length falls in the interval $s\in (-1,+1)$, giving a total dimensionless length $L=2$ for the filament, and placing the midpoint of the filament at $s=0$. In Fig.~\ref{fig:setup} (b), we illustrate a filament geometry of particular interest -- a helical filament with helical radius, $R$, and helical pitch, $p$. It is convenient to introduce the helix angle $\psi = \tan^{-1}(2\pi R/p)$ and the number of helical turns $N=L/\sqrt{(2\pi R)^2+p^2}$. In terms of these, the dimensionless radius of the helix is $R = \sin(\psi)/(\pi N)$ and the pitch is $p = 2\cos(\psi)/N$.
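These geometric relations can be verified numerically; a short sketch with arbitrary example values of $\psi$ and $N$:

```python
import numpy as np

# Dimensionless helix geometry (L = 2): given the helix angle psi and the
# number of turns N, the radius and pitch follow as R = sin(psi)/(pi N) and
# p = 2 cos(psi)/N, so each turn consumes an arc length sqrt((2 pi R)^2 + p^2).
psi, N = np.deg2rad(40.0), 3.0
R = np.sin(psi) / (np.pi * N)
p = 2.0 * np.cos(psi) / N
arc_per_turn = np.sqrt((2.0 * np.pi * R) ** 2 + p ** 2)
```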
We write the centreline of helix $k$ relative to the midpoint of the helical axis, $\mathbf{x}_k$, as \begin{equation} \mathbf{r}_k(s) = R \cos(\pi N s) \mathbf{e}_1^{(k)} + \sigma R \sin(\pi N s) \mathbf{e}_2^{(k)} + s\cos\psi\mathbf{e}_3^{(k)}, \label{eq:centreline} \end{equation} where $s\in (-1,+1)$ is the arc length along the helix and $\sigma=\pm 1$ is the chirality (negative for left-handed helices, positive for right-handed). We can also write the unit tangent vector along the centreline as \begin{equation} \hat{\mathbf{t}}_k(s) = -\sin\psi \sin(\pi N s) \mathbf{e}_1^{(k)} + \sigma \sin\psi \cos(\pi N s) \mathbf{e}_2^{(k)} + \cos\psi\mathbf{e}_3^{(k)}. \label{eq:tangent} \end{equation} The calculations in Section \ref{sec:model} are valid for filaments of arbitrary shape, but in later sections we focus on helical filaments for the purposes of validating and applying our analytical results. \subsection{Hydrodynamic setup} The goal is to find a relationship between the kinematics and the dynamics of the two filaments. This is generally quantified by an extended resistance matrix, which relates the forces and torques exerted by the filaments to their linear and angular velocities, such that \begin{equation} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \\ \mathbf{F}_2 \\ \mathbf{T}_2 \end{pmatrix} = \begin{pmatrix} \mathbf{S}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2) & \mathbf{C}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2) \\ \mathbf{C}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) & \mathbf{S}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) \end{pmatrix} \begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \\ \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix}, \label{eq:defn-resistance-matrix} \end{equation} where the matrix $\mathbf{S}$ stands for self-induced dynamics and the matrix $\mathbf{C}$ represents cross-interactions between the filaments. 
We have made it explicit that the resistance matrix depends on the positions, $\mathbf{x}_j$, and orientations, $\mathbf{p}_j$, of the two filaments. {\color{black} Note that even the matrix $\mathbf{S}$ for self-induced dynamics depends on the position of both filaments, because fluid disturbances induced by the motion of one filament will reflect off the second filament and travel back to the position where they originated.} Because $\mathbf{F}_j$ and $\mathbf{T}_j$ are the forces and torques exerted by the filaments on the fluid, the resistance matrix is positive definite and, by the reciprocal theorem, also symmetric. In particular, this means that $\mathbf{C}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) = \mathbf{C}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2)^T$. Without loss of generality for the two filament case, we may define the laboratory frame to be centred on the first filament, so that $\mathbf{x}_1 = 0$. Thus, the resistance matrix only depends on the directed distance $\mathbf{d} = \mathbf{x}_2 - \mathbf{x}_1$ so that \begin{eqnarray} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \end{pmatrix} &=& \phantom{-}\mathbf{S}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2)\begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \end{pmatrix} + \phantom{-}\mathbf{C}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2)\begin{pmatrix} \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix}, \label{eq:F_T_first_helix} \\ \begin{pmatrix} \mathbf{F}_2 \\ \mathbf{T}_2 \end{pmatrix} &=& \mathbf{S}(-\mathbf{d},\mathbf{p}_2,\mathbf{p}_1)\begin{pmatrix} \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix} + \mathbf{C}(-\mathbf{d},\mathbf{p}_2,\mathbf{p}_1)\begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \end{pmatrix}. \label{eq:F_T_second_helix} \end{eqnarray} If the filaments are slender ($\epsilon\ll 1$), then we may represent the dynamics of filament $k$ by a force density $\mathbf{f}_k(s)$ along its centreline. 
{\color{black} We define an arclength-dependent drag tensor $\mathbf{\Sigma}(s)$ which relates the force density to the relative velocity of the filament centreline through the expression \begin{equation} \mathbf{f}_k(s) = \mathbf{\Sigma}_k(s) \cdot \left[\mathbf{u}(\mathbf{r}_k(s))-\mathbf{u}_\infty(\mathbf{r}_k(s))\right]. \label{eq:defn-force-density-RFT} \end{equation} In Section \ref{sec:evalcoeff} we will return to the drag tensor and explain how to evaluate it using resistive-force theory (RFT) and slender-body theory (SBT). Until then, the derivation of the asymptotic series expansion is independent of which method we use to characterise the drag on an individual filament.} For a rigid filament, the velocity of the centreline is given by the rigid body motion \begin{equation} \mathbf{u}(\mathbf{r}_k(s)) = \mathbf{U}_k + \mathbf{\Omega}_k\times\mathbf{r}_k(s). \label{eq:defn-u-vector-form} \end{equation} To make our notation more compact, we introduce a kinematics vector with six components made through the concatenation of the linear and angular velocities of the filament, i.e.~$(\mathbf{U}_k,\mathbf{\Omega}_k)$. Then, using summation convention, we may write the velocity of the first filament's centreline as \begin{equation} u_i(\mathbf{r}_1(s)) = (\delta_{ij}+\varepsilon_{i,j-3,k}(\mathbf{r}_1(s))_k) (\mathbf{U}_1,\mathbf{\Omega}_1)_j, \label{eq:defn-u-suffix-notation} \end{equation} where the index $j$ is summed over from $1$ to $6$, while the other free indices run from $1$ to $3$ as usual, and the Kronecker delta and Levi-Civita symbol are understood to be identically zero if any index falls outside the normal range $\{1,2,3\}$. Next, we consider the background flow at the position of the first filament, which is nothing more than the flow induced by the second filament. 
{\color{black} At distances much greater than the filament thickness, $\epsilon$, the dominant flow induced by the second filament is the cumulative effect of a distribution of Stokeslets placed along its centreline, and represented by the force density $\mathbf{f}_2(s)$. Hence, we can express the background flow as \begin{equation} \mathbf{u}_\infty(\mathbf{r}_1(s)) = \frac{1}{8\pi\mu} \sdint{\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} \cdot\mathbf{f}_2(s')}, \label{eq:defn-induced-flow} \end{equation} where $\mathbf{R}_d(s,s') = \mathbf{d} + \mathbf{r}_2(s') - \mathbf{r}_1(s)$ is the relative distance between a point $s'$ on the centreline of the second filament and a point $s$ on the centreline of the first filament.} Note that $\mu = 1$ in our dimensionless units, but was included for clarity. {\color{black} Higher-order singularities, such as the source dipoles included in computational studies \cite{Tornberg2004,Maxian2021}, decay at least as fast as the inverse cube of distance, and hence do not contribute to HIs at order $\mathcal{O}(d^{-2})$, which is as far as we go with the asymptotic series expansion in this paper.} To obtain the total hydrodynamic force and torque exerted by the filament, we need to calculate force moments along the length of the filament, so that \begin{equation} \mathbf{F} = \int_{-1}^{+1} \mathbf{f}(s) \mathrm{d}s, \quad \mathbf{T} = \int_{-1}^{+1} \mathbf{r}(s)\times\mathbf{f}(s) \mathrm{d}s. \label{eq:F_T_vector_form} \end{equation} Using the compact notation introduced earlier, we can write an expression for the dynamics vector $(\mathbf{F}_1,\mathbf{T}_1)$ of the first filament as \begin{equation} (\mathbf{F}_1,\mathbf{T}_1)_i = \sint{(\delta_{ij}+\varepsilon_{i-3,kj}(\mathbf{r}_1(s))_k)(\mathbf{f}_1(s))_j}, \label{eq:defn-F-T-suffix-notation} \end{equation} where the index $i$ runs from $1$ to $6$, while the other indices are summed over from $1$ to $3$. 
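As a toy numerical check of these force moments (a sketch using an isotropic stand-in for the drag tensor, not the RFT/SBT operator of this paper), a straight rod of length $L=2$ along $\mathbf{e}_x$ rotating at rate $\Omega$ about the perpendicular axis $\mathbf{e}_z$ exerts zero net force and a torque $T_z = (2/3)\,c\,\Omega$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule along the first axis (written out to avoid
    depending on np.trapz, which was renamed in recent NumPy versions)."""
    dx = np.diff(x)[:, None]
    return np.sum(0.5 * dx * (y[1:] + y[:-1]), axis=0)

# Toy check of F = int f ds and T = int r x f ds on s in (-1, +1):
# a straight rod along e_x rotating about e_z, with isotropic drag density c.
c, Omega = 1.7, 2.0
s = np.linspace(-1.0, 1.0, 2001)
r = np.stack([s, np.zeros_like(s), np.zeros_like(s)], axis=1)  # centreline
u = np.cross(np.array([0.0, 0.0, Omega]), r)                   # rigid rotation, U = 0
f = c * u                                                      # isotropic stand-in drag
F = trapezoid(f, s)
T = trapezoid(np.cross(r, f), s)
```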
\subsection{Asymptotic series formulation} Equations \eqref{eq:defn-force-density-RFT}-\eqref{eq:defn-induced-flow} define a coupled system of equations for the force densities on the two filaments, which we will solve {\color{black} in the regime $d > L =2$}. We write the force distribution along each filament as an asymptotic series expansion \begin{equation} \mathbf{f}_k(s) = \mathbf{f}_k^{(0)}(s) + d^{-1}\mathbf{f}_k^{(1)}(s) + d^{-2}\mathbf{f}_k^{(2)}(s) + \mathcal{O}(d^{-3}), \label{eq:expn-f} \end{equation} with the ultimate goal of calculating series expansions for the self-induced and cross-interaction resistance matrices in Eq.~\eqref{eq:F_T_first_helix}. We can write these as \begin{eqnarray} \mathbf{S}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2) &=& \mathbf{S}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) + d^{-1}\mathbf{S}^{(1)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + d^{-2}\mathbf{S}^{(2)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + \mathcal{O}(d^{-3}), \label{eq:expn-S} \\ \mathbf{C}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2) &=& \mathbf{C}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) + d^{-1}\mathbf{C}^{(1)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + d^{-2}\mathbf{C}^{(2)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + \mathcal{O}(d^{-3}), \label{eq:expn-C} \end{eqnarray} where the matrices at each order only depend on the direction of separation, $\hat{\mathbf{d}}$, with all dependence on the magnitude of separation, $|\mathbf{d}|=d$, captured by the algebraic power of the given order. Because the leading order is given by the limit $d \to \infty$, where the filaments do not know of each other's presence, we deduce that \begin{equation} \mathbf{S}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = \mathbf{S}^{(0)}(\mathbf{p}_1), \quad \mathbf{C}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = \mathbf{0}. 
\label{eq:result-C0} \end{equation} In order to solve Eq.~\eqref{eq:defn-force-density-RFT} as an asymptotic series, we need to expand the flow induced by the second filament in inverse powers of distance. {\color{black} The Stokeslets decay like $1/|\mathbf{R}_d|$, so we first write the magnitude of the relative distance as} \begin{equation} |\mathbf{R}_d| = d\left(1 + \frac{2\hat{\mathbf{d}}\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d} + \frac{|\mathbf{r}_2(s')-\mathbf{r}_1(s)|^2}{d^2} \right)^{1/2}. \end{equation} {\color{black}Because all points on the filament centreline lie within a sphere of diameter $L$ around the centre, we have $|\mathbf{r}_2(s')-\mathbf{r}_1(s)| < L < d$, so we can apply the binomial expansion to get} \begin{eqnarray} \frac{1}{|\mathbf{R}_d|} &=& \frac{1}{d} - \frac{\hat{\mathbf{d}}\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d^2} + \mathcal{O}(d^{-3}),\\ \hat{\mathbf{R}}_d &=& \hat{\mathbf{d}} + \frac{(\mathbf{I} - \hat{\mathbf{d}}\dhatb)\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d} + \mathcal{O}(d^{-2}). 
\end{eqnarray} {\color{black} Note that these binomial expansions are valid for any $d>L$, and higher accuracy can be obtained by including more terms in the series.} Therefore, we can expand the induced flow in Eq.~\eqref{eq:defn-induced-flow} as \begin{equation} u_{\infty,i}(\mathbf{r}_1(s)) = \sdint{\left(d^{-1} J_{ij}(\hat{\mathbf{d}}) + d^{-2}K_{ijp}(\hat{\mathbf{d}})(\mathbf{r}_2(s')-\mathbf{r}_1(s))_p + \mathcal{O}(d^{-3}) \right)(\mathbf{f}_2(s'))_j}, \label{eq:expn-induced-flow} \end{equation} where the second-rank tensor \begin{equation} J_{ij}(\hat{\mathbf{d}}) = \frac{\delta_{ij} + \hat{d}_i\hat{d}_j}{8\pi\mu} \label{eq:defn-J} \end{equation} represents the leading-order Stokeslet induced by the second filament, and the third-rank tensor \begin{equation} K_{ijp}(\hat{\mathbf{d}}) = \frac{\hat{d}_i\delta_{jp} + \hat{d}_j\delta_{ip} - \hat{d}_p\delta_{ij} - 3\hat{d}_i\hat{d}_j\hat{d}_p}{8\pi\mu} \label{eq:defn-K} \end{equation} represents higher-order moments of the force distribution along the second filament. \subsection{Leading-order dynamics} \label{sec:leading-order} The induced flow, Eq.~\eqref{eq:expn-induced-flow}, makes no contribution to Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(1)$. By using Eq.~\eqref{eq:defn-u-suffix-notation} to express the rigid-body motion of the filament, we find that the leading-order force distribution is given by \begin{equation} (\mathbf{f}_1^{(0)}(s))_i = (\mathbf{\Sigma}_1(s))_{ij}(\delta_{jk}+\varepsilon_{j,k-3,l}(\mathbf{r}_1(s))_l) (\mathbf{U}_1,\mathbf{\Omega}_1)_k.
\label{eq:result-f0} \end{equation} Then, by using Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the total force and torque exerted by the filament, and putting the result in the form of Eq.~\eqref{eq:F_T_first_helix}, we find that \begin{equation} S_{ij}^{(0)}(\mathbf{p}_1) = \sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l) (\mathbf{\Sigma}_1(s))_{km}(\delta_{mj}+\varepsilon_{j-3,nm}(\mathbf{r}_1(s))_n)}, \label{eq:result-S0} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed over from $1$ to $3$. Note that the integral depends implicitly on the orientation $\mathbf{p}_1$ of the filament through the filament centreline $\mathbf{r}_1$ and {\color{black} the tensor} $\mathbf{\Sigma}_1$. The self-induced resistance matrix $\mathbf{S}^{(0)}(\mathbf{p}_1)$ can be obtained, for any orientation $\mathbf{p}_1$ of the filament, by applying a change of basis to the resistance matrix expressed in the body frame of the filament, which we denote by \begin{equation} \mathbf{S}_0 = \begin{pmatrix} \mathbf{A} & \textbf{B} \\ \mathbf{B}^T & \mathbf{D} \end{pmatrix} \equiv \mathbf{S}^{(0)}(\mathbf{0}). \label{eq:defn-S0} \end{equation} If $\mathbf{Q}(\mathbf{p}_1)$ is the orthogonal matrix whose columns are the unit vectors $\{\mathbf{e}_1^{(1)},\mathbf{e}_2^{(1)},\mathbf{e}_3^{(1)}\}$ defined in Eqs.~\eqref{eq:bodyframe-A}-\eqref{eq:bodyframe-Z}, then the self-induced resistance matrix for orientation $\mathbf{p}_1$ is \begin{equation} \mathbf{S}^{(0)}(\mathbf{p}_1) = \begin{pmatrix} \mathbf{Q}(\mathbf{p}_1) & \mathbf{0} \\ \mathbf{0} &\mathbf{Q}(\mathbf{p}_1) \end{pmatrix} \begin{pmatrix} \mathbf{A} & \textbf{B} \\ \mathbf{B}^T & \mathbf{D} \end{pmatrix} \begin{pmatrix} \mathbf{Q}(\mathbf{p}_1)^T & \mathbf{0} \\ \mathbf{0} & \mathbf{Q}(\mathbf{p}_1)^T \end{pmatrix}, \label{eq:result-S0(p1)} \end{equation} where we applied the change of basis to each three-by-three block of the resistance matrix.
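The block-wise change of basis can be sketched numerically; the body-frame matrix below is a random symmetric positive-definite stand-in, not an actual filament's resistance matrix:

```python
import numpy as np

def rotate_resistance(S_body, Q):
    """Apply the change of basis Q to each 3x3 block of a 6x6 resistance
    matrix, S(p) = diag(Q, Q) S_body diag(Q, Q)^T."""
    R6 = np.zeros((6, 6))
    R6[:3, :3] = Q
    R6[3:, 3:] = Q
    return R6 @ S_body @ R6.T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
S_body = A @ A.T + 6.0 * np.eye(6)   # symmetric positive-definite stand-in
th = 0.8                             # rotation by th about e_z
Q = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th), np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
S_lab = rotate_resistance(S_body, Q)
```

The transformation preserves symmetry and the eigenvalues, and applying the inverse rotation recovers the body-frame matrix.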
\subsection{First-order correction} Next, we analyse Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(d^{-1})$ using the expansion of the induced flow from Eq.~\eqref{eq:expn-induced-flow}. We find that the first-order correction to the force distribution is given by \begin{equation} (\mathbf{f}_1^{(1)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(\mathbf{f}_2^{(0)}(s'))_k}. \label{eq:expansion-f1} \end{equation} Then, substituting the leading-order force density from Eq.~\eqref{eq:result-f0}, we find that \begin{equation} (\mathbf{f}_1^{(1)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}J_{jk}(\hat{\mathbf{d}})\sdint{(\mathbf{\Sigma}_2(s'))_{kl}(\delta_{lm}+\varepsilon_{l,m-3,n}(\mathbf{r}_2(s'))_n)}(\mathbf{U}_2,\mathbf{\Omega}_2)_m. \label{eq:result-f1} \end{equation} Then, by using Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the total force and torque exerted by the filament, and putting the result in the form of Eq.~\eqref{eq:F_T_first_helix}, we find that \begin{equation} S_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = 0, \label{eq:result-S1} \end{equation} and \begin{multline} C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) =-\sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l)(\mathbf{\Sigma}_1(s))_{km}} \\ \times J_{mn}(\hat{\mathbf{d}})\sdint{(\mathbf{\Sigma}_2(s'))_{np}(\delta_{pj}+\varepsilon_{p,j-3,q}(\mathbf{r}_2(s'))_q)}. \label{eq:dervn-C1} \end{multline} We recognise from Eq.~\eqref{eq:result-S0} that these integrals are the first three columns and rows of the leading-order matrix for the first and second filament, respectively, so we can write the leading-order cross-interaction matrix as \begin{equation} C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) =-S_{ik}^{(0)}(\mathbf{p}_1)J_{kl}(\hat{\mathbf{d}})S_{lj}^{(0)}(\mathbf{p}_2), \label{eq:result-C1} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed over from $1$ to $3$.
We can read this expression from right to left to understand its physical interpretation. At leading order, the second filament induces a Stokeslet flow of strength $(\mathbf{S}^{(0)}(\mathbf{p}_2))_{lj}(\mathbf{U}_2,\mathbf{\Omega}_2)_j$ (with $l\in\{1,2,3\}, j\in\{1,2,...,6\}$), which gets carried over to the position of the first filament by the Oseen tensor $J_{kl}(\hat{\mathbf{d}})/d$. The first filament sees a uniform background flow at leading order and responds to it using its own self-induced resistance matrix $(\mathbf{S}^{(0)}(\mathbf{p}_1))_{ik}$ (with $i\in\{1,2,...,6\},k\in\{1,2,3\}$), as if it were translating with a uniform velocity in the opposite direction to the background flow, hence the minus sign. We note that directionality is lost at this order, because the tensor $J_{ij}(\hat{\mathbf{d}})$, defined in Eq.~\eqref{eq:defn-J}, is invariant under the transformation $\hat{\mathbf{d}} \mapsto - \hat{\mathbf{d}}$. All that matters at this order is the distance $d$ between the two filaments. Furthermore, $\mathbf{C}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)^T = \mathbf{C}^{(1)}(-\hat{\mathbf{d}},\mathbf{p}_2,\mathbf{p}_1)$, so the reciprocal theorem is satisfied at this order. The result can also be extended to non-identical filaments by incorporating information about the filament geometry. We can make this dependence explicit in our notation by writing $\mathbf{S}^{(0)}(\mathbf{p};\mathbf{g})$, where the vector parameter $\mathbf{g}$ encapsulates all information about the filament geometry. For the particular case of helical filaments, note from Eqs.~\eqref{eq:Acomponents-A}-\eqref{eq:Dcomponents-Z} that our dimensionless $S^{(0)}_{ij}$ depends explicitly on the helix angle $\psi$, the number of turns $N$, and implicitly on the slenderness parameter $\epsilon$ through the drag coefficients $c_\perp$ and $c_\parallel$, hence $\mathbf{g} = (\psi,N,\epsilon)$ for a helix.
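The composition in Eq.~\eqref{eq:result-C1}, together with the reciprocity and loss of directionality just discussed, can be checked with a short numerical sketch (the $\mathbf{S}^{(0)}$ matrices below are random symmetric stand-ins, not computed from RFT or SBT):

```python
import numpy as np

def oseen_J(d_hat, mu=1.0):
    """Directional part of the Oseen tensor, J = (I + d d)/(8 pi mu)."""
    return (np.eye(3) + np.outer(d_hat, d_hat)) / (8.0 * np.pi * mu)

def C1(S0_1, S0_2, d_hat):
    """First-order cross-interaction, C^(1) = -S^(0)_1 J S^(0)_2; the
    contraction runs only over the three force indices."""
    return -S0_1[:, :3] @ oseen_J(d_hat) @ S0_2[:3, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); S0_1 = A @ A.T + 6.0 * np.eye(6)
B = rng.standard_normal((6, 6)); S0_2 = B @ B.T + 6.0 * np.eye(6)
d_hat = np.array([1.0, 0.0, 0.0])
C = C1(S0_1, S0_2, d_hat)
```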
Note also that, in our derivation of the dimensionless $\mathbf{S}^{(0)}(\mathbf{p};\mathbf{g})$, we had rescaled lengths by half the filament length, so we would need to add this information back in if we wanted to consider filaments of different lengths. Using tildes to denote dimensional quantities, we can write the leading-order self-induced resistance matrix as \begin{equation} \tilde{\mathbf{S}}^{(0)}(\mathbf{p};\mathbf{g},\tilde{L}) = \frac{\tilde{\mu}\tilde{L}}{2}\begin{pmatrix} \mathbf{I} & 0 \\ 0 & \mathbf{I}\tilde{L}/2 \end{pmatrix} \begin{pmatrix} \mathbf{Q}(\mathbf{p})\mathbf{A}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T & \mathbf{Q}(\mathbf{p})\textbf{B}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T \\ \mathbf{Q}(\mathbf{p})\mathbf{B}(\mathbf{g})^T\mathbf{Q}(\mathbf{p})^T & \mathbf{Q}(\mathbf{p})\mathbf{D}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T \end{pmatrix} \begin{pmatrix} \mathbf{I} & 0 \\ 0 & \mathbf{I}\tilde{L}/2 \end{pmatrix}, \label{eq:S0-general} \end{equation} and also the dimensional cross-interaction matrix as \begin{equation} \tilde{C}^{(1)}_{ij}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2;\mathbf{g}_1,\mathbf{g}_2,\tilde{L}_1,\tilde{L}_2) = - \tilde{S}^{(0)}_{ip}(\mathbf{p}_1;\mathbf{g}_1,\tilde{L}_1)\frac{\left(\delta_{pq}+\hat{d}_p\hat{d}_q\right)}{8\pi \tilde{\mu} \tilde{d}}\tilde{S}^{(0)}_{qj}(\mathbf{p}_2;\mathbf{g}_2,\tilde{L}_2). \label{eq:C1-general} \end{equation} The results in Eqs.~\eqref{eq:S0-general} and \eqref{eq:C1-general} describe in full generality the far-field HIs between two filaments of arbitrary shape and orientation up to order $\mathcal{O}(\tilde{d}^{-1})$. \subsection{Second-order correction} We now begin to analyse Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(d^{-2})$ using the expansion of the induced flow from Eq.~\eqref{eq:expn-induced-flow}.
We find that the second-order correction to the force distribution is given by \begin{multline} (\mathbf{f}_1^{(2)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(\mathbf{f}_2^{(1)}(s'))_k} \\ - (\mathbf{\Sigma}_1(s))_{ij}\sdint{K_{jkp}(\hat{\mathbf{d}})(\mathbf{r}_2(s')-\mathbf{r}_1(s))_p(\mathbf{f}_2^{(0)}(s'))_k}. \label{eq:expansion-f2} \end{multline} The first of these terms will contribute to the self-induced resistance matrix because $\mathbf{f}_2^{(1)}$ is linear in the kinematics of the first filament, while the second of them will contribute to the cross-interaction matrix because $\mathbf{f}_2^{(0)}$ is linear in the kinematics of the second filament. After substituting the first-order force density from Eq.~\eqref{eq:result-f1} into Eq.~\eqref{eq:expansion-f2}, we find that there is a contribution to $\mathbf{f}_1^{(2)}(s)$ of the form \begin{equation} -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(-\mathbf{\Sigma}_2(s'))_{kl}J_{lm}(\hat{\mathbf{d}})} \sddint{(\mathbf{\Sigma}_1(s''))_{mn}(\delta_{np}+\varepsilon_{n,p-3,q}(\mathbf{r}_1(s''))_q)}(\mathbf{U}_1,\mathbf{\Omega}_1)_p. \end{equation} Then, using Eqs.~\eqref{eq:defn-F-T-suffix-notation} and \eqref{eq:F_T_first_helix} to bring the result to its final form, we deduce that \begin{equation} S_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = S_{ik}^{(0)}(\mathbf{p}_1)J_{kl}(\hat{\mathbf{d}}) S_{lm}^{(0)}(\mathbf{p}_2) J_{mn}(\hat{\mathbf{d}}) S_{nj}^{(0)}(\mathbf{p}_1), \label{eq:result-S2} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed from $1$ to $3$. Note that this clearly satisfies the reciprocal theorem because both $\mathbf{S}^{(0)}$ and the Oseen tensor are symmetric. 
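A numerical sketch of this one-reflection composition (again with random symmetric positive-definite stand-ins for the $\mathbf{S}^{(0)}$ matrices) confirms that $\mathbf{S}^{(2)}$ is symmetric and positive semi-definite:

```python
import numpy as np

def S2(S0_1, S0_2, d_hat, mu=1.0):
    """One-reflection correction, S^(2) = S^(0)_1 J S^(0)_2 J S^(0)_1
    (Eq. result-S2); the inner contractions run over the force indices only."""
    J = (np.eye(3) + np.outer(d_hat, d_hat)) / (8.0 * np.pi * mu)
    return S0_1[:, :3] @ J @ S0_2[:3, :3] @ J @ S0_1[:3, :]

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); S0_1 = A @ A.T + 6.0 * np.eye(6)
B = rng.standard_normal((6, 6)); S0_2 = B @ B.T + 6.0 * np.eye(6)
S2_mat = S2(S0_1, S0_2, np.array([0.0, 1.0, 0.0]))
```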
Physically, the result in Eq.~\eqref{eq:result-S2} expresses the fact that the Stokeslet field produced by the first filament propagates with an $\mathcal{O}(d^{-1})$ decay to the position of the second filament, where it produces a disturbance in the force. The $\mathcal{O}(d^{-1})$ perturbation in the force exerted by the second filament gets reflected back to the first filament with the same $\mathcal{O}(d^{-1})$ decay. This generates an $\mathcal{O}(d^{-2})$ disturbance in the dynamics of the first filament that is self-induced (i.e.~proportional to its own kinematics). Similarly, after substituting the leading-order force density from Eq.~\eqref{eq:result-f0} into Eq.~\eqref{eq:expansion-f2}, we find that there is a contribution to $\mathbf{f}_1^{(2)}(s)$ of the form \begin{multline} -(\mathbf{\Sigma}_1(s))_{ij}\sdint{K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_2(s'))_l (\mathbf{\Sigma}_2(s'))_{km}(\delta_{mn}+\varepsilon_{m,n-3,p}(\mathbf{r}_2(s'))_p)}(\mathbf{U}_2,\mathbf{\Omega}_2)_n \\ +(\mathbf{\Sigma}_1(s))_{ij}K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_1(s))_l\sdint{(\mathbf{\Sigma}_2(s'))_{km}(\delta_{mn}+\varepsilon_{m,n-3,p}(\mathbf{r}_2(s'))_p)}(\mathbf{U}_2,\mathbf{\Omega}_2)_n. \label{eq:contrib_C2} \end{multline} We introduce the notation \begin{equation} P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2) = \sdint{K_{ikl}(\hat{\mathbf{d}})(\mathbf{r}_2(s'))_l (\mathbf{\Sigma}_2(s'))_{km}(\delta_{mj}+\varepsilon_{m,j-3,n}(\mathbf{r}_2(s'))_n)} \label{eq:defn-P} \end{equation} for the second-rank tensor appearing in Eq.~\eqref{eq:contrib_C2}, and rewrite this contribution as \begin{equation} \left[-(\mathbf{\Sigma}_1(s))_{ij}P_{jn}(\hat{\mathbf{d}},\mathbf{p}_2) +(\mathbf{\Sigma}_1(s))_{ij}K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_1(s))_l S^{(0)}_{kn}(\mathbf{p}_2)\right](\mathbf{U}_2,\mathbf{\Omega}_2)_n \end{equation} with the help of Eq.~\eqref{eq:result-S0}. 
Finally, we integrate the force density as per Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the correction to the total force and torque due to the kinematics of the second filament. Using the fact that $K_{jkl}(\hat{\mathbf{d}}) = K_{kjl}(\hat{\mathbf{d}})$ (follows directly from the definition in Eq.~\eqref{eq:defn-K}), we deduce that the $\mathcal{O}(d^{-2})$ correction to the cross-interaction matrix is \begin{equation} C_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = -S_{ik}^{(0)}(\mathbf{p}_1)P_{kj}(\hat{\mathbf{d}},\mathbf{p}_2) + P^T_{ik}(\hat{\mathbf{d}},\mathbf{p}_1)S_{kj}^{(0)}(\mathbf{p}_2), \label{eq:result-C2} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but $k$ is summed from $1$ to $3$. Note that this also satisfies the reciprocal theorem, according to which $\mathbf{C}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)^T = \mathbf{C}(-\hat{\mathbf{d}},\mathbf{p}_2,\mathbf{p}_1)$ because $P_{ij}(-\hat{\mathbf{d}},\mathbf{p}_2)=-P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2)$ (follows directly from the definitions of $K_{ijp}$ and $P_{ij}$ in Eqs.~\eqref{eq:defn-K} and \eqref{eq:defn-P}, respectively). The final result for $C_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$, given by Eq.~\eqref{eq:result-C2}, involves a new quantity that we have not calculated explicitly yet -- the tensor $P_{ij}$, defined in Eq.~\eqref{eq:defn-P}. In contrast, the expressions for $C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$ and $S_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$ (Eqs.~\eqref{eq:result-C1} and \eqref{eq:result-S2}, respectively) have the advantage that they involve only the leading-order resistance matrices $S_{ij}^{(0)}(\mathbf{p}_1)$ and $S_{ij}^{(0)}(\mathbf{p}_2)$. These can be easily calculated {\color{black} from RFT or SBT} since they are nothing more than the resistance matrix for an isolated filament. 
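The structure of Eq.~\eqref{eq:result-C2} can be sketched numerically by evaluating $P_{ij}$ directly from its definition, with random stand-ins for the force-moment integrals $M_{lkj}$; the antisymmetry $P_{ij}(-\hat{\mathbf{d}},\mathbf{p}) = -P_{ij}(\hat{\mathbf{d}},\mathbf{p})$ and the reciprocal theorem then follow:

```python
import numpy as np

def K_tensor(d_hat, mu=1.0):
    """Third-rank tensor K_{ikl} from Eq. (defn-K); odd under d -> -d."""
    d, I = d_hat, np.eye(3)
    return (np.einsum('i,kl->ikl', d, I) + np.einsum('k,il->ikl', d, I)
            - np.einsum('l,ik->ikl', d, I)
            - 3.0 * np.einsum('i,k,l->ikl', d, d, d)) / (8.0 * np.pi * mu)

def P_tensor(d_hat, M):
    """P_ij = K_{ikl}(d) M_{lkj}, with M a 3x3x6 force-moment tensor."""
    return np.einsum('ikl,lkj->ij', K_tensor(d_hat), M)

def C2(S0_1, S0_2, M1, M2, d_hat):
    """Second-order cross-interaction, Eq. (result-C2)."""
    return (-S0_1[:, :3] @ P_tensor(d_hat, M2)
            + P_tensor(d_hat, M1).T @ S0_2[:3, :])

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)); S0_1 = A @ A.T + 6.0 * np.eye(6)
B = rng.standard_normal((6, 6)); S0_2 = B @ B.T + 6.0 * np.eye(6)
M1 = rng.standard_normal((3, 3, 6))
M2 = rng.standard_normal((3, 3, 6))
d_hat = np.array([0.0, 0.0, 1.0])
C2_mat = C2(S0_1, S0_2, M1, M2, d_hat)
```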
Our final task is to show that the tensor $P_{ij}(\hat{\mathbf{d}},\mathbf{p}_1)$ can also be calculated easily from the leading-order resistance matrix $S_{ij}^{(0)}(\mathbf{p}_1)$ and two minor follow-up calculations. \subsection{Force moments for second-order correction} The tensor $P_{ij}$ defined in Eq.~\eqref{eq:defn-P} is constructed in a similar way to the last three rows of the leading-order resistance matrix from Eq.~\eqref{eq:result-S0}. If we introduce the quantity \begin{equation} M_{lkj}(\mathbf{p}_2) = \sint{(\mathbf{r}_2(s))_l (\mathbf{\Sigma}_2(s))_{km}(\delta_{mj}+\varepsilon_{j-3,nm}(\mathbf{r}_2(s))_n)}, \label{eq:defn-M} \end{equation} which represents force moments along the centreline of a filament with orientation $\mathbf{p}_2$, then what we want to compute is \begin{equation} P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2) = K_{ikl}(\hat{\mathbf{d}})M_{lkj}(\mathbf{p}_2), \end{equation} but we already have an expression for the last three rows ($4\leq i \leq 6$) of the resistance matrix \begin{equation} S_{ij}^{(0)}(\mathbf{p}_2) = \varepsilon_{i-3,lk}M_{lkj}(\mathbf{p}_2), \end{equation} in the laboratory frame, Eq.~\eqref{eq:result-S0(p1)}. So far we have assumed that the laboratory and interaction frame are identical, and we have only talked about changing basis from the body frame to the laboratory frame, Eq.~\eqref{eq:result-S0(p1)}. This was convenient because $S_{ij}^{(0)}(\mathbf{p}_2)$ has a simple representation in the body frame of the second filament, since the orientation of the filament is $\mathbf{p}_2 = \mathbf{0}$ relative to this frame. But the natural frame in which to describe the tensor $K_{ikl}(\hat{\mathbf{d}})$ is the interaction frame where $\hat{\mathbf{d}} = \mathbf{e}_x^{(1\to2)}$, as shown in Fig.~\ref{fig:setup} (b). 
In this frame, the tensor $K_{ijp}(\hat{\mathbf{d}})$ defined in Eq.~\eqref{eq:defn-K} has components \begin{equation} K_{1kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} -2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, ~ K_{2kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, ~ K_{3kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}. \end{equation} Hence, the tensor $P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2)$ can be written in the interaction frame as \begin{multline} P_{ij}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi}\delta_{i1}(-2M_{11j}(\mathbf{p}_2')+M_{22j}(\mathbf{p}_2')+M_{33j}(\mathbf{p}_2')) \\ + \frac{1}{8\pi}\delta_{i2}(-M_{12j}(\mathbf{p}_2')+M_{21j}(\mathbf{p}_2')) + \frac{1}{8\pi}\delta_{i3}(-M_{13j}(\mathbf{p}_2')+M_{31j}(\mathbf{p}_2')), \label{eq:dervn-Pij} \end{multline} whereas the last three rows ($4\leq i \leq 6$) of the resistance matrix are \begin{multline} S_{ij}^{(0)}(\mathbf{p}_2') = \delta_{i4}(M_{23j}(\mathbf{p}_2')-M_{32j}(\mathbf{p}_2')) \\ + \delta_{i5}(-M_{13j}(\mathbf{p}_2')+M_{31j}(\mathbf{p}_2')) + \delta_{i6}(M_{12j}(\mathbf{p}_2')-M_{21j}(\mathbf{p}_2')). \label{eq:dervn-Sij} \end{multline} Note that we have used the notation $\mathbf{p}_2'$ to indicate the orientation of the filament relative to the interaction frame, so the tensors $\mathbf{M}(\mathbf{p}_2')$ and $\mathbf{S}^{(0)}(\mathbf{p}_2')$ are also to be expressed in these coordinates. By comparing the two expressions in Eqs.~\eqref{eq:dervn-Pij} and \eqref{eq:dervn-Sij}, we deduce that \begin{equation} P_{2j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = -\frac{S_{6j}^{(0)}(\mathbf{p}_2')}{8\pi}, \quad P_{3j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{S_{5j}^{(0)}(\mathbf{p}_2')}{8\pi}, \label{eq:result-P-tworows} \end{equation} so we get the last two rows of $P_{ij}$ for free.
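The identification of rows two and three of $P_{ij}$ with rows of $\mathbf{S}^{(0)}$ can be confirmed by contracting the interaction-frame components of $K$ with an arbitrary moment tensor $M_{lkj}$. A short numpy sketch (our own consistency check, with a random stand-in for $M$):

```python
import numpy as np

# Check Eq. (result-P-tworows): contracting the interaction-frame components
# of K_{ikl} with an arbitrary moment tensor M_{lkj} gives rows 2 and 3 of
# 8*pi*P equal to minus row 6 and plus row 5 of S^(0), respectively.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3, 6))  # random stand-in for M_{lkj}

# Components of 8*pi*K_{ikl}(e_x) in the interaction frame
K = np.zeros((3, 3, 3))
K[0] = np.diag([-2.0, 1.0, 1.0])
K[1] = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
K[2] = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]

# 8*pi*P_ij = K_{ikl} M_{lkj}  (summed over k and l)
P8pi = np.einsum('ikl,lkj->ij', K, M)

# Last three rows of S^(0): S_{i+3,j} = eps_{ilk} M_{lkj}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[1, 0, 2] = eps[2, 1, 0] = -1.0
S_rot = np.einsum('ilk,lkj->ij', eps, M)  # rows 4..6 of S^(0)

assert np.allclose(P8pi[1], -S_rot[2])  # P_2j = -S_6j / (8*pi)
assert np.allclose(P8pi[2], +S_rot[1])  # P_3j = +S_5j / (8*pi)
```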
To complete the top row of $P_{ij}$ we simply need to calculate the quantity \begin{equation} P_{1j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi}(-2M_{11j}(\mathbf{p}_2')+M_{22j}(\mathbf{p}_2')+M_{33j}(\mathbf{p}_2')), \label{eq:dervn-Pij-1row} \end{equation} which is more easily calculated in the body frame of the filament and then transferred to the interaction frame by a change of basis. {\color{black} Everything we have done so far is valid for filaments of arbitrary shape. Below, we go into more detail about the evaluation of the new row $P_{1j}$ for helical filaments, which will be used later for the validation and application of our theory. In the body frame of a helical filament, where $\mathbf{p}_2' \to \mathbf{0}$, we denote the right-hand side of Eq.~\eqref{eq:dervn-Pij-1row} by} \begin{equation} (\mathbf{m}_0)_j = -2M_{11j}(\mathbf{0})+M_{22j}(\mathbf{0})+M_{33j}(\mathbf{0}). \label{eq:defn-m0} \end{equation} {\color{black} The helical centreline introduced in Eq.~\eqref{eq:centreline} is symmetric under a rotation by angle $\pi$ around the unit vector $\mathbf{e}_1$. Due to this symmetry, the vector $\mathbf{m}_0$ has vanishing components along the $\mathbf{e}_2$ and $\mathbf{e}_3$ directions, regardless of the method (RFT or SBT) by which we choose to evaluate it, meaning that} \begin{equation} (\mathbf{m}_0)_{i} = (\mathcal{M}_{1} \mathbf{e}_1)_i, ~ (\mathbf{m}_0)_{i+3} = (\mathcal{M}_{4} \mathbf{e}_1)_i, \label{eq:dervn-m0} \end{equation} for index $i = 1,2,3$. 
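The vanishing of the transverse components of $\mathbf{m}_0$ is straightforward to verify numerically. The sketch below is our own illustration, under the following assumptions: a unit-speed helical centreline $\mathbf{r}(s) = (R\cos ks, R\sin ks, s\cos\psi)$ on $s\in[-1,1]$, whose $C_2$ symmetry axis is $\mathbf{e}_1 = \mathbf{e}_x$, together with the standard RFT drag tensor $\mathbf{\Sigma} = c_\perp(\mathbf{I}-\hat{\mathbf{t}}\hat{\mathbf{t}}) + c_\parallel\hat{\mathbf{t}}\hat{\mathbf{t}}$; the parameter values are representative, not taken from the text.

```python
import numpy as np

# Evaluate the force moments M_{lkj} for a helix by Gauss-Legendre quadrature
# with the RFT drag tensor, and check that m0_j = -2 M_{11j} + M_{22j} + M_{33j}
# has vanishing components along e2 and e3 (entries 2, 3, 5, 6 of the 6-vector),
# as required by the C2 symmetry about e1.
psi, N, eps_slender = 0.5, 2.5, 1e-2          # helix angle, turns, slenderness
k = np.pi * N                                  # wavenumber on s in [-1, 1]
R = np.sin(psi) / k                            # helix radius (unit-speed centreline)
c_perp = 4 * np.pi / (np.log(2 / eps_slender) + 0.5)
c_par = 2 * np.pi / (np.log(2 / eps_slender) - 0.5)

s, w = np.polynomial.legendre.leggauss(400)    # quadrature nodes/weights on [-1, 1]
r = np.stack([R * np.cos(k * s), R * np.sin(k * s), np.cos(psi) * s])   # (3, n)
t = np.stack([-R * k * np.sin(k * s), R * k * np.cos(k * s),
              np.cos(psi) * np.ones_like(s)])                           # unit tangent

eps3 = np.zeros((3, 3, 3))                     # Levi-Civita symbol
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[1, 0, 2] = eps3[2, 1, 0] = -1.0

tt = np.einsum('is,js->sij', t, t)
Sigma = c_perp * (np.eye(3) - tt) + c_par * tt  # RFT drag tensor, shape (n, 3, 3)

# B_{mj}(s) = delta_{mj} for j = 1..3 and eps_{j-3,nm} r_n(s) for j = 4..6
B = np.zeros((len(s), 3, 6))
B[:, :, :3] = np.eye(3)
B[:, :, 3:] = np.einsum('jnm,ns->smj', eps3, r)

M = np.einsum('s,ls,skm,smj->lkj', w, r, Sigma, B)  # force moments M_{lkj}
m0 = -2 * M[0, 0] + M[1, 1] + M[2, 2]               # 6-vector m0
assert np.max(np.abs(m0[[1, 2, 4, 5]])) < 1e-8 * (1 + np.max(np.abs(m0)))
```

Because the Gauss-Legendre nodes are symmetric about $s=0$, the cancellation imposed by the $C_2$ symmetry is exact to rounding error.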
Hence, when we move this result to the interaction frame of two helices, we obtain the final result for the matrix $\mathbf{P}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2')$ \begin{equation} \mathbf{P}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi} \begin{pmatrix} \mathcal{M}_{1} \alpha(\mathbf{p}_2') & \mathcal{M}_{1} \beta(\mathbf{p}_2') & \mathcal{M}_{1} \gamma(\mathbf{p}_2') & \mathcal{M}_{4} \alpha(\mathbf{p}_2') & \mathcal{M}_{4} \beta(\mathbf{p}_2') & \mathcal{M}_{4} \gamma(\mathbf{p}_2') \\ -S_{61}^{(0)}(\mathbf{p}_2') & -S_{62}^{(0)}(\mathbf{p}_2') & -S_{63}^{(0)}(\mathbf{p}_2') & -S_{64}^{(0)}(\mathbf{p}_2') & -S_{65}^{(0)}(\mathbf{p}_2') & -S_{66}^{(0)}(\mathbf{p}_2') \\ S_{51}^{(0)}(\mathbf{p}_2') & S_{52}^{(0)}(\mathbf{p}_2') & S_{53}^{(0)}(\mathbf{p}_2') & S_{54}^{(0)}(\mathbf{p}_2') & S_{55}^{(0)}(\mathbf{p}_2') & S_{56}^{(0)}(\mathbf{p}_2') \end{pmatrix}, \label{eq:result-Pmatrix} \end{equation} where $\alpha(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_x^{(1\to2)}$, $\beta(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_y^{(1\to2)}$ and $\gamma(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_z^{(1\to2)}$ are the components of $\mathbf{e}_1^{(2)}$ relative to the interaction frame of filaments $1$ and $2$. If the interaction frame does not coincide with the laboratory frame (e.g.~if there are more than two filaments), this result would have to be moved to the laboratory frame by a change of basis on each three-by-three block. {\color{black} \subsection{Evaluating coefficients in the series expansion} \label{sec:evalcoeff} The first and second-order coefficients in the series expansion only require the leading-order resistance matrix, $\mathbf{S}^{(0)}$, and the force moment, $\mathbf{m}_0$, which themselves only depend on the shape of the filament, $\mathbf{r}(s)$, and the drag tensor, $\mathbf{\Sigma}(s)$. We now explain how to evaluate these coefficients using both resistive-force theory (RFT) and slender-body theory (SBT). 
The former has the advantage of being analytically tractable but only logarithmically accurate, while the latter is algebraically accurate but requires numerical computation. In RFT \cite{Hancock1953,Gray1955,Lighthill1996_helical}, the drag tensor depends only on the local tangent to the filament,} \begin{equation} \mathbf{\Sigma}_{\mathrm{RFT}}(s) = c_\perp[\mathbf{I} -\hat{\mathbf{t}}(s)\hat{\mathbf{t}}(s)]+c_\parallel\hat{\mathbf{t}}(s)\hat{\mathbf{t}}(s), \label{eq:defn-Sigma} \end{equation} and quantifies the {\color{black}anisotropic} drag on the filament through the perpendicular, $c_\perp$, and parallel, $c_\parallel$, drag coefficients \begin{equation} c_\perp = \frac{4\pi\mu}{\ln(2/\epsilon)+1/2}, \quad c_\parallel = \frac{2\pi\mu}{\ln(2/\epsilon)-1/2}. \end{equation} Note that, for clarity, we have included the dimensionless viscosity $\mu=1$ in the above definition of the drag coefficients. For the special case of a helical filament, we {\color{black}use RFT to derive} analytical expressions for $\mathbf{S}^{(0)}$ in Appendix \ref{app:RFT} and for $\mathbf{m}_0$ in Appendix \ref{app:forcemoments_RFT}. {\color{black} In SBT \cite{Cox1970,Lighthill1976,Johnson1980}, on the other hand, the relationship between force density and velocity is non-local, so we cannot express the drag tensor as a local object. The value of $\mathbf{\Sigma}_{\mathrm{SBT}}(s)$ at each point $s$ along the centreline depends on the specifics of the motion relative to the shape of the filament. However, we do not need to know the general form of $\mathbf{\Sigma}_{\mathrm{SBT}}(s)$ in order to evaluate the coefficients in our asymptotic series expansion using SBT. An inspection of Eqs.~\eqref{eq:result-S0} and \eqref{eq:defn-P} reveals that the drag tensor always appears contracted with the six modes of rigid-body motion that are available to our rigid filaments, in the form $\Sigma_{ik}(s)(\delta_{kj}+\varepsilon_{j-3,lk}r_l(s))$.
Therefore, we only need to know the SBT drag tensor as it pertains to rigid-body motion, \begin{equation} \mathbf{\Sigma}_{\mathrm{SBT}}(s)\cdot (\mathbf{U} + \mathbf{\Omega}\times\mathbf{r}(s)) \equiv \mathbf{f}_{\mathrm{SBT}}(s;\mathbf{U},\mathbf{\Omega}), \end{equation} where $\mathbf{f}_{\mathrm{SBT}}(s;\mathbf{U},\mathbf{\Omega})$ is the SBT force density along a filament with kinematics $(\mathbf{U},\mathbf{\Omega})$. By considering each mode of rigid-body motion individually, we can write \begin{equation} \Sigma_{ik}(s)(\delta_{kj}+\varepsilon_{j-3,lk}r_l(s)) \equiv (\mathbf{f}^{(j)}_{\mathrm{SBT}}(s))_i, \label{eq:defn-fSBT} \end{equation} where $\mathbf{f}^{(j)}_{\mathrm{SBT}}(s)$ is now the force density computed from SBT for the $j$th mode of rigid body motion ($j=1,2,3$ for translations, $j=4,5,6$ for rotations). From Eqs.~\eqref{eq:result-S0} and \eqref{eq:defn-fSBT}, we get the leading-order resistance matrix, $\mathbf{S}^{(0)}$, from SBT \begin{equation} (\mathbf{S}^{(0)}_{\mathrm{SBT}})_{ij} = \sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l)(\mathbf{f}^{(j)}_{\mathrm{SBT}}(s))_k}. \end{equation} Similarly, from Eqs.~\eqref{eq:defn-M}, \eqref{eq:defn-m0} and \eqref{eq:defn-fSBT}, we find the SBT equivalent of $\mathbf{m}_0$ as \begin{equation} (\mathbf{m}_0^{\mathrm{SBT}})_j = \sint{\mathbf{r}(s) \cdot (\mathbf{I} -3\mathbf{e}_x^{(1 \to 2)}\mathbf{e}_x^{(1 \to 2)})\cdot \mathbf{f}^{(j)}_{\mathrm{SBT}}(s)}. \label{eq:m0-SBT} \end{equation} Evaluating the force density $\mathbf{f}^{(j)}_{\mathrm{SBT}}(s)$ does require a numerical computation but for a rigid filament this only needs to be done once, in the body frame of the filament, and then modified with a change of basis if the filament changes orientation over time. The SBT computation consists of solving Eq.~\eqref{eq:COMP-method} numerically, exactly as described in Section \ref{sec:comp-method}, but without the interaction term $\mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}]$. 
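To make the evaluation procedure concrete, the following sketch (our own illustration, with representative rather than quoted parameter values) assembles the leading-order resistance matrix $\mathbf{S}^{(0)}$ of a helix by quadrature using the RFT drag tensor; the SBT version would follow the same pattern with the integrand built from the precomputed force densities $\mathbf{f}^{(j)}_{\mathrm{SBT}}(s)$.

```python
import numpy as np

# Leading-order resistance matrix S^(0) of a helix from RFT:
#   S0_ij = \int B_ki(s) Sigma_km(s) B_mj(s) ds,
# where B_mj(s) = delta_mj (j = 1..3) or eps_{j-3,nm} r_n(s) (j = 4..6)
# projects onto the six rigid-body modes.  Assumed centreline:
# r(s) = (R cos(ks), R sin(ks), s cos(psi)) for s in [-1, 1].
psi, N, eps_slender = 0.5, 2.75, 1e-2
k = np.pi * N
R = np.sin(psi) / k
c_perp = 4 * np.pi / (np.log(2 / eps_slender) + 0.5)
c_par = 2 * np.pi / (np.log(2 / eps_slender) - 0.5)

s, w = np.polynomial.legendre.leggauss(400)
r = np.stack([R * np.cos(k * s), R * np.sin(k * s), np.cos(psi) * s])
t = np.stack([-R * k * np.sin(k * s), R * k * np.cos(k * s),
              np.cos(psi) * np.ones_like(s)])

eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[1, 0, 2] = eps3[2, 1, 0] = -1.0

tt = np.einsum('is,js->sij', t, t)
Sigma = c_perp * (np.eye(3) - tt) + c_par * tt       # RFT drag tensor, (n, 3, 3)

B = np.zeros((len(s), 3, 6))                          # rigid-body projector
B[:, :, :3] = np.eye(3)
B[:, :, 3:] = np.einsum('jnm,ns->smj', eps3, r)

S0 = np.einsum('s,ski,skm,smj->ij', w, B, Sigma, B)   # 6x6 resistance matrix

assert np.allclose(S0, S0.T)                          # symmetric...
assert np.min(np.linalg.eigvalsh(S0)) > 0             # ...and positive definite
```

Writing the integrand as $\mathbf{B}^T\mathbf{\Sigma}\mathbf{B}$ makes the symmetry and positive definiteness of $\mathbf{S}^{(0)}$ manifest, since $\mathbf{\Sigma}_{\mathrm{RFT}}$ is symmetric positive definite at every point.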
In the following sections, when we refer to the asymptotic theory with RFT or SBT coefficients, we mean that we have used the series expansion for the extended resistance matrix from Eqs.~\eqref{eq:expn-S} and \eqref{eq:expn-C}, with coefficients up to second order given by Eqs.~\eqref{eq:result-C0}, \eqref{eq:result-S0}, \eqref{eq:result-S1}, \eqref{eq:result-C1}, \eqref{eq:result-S2} and \eqref{eq:result-C2}, but these coefficients have been evaluated either analytically with RFT or computationally with SBT. The RFT calculations for the matrix $\mathbf{S}^{(0)}$ and the vector $\mathbf{m}_0$ are given in Appendices \ref{app:RFT} and \ref{app:forcemoments_RFT}, respectively, while the computational method for SBT is described in Section \ref{sec:comp-method} (except that the interaction term $\mathcal{J}$ is not included in the SBT computation for a single filament).} \newpage \section{Validation of asymptotic model} \label{sec:validation} We will now verify {\color{black} the asymptotic theory with RFT/SBT coefficients} against numerical simulations {\color{black} based on SBT}. In this section, we focus on filaments with a helical centreline, which are very common in microscopic scale flows (e.g.~the helical flagellar filaments of bacteria, helical microbots actuated by external magnetic fields, elongated microorganisms with a spiral body shape). \subsection{Computational method for hydrodynamic interactions} \label{sec:comp-method} In order to validate our asymptotic model, we implement Johnson's slender-body theory \cite{Johnson1980,thesisKoens} with additional interactions between the filaments \cite{Tornberg2004}.
In our computational method, we replace Eq.~\eqref{eq:defn-force-density-RFT} with the following relationship between the force density and velocity along the filament centreline, \begin{equation} 8\pi\mu\mathbf{u}(\mathbf{r}_1(s)) = \mathcal{L}[\mathbf{f}_1(s)] + \mathcal{K}[\mathbf{f}_1(s')] + \mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}], \label{eq:COMP-method} \end{equation} where the first operator represents local effects \begin{equation} \mathcal{L}[\mathbf{f}_1(s)] = \left[2\left(\ln\left(\frac{2}{\epsilon}\right)+\frac{1}{2}\right)\mathbf{I} + 2\left(\ln\left(\frac{2}{\epsilon}\right)-\frac{3}{2}\right)\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)\right]\cdot \mathbf{f}_1(s), \label{eq:COMP-local} \end{equation} and the second operator represents non-local effects \begin{multline} \mathcal{K}[\mathbf{f}_1(s')] = \sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_0(s,s')\hat{\mathbf{R}}_0(s,s')}{|\mathbf{R}_0(s,s')|}-\frac{\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)}{|s'-s|}\right]\cdot \mathbf{f}_1(s')} \\ + \left(\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)\right)\cdot\sdint{\frac{\mathbf{f}_1(s')-\mathbf{f}_1(s)}{|s'-s|}}, \label{eq:COMP-nonlocal} \end{multline} where $\mathbf{R}_0(s,s') = \mathbf{r}_1(s)-\mathbf{r}_1(s')$, and we have split the terms in such a way that both integrals have a removable singularity at $s'=s$. Finally, the third operator represents interactions between the two filaments {\color{black} as previously modelled by Tornberg and Shelley \cite{Tornberg2004}}, \begin{equation} \mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}] = \sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} + \frac{\epsilon^2}{2}\frac{\mathbf{I}-3\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|^3}\right]\cdot\mathbf{f}_2(s')}, \label{eq:COMP-interaction} \end{equation} where $\mathbf{R}_d(s,s') = \mathbf{d} +\mathbf{r}_2(s')-\mathbf{r}_1(s)$. 
{\color{black} In our computational method, which was implemented for purposes beyond the present study, we choose to include the source dipole term that was left out of our asymptotic theory, Eq.~\eqref{eq:defn-induced-flow}, because it would have contributed to the asymptotic series expansion only at order $\mathcal{O}(d^{-3})$. Note that we have used the same prefactor of $1/2$ for the dipole term as in \cite{Tornberg2004}, while a more recent study based on the Rotne-Prager-Yamakawa kernel and matched asymptotics uses a larger prefactor of $e^3/24$ \cite{Maxian2021}.} {\color{black} We solve Eqs.~\eqref{eq:COMP-method}-\eqref{eq:COMP-interaction} numerically using a spectral method based on Legendre polynomials as in Ref.~\cite{thesisKoens}. Other studies have chosen to solve these integral equations by regularizing the integral operator $\mathcal{K}$ and approximating its arguments with piecewise polynomials \cite{Tornberg2004}, or more recently using a spectral method based on Chebyshev polynomials \cite{Maxian2021}. In the present study, the choice of Legendre polynomials as a set of basis functions is motivated by their being eigenfunctions of the second integral in the non-local operator $\mathcal{K}$, meaning that \begin{equation} \sdint{\frac{P_n(s')-P_n(s)}{|s'-s|}} = E_n P_n(s), \end{equation} with eigenvalues $E_0=0$ and \begin{equation} E_n = -2\sum_{j=1}^{n}\frac{1}{j}, \end{equation} for $n>0$ \cite{thesisGotz}. We discretize the force density and velocity along the filaments as \begin{equation} \mathbf{u}(\mathbf{r}_k(s)) = \sum_{n=0}^\infty \mathbf{u}_k^{(n)}P_n(s), \quad \mathbf{f}_k(s) = \sum_{n=0}^\infty \mathbf{f}_k^{(n)}P_n(s), \end{equation} where the velocity coefficients $\mathbf{u}_k^{(n)}$ are known from the prescribed kinematics, and the force coefficients $\mathbf{f}_k^{(n)}$ must be solved for. 
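The eigenfunction property that motivates the choice of Legendre basis can be verified directly by quadrature, splitting the integral at the removable singularity as suggested above. A small self-contained check (our own illustration):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Verify the Legendre eigenfunction property used by the spectral method:
#   \int_{-1}^{1} (P_n(s') - P_n(s)) / |s' - s| ds' = E_n P_n(s),
# with E_0 = 0 and E_n = -2 * sum_{j=1}^{n} 1/j for n > 0.  The integrand has
# a removable singularity at s' = s, so we split the integral there and use
# Gauss-Legendre quadrature on each half (nodes are strictly interior).
def legendre_Pn(n, x):
    return L.legval(x, [0.0] * n + [1.0])

def apply_operator(n, s, npts=200):
    x, w = np.polynomial.legendre.leggauss(npts)
    total = 0.0
    for a, b in [(-1.0, s), (s, 1.0)]:
        sp = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes to (a, b)
        wp = 0.5 * (b - a) * w
        total += np.sum(wp * (legendre_Pn(n, sp) - legendre_Pn(n, s))
                        / np.abs(sp - s))
    return total

n, s = 4, 0.3
E_n = -2 * sum(1.0 / j for j in range(1, n + 1))
assert abs(apply_operator(n, s) - E_n * legendre_Pn(n, s)) < 1e-6
```

For $n=1$ the relation can be checked by hand: the integrand reduces to $\mathrm{sign}(s'-s)$, giving $(1-s)-(s+1) = -2s = E_1 P_1(s)$ with $E_1 = -2$.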
After projecting Eq.~\eqref{eq:COMP-method} onto the space of Legendre polynomials and making use of the orthogonality condition \begin{equation} \sint{P_n(s)P_m(s)} = \frac{2\delta_{mn}}{2n+1}, \end{equation} we recover the following system of equations relating the velocity and the force coefficients \begin{multline} 8\pi\mu \mathbf{u}_1^{(n)} = \left[2\left(\ln\left(\frac{2}{\epsilon}\right)+\frac{1}{2}\right) + E_n \right] \mathbf{f}_1^{(n)} \\ + \frac{2n+1}{2} \sum_{m=0}^{\infty} \Bigg[ \left[2\left(\ln\left(\frac{2}{\epsilon}\right)-\frac{3}{2}\right) + E_m \right]\mathbf{M}_{\parallel}^{(n,m)}\mathbf{f}_1^{(m)} + \mathbf{M}_{0}^{(n,m)}\mathbf{f}_1^{(m)} + \mathbf{M}_{d}^{(n,m)}\mathbf{f}_2^{(m)}\Bigg], \label{eq:COMP-method-projected} \end{multline} where the matrices $\mathbf{M}_{\parallel}^{(n,m)}$, $\mathbf{M}_{0}^{(n,m)}$ and $\mathbf{M}_{d}^{(n,m)}$ are given by \begin{eqnarray} \mathbf{M}_{\parallel}^{(n,m)} &=& \sint{\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)P_n(s)P_m(s)}, \\ \mathbf{M}_{0}^{(n,m)} &=& \sint{\sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_0(s,s')\hat{\mathbf{R}}_0(s,s')}{|\mathbf{R}_0(s,s')|}-\frac{\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)}{|s'-s|}\right]P_n(s)P_m(s')}}, \\ \mathbf{M}_{d}^{(n,m)} &=& \sint{\sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} + \frac{\epsilon^2}{2}\frac{\mathbf{I}-3\hat{\mathbf{R}}_d\hat{\mathbf{R}}_d}{|\mathbf{R}_d(s,s')|^3}\right]P_n(s)P_m(s')}}. \end{eqnarray} The second of these matrices involves a removable singularity at $s'=s$, but the quadrature integration methods readily available in MATLAB can evaluate this integral accurately so long as the singular points lie on the boundaries of the integration domain. Therefore, when computing the matrices $\mathbf{M}_{0}^{(n,m)}$ in MATLAB we split the double integral into two parts - $s\in[-1,+1]$, $s'\in[-1,s]$ and $s\in[-1,+1]$, $s'\in[s,+1]$. 
The infinite system of linear equations from Eq.~\eqref{eq:COMP-method-projected} is truncated to $m \leq N_{\mathrm{Legendre}}$ modes and inverted numerically, in order to find the force density coefficients $\mathbf{f}_1^{(k)}$ in terms of the velocity coefficients $\mathbf{u}_1^{(k)}$, which themselves are linearly dependent on the filament kinematics $(\mathbf{U}_k,\mathbf{\Omega}_k)$. The force density is then integrated along the filaments to find the extended resistance matrix that relates filament kinematics and dynamics. We implement this algorithm in MATLAB and validate it using the tests described in Appendix \ref{app:comptests}.} For each set of parameters $(N,\psi,\epsilon)$ describing the geometry of the helical filament, we vary the number of Legendre modes in our truncation until the numerical solution for an isolated helix settles to within 1\% error. We then make the reasonable assumption that the number of Legendre modes determined from this single-helix self-convergence test is sufficient to obtain the same level of accuracy in our double-helix simulations as well. In general, we find that the required number of Legendre modes increases with the number of helical turns of the filament, because we must be able to capture variations in the force density and filament velocity which have the same wavenumber as the filament centreline. For most simulations presented in this study it was sufficient to use $N_{\mathrm{Legendre}} = 15$, because the helices have a small number of helical turns. \subsection{Relative errors} In the absence of an exact solution, we use the numerical solution from SBT as a reference value against which to {\color{black} validate} our asymptotic model. 
{\color{black} In the previous section, we derived a series expansion for the extended resistance matrix, $\mathbf{R}$, in the form \begin{equation} \mathbf{R} = \mathbf{R}^{(0)} + d^{-1}\mathbf{R}^{(1)} + d^{-2}\mathbf{R}^{(2)} + \mathcal{O}(d^{-3}), \label{eq:expn-R} \end{equation} up to and including second-order terms. We wish to compare this expansion of the resistance matrix with the numerical solution, $\tilde{\mathbf{R}}$, of the fully-coupled integral equations described in Section \ref{sec:comp-method}. However, we cannot compare the matrices $\mathbf{R}$ and $\tilde{\mathbf{R}}$ component-wise, because this would depend on the basis in which we represent the matrices. One can always choose a vector basis in which some component of the ``true'' solution $\tilde{\mathbf{R}}$ is zero, relative to which our approximate solution $\mathbf{R}$ would have an infinite relative error. Therefore, we need to think of the extended resistance matrices as linear operators between the space of filament kinematics and the space of filament dynamics, and define an error for the operator as a whole in a way that is basis-independent. A standard way to do this is to use an operator norm.} Suppose we have some given kinematics $\mathbf{x}$ (two linear and two angular velocities, so a vector with twelve components) and we want to compute the dynamics $\mathbf{y}$. Then the error in $\mathbf{y}$ is $\Delta \mathbf{y} = \mathbf{R}\mathbf{x} - \tilde{\mathbf{R}}\mathbf{x}$. We define the ``relative error'' in the dynamics to be \begin{equation} E_{\mathrm{dyn}} \equiv \sup_{\mathbf{x}}\left\{ \frac{||\tilde{\mathbf{R}}\mathbf{x} - \mathbf{R}\mathbf{x}||_p}{||\tilde{\mathbf{R}}\mathbf{x}||_p} \right\} = \sup_{\mathbf{y}}\left\{ \frac{||(\mathbf{I} - \mathbf{R}\tilde{\mathbf{R}}^{-1})\mathbf{y}||_p}{||\mathbf{y}||_p} \right\}, \label{eq:rel_error_dynamics} \end{equation} in other words the operator norm of $\mathbf{I} - \mathbf{R}\tilde{\mathbf{R}}^{-1}$.
{\color{black} Note that taking the supremum over the entire space of filament kinematics is important, so that the value we compute for the relative error is not dependent on an arbitrary choice of filament kinematics.} Similarly, we can define the relative error in the kinematics as \begin{equation} E_{\mathrm{kin}} \equiv \sup_{\mathbf{y}}\left\{ \frac{||\tilde{\mathbf{R}}^{-1}\mathbf{y} - \mathbf{R}^{-1}\mathbf{y}||_p}{||\tilde{\mathbf{R}}^{-1}\mathbf{y}||_p} \right\} = \sup_{\mathbf{x}}\left\{ \frac{||(\mathbf{I} - \mathbf{R}^{-1}\tilde{\mathbf{R}})\mathbf{x}||_p}{||\mathbf{x}||_p} \right\}, \label{eq:rel_error_kinematics} \end{equation} so the operator norm of $\mathbf{I} - \mathbf{R}^{-1}\tilde{\mathbf{R}}$. Here again, {\color{black} taking the supremum is important, so that the relative error we compute does not depend on an arbitrary choice of filament dynamics}. \begin{figure} \landscapetrim{17cm}{10cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_1.pdf} \caption{Relative error in (a) helix dynamics and (b) helix kinematics, as defined in Eqs.~\eqref{eq:rel_error_dynamics} and \eqref{eq:rel_error_kinematics} respectively, with $p=2$. {\color{black} As we increase the helix separation, $d$, the asymptotic theory with SBT coefficients} converges to the numerical solution, and the error decays as expected {\color{black} with each higher order included in the theory}. Parameter specification: helices have configurations $(\theta_1,\chi_1,\phi_1) = (0,0,\pi/6)$ and $(\theta_2,\chi_2,\phi_2) = (0,0,2\pi/3)$, and $N=2.75$ helical turns. Helix angle, $\psi = 0.5$ rad, and filament slenderness, $\epsilon = 10^{-2}$, are representative of bacterial flagella.} \label{fig:matrix_errors} \end{figure} In Fig.~\ref{fig:matrix_errors} (a) and (b) we compare the relative errors, {\color{black} defined with a $p=2$ norm}, for different {\color{black} orders in our asymptotic theory with SBT coefficients.
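With $p=2$, both operator norms reduce to largest singular values and are therefore one-line computations. The following sketch (our own toy example, with a hypothetical $12\times12$ reference matrix in place of $\tilde{\mathbf{R}}$) illustrates how Eqs.~\eqref{eq:rel_error_dynamics} and \eqref{eq:rel_error_kinematics} are evaluated in practice:

```python
import numpy as np

# Basis-independent relative errors with p = 2: the supremum over kinematics
# (or dynamics) equals the matrix 2-norm, i.e. the largest singular value.
def relative_error_dynamics(R_approx, R_ref):
    n = R_ref.shape[0]
    return np.linalg.norm(np.eye(n) - R_approx @ np.linalg.inv(R_ref), 2)

def relative_error_kinematics(R_approx, R_ref):
    n = R_ref.shape[0]
    return np.linalg.norm(np.eye(n) - np.linalg.inv(R_approx) @ R_ref, 2)

# Toy example: a symmetric positive-definite 12x12 "reference" matrix and a
# perturbation of relative size ~eta; both error measures are then O(eta).
rng = np.random.default_rng(2)
A = rng.standard_normal((12, 12))
R_ref = A @ A.T + 12 * np.eye(12)
eta = 1e-3
R_approx = R_ref + eta * rng.standard_normal((12, 12))

E_dyn = relative_error_dynamics(R_approx, R_ref)
E_kin = relative_error_kinematics(R_approx, R_ref)
assert 0 < E_dyn < 10 * eta and 0 < E_kin < 10 * eta
```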
If our asymptotic series expansion up to $\mathcal{O}(d^{-m})$ terms was calculated correctly, then we would expect the relative error to decay like $ d^{-(m+1)}$, the order of the first neglected terms. This is confirmed by the slopes of our log-log plots, which validate our asymptotic series expansion up to $\mathcal{O}(d^{-2})$. Note that the comparison is only meaningful between the computations and the asymptotic theory with SBT coefficients. This is an unavoidable consequence of our choice to implement the computational method based on SBT. The asymptotic theory with RFT coefficients differs at leading order from the numerical solution based on SBT, and so we would not be able to observe convergence unless we implemented a different computational method based on RFT. The results presented in Fig.~\ref{fig:matrix_errors} (a) and (b) serve to validate the asymptotic series expansion in itself, regardless of the method (RFT or SBT) by which we choose to calculate the leading-order resistance matrix, $\mathbf{S}^{(0)}$, and the force moment, $\mathbf{m}_0$. } {\color{black} Furthermore, by examining the size of the relative error, we deduce that the asymptotic theory can be useful for any $d>L$, which is the regime of validity for our binomial expansion of the Oseen tensor. When the filaments are parallel and orthogonal to the line that connects their centres, we observe that our asymptotic theory with SBT coefficients can achieve 99\% accuracy for $d/L > 1.4$. This accuracy is achieved by the asymptotic solution up to and including $\mathcal{O}(d^{-2})$ terms. Higher accuracy could be obtained either by including more terms in the asymptotic series expansion, or by increasing the distance between the filaments. Based on further results presented in this study, where we also vary the phase difference between filaments, we believe this accuracy estimate to be representative of any parallel configuration of two filaments with this particular helical geometry. 
A broader numerical investigation would be necessary to determine the accuracy of our method for rigid filaments of arbitrary geometry and non-parallel configurations.} \subsection{Time evolution of forces and torques} {\color{black} The main purpose of the asymptotic theory presented in this paper is to provide a systematic method to calculate analytically the specific HIs between two filaments. When carrying out calculations by hand, we are interested in finding relative patterns more than in calculating accurate absolute values, which is the purpose of numerical schemes. With this perspective in mind, we propose to validate the asymptotic theory with RFT coefficients by looking at the time variation of hydrodynamically-induced forces and torques. We consider the case of two slender helices rotating in parallel with the same angular velocity}. Back in Fig.~\ref{fig:matrix_errors}, we examined the relative error for a fixed orientation of the helices, and we varied the distance between the filaments to see how the error decays - a quantitative {\color{black} validation} of our asymptotic model. In Fig.~\ref{fig:results_time_evolution}, however, we fix the distance between the helical filaments and we let time flow, and the orientation of the filaments along with it, to look for patterns over time - a qualitative {\color{black} validation} of our asymptotic model. {\color{black} Because the helices are vertical, their body-fixed axis $\mathbf{e}_3$ is parallel to the laboratory frame $\mathbf{e}_z$. Hence, the phase angle $\phi$ around $\mathbf{e}_z$ and the spin angle $\chi$ around $\mathbf{e}_3$, as defined in Eqs.~\eqref{eq:bodyframe-A}-\eqref{eq:bodyframe-Z}, are interchangeable. 
Without loss of generality, we can describe the configuration of the filaments from Figs.~\ref{fig:results_time_evolution} and \ref{fig:results_compareorders} as $(\theta_1,\chi_1,\phi_1) = (0,0,\Omega t)$ and $(\theta_2,\chi_2,\phi_2) = (0,0,\Omega t+\Delta\phi)$.} \begin{figure} \centering \portraittrim{17cm}{20.9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=14cm]{figure_2a.pdf} \caption{Comparison between {\color{black} computations and the asymptotic theory with RFT/SBT coefficients}, by means of the time evolution of forces and torques induced by the second (rightmost) filament on the first (leftmost). The helices are vertical {\color{black} ($\theta=0$)} and rotating with constant angular velocity $\Omega\mathbf{e}_z$. We fix the phase difference $\Delta\phi = \pi/2$ between them, and a horizontal distance equal to the integrated filament length (a-f) or ten times larger (g-l). The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:results_time_evolution} \end{figure} \begin{figure} \centering \portraittrim{17cm}{20.9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=14cm]{figure_2b.pdf} \caption{Comparison between {\color{black} computations and the asymptotic theory with SBT coefficients} to $\mathcal{O}(d^{-1})$ and $\mathcal{O}(d^{-2})$, by means of the time evolution of forces and torques induced by the second (rightmost) filament on the first (leftmost). The helices are vertical {\color{black} ($\theta=0$)} and rotating with constant angular velocity $\Omega\mathbf{e}_z$. We impose the phase difference $\Delta\phi = \pi/2$ between them, and a horizontal distance equal to the integrated filament length (a-f) or ten times larger (g-l). 
The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:results_compareorders} \end{figure} {\color{black} Our asymptotic theory with both RFT and SBT coefficients} captures the qualitative features of the interaction even for smaller helix separations Fig.~\ref{fig:results_time_evolution} (a)-(f), with the agreement becoming quantitative at larger separations Fig.~\ref{fig:results_time_evolution} (g)-(l). This indicates that our {\color{black}asymptotic series expansion can be used to derive meaningful analytical expressions for the HIs between filaments separated by a distance greater than their contour length, as later demonstrated in Section \ref{sec:application}.} We also provide a direct comparison between the {\color{black} asymptotic theory with SBT coefficients} at $\mathcal{O}(d^{-1})$ and $\mathcal{O}(d^{-2})$, in Fig.~\ref{fig:results_compareorders}. These plots provide clearer visual evidence that higher-order corrections improve the fidelity of the asymptotic solution, as opposed to Fig.~\ref{fig:matrix_errors} where the evidence {\color{black} spanned a wider range of kinematic conditions, but was presented in a more condensed format}. \section{Application to helical pumps} \label{sec:application} To demonstrate the usefulness of our asymptotic theory, we now apply and extend our analytical calculations to the interaction of rotating helical pumps. This particular application of our theory is motivated by previous theoretical and experimental studies of helical micropumps \cite{Darnton2004,Kim2008,Martindale2017,Dauparas2018,Buchmann2018}. Experimentally, these systems often take the form of bacterial carpets or forests, where the bacteria are stuck to a substrate while their helical flagellar filaments are free to rotate and pump fluid around. 
\subsection{Problem specification} We consider two parallel identical helices, rotating with constant angular velocity $\tilde{\Omega}$, as illustrated in Fig.~\ref{fig:mean_FT}. {\color{black} We may choose the laboratory frame so that the filaments are parallel to the $z$-axis and, therefore, the tilt angle $\theta$ is identically zero. When $\theta = 0$, the angles $\phi$ and $\chi$ can be used interchangeably to refer to the rotation of the filament about its own axis, because the body-fixed axis $\mathbf{e}_3$ is parallel to $\mathbf{e}_z$. Without loss of generality, we describe the configuration of the filaments using the angle $\chi=0$ and a varying phase $\phi$.} Because they are driven at constant angular velocity, the helices maintain a fixed phase difference $\phi_2-\phi_1 = \Delta\phi$. If we rescale time by $\tilde{\Omega}^{-1}$, such that $\Omega=1$ in dimensionless terms, then \begin{equation} \phi_1 = t, \quad \phi_2=t + \Delta\phi. \end{equation} Since the helices are held in place, they exert a net force on the fluid, which is pumped in the positive $z$ direction for left-handed helices rotating clockwise. To characterise the net long-term effect of the helical pumps, we need to consider the time-averaged forces and torques exerted by the rotating filaments on the fluid, so we define the mean \begin{equation} \mean{Y} = \frac{1}{2\pi}\int_{0}^{2\pi}Y(t) \mathrm{d}t, \end{equation} for any time-varying quantity $Y$ that we are interested in. We may also want to look at the oscillations of this quantity around its mean value, so we define the variance over time as \begin{equation} \var{Y} = \frac{1}{2\pi}\int_{0}^{2\pi}(Y(t)-\mean{Y})^2\mathrm{d}t. 
\end{equation} \begin{figure} \landscapetrim{17cm}{15cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_3a.pdf} \caption{Average forces and torques exerted by the leftmost helix due to the presence of a second parallel helix rotating at a distance $d$ to the right, with fixed phase difference $\Delta\phi = \pi/4$. The data points come from SBT simulations including HIs. The power law triangles indicate that the average forces and torques along the axis of the helix (c,f) are an $\mathcal{O}(d^{-1})$ effect, while the other forces and torques (a,b,d,e) are an $\mathcal{O}(d^{-2})$ effect. Simulation parameters: $\psi = 0.5043$ rad, $\epsilon =0.0038$, $N=2.5$ helical turns.} \label{fig:mean_FT} \end{figure} Because our focus is on the HIs between helical pumps, we need to compare the effect of a helical pump when it is part of an ensemble, to what it otherwise would be if the helical pump were operating on its own. If $Y(t;d)$ is a force or torque exerted by a helical pump when there is a second helical pump operating at distance $d$ away, then we define \begin{equation} Y_\infty (t) = \lim_{d\to\infty}Y(t;d), \end{equation} which is the force or torque that the same helical pump would exert in isolation. For our asymptotic theory, this corresponds to the leading-order terms in Section \ref{sec:leading-order}. For our computational method, this corresponds to the numerical solution of Eq.~\eqref{eq:COMP-method} without the interaction term $\mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}]$. In the next sections, we will look at differences of the form $\mean{Y} - \mean{Y_\infty}$ to understand if HIs increase or decrease the net effect of the helical pumps on the fluid, and differences of the form $\var{Y} - \var{Y_\infty}$ to investigate whether HIs make the pumping fluctuate more or less over time.
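For readers implementing these diagnostics, the averages and variances defined above are straightforward to approximate by equispaced sampling over one period, which is exact (up to rounding) for low-order harmonics. A minimal Python sketch, with all function names our own rather than taken from any existing codebase:

```python
import math

def time_average(y, n=12):
    """Approximate <Y> = (1/2pi) * integral of Y(t) over one period
    by sampling Y at n regular intervals."""
    ts = [2 * math.pi * k / n for k in range(n)]
    return sum(y(t) for t in ts) / n

def time_variance(y, n=12):
    """Approximate Var(Y) = <(Y - <Y>)^2> with the same equispaced samples."""
    mean = time_average(y, n)
    ts = [2 * math.pi * k / n for k in range(n)]
    return sum((y(t) - mean) ** 2 for t in ts) / n

# Test signal: Y(t) = a + b*sin(t) has <Y> = a and Var(Y) = b^2 / 2,
# recovered exactly by equispaced sampling of low-order harmonics.
a, b = 3.0, 0.5
mean_Y = time_average(lambda t: a + b * math.sin(t))
var_Y = time_variance(lambda t: a + b * math.sin(t))
```

Twelve samples per period, as used in the simulations below, already recover $\mean{Y}=a$ and $\var{Y}=b^2/2$ for this test signal.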
\subsection{Computational results} In our simulations, we sample the forces and torques exerted by two helical pumps at twelve regular intervals over one period of rotation, i.e.~$0 \leq \Omega t\leq 2\pi$. The time-averaged forces and torques obtained in this way are shown in Fig.~\ref{fig:mean_FT}, while their variances over time are shown in Fig.~\ref{fig:var_FT}, both for a given phase difference $\Delta\phi = \pi/4$ and varying inter-filament distance. The geometry of the helices was chosen to be representative of bacterial flagella: helix angle, $\psi = 0.5043$ rad, filament slenderness, $\epsilon =0.0038$, and $N=2.5$ helical turns. \begin{figure} \landscapetrim{17cm}{15cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_3b.pdf} \caption{Variance over time in the forces and torques exerted by the leftmost helix due to the presence of a second parallel helix rotating at a distance $d$ to the right, with fixed phase difference $\Delta\phi = \pi/4$. The data points come from SBT simulations including HIs. The power law triangles indicate that the variances in force and torque along the axis of the helix (c,f) are an $\mathcal{O}(d^{-2})$ effect, while the other forces and torques (a,b,d,e) are an $\mathcal{O}(d^{-1})$ effect. Simulation parameters: $\psi = 0.5043$ rad, $\epsilon =0.0038$, $N=2.5$ helical turns.} \label{fig:var_FT} \end{figure} We will now seek to interpret the trends observed in these computations using our asymptotic theory. Specifically, we want to understand why the interaction between the filaments alters the time average of $F_z$ and $T_z$ by $\mathcal{O}(d^{-1})$, but their fluctuation over time by $\mathcal{O}(d^{-2})$. Meanwhile, for the forces and torques in the $x$ and $y$ direction, we want to understand why the time average changes by $\mathcal{O}(d^{-2})$ due to inter-filament interaction, but their fluctuation over time changes by $\mathcal{O}(d^{-1})$. 
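The power-law exponents indicated by the triangles in Figs.~\ref{fig:mean_FT} and \ref{fig:var_FT} can be extracted from simulation data by a least-squares fit in log-log coordinates. A Python sketch with synthetic data (the prefactors below are arbitrary placeholders, not values from our simulations):

```python
import math

def fit_power_law_exponent(ds, ys):
    """Least-squares slope of log|Y| against log d, i.e. the exponent p
    in Y ~ C * d**p, mimicking the power-law triangles on a log-log plot."""
    xs = [math.log(d) for d in ds]
    ws = [math.log(abs(y)) for y in ys]
    n = len(xs)
    xbar, wbar = sum(xs) / n, sum(ws) / n
    num = sum((x - xbar) * (w - wbar) for x, w in zip(xs, ws))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic stand-ins for an O(1/d) axial effect and an O(1/d^2)
# transverse effect, with placeholder coefficients.
ds = [4.0, 8.0, 16.0, 32.0]
p_axial = fit_power_law_exponent(ds, [-0.7 / d for d in ds])
p_transverse = fit_power_law_exponent(ds, [0.3 / d ** 2 for d in ds])
```

On exact power-law data the fit returns the exponents $-1$ and $-2$ directly; on simulation data it gives the best-fit slope over the chosen range of $d$.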
\subsection{Asymptotic theory} We start by computing the intrinsic resistance matrix $\mathbf{S}^{(0)}(0,0,\phi)$ for a vertical helix with arbitrary phase $\phi$, which we will denote from now on simply as $\mathbf{S}^{(0)}(\phi)$. We need to apply the change of basis from Eqs.~\eqref{eq:result-S0(p1)} with the orthogonal matrix \begin{equation} \mathbf{Q}(0,0,\phi) = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}. \end{equation} Because the filament is symmetric under a rotation by angle $\pi$ around the first vector ($\mathbf{e}_1$) in the body frame basis, the resistance matrix expressed in the body frame has the structure \begin{equation} \mathbf{S}_0 = \begin{pmatrix} A_{11} & 0 & 0 & B_{11} & 0 & 0 \\ 0 & A_{22} & A_{23} & 0 & B_{22} & B_{23}\\ 0 & A_{32} & A_{33} & 0 & B_{32} & B_{33}\\ B_{11} & 0 & 0 & D_{11} & 0 & 0 \\ 0 & B_{22} & B_{32} & 0 & D_{22} & D_{23}\\ 0 & B_{23} & B_{33} & 0 & D_{32} & D_{33}\\ \end{pmatrix}, \end{equation} noting that $A_{23} = A_{32}$ and $D_{23} = D_{32}$ because the resistance matrix is symmetric. 
Hence, after a rotation by angle $\phi$, the matrix can be written as \begin{equation} \mathbf{S}^{(0)}(\phi) = \begin{pmatrix} \mathbf{A}(\phi) & \mathbf{B}(\phi) \\ \mathbf{B}(\phi)^T & \mathbf{D}(\phi) \end{pmatrix}, \label{eq:S_structure} \end{equation} where the matrices $\mathbf{A}(\phi)$, $\mathbf{B}(\phi)$ and $\mathbf{D}(\phi)$ have the same structure with respect to $\phi$, that is \begin{equation} \mathbf{A}(\phi) = \begin{pmatrix} A_0 + \Delta A \cos(2\phi) & \Delta A \sin(2\phi) & -A_{23}\sin(\phi) \\ \Delta A \sin(2\phi) & A_0 - \Delta A \cos(2\phi) & A_{23}\cos(\phi) \\ -A_{32}\sin(\phi) & A_{32}\cos(\phi) & A_{33} \end{pmatrix}, \label{eq:Aphi_structure} \end{equation} where we define $A_0 = (A_{11} + A_{22})/2$ and $\Delta A = (A_{11}-A_{22})/2$, and similarly for $\mathbf{B}(\phi)$ and $\mathbf{D}(\phi)$ but with $A_{ij} \mapsto B_{ij}$ and $A_{ij} \mapsto D_{ij}$ respectively. Without loss of generality, we may choose our laboratory frame to coincide with the interaction frame of the two filaments, so the directed distance between the two helices is $\mathbf{d} = d\mathbf{e}_x$. From Eqs.~\eqref{eq:defn-J} and \eqref{eq:result-C1}, we can write \begin{equation} C^{(1)}_{ij} (\phi_1,\phi_2) = -\frac{1}{8\pi}\left(2S^{(0)}_{i1}(\phi_1)S^{(0)}_{1j}(\phi_2)+S^{(0)}_{i2}(\phi_1)S^{(0)}_{2j}(\phi_2)+S^{(0)}_{i3}(\phi_1)S^{(0)}_{3j}(\phi_2) \right), \label{eq:appln-C1} \end{equation} and then replace the expressions for the elements of $\mathbf{S}(\phi)$ from Eqs.~\eqref{eq:S_structure}-\eqref{eq:Aphi_structure}. Furthermore, from Eq.~\eqref{eq:result-Pmatrix} we derive the matrix \begin{equation} \mathbf{P}(\phi) = \begin{pmatrix} \mathbf{G}(\phi) & \mathbf{H}(\phi) \end{pmatrix}, \label{eq:P_structure} \end{equation} where the matrices $\mathbf{G}(\phi)$ and $\mathbf{H}(\phi)$ have the same structure with respect to the phase $\phi$. 
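As a sanity check on the structure of Eq.~\eqref{eq:Aphi_structure}, one can conjugate a body-frame block with the assumed sparsity by $\mathbf{Q}(\phi)$ and compare against the stated closed form. A minimal Python sketch (the coefficient values are arbitrary placeholders):

```python
import math

def Q(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [[X[j][i] for j in range(3)] for i in range(3)]

# Body-frame block with the sparsity imposed by the pi-rotation symmetry;
# numerical values are arbitrary placeholders.
A11, A22, A23, A33 = 1.7, 0.9, 0.4, 2.1
A32 = A23  # symmetry of the resistance matrix
A_body = [[A11, 0.0, 0.0], [0.0, A22, A23], [0.0, A32, A33]]

phi = 0.8
A_lab = matmul(matmul(Q(phi), A_body), transpose(Q(phi)))  # Q A Q^T

# Closed form with A0 = (A11 + A22)/2 and dA = (A11 - A22)/2
A0, dA = (A11 + A22) / 2, (A11 - A22) / 2
A_pred = [
    [A0 + dA * math.cos(2 * phi), dA * math.sin(2 * phi), -A23 * math.sin(phi)],
    [dA * math.sin(2 * phi), A0 - dA * math.cos(2 * phi), A23 * math.cos(phi)],
    [-A32 * math.sin(phi), A32 * math.cos(phi), A33],
]
err = max(abs(A_lab[i][j] - A_pred[i][j]) for i in range(3) for j in range(3))
```

The same check applies verbatim to the blocks $\mathbf{B}(\phi)$ and $\mathbf{D}(\phi)$.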
Because $\mathbf{e}_1 = \cos\phi\mathbf{e}_x + \sin\phi \mathbf{e}_y$, we have \begin{equation} \mathbf{G}(\phi) = \frac{1}{8\pi}\begin{pmatrix} \mathcal{M}_1 \cos\phi & \mathcal{M}_1 \sin\phi & 0 \\ B_{23}\sin(\phi) & -B_{23}\cos(\phi) & -B_{33} \\ \Delta B \sin(2\phi) & B_0 - \Delta B \cos(2\phi) & B_{32}\cos(\phi) \end{pmatrix}, \label{eq:Gphi_structure} \end{equation} and similarly for $\mathbf{H}(\phi)$ but with $B_{ij} \mapsto D_{ij}$ and $\mathcal{M}_1 \mapsto \mathcal{M}_4$. We are now ready to evaluate the mean forces and torques, and their fluctuations over time, for the specific case of constant rotation about the helical axis $\mathbf{e}_3 = \mathbf{e}_z$. The two helical pumps rotate with constant angular velocities $\mathbf{\Omega}_1 = \mathbf{e}_z$ and $\mathbf{\Omega}_2 = \mathbf{e}_z$, since $\Omega = 1$ in our chosen units of time. Therefore, the forces and torques exerted by the first filament are \begin{equation} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \end{pmatrix}_{\hspace{-.15cm}i} = S^{(0)}_{i6}(t) + \frac{C^{(1)}_{i6}(t,t+\Delta\phi)}{d} + \frac{S^{(2)}_{i6}(t,t+\Delta\phi) + C^{(2)}_{i6}(t,t+\Delta\phi)}{d^2} + \mathcal{O}(d^{-3}), \label{eq:appln-FT} \end{equation} where we have substituted the phases $\phi_1 = t,~\phi_2 = t + \Delta\phi$. {\color{black} \subsection{Forces and torques parallel to axis of rotation}} We begin by looking at the force exerted by the leftmost filament along its helical axis, $\mathbf{e}_3 = \mathbf{e}_z$. From Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-FT}, we see that \begin{equation} F_z(t) = B_{33}+d^{-1}C^{(1)}_{36}(t,t+\Delta\phi) + \mathcal{O}(d^{-2}), \end{equation} which is constant at leading order with $\mean{F_z^\infty} = B_{33}$. 
The first-order correction, given by Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-C1}, will be \begin{equation} C^{(1)}_{36} = -\frac{1}{8\pi }\left[A_{33}B_{33}+A_{23}B_{23}\left(2\sin(t)\sin(t+\Delta\phi)+\cos(t)\cos(t+\Delta\phi)\right)\right], \label{eq:C36} \end{equation} which has a non-zero time-average. Hence, the change in the mean thrust provided by the helical pump is \begin{equation} \mean{F_z} - \mean{F_z^\infty} = -\frac{1}{8\pi d} \left(A_{33}B_{33}+\frac{3}{2}A_{23}B_{23}\cos(\Delta\phi)\right) + \mathcal{O}(d^{-2}), \label{eq:result-meanFz} \end{equation} so indeed the interaction between the filaments changes the mean thrust by $\mathcal{O}(d^{-1})$, as seen in the computations. {\color{black} Note that the result in Eq.~\eqref{eq:result-meanFz} is independent of the method (RFT or SBT) by which we choose to evaluate the coefficients $A_{33}, B_{33}, A_{23}$ and $B_{23}$. In Fig.~\ref{fig:helices_time_average} (e), we examine how the $\mathcal{O}(d^{-1})$ change in thrust depends on the phase difference between the filaments}. The {\color{black} asymptotic theory with SBT coefficients} provides perfect quantitative agreement in the limit of large $d$, while the {\color{black} asymptotic theory with RFT coefficients} has an error of approximately 5\% but captures all qualitative features. \begin{figure} \landscapetrim{17.25cm}{16cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_456.pdf} \caption{Average forces (a,c,e) and torques (b,d,f) due to HIs between the helices, as a function of the phase difference between filaments. The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella.
The helices have $N=2.5$ helical turns.} \label{fig:helices_time_average} \end{figure} Because $F_z$ is constant at leading order, i.e.~$\var{F_z^\infty} = 0$, its variance over time will be given by \begin{equation} \var{F_z} - \var{F_z^\infty} = \frac{1}{ d^2}\left(\mean{C^{(1)}_{36}(t,t+\Delta\phi)^2} - \mean{C^{(1)}_{36}(t,t+\Delta\phi)}^2 \right) + \mathcal{O}(d^{-3}), \end{equation} which is indeed an $\mathcal{O}(d^{-2})$ effect as seen in computations. This is shown in Fig.~\ref{fig:helices_variances_over_time} (e), where we look at how this $\mathcal{O}(d^{-2})$ effect depends on the phase difference between the filaments. Once again, the {\color{black} asymptotic theory with SBT coefficients} provides quantitative agreement, while the {\color{black}theory with RFT coefficients captures the correct shape and order of magnitude}. Moving on to the torque exerted by the leftmost filament along its helical axis, we can derive in a similar way expressions for the time-average \begin{equation} \mean{T_z} - \mean{T_z^\infty} = -\frac{1}{8\pi d} \left(B_{33}^2+\frac{3}{2}B_{23}^2\cos(\Delta\phi)\right) + \mathcal{O}(d^{-2}), \label{eq:result-meanTz} \end{equation} and the fluctuation over time \begin{equation} \var{T_z} - \var{T_z^\infty} = \frac{1}{ d^2}\left(\mean{C^{(1)}_{66}(t,t+\Delta\phi)^2} - \mean{C^{(1)}_{66}(t,t+\Delta\phi)}^2 \right) + \mathcal{O}(d^{-3}), \label{eq:result-varTz} \end{equation} which are compared against computations in Figs.~\ref{fig:helices_time_average} (f) and \ref{fig:helices_variances_over_time} (f), respectively. \begin{figure} \landscapetrim{17.25cm}{16cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_789.pdf} \caption{Variance in forces (a,b,e) and torques (c,d,f) due to HIs between the helices, as a function of the phase difference between filaments.
The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:helices_variances_over_time} \end{figure} \vspace{\baselineskip} {\color{black} \subsection{Forces and torques perpendicular to axis of rotation}} Next, we evaluate the forces and torques perpendicular to the filament axis, starting with $F_x$. From Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-FT}, we see that \begin{multline} F_x(t) = -B_{23}\sin(t) + d^{-1}C^{(1)}_{16}(t,t+\Delta\phi) + \\ d^{-2}(S^{(2)}_{16}(t,t+\Delta\phi) + C^{(2)}_{16}(t,t+\Delta\phi)) + \mathcal{O}(d^{-3}), \end{multline} which averages out to zero at leading order, i.e.~$\mean{F_x^\infty} = 0$. The first-order correction, \begin{multline} C^{(1)}_{16} = -\frac{1}{8\pi }\left[-A_{23}B_{33}\sin(t) - 2A_0 B_{23}\sin(t+\Delta\phi) \right. \\ - \left. \Delta A B_{23}\left(2\cos(2t)\sin(t+\Delta\phi)-\sin(2t)\cos(t+\Delta\phi)\right) \right], \label{eq:C16} \end{multline} also averages out to zero, so the mean of $F_x$ is an $\mathcal{O}(d^{-2})$ effect as seen in Fig.~\ref{fig:mean_FT} (a). Using Eqs.~\eqref{eq:result-S2},\eqref{eq:S_structure} and \eqref{eq:Aphi_structure}, we obtain that \begin{equation} \langle S^{(2)}_{16}(t,t+\Delta\phi) \rangle = 0. \end{equation} Then, by using Eqs.~\eqref{eq:result-C2},\eqref{eq:S_structure},\eqref{eq:Aphi_structure},\eqref{eq:P_structure} and \eqref{eq:Gphi_structure}, we get that \begin{equation} \langle C^{(2)}_{16}(t,t+\Delta\phi) \rangle = -\frac{1}{16\pi}\left(A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1\right)\sin(\Delta\phi), \label{eq:C2_16} \end{equation} and hence \begin{equation} \mean{F_x} - \mean{F_x^\infty} = -\frac{1}{16\pi d^2}\left(A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1\right)\sin(\Delta\phi).
\label{eq:result-meanFx} \end{equation} Because the time-average of $F_x$ is only $\mathcal{O}(d^{-2})$, we deduce that the variance over time is \begin{equation} \var{F_x} = \mean{(-B_{23}\sin(t)+d^{-1}C^{(1)}_{16}(t,t+\Delta\phi) + \mathcal{O}(d^{-2}))^2}. \end{equation} Because $F_x$ oscillates at leading order with variance $\var{F_x^\infty} = B_{23}^2/2$, we deduce that the variance due to HIs is given by \begin{equation} \var{F_x} - \var{F_x^\infty}= -\frac{2B_{23}}{d}\mean{\sin(t)C^{(1)}_{16}(t,t+\Delta\phi)} + \mathcal{O}(d^{-2}), \end{equation} so indeed an $\mathcal{O}(d^{-1})$ effect as seen in Fig.~\ref{fig:var_FT} (a). Using Eq.~\eqref{eq:C16}, we arrive at the final result \begin{equation} \var{F_x} - \var{F_x^\infty}= -\frac{B_{23}}{8\pi d}\left(A_{23}B_{33} + 2A_0 B_{23}\cos(\Delta\phi) + \frac{1}{2} \Delta A B_{23}\cos(\Delta\phi) \right) + \mathcal{O}(d^{-2}). \label{eq:result-varFx} \end{equation} The analytical expressions from Eqs.~\eqref{eq:result-meanFx} and \eqref{eq:result-varFx} are compared against computational results in Fig.~\ref{fig:helices_time_average} (a) and \ref{fig:helices_variances_over_time} (a), respectively. As above, we have quantitative agreement between computations and the {\color{black} asymptotic theory with SBT coefficients} in the limit $d\to\infty$, and qualitative agreement with the {\color{black} asymptotic theory with RFT coefficients}.
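The phase-difference dependence in these results traces back to elementary averages of shifted trigonometric products, $\mean{\sin(t)\sin(t+\Delta\phi)} = \mean{\cos(t)\cos(t+\Delta\phi)} = \tfrac{1}{2}\cos(\Delta\phi)$, which is where, for example, the factor $\tfrac{3}{2}\cos(\Delta\phi)$ in Eq.~\eqref{eq:result-meanFz} originates. These identities are easy to confirm numerically; a short Python check:

```python
import math

def period_average(f, n=64):
    """Midpoint rule over one period; exact (to rounding) for the
    low-order harmonics appearing in these expressions."""
    return sum(f(2 * math.pi * (k + 0.5) / n) for k in range(n)) / n

dphi = 0.9
avg_ss = period_average(lambda t: math.sin(t) * math.sin(t + dphi))
avg_cc = period_average(lambda t: math.cos(t) * math.cos(t + dphi))
# The combination appearing in C^(1)_36 averages to (3/2) cos(dphi):
avg_combo = period_average(lambda t: 2 * math.sin(t) * math.sin(t + dphi)
                           + math.cos(t) * math.cos(t + dphi))
```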
Just as we have done for $F_x$, we may compute the time-average of the other transverse forces and torques to $\mathcal{O}(d^{-2})$, \begin{eqnarray} \mean{F_y} - \mean{F_y^\infty} &=& \frac{1}{16\pi d^2}\left(2(A_0 D_{33}+B_0 B_{33}) - (A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1)\cos(\Delta\phi)\right), \label{eq:result-meanFy} \\ \mean{T_x} - \mean{T_x^\infty} &=& -\frac{1}{16\pi d^2}\left(B_{23}D_{23} + B_{23}D_{23} + B_{23}\mathcal{M}_4\right)\sin(\Delta\phi), \label{eq:result-meanTx} \\ \mean{T_y} - \mean{T_y^\infty} &=& \frac{1}{16\pi d^2}\left(2(B_0 D_{33}+D_0 B_{33}) - (B_{23}D_{23} + B_{23}D_{23} + B_{23}\mathcal{M}_4)\cos(\Delta\phi)\right). \label{eq:result-meanTy} \end{eqnarray} Similarly, we can derive the fluctuations over time to $\mathcal{O}(d^{-1})$, \begin{eqnarray} \var{F_y} - \var{F_y^\infty} &=& -\frac{B_{23}}{8\pi d}\left(A_{23}B_{33} + \phantom{2}A_0 B_{23}\cos(\Delta\phi) - \frac{3}{2} \Delta A B_{23}\cos(\Delta\phi)\right), \label{eq:result-varFy} \\ \var{T_x} - \var{T_x^\infty} &=& -\frac{D_{23}}{8\pi d}\left(B_{32}B_{33} + 2B_0 B_{23}\cos(\Delta\phi) + \frac{1}{2} \Delta B B_{23}\cos(\Delta\phi)\right), \label{eq:result-varTx} \\ \var{T_y} - \var{T_y^\infty} &=& -\frac{D_{23}}{8\pi d}\left(B_{32}B_{33} + \phantom{2}B_0 B_{23}\cos(\Delta\phi) - \frac{3}{2} \Delta B B_{23}\cos(\Delta\phi)\right). \label{eq:result-varTy} \end{eqnarray} The analytical expressions from Eqs.~\eqref{eq:result-meanFy}-\eqref{eq:result-varTy} are compared against computational results in Fig.~\ref{fig:helices_time_average} (b)-(d) and \ref{fig:helices_variances_over_time} (b)-(d). \vspace{\baselineskip} {\color{black} \subsection{Deducing the dynamics of the second filament}} \label{sec:deducing-second} We remind the reader that the forces and torques plotted in Fig.~\ref{fig:helices_time_average} are those exerted \textit{on} the fluid \textit{by} the leftmost filament - see Fig.~\ref{fig:interpretation} (a). 
Relative to this, the rightmost filament is in the positive $x$ direction, and accordingly we have taken $\hat{\mathbf{d}}=\mathbf{e}_x$ in our calculation of second-order corrections from Eqs.~\eqref{eq:result-meanFx}, \eqref{eq:result-meanFy}-\eqref{eq:result-meanTy}. To obtain the forces and torques exerted by the rightmost filament, we can rotate our coordinate system by an angle $\pi$ about the $z$-axis. First of all, this swaps the filaments around and, hence, reverses the sign of the phase difference. It also changes the signs of all $x$ and $y$ components, but not the $z$ components. {\color{black} Hence, the average dynamics of the second filament satisfy the relations $-\Gamma^{(2)}_{x,y}(\Delta\phi) = \Gamma^{(1)}_{x,y}(-\Delta\phi)$ and $ \Gamma^{(2)}_{z}(\Delta\phi) = \Gamma^{(1)}_{z}(-\Delta\phi)$, where $\Gamma^{(k)}$ is a placeholder for the time-averaged force or torque exerted by the $k$th filament on the fluid.} Because $\langle F_x \rangle$ and $\langle T_x \rangle$ depend on the sine of the phase difference (see Eqs.~\eqref{eq:result-meanFx} and \eqref{eq:result-meanTx}), the rightmost helix exerts the same average force $\langle F_x \rangle$ and torque $\langle T_x \rangle$ as the leftmost helix. Meanwhile, for $\langle F_y \rangle$ and $\langle T_y \rangle$, which depend on the cosine of the phase difference (see Eqs.~\eqref{eq:result-meanFy} and \eqref{eq:result-meanTy}), the rightmost helix exerts an equal and opposite average force and torque to the leftmost helix. Finally, the average $\langle F_z \rangle$ and $\langle T_z \rangle$ are the same for the two helices, because the two quantities depend on the cosine of the phase difference (see Eqs.~\eqref{eq:result-meanFz} and \eqref{eq:result-meanTz}), and the sign of $z$ components has not changed {\color{black} due to the rotation}. 
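The parity argument above can be verified mechanically. In the Python sketch below the coefficients are hypothetical placeholders; only the parity of each time-averaged component in $\Delta\phi$ matters:

```python
import math

# Hypothetical coefficients; only the parity in dphi is relevant.
K_ODD, K0, K_EVEN = 0.8, 1.3, -0.6

def Fx1(dphi):
    # <F_x> of the leftmost filament: odd in dphi, cf. Eq. (result-meanFx)
    return K_ODD * math.sin(dphi)

def Fy1(dphi):
    # <F_y> of the leftmost filament: even in dphi, cf. Eq. (result-meanFy)
    return K0 + K_EVEN * math.cos(dphi)

def second_filament(F1, dphi):
    """Rotate the lab frame by pi about z: the filaments swap roles
    (dphi -> -dphi) and the x, y components change sign."""
    return -F1(-dphi)

dphi = 0.7
Fx2 = second_filament(Fx1, dphi)   # equals Fx1(dphi): same sideways force
Fy2 = second_filament(Fy1, dphi)   # equals -Fy1(dphi): equal and opposite
```

The odd (sine) components therefore come out identical for the two filaments, while the even (cosine) components come out equal and opposite, exactly as argued in the text.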
\vspace{\baselineskip} \subsection{Interpretation of results} \label{sec:application-interpretation} {\color{black} We now provide some physical interpretation for the earlier computational results. \subsubsection*{Deficit in pumping force}} \begin{figure} \landscapetrim{17cm}{13cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_new.pdf} \caption{\color{black} (Not to scale) Physical mechanism for the reduction in pumping force due to HIs. Top panels illustrate the local velocity of the filament relative to the surrounding fluid. Lower panels show the periodic force density along the filament, rendered at points along a horizontal projection of the centreline. The total force and torque exerted by the helical pump are obtained by integrating the force density around the circle as many times as needed. (a) Due to the anisotropic drag on the slender filament, a rotating helix exerts a net force along its axis of rotation, $\mathbf{e}_3^{(1)}$. If the helix does not have an integer number of turns, there is also a net component of the force along the $\mathbf{e}_2^{(1)}$ direction, due to a ``surplus'' of filament on one side (indicated by a thick orange arc on the circular projection of the centreline). (b) Changes to the force density along the second filament due to the $\mathbf{e}_3^{(1)}$ component of the force exerted by the first filament on the fluid. (c) Likewise for the $\mathbf{e}_2^{(1)}$ component of the force.} \label{fig:physicalmechanism} \end{figure} {\color{black} Since the main purpose of the helical pumps is to push fluid along their axes, we start by explaining how HIs affect the vertical pumping force, $\langle F_z \rangle$. The leading-order dynamics of a rotating helical pump are illustrated in Fig.~\ref{fig:physicalmechanism} (a) using a local description of the problem (i.e.~no end effects).
The local velocity of the centreline relative to the fluid is shown at various points along the filament. At one of these points we decompose the velocity into the directions tangent and perpendicular to the filament. Because the perpendicular drag coefficient on a slender rod is higher, by roughly a factor of two, than the parallel drag coefficient, this gives rise to a leading-order viscous drag on the filament, $-\mathbf{f}_1^{(0)}(s)$, that has a negative vertical component. Below the three-dimensional picture of the filament, we draw the projection of the filament centreline onto the horizontal plane. At each point on this circular projection, we show the corresponding force density exerted by the filament on the fluid, $\mathbf{f}_1^{(0)}(s)$, decomposed into vertical and horizontal components. Notice that the force density simply rotates about the axis $\mathbf{e}_3^{(1)}=\mathbf{e}_z$ as we rotate around the circle, due to the rotational symmetry of the system. The total force and torque exerted by the helical pump are obtained by integrating the force density along the entirety of the filament, or equivalently by integrating around the circular projection as many times as needed. For left-handed helices rotating counter-clockwise, the vertical components of the force density are positive, so the helical pump exerts a net positive force in the $\mathbf{e}_3^{(1)}$ direction. The fluid is pumped vertically upwards. By integrating the horizontal components of the force density, we also obtain a net counter-clockwise torque that must be applied to the helical pump to keep it rotating. Furthermore, if the helical filament does not have an integer number of turns, there will be a surplus of filament on one side, indicated by a thick orange line on the circular projection. This means that the helical pump also exerts a net horizontal force on the fluid along the $\mathbf{e}_2^{(1)}$ direction.
In Fig.~\ref{fig:physicalmechanism} (b) and (c) we explain how the $\mathbf{e}_3^{(1)}$ and the $\mathbf{e}_2^{(1)}$ components of the leading-order force exerted by the first helical pump, respectively, affect the pumping force exerted by the second helical pump. Firstly, the $\mathbf{e}_3^{(1)}$ component of the pumping force exerted by one helical pump on the fluid leads to an upward vertical flow at the position of the other helical pump. This flow is uniform to leading-order in the distance between the filaments. Therefore, the second filament appears to be moving in the negative vertical direction relative to the fluid, with velocity $-\mathbf{u}_\infty(\mathbf{r}_2(s))$, as indicated at various points along the filament in Fig.~\ref{fig:physicalmechanism} (b). Following the same procedure as above, we can determine the local force density along the second filament and depict it along the horizontal projection of the centreline. The first-order change in the force density, $\mathbf{f}_2^{(1)}(s)$, has negative vertical components, because the second filament appears to be moving downward with respect to the background flow. When integrated along the filament, this leads to a deficit in pumping force due to the HIs between the helical pumps. This is confirmed by the negative sign in Fig.~\ref{fig:helices_time_average} (e). Note that this effect is independent of the phase difference between the filaments, because the force density has a constant vertical component along the entire filament, due to rotational symmetry. By integrating the horizontal components of the force density, we also deduce that HIs lead to a deficit in the torque exerted by the helical pumps, as seen in Fig.~\ref{fig:helices_time_average} (f) as well. Hence, less power is needed to actuate two helical pumps with the same angular velocity, if they are rotating in parallel. 
Secondly, the $\mathbf{e}_2^{(1)}$ component of the leading-order force exerted by the first helical pump generates a horizontal flow at the position of the second helical pump, which is again depicted at various points along the filament in Fig.~\ref{fig:physicalmechanism} (c). Because the flow is horizontal, we no longer have rotational symmetry so the force density is variable along the filament. Note that we only depict the vertical components of the force in the lower panels of Fig.~\ref{fig:physicalmechanism} (c), to avoid overcrowding the diagram. Unlike Figs.~\ref{fig:physicalmechanism} (a) and (b), where the force density simply rotates around the vertical axis as we go around the centreline, in Fig.~\ref{fig:physicalmechanism} (c) we observe that the vertical component of the force density depends on the alignment of the tangent vector and the direction of the flow. Where the velocity of the filament relative to the background flow, $-\mathbf{u}_\infty(\mathbf{r}_2(s))$, has a positive (or negative) component in the direction of the local tangent, the force density has a positive (or negative) vertical component. Hence, this particular contribution of HIs to the pumping force will depend on the phase difference between the two helical pumps. If the two are in-phase, $\Delta\phi =0$ and $\mathbf{e}_2^{(2)} = \mathbf{e}_2^{(1)}$, there is a surplus of negative vertical force as we integrate along the centreline. If the pumps are anti-phase, $\Delta\phi =\pi$ and $\mathbf{e}_2^{(2)} = -\mathbf{e}_2^{(1)}$, there is a surplus of positive vertical force instead. This dependence on the phase difference is confirmed by Fig.~\ref{fig:helices_time_average} (e), where the deficit in pumping force is greater when the filaments are in-phase than anti-phase. 
It is important to emphasise that the dominant effect here comes from the flow discussed in Fig.~\ref{fig:physicalmechanism} (b), which is a result of integrating a constant force along the entire length of the filament. The effect described in Fig.~\ref{fig:physicalmechanism} (c) is a correction that comes from integrating forces along just a fraction of the filament, if the helix deviates from an integer number of turns. Regardless of the phase difference between the helical pumps, each of them will pump fluid with less force when they are interacting, because each filament tries to push fluid that has already been entrained by the other pump. The deficit is greatest when the filaments are in-phase, because they entrain the fluid in the same direction both vertically and horizontally, whereas filaments that are anti-phase will work against each other in the horizontal plane (Fig.~\ref{fig:physicalmechanism} (c)).} {\color{black} \subsubsection*{Fluctuations over time}} Another question to consider is whether HIs dampen or enhance fluctuations in the dynamics of the helical pumps. The results in Fig.~\ref{fig:helices_variances_over_time} suggest that HIs tend to increase the variances over time for most forces and torques. The only exceptions we observe, for this set of parameters, are the forces $F_x$ and $F_y$ when $|\Delta\phi|<\pi/2$ and the torque $T_x$ in a small interval around $\Delta\phi=\pi$. {\color{black} \subsubsection*{Attraction vs.~repulsion}} We have so far considered the average forces and torques exerted by the filaments on the fluid while they are held in place, except for rotating about the vertical axis. It is also important to consider what would happen to the helices if they were not held in place, but free to move in response to the forces and torques exerted on them by the fluid. Note that the time averages we previously computed assumed that the helices remain vertical. 
However, we may still use these results to get a sense for what happens in the early stages, when the axes of the helices are still close to vertical. In Fig.~\ref{fig:interpretation} (b) and (c) we show the horizontal components of the average force exerted by the fluid on {\color{black} two left-handed filaments rotating counter-clockwise. The relative directions of the forces and torques on the two helices were established in Section \hyperref[sec:deducing-second]{IV F}}. The first observation is that, at second order, there is no net attraction or repulsion between the helices. Previous theoretical work had ruled out the possibility of attraction or repulsion between two helices rotating with zero phase difference, based on symmetry arguments \cite{Kim2004b}. Our findings add to that observation by excluding any net attraction or repulsion between helices rotating with any phase difference, so long as they are parallel. Instead, we discover a net migration to one side, because the two filaments experience the same force along the $x$ direction -- Fig.~\ref{fig:interpretation} (b). The direction of migration depends on the sine of the phase difference, so it is not a consistent behaviour. On the other hand, the helices will be swirled around by the fluid in the counter-clockwise direction, because they experience equal and opposite forces along the $y$ direction -- Fig.~\ref{fig:interpretation} (c). The direction of the swirl is consistent with the individual rotation of the helices, and this effect is persistent across all phase differences, as demonstrated by Fig.~\ref{fig:helices_time_average} (c). Note from Fig.~\ref{fig:helices_time_average} (a)-(d) that the sign of $\langle T_x \rangle$ is the same as $\langle F_x \rangle$, likewise for $\langle T_y \rangle$ and $\langle F_y \rangle$. Hence, the arrows in Fig.~\ref{fig:interpretation} (b) and (c) could equally well represent the horizontal components of the torques exerted by the fluid on the filaments. 
The key observation here is that, due to equal and opposite average torques along $y$, the helices would initially experience a splaying out effect where the fluid pushes the tips of the helical pumps apart (the tips being {\color{black} the ends pointing in the same direction as the angular velocity}) and brings their bases together. \subsection{Outlook: circular array of helical pumps} \begin{figure} \landscapetrim{17cm}{8cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_10.pdf} \caption{Basic principles of HIs between helical pumps. (a) Minimal setup with two helical pumps rotating with constant angular velocity around their axes. (b) There is no net attraction or repulsion between the two rotating helices (cf.~symmetry arguments for zero phase difference in Ref.~\cite{Kim2004b}), but rather a sideways migration whose sign depends on the phase difference. (c) There is a persistent (i.e.~independent of phase difference) swirling effect in the same direction as the rotation of the helices. (d) A ring of helical pumps would initially experience counter-clockwise swirling (due to the forces $-\langle F_y \rangle$ exerted by the fluid) and outward splaying of the tips (due to the torques $-\langle T_y \rangle$ exerted by the fluid).} \label{fig:interpretation} \end{figure} Once we understand the basic principles of pairwise HIs between helical pumps, it is natural to consider ensembles with more than two helical pumps. The simplest example is a ring of regularly spaced helical pumps, illustrated from the top in Fig.~\ref{fig:interpretation} (d). For simplicity, let us consider a ring of sufficiently large radius that the dominant HIs come from the nearest neighbours only. We expect the dominant contribution to the horizontal force to come from $\langle F_y \rangle$, which is two orders of magnitude larger than $\langle F_x \rangle$ -- cf.~Fig~\ref{fig:helices_time_average} (a) and (c). 
The effects of $\langle F_y \rangle$ are also consistent, in contrast to those of $\langle F_x \rangle$, which depend strongly on the phase difference. We therefore focus on the force components perpendicular to the line joining nearest neighbours, depicted in Fig.~\ref{fig:interpretation} (c). By adding the contributions from the left nearest neighbour (L) and the right nearest neighbour (R), we find that the net effect is a force along the circumference of the ring. Therefore, the ring of helical pumps experiences a tendency towards counter-clockwise swirling about the centre. If instead of forces we consider the torques $\langle T_y \rangle$, which are likewise dominant over $\langle T_x \rangle$, we find once again that there is a net torque along the circumference of the circle. This means that the tips of the helical pumps have a tendency to spread out and away from the centre of the ring. Note that the sign of these two hydrodynamic effects (swirling and splaying) would stay the same if we included interactions beyond nearest neighbours, due to the symmetry of the system. \section{Discussion} \label{sec:discussion} In this paper, we have considered the problem of HIs between slender filaments in viscous fluids. We have approached the topic theoretically, focusing on the case of two interacting rigid filaments whose dynamics can be described by an extended resistance matrix, Eq.~\eqref{eq:defn-resistance-matrix}. We have solved for the extended resistance matrix and the force distribution along two arbitrarily-shaped filaments as series expansions in inverse powers of the distance between the filaments, up to second-order corrections.
Our asymptotic results from Section \ref{sec:model} are valid {\color{black}in the limit of small aspect ratio, $\epsilon\ll 1$, and in the regime, $d>L$, where the inter-filament separation is greater than the contour length of the filament.} {\color{black}Although HIs decrease in magnitude with increasing distance between the filaments, they continue to play a leading-order role in physical mechanisms such as synchronisation and self-organisation. This provides a strong motivation for developing an analytical theory of HIs to advance our fundamental understanding of such phenomena. While other studies have dealt with the limit $d\ll L$, here we have chosen to focus on the regime $d>L$, which can provide just as many valuable physical insights.} {\color{black} We have evaluated the coefficients in the asymptotic series expansion using both resistive-force theory (RFT) and slender-body theory (SBT), and validated our asymptotic theory against numerical simulations in Section \ref{sec:validation}.} In the final part, Section \ref{sec:application}, motivated by bacterial microfluidic pumps \cite{Darnton2004,Kim2008,Martindale2017,Dauparas2018}, we have demonstrated the usefulness of our asymptotic theory by applying it to the interaction of two rotating helical pumps. Here, we have identified the dependence of forces and torques on the distance and phase difference between the helices, which is illustrated in Figs.~\ref{fig:helices_time_average} and \ref{fig:helices_variances_over_time} and made explicit in Eqs.~\eqref{eq:result-meanFz}-\eqref{eq:result-varTz}, \eqref{eq:result-meanFx}, \eqref{eq:result-varFx}-\eqref{eq:result-varTy}. The analytical expressions are also implicitly dependent on the helix geometry through the components $A_{ij}, B_{ij}, D_{ij}$ of the single-helix resistance matrix, which are given in Appendix \ref{app:RFT}, and the force moments $\mathcal{M}_i$ from Appendix \ref{app:forcemoments_RFT}.
Our theory provides us with new physical understanding of the HIs between helical pumps. We find that the pumping force exerted by each rotating helix is reduced due to HIs, and the reduction is greatest when the helical pumps are rotating in phase with each other. Similarly, the torque required to rotate the two helical pumps is lowest when they are in-phase and greatest when they are antiphase, as the helices are working against each other in the latter case. Because we include second-order corrections in our calculation of the average forces and torques acting on the helical pumps, we are able to determine that there is no net attraction or repulsion between the filaments, but rather a sideways migration whose sign depends on the phase difference. However, we identify two persistent hydrodynamic effects which are independent of the phase difference: a swirl in the direction of rotation of the helices and a splaying out at the tips of the helical pumps (i.e.~the ends pointing in the same direction as the angular velocity). We believe that these effects are consistent with the behaviour observed by Kim and co-authors {\color{black}in the initial stage (i.e. when the filaments are still nearly parallel) of their} macroscopic-scale experiments of flagellar bundling \cite{Kim2003}, despite the fact that our theory is intended for \textcolor{black}{$d > L$} while the experiments were carried out in the \textcolor{black}{$d < L$} regime. {\color{black} This suggests that there may be fundamental similarities in the HIs between helical filaments across different regimes of separation. Without further investigation, it is not possible to quantify in which ways the HIs between bacterial flagella within a bundle ($d<L$) are qualitatively different from the HIs between flagellar filaments that are further apart ($d>L$). 
Our theory provides a starting point to investigate these questions further, analytically.} {\color{black} The primary purpose of our asymptotic theory is to provide a method to calculate, analytically, the specific HIs between two rigid filaments, as opposed to previous theoretical studies which focus on the bulk properties of suspensions of fibres \cite{Shaqfeh1990,Mackaplow1996}. The asymptotic theory with RFT coefficients is suitable for this purpose, since all the coefficients have closed-form solutions provided in Appendices \ref{app:RFT} and \ref{app:forcemoments_RFT}. The asymptotic theory with SBT coefficients can provide a quantitative improvement on some of these results, since SBT calculates the force density along the filament with algebraic accuracy, but the ultimate goal of the asymptotic theory is to capture the qualitative features of HIs such as the dependence on filament geometry and relative configuration. A secondary use of the asymptotic theory could be to speed up the simulation of long time-evolution problems governed by HIs or, in special cases, to provide a way to integrate the equations of motion by hand. The reduction in computation time would come from removing the need to recompute the interaction term $\mathcal{J}$ (see Section \ref{sec:comp-method}) at each time step, as the relative orientation of the two filaments changes. Our asymptotic series expansion provides expressions for the HIs between filaments in terms of the resistance matrix of a single filament, which can be precomputed (either by evaluating the analytical expressions from RFT, or by numerically solving the integral equations of SBT for a single filament) and updated at each time step using a rigid-body rotation to reflect changes in filament orientation. This relies on the filaments being rigid so that the shape of their centreline does not change over time.
{\color{black} However, we reiterate that the main purpose of our asymptotic theory is to provide a way to evaluate the HIs between filaments analytically, and not to challenge well-established computational methods.} For the simulation of flexible fibres, there exist specialised computational methods that can handle large numbers of filaments with HIs efficiently \cite{Tornberg2004,Maxian2021}.} One advantage of the current asymptotic theory is the compactness of the final results in Eqs.~\eqref{eq:result-C1}, \eqref{eq:result-S2}, and \eqref{eq:result-C2}, which means they can be used to develop analytical models for certain hydrodynamic phenomena that have only been studied computationally until now. Another advantage is that the results of Eqs.~\eqref{eq:result-C1}, \eqref{eq:result-S2}, and \eqref{eq:result-C2} are valid for arbitrary filament shapes, in contrast to other theories of HIs which require a small-amplitude assumption for the shape of the filament. However, no theory is without its limitations. {\color{black} One important restriction is that, within the current setup, our asymptotic theory can only handle filaments in an infinite fluid domain. Further work would be needed to account for external surfaces such as the cell body of the organism to which the filaments might be attached.} {\color{black} Just as important is the fact that our asymptotic theory, in its current state,} can only fully describe the interaction of rigid filaments. A possible extension is to refine the series expansions for the force distributions from Eqs.~\eqref{eq:expansion-f1} and \eqref{eq:expansion-f2}, which are valid for any type of filament, in order to obtain a comprehensive theory for HIs between flexible filaments as well. We also note that we have neglected HIs due to moment distributions along the centrelines of the filaments.
This is because such contributions would scale like $\epsilon^2/d^2$ and would always be smaller than the second-order corrections from the force distributions, which scale like $\log(\epsilon)/d^2$ and are the final terms included in our asymptotic theory. We have also considered the interactions between multiple slender filaments but only in a qualitative way, when discussing the physics of HIs in a circular array of helical pumps. Our asymptotic theory can be easily extended to include HIs between more than two filaments, because it is based on the method of reflections. With this approach, $j$th-order corrections to the extended resistance matrix come from hydrodynamic effects that have reflected $j$ times between the filament that induces the flow and the filament that feels its effect. The only complication comes from the fact that, in a collection of $N>2$ filaments, there is no single expansion parameter. Instead, there are $\frac{1}{2}N(N-1)$ pairwise distances between the filaments. Hence, the order in which corrections appear in the series expansion must be considered carefully, {\color{black}unless the filaments are so far apart that it is sufficient to consider first-order corrections due to pairwise interactions}. There are many possible applications for the theoretical results presented in this paper, beyond the case of helical pumps discussed in Section \ref{sec:application}. Our asymptotic theory can be used to investigate the collective swimming of elongated microorganisms like the \textit{Spirochaetes} and \textit{Spiroplasma}, as well as some artificial micro-swimmers (e.g.~helical micromachines actuated by an external magnetic field). {\color{black} Amongst all moving appendages in the microscopic world, the closest to being rigid are the bacterial flagellum and nodal cilia, which makes them more suitable for applications of our asymptotic theory. 
Although the distance between flagellar filaments within a bundle is less than their contour length, there are other situations in which bacterial flagella interact on a larger length scale, making these problems directly relevant to our asymptotic theory. Examples include the HIs between filaments at either pole of an amphitrichous bacterium or filaments belonging to different cells in a sparse bacterial carpet or swarm.} Following an extension of our theory to the case of flexible filaments, as discussed before, one could also examine the HIs between eukaryotic cilia and flagella, or between fluctuating polymeric filaments in the cytoplasm, such as actin filaments and microtubules. Another, more technical, avenue for future research will be to bridge the gap between near-field {\color{black}($d\ll L$)} theories of HIs \cite{Man2016} {\color{black} and the present study ($d > L$)}. \section*{Acknowledgements} We gratefully acknowledge funding from the George and Lillian Schiff Fund through the University of Cambridge (studentship supporting M.T.C.) and the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement 682754 to E.L.). \section{Introduction} The microscopic world is filled with examples of rigid structures that interact with each other as they move through fluids. In the biological context, these can range from very dense systems such as bacterial swarms \cite{Darnton2010}, where steric interactions are important, to regularly-spaced arrays of cilia, which can be coupled both hydrodynamically (through the fluid) \cite{Brumley2014} and elastically (through the cell membrane) \cite{Wan2016,Kanso2021}, down to dilute suspensions of planktonic bacteria and algae \cite{Ishikawa2009}, where only hydrodynamic interactions prevail. 
Outside biology, hydrodynamic interactions are important in the dynamics of sedimentation and the {\color{black} rheology of suspensions \cite{Shaqfeh1990,Mackaplow1996,Guazelli2011,Shelley2019}}, as well as the collective behaviour of synthetic active particles \cite{Ramaswamy2010,Marchetti2013}. For artificial devices such as diffusio- or electrophoretic swimmers, one must also consider long-range chemical interactions in addition to the hydrodynamics \cite{Sharifi2016,Varma2018,Varma2019,Saha2019}. Hydrodynamic interactions (HIs) are of particular interest because, due to their long-range nature, they can give rise to collective behaviour in systems with a large number of active, self-propelled particles \cite{Vicsek2012,Elgeti2015}. A popular approach for studying active matter is to coarse-grain the system and postulate phenomenological equations based on symmetries, but it remains important to capture the microscopic origin of interactions between the particles. Therefore, the study of HIs between a small number of suspended bodies is the necessary link between understanding the dynamics of a single body in an unbounded fluid and that of a large collection thereof. On a microscopic length scale, the physics of the fluid is dominated by viscous dissipation, and inertia is negligible most of the time. Therefore, the interaction of micro-swimmers is usually a low Reynolds number problem, governed by the Stokes equations. Naturally, HIs are important in biology across all Reynolds numbers. For instance, they influence predator-prey interactions and sexual reproduction in small marine organisms such as copepods, which operate at low to intermediate Reynolds number \cite{Arezoo2016}. HIs are also very important in schools of fish (usually high Reynolds number), where they give rise to stable swimming formations and affect endurance and propulsive efficiency \cite{Weihs1973,Dai2018,Pan2020}.
At intermediate and high Reynolds number, however, the problem of HIs is usually approached with experimental and computational tools. In contrast, in the low Reynolds number limit, the linearity of the Stokes equations allows for exact analytical solutions if the geometry is simple enough, e.g.~the interaction between two rigid spheres. For rigid spheres at low Reynolds number, exact analytical solutions were found for the flow field around two spheres of arbitrary size but specified orientation \cite{Jeffery1915,StimsonJeffery1926,Goddard2020}, as well as around two identical spheres with arbitrary orientation \cite{Goldman1966,Wakiya1967}. These exact solutions are possible either by exploiting a cylindrical symmetry in the problem \cite{Jeffery1915,StimsonJeffery1926}, or by using a bispherical coordinate system \cite{Goddard2020,Goldman1966,Wakiya1967}. These classical analytical results were later confirmed by computational studies \cite{Dabros1985,Kim1985,YoonKim1987}. In addition to the exact solutions, there are also approximate analytical solutions for the interaction of two spheres sufficiently far apart \cite{Felderhof1977,Cichocki1988}. These solutions are expressed as series expansions in inverse powers of the distance between the spheres, and have the advantage of circumventing bispherical coordinates. For more than two spheres, the interactions become more complicated, but researchers have studied this problem experimentally \cite{Jayaweera1964} and numerically \cite{Cichocki1994}, and have also made analytical progress in the form of a far-field theory \cite{Hocking1964}. For shapes more complex than a sphere, it is often necessary to approach the modelling problem with computational tools. 
In the biological context, full boundary-element method (BEM) simulations have been carried out to study the HIs between micromachines with spiral tails \cite{Nasseri1997}, uniflagellar bacteria swimming side by side \cite{Ishikawa2007}, and spherical colonies of algae swimming near boundaries \cite{Ishikawa2020}. Other computational studies have considered the interactions between more abstract types of swimmers such as dumbbell-type \cite{Gyrya2010} or squirmer-type pushers and pullers \cite{Goetze2010,Molina2013}. One important question to consider when talking about HIs between microorganisms is whether there is any net attraction or repulsion between the swimmers, and if they settle into stable swimming patterns. These questions are also motivated by experimental observations of swimming bacteria and volvocine algae \cite{Liao2007,Drescher2009}. In this study we focus on HIs between slender filaments at low Reynolds number, in order to tackle the interactions between swimming appendages such as cilia and flagella, rather than entire microorganisms. If HIs between microorganisms are important for the stability of swimming patterns in groups of swimmers, then the HIs between swimming appendages are essential to single-cell behaviour. This includes questions such as the speed and state of flagellar synchronisation \cite{Kim2004b,Reigh2012,Reigh2013,Brumley2014,Chakrabarti2019,Man2020}, the emergence of swimming gaits \cite{Wan2016} and metachronal waves \cite{Joanny2007mcw,Elgeti2013}, and the propulsive capacity of an organism with multiple appendages \cite{Elgeti2013,Nguyen2018}. 
Much previous work in this area is computational \cite{Kim2004b,Reigh2012,Reigh2013,Chakrabarti2019,Man2020,Nguyen2018,Elgeti2013}, but there has also been some analytical work on the HIs between nearby slender filaments \cite{Man2016}, as well as experimental work on HIs between the beating cilia of live algae \cite{Brumley2014}, and between rotating helices in macro-scale models of bacterial flagella \cite{Kim2003,Kim2004a}. After spheres, the next shapes that can be tackled analytically are slender filaments. This is because we now have well-developed theories for modelling the flows generated by moving filaments using a distribution of force singularities along the centreline of the slender body. One very successful analytical method is resistive-force theory (RFT) \cite{Hancock1953,Gray1955,Lighthill1996_helical}, which describes the anisotropic drag on a slender filament by a linear and local relationship between the force and velocity distributions along the centreline. Since it neglects non-local interactions along the filament, RFT is quantitatively accurate only for exponentially slender filaments, but it usually reproduces the qualitative features of the flow and it is analytically tractable, which leads to a deeper physical understanding. For more accurate quantitative results, one can use slender-body theory (SBT), which takes into account both local and non-local hydrodynamic effects \cite{Cox1970,Lighthill1976,Johnson1980}. While RFT is logarithmically correct, the errors in SBT are algebraically small. In this investigation we apply the theoretical techniques commonly used for single filaments (RFT and SBT) to describe the HIs between two slender filaments {\color{black} separated by a distance, $d$, greater than the contour length of the filaments, $L$}. 
In a similar way to previous studies on spheres \cite{Felderhof1977,Cichocki1988}, we express the force distribution along each filament as a series expansion in inverse powers of {\color{black}$d/L>1$}. This uses principles from the method of reflections, where some contributions in the expansion correspond to hydrodynamic effects that have reflected back and forth between the filaments a number of times. {\color{black} The method of scattering has previously been employed in the theoretical study of suspensions of rods \cite{Shaqfeh1990,Mackaplow1996}, but these studies focus on the bulk rheology of a suspension of passive fibres, whereas our current purpose is to derive analytical expressions for the specific HIs between two active slender filaments. Furthermore, the present study can handle helical and other shapes of filaments, while the aforementioned work was limited to straight rods.} Our final analytical results pertain specifically to rigid filaments, whose motion can be encapsulated in one mathematical object -- the resistance matrix. For multiple filaments, it is the extended resistance matrix (see also Ref.~\cite{Cichocki1988}) that relates the full dynamics (forces and torques on all the filaments) to the full kinematics (the linear and angular velocities of all the filaments). {\color{black} We expand our solution for the extended resistance matrix up to and including second-order corrections in $L/d<1$. This is motivated by our subsequent application to rotating helical pumps, where the net attraction or repulsion between the helices is only noticeable at second order. It is also at second order that the power of slender-filament methods like RFT and SBT comes into play. The first-order contribution of HIs is the same for slender filaments as it is for spheres or any rigid object that exerts a net force on the fluid. 
At second order, however, we have contributions not only from the flow that is reflected between the objects (which is the same for spheres), but also from expanding the shape of the filament centreline about its centre.} The paper is structured around three central parts -- the derivation, validation, and application of the theory for HIs between slender filaments at low Reynolds number. In Section \ref{sec:model} we derive analytical expressions for the extended resistance matrix of two arbitrarily-shaped rigid slender filaments, written as a series expansion up to second-order corrections in inverse distance. {\color{black} We then evaluate the coefficients in this series using both RFT and SBT, and in Section \ref{sec:validation} we validate the asymptotic theory against numerical simulations based on SBT}. Finally, in Section \ref{sec:application}, we apply both theory and simulations to the case of two helical pumps rotating side by side in an infinite fluid. We perform a thorough investigation of the forces and torques exerted by the helical pumps, and derive analytical expressions that capture the qualitative effects of HIs with varying distance and phase difference between the helices. Based on our understanding of pairwise HIs between helical pumps, we then provide a perspective on the HIs within a circular array of helical pumps, and we conclude this study in Section \ref{sec:discussion} by discussing our results in a wider context. \section{Asymptotic model for hydrodynamic interactions} \label{sec:model} In this section, we consider the HIs between two rigid slender filaments {\color{black} separated by a distance, $d$, greater than their contour length, $L$}. We quantify the dynamics of the interacting filaments through an extended resistance matrix, for which we derive a series expansion solution up to second-order corrections in {\color{black} $L/d<1$}. 
\subsection{Geometrical setup} \begin{figure} \landscapetrim{17cm}{9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_0.pdf} \caption{Geometrical setup of the problem. (a) Two rigid filaments of dimensionless contour length $L=2$ interact with each other hydrodynamically as they move through a viscous fluid. Our asymptotic theory is valid {\color{black} for sufficiently large inter-filament separation, $d > L$, and in the limit of small filament thickness, $\epsilon \ll 1$}. We identify three useful coordinate systems: the laboratory frame (green), the interaction frame for a pair of filaments (blue), and the body frame for an individual filament (black). (b) Parameters describing the geometry of a helical filament, which we will use for the validation and application of our asymptotic theory.} \label{fig:setup} \end{figure} We begin by sketching the setup of our hydrodynamic problem and introducing the mathematical notation. In Fig.~\ref{fig:setup} (a) we illustrate the different coordinate systems used in this paper. First, there is the laboratory frame $\{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\}$ in usual Cartesian coordinates. Then there is a body frame $\{\mathbf{e}_1^{(k)},\mathbf{e}_2^{(k)},\mathbf{e}_3^{(k)}\}$ for each filament, labelled by $k$. 
Relative to the laboratory frame, we define the body frame vectors for a filament with orientation $\mathbf{p} = (\phi,\theta,\chi)$ to be \begin{eqnarray} \mathbf{e}_1 &=& \cos\chi \left[ \cos\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) -\sin\theta \mathbf{e}_z \right] + \sin\chi \left[ -\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y \right], \label{eq:bodyframe-A} \\ \mathbf{e}_2 &=& -\sin\chi \left[ \cos\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) -\sin\theta \mathbf{e}_z \right] + \cos\chi \left[ -\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y \right] , \\ \mathbf{e}_3 &=& \sin\theta\left(\cos\phi \mathbf{e}_x + \sin\phi \mathbf{e}_y\right) + \cos\theta \mathbf{e}_z. \label{eq:bodyframe-Z} \end{eqnarray} Working outwards through the transformations applied to the laboratory frame vectors $\{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\}$, we see that the body frame $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ is obtained by a rotation through angle $\phi$ around the vertical, $\mathbf{e}_z$, then a tilting by angle $\theta$ away from the vertical (i.e.~a rotation through angle $\theta$ around $-\sin\phi \mathbf{e}_x + \cos\phi \mathbf{e}_y$), and finally a rotation by angle $\chi$ around the axis $\mathbf{e}_3$. Relative to the body frame, we write the position of the centreline and the unit tangent along an arbitrarily-shaped filament $k$ as \begin{eqnarray} \mathbf{r}_k(s) &=& x^{(k)}_1(s) \mathbf{e}_1^{(k)} + x^{(k)}_2(s) \mathbf{e}_2^{(k)} + x^{(k)}_3(s)\mathbf{e}_3^{(k)}, \\ \hat{\mathbf{t}}_k(s) &=& \frac{\partial x^{(k)}_1}{\partial s} \mathbf{e}_1^{(k)} + \frac{\partial x^{(k)}_2}{\partial s} \mathbf{e}_2^{(k)} + \frac{\partial x^{(k)}_3}{\partial s}\mathbf{e}_3^{(k)}, \label{eq:filament-arbitrary} \end{eqnarray} where $s$ is the arc length along the filament. 
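As a quick numerical check of Eqs.~\eqref{eq:bodyframe-A}-\eqref{eq:bodyframe-Z}, the short Python sketch below (our own illustration, not part of the derivation; the angle values are arbitrary) confirms that the body frame is a right-handed orthonormal triad for any orientation $(\phi,\theta,\chi)$:

```python
import numpy as np

def body_frame(phi, theta, chi):
    """Body-frame vectors e1, e2, e3 for orientation angles (phi, theta, chi),
    following the explicit expressions in the text."""
    # a = cos(theta)*(cos(phi) ex + sin(phi) ey) - sin(theta) ez
    a = np.array([np.cos(theta) * np.cos(phi),
                  np.cos(theta) * np.sin(phi),
                  -np.sin(theta)])
    # b = -sin(phi) ex + cos(phi) ey
    b = np.array([-np.sin(phi), np.cos(phi), 0.0])
    e1 = np.cos(chi) * a + np.sin(chi) * b
    e2 = -np.sin(chi) * a + np.cos(chi) * b
    e3 = np.array([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)])
    return e1, e2, e3

# Orthonormality and right-handedness for an arbitrary orientation
e1, e2, e3 = body_frame(0.4, 1.1, -0.8)
E = np.stack([e1, e2, e3])
assert np.allclose(E @ E.T, np.eye(3))
assert np.allclose(np.cross(e1, e2), e3)
```

The check $\mathbf{e}_1\times\mathbf{e}_2=\mathbf{e}_3$ confirms the interpretation of the three transformations as a composition of proper rotations.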
Finally there is a frame of interaction, $\{\mathbf{e}_x^{(j \to k)},\mathbf{e}_y^{(j \to k)},\mathbf{e}_z^{(j \to k)}\}$, defined for every pair of filaments $j$ and $k$ such that the unit vector $\mathbf{e}_x^{(j \to k)}$ points from the origin of the body frame of filament $j$ to that of filament $k$. This frame is useful for discussing interactions between three filaments or more, where there could be multiple pairwise interaction frames distinct from the absolute laboratory frame. However, in our discussion of interactions between two filaments, we may assume without loss of generality that the interaction frame is identical to the laboratory frame. Our asymptotic theory is written in terms of dimensionless quantities. We measure lengths in units of $\tilde{L}/2$ and viscosity in units of $\tilde{\mu}$, where $\tilde{L}$ is the integrated length of the filament and $\tilde{\mu}$ is the viscosity of the medium. This is equivalent to taking $L=2$ and $\mu=1$ in dimensionless terms. In these units, the cross-sectional radius of the filament, $\epsilon$, and the centre-to-centre distance between the filaments, $d$, must satisfy {\color{black}$\epsilon \ll 1 < d$} in order for our theory to hold. We also note that, in our notation, the arc length falls in the interval $s\in (-1,+1)$, giving a total dimensionless length $L=2$ for the filament, and placing the midpoint of the filament at $s=0$. In Fig.~\ref{fig:setup} (b), we illustrate a filament geometry of particular interest -- a helical filament with helical radius, $R$, and helical pitch, $p$. It is convenient to introduce the helix angle $\psi = \tan^{-1}(2\pi R/p)$ and the number of helical turns $N=L/\sqrt{(2\pi R)^2+p^2}$. In terms of these, the dimensionless radius of the helix is $R = \sin(\psi)/(\pi N)$ and the pitch is $p = 2\cos(\psi)/N$.
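These geometric relations are straightforward to verify numerically. The sketch below (an illustration of ours, with arbitrary parameter values) checks that the dimensionless radius and pitch are consistent with the definitions of $\psi$ and $N$ and with the total length $L=2$:

```python
import numpy as np

def helix_parameters(psi, N):
    """Dimensionless helix radius and pitch from the helix angle psi and
    the number of turns N, using R = sin(psi)/(pi N) and p = 2 cos(psi)/N."""
    R = np.sin(psi) / (np.pi * N)
    p = 2.0 * np.cos(psi) / N
    return R, p

# Consistency checks: psi = atan(2 pi R / p) and N * sqrt((2 pi R)^2 + p^2) = L = 2
psi, N = 0.6, 2.5
R, p = helix_parameters(psi, N)
assert np.isclose(np.arctan2(2.0 * np.pi * R, p), psi)
assert np.isclose(N * np.hypot(2.0 * np.pi * R, p), 2.0)
```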
We write the centreline of helix $k$ relative to the midpoint of the helical axis, $\mathbf{x}_k$, as \begin{equation} \mathbf{r}_k(s) = R \cos(\pi N s) \mathbf{e}_1^{(k)} + \sigma R \sin(\pi N s) \mathbf{e}_2^{(k)} + s\cos\psi\mathbf{e}_3^{(k)}, \label{eq:centreline} \end{equation} where $s\in (-1,+1)$ is the arc length along the helix and $\sigma=\pm 1$ is the chirality (negative for left-handed helices, positive for right-handed). We can also write the unit tangent vector along the centreline as \begin{equation} \hat{\mathbf{t}}_k(s) = -\sin\psi \sin(\pi N s) \mathbf{e}_1^{(k)} + \sigma \sin\psi \cos(\pi N s) \mathbf{e}_2^{(k)} + \cos\psi\mathbf{e}_3^{(k)}. \label{eq:tangent} \end{equation} The calculations in Section \ref{sec:model} are valid for filaments of arbitrary shape, but in later sections we focus on helical filaments for the purposes of validating and applying our analytical results. \subsection{Hydrodynamic setup} The goal is to find a relationship between the kinematics and the dynamics of the two filaments. This is generally quantified by an extended resistance matrix, which relates the forces and torques exerted by the filaments to their linear and angular velocities, such that \begin{equation} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \\ \mathbf{F}_2 \\ \mathbf{T}_2 \end{pmatrix} = \begin{pmatrix} \mathbf{S}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2) & \mathbf{C}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2) \\ \mathbf{C}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) & \mathbf{S}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) \end{pmatrix} \begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \\ \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix}, \label{eq:defn-resistance-matrix} \end{equation} where the matrix $\mathbf{S}$ stands for self-induced dynamics and the matrix $\mathbf{C}$ represents cross-interactions between the filaments. 
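As a sanity check on the helix parametrisation, the Python sketch below (our own illustration; parameter values are arbitrary) verifies that the tangent of Eq.~\eqref{eq:tangent} is indeed the arc-length derivative of the centreline in Eq.~\eqref{eq:centreline} and has unit norm:

```python
import numpy as np

def helix_centreline(s, psi, N, sigma=-1):
    """Centreline r(s) and unit tangent t(s) of a helix in its body frame;
    sigma = -1 for a left-handed helix, +1 for a right-handed one."""
    R = np.sin(psi) / (np.pi * N)  # dimensionless helical radius
    r = np.array([R * np.cos(np.pi * N * s),
                  sigma * R * np.sin(np.pi * N * s),
                  s * np.cos(psi)])
    t = np.array([-np.sin(psi) * np.sin(np.pi * N * s),
                  sigma * np.sin(psi) * np.cos(np.pi * N * s),
                  np.cos(psi)])
    return r, t

# The tangent is the s-derivative of r (checked by finite differences)
# and has unit length, so s is indeed the arc length.
psi, N, s, h = 0.7, 3.0, 0.3, 1e-6
r0, t0 = helix_centreline(s, psi, N)
r1, _ = helix_centreline(s + h, psi, N)
assert np.isclose(np.linalg.norm(t0), 1.0)
assert np.allclose((r1 - r0) / h, t0, atol=1e-5)
```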
We have made it explicit that the resistance matrix depends on the positions, $\mathbf{x}_j$, and orientations, $\mathbf{p}_j$, of the two filaments. {\color{black} Note that even the matrix $\mathbf{S}$ for self-induced dynamics depends on the position of both filaments, because fluid disturbances induced by the motion of one filament will reflect off the second filament and travel back to the position where they originated.} Because $\mathbf{F}_j$ and $\mathbf{T}_j$ are the forces and torques exerted by the filaments on the fluid, the resistance matrix is positive definite and, by the reciprocal theorem, also symmetric. In particular, this means that $\mathbf{C}(\mathbf{x}_2,\mathbf{x}_1,\mathbf{p}_2,\mathbf{p}_1) = \mathbf{C}(\mathbf{x}_1,\mathbf{x}_2,\mathbf{p}_1,\mathbf{p}_2)^T$. Without loss of generality for the two filament case, we may define the laboratory frame to be centred on the first filament, so that $\mathbf{x}_1 = 0$. Thus, the resistance matrix only depends on the directed distance $\mathbf{d} = \mathbf{x}_2 - \mathbf{x}_1$ so that \begin{eqnarray} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \end{pmatrix} &=& \phantom{-}\mathbf{S}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2)\begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \end{pmatrix} + \phantom{-}\mathbf{C}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2)\begin{pmatrix} \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix}, \label{eq:F_T_first_helix} \\ \begin{pmatrix} \mathbf{F}_2 \\ \mathbf{T}_2 \end{pmatrix} &=& \mathbf{S}(-\mathbf{d},\mathbf{p}_2,\mathbf{p}_1)\begin{pmatrix} \mathbf{U}_2 \\ \mathbf{\Omega}_2 \end{pmatrix} + \mathbf{C}(-\mathbf{d},\mathbf{p}_2,\mathbf{p}_1)\begin{pmatrix} \mathbf{U}_1 \\ \mathbf{\Omega}_1 \end{pmatrix}. \label{eq:F_T_second_helix} \end{eqnarray} If the filaments are slender ($\epsilon\ll 1$), then we may represent the dynamics of filament $k$ by a force density $\mathbf{f}_k(s)$ along its centreline. 
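The block structure and reciprocity property of the extended resistance matrix can be illustrated schematically. In the sketch below (ours), the blocks are random placeholders rather than quantities computed from any filament geometry; they serve only to display the symmetry $\mathbf{C}_{21} = \mathbf{C}_{12}^T$ and the positive definiteness that holds when the self-terms dominate a weak cross-interaction:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_spd(n):
    """Random symmetric positive-definite block (placeholder for S)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

S1, S2 = make_spd(6), make_spd(6)      # self-interaction blocks
C12 = 0.1 * rng.standard_normal((6, 6))  # weak cross-interaction block

# Assemble the 12x12 extended resistance matrix with C21 = C12^T (reciprocity)
Rext = np.block([[S1, C12], [C12.T, S2]])

assert np.allclose(Rext, Rext.T)               # symmetric
assert np.all(np.linalg.eigvalsh(Rext) > 0.0)  # positive definite
```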
{\color{black} We define an arclength-dependent drag tensor $\mathbf{\Sigma}(s)$ which relates the force density to the relative velocity of the filament centreline through the expression \begin{equation} \mathbf{f}_k(s) = \mathbf{\Sigma}_k(s) \cdot \left[\mathbf{u}(\mathbf{r}_k(s))-\mathbf{u}_\infty(\mathbf{r}_k(s))\right]. \label{eq:defn-force-density-RFT} \end{equation} In Section \ref{sec:evalcoeff} we will return to the drag tensor and explain how to evaluate it using resistive-force theory (RFT) and slender-body theory (SBT). Until then, the derivation of the asymptotic series expansion is independent of which method we use to characterise the drag on an individual filament.} For a rigid filament, the velocity of the centreline is given by the rigid body motion \begin{equation} \mathbf{u}(\mathbf{r}_k(s)) = \mathbf{U}_k + \mathbf{\Omega}_k\times\mathbf{r}_k(s). \label{eq:defn-u-vector-form} \end{equation} To make our notation more compact, we introduce a kinematics vector with six components made through the concatenation of the linear and angular velocities of the filament, i.e.~$(\mathbf{U}_k,\mathbf{\Omega}_k)$. Then, using summation convention, we may write the velocity of the first filament's centreline as \begin{equation} u_i(\mathbf{r}_1(s)) = (\delta_{ij}+\varepsilon_{i,j-3,k}(\mathbf{r}_1(s))_k) (\mathbf{U}_1,\mathbf{\Omega}_1)_j, \label{eq:defn-u-suffix-notation} \end{equation} where the index $j$ is summed over from $1$ to $6$, while the other free indices run from $1$ to $3$ as usual, and the Kronecker delta and Levi-Civita symbol are understood to be identically zero if any index falls outside the normal range $\{1,2,3\}$. Next, we consider the background flow at the position of the first filament, which is nothing more than the flow induced by the second filament. 
{\color{black} At distances much greater than the filament thickness, $\epsilon$, the dominant flow induced by the second filament is the cumulative effect of a distribution of Stokeslets placed along its centreline, and represented by the force density $\mathbf{f}_2(s)$. Hence, we can express the background flow as \begin{equation} \mathbf{u}_\infty(\mathbf{r}_1(s)) = \frac{1}{8\pi\mu} \sdint{\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} \cdot\mathbf{f}_2(s')}, \label{eq:defn-induced-flow} \end{equation} where $\mathbf{R}_d(s,s') = \mathbf{d} + \mathbf{r}_2(s') - \mathbf{r}_1(s)$ is the relative distance between a point $s'$ on the centreline of the second filament and a point $s$ on the centreline of the first filament.} Note that $\mu = 1$ in our dimensionless units, but was included for clarity. {\color{black} Higher-order singularities, such as the source dipoles included in computational studies \cite{Tornberg2004,Maxian2021}, decay at least as fast as the inverse cube of distance, and hence do not contribute to HIs at order $\mathcal{O}(d^{-2})$, which is as far as we go with the asymptotic series expansion in this paper.} To obtain the total hydrodynamic force and torque exerted by the filament, we need to calculate force moments along the length of the filament, so that \begin{equation} \mathbf{F} = \int_{-1}^{+1} \mathbf{f}(s) \mathrm{d}s, \quad \mathbf{T} = \int_{-1}^{+1} \mathbf{r}(s)\times\mathbf{f}(s) \mathrm{d}s. \label{eq:F_T_vector_form} \end{equation} Using the compact notation introduced earlier, we can write an expression for the dynamics vector $(\mathbf{F}_1,\mathbf{T}_1)$ of the first filament as \begin{equation} (\mathbf{F}_1,\mathbf{T}_1)_i = \sint{(\delta_{ij}+\varepsilon_{i-3,kj}(\mathbf{r}_1(s))_k)(\mathbf{f}_1(s))_j}, \label{eq:defn-F-T-suffix-notation} \end{equation} where the index $i$ runs from $1$ to $6$, while the other indices are summed over from $1$ to $3$. 
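As a concrete illustration (our own sketch, not part of the derivation), the compact six-component notation of Eqs.~\eqref{eq:defn-u-suffix-notation} and \eqref{eq:defn-F-T-suffix-notation} amounts to a $3\times6$ kinematics matrix acting on the concatenated vector $(\mathbf{U},\mathbf{\Omega})$. Assuming NumPy, it can be checked against the vector form of Eq.~\eqref{eq:defn-u-vector-form}:

```python
import numpy as np

def eps(i, j, k):
    """Levi-Civita symbol for 0-indexed indices."""
    return (i - j) * (j - k) * (k - i) / 2

def kinematics_matrix(r):
    """3x6 matrix G with u = G @ (U, Omega), cf. Eq. (defn-u-suffix-notation)."""
    G = np.zeros((3, 6))
    G[:, :3] = np.eye(3)
    for i in range(3):
        for j in range(3):
            for k in range(3):
                G[i, 3 + j] += eps(i, j, k) * r[k]
    return G

rng = np.random.default_rng(0)
r0 = rng.standard_normal(3)   # a point on the centreline
U = rng.standard_normal(3)    # linear velocity
Om = rng.standard_normal(3)   # angular velocity

u = kinematics_matrix(r0) @ np.concatenate([U, Om])
# agrees with the rigid-body motion u = U + Omega x r of Eq. (defn-u-vector-form)
assert np.allclose(u, U + np.cross(Om, r0))
```

The transpose of the same matrix generates the force/torque moments of Eq.~\eqref{eq:defn-F-T-suffix-notation}.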
\subsection{Asymptotic series formulation} Equations \eqref{eq:defn-force-density-RFT}-\eqref{eq:defn-induced-flow} define a coupled system of equations for the force densities on the two filaments, which we will solve {\color{black} in the regime $d > L =2$}. We write the force distribution along each filament as an asymptotic series expansion \begin{equation} \mathbf{f}_k(s) = \mathbf{f}_k^{(0)}(s) + d^{-1}\mathbf{f}_k^{(1)}(s) + d^{-2}\mathbf{f}_k^{(2)}(s) + \mathcal{O}(d^{-3}), \label{eq:expn-f} \end{equation} with the ultimate goal of calculating series expansions for the self-induced and cross-interaction resistance matrices in Eq.~\eqref{eq:F_T_first_helix}. We can write these as \begin{eqnarray} \mathbf{S}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2) &=& \mathbf{S}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) + d^{-1}\mathbf{S}^{(1)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + d^{-2}\mathbf{S}^{(2)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + \mathcal{O}(d^{-3}), \label{eq:expn-S} \\ \mathbf{C}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2) &=& \mathbf{C}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) + d^{-1}\mathbf{C}^{(1)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + d^{-2}\mathbf{C}^{(2)}(\hat{\mathbf{d}}, \mathbf{p}_1,\mathbf{p}_2) + \mathcal{O}(d^{-3}), \label{eq:expn-C} \end{eqnarray} where the matrices at each order only depend on the direction of separation, $\hat{\mathbf{d}}$, with all dependence on the magnitude of separation, $|\mathbf{d}|=d$, captured by the algebraic power of the given order. Because the leading order is given by the limit $d \to \infty$, where the filaments do not know of each other's presence, we deduce that \begin{equation} \mathbf{S}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = \mathbf{S}^{(0)}(\mathbf{p}_1), \quad \mathbf{C}^{(0)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = \mathbf{0}. 
\label{eq:result-C0} \end{equation} In order to solve Eq.~\eqref{eq:defn-force-density-RFT} as an asymptotic series, we need to expand the flow induced by the second filament in inverse powers of distance. {\color{black} The Stokeslets decay like $1/|\mathbf{R}_d|$, so we first write the magnitude of the relative distance as} \begin{equation} |\mathbf{R}_d| = d\left(1 + \frac{2\hat{\mathbf{d}}\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d} + \frac{|\mathbf{r}_2(s')-\mathbf{r}_1(s)|^2}{d^2} \right)^{1/2}. \end{equation} {\color{black}Because all points on the filament centreline lie within a sphere of diameter $L$ around the centre, we have $|\mathbf{r}_2(s')-\mathbf{r}_1(s)| < L < d$, so we can apply the binomial expansion to get} \begin{eqnarray} \frac{1}{|\mathbf{R}_d|} &=& \frac{1}{d} - \frac{\hat{\mathbf{d}}\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d^2} + \mathcal{O}(d^{-3}),\\ \hat{\mathbf{R}}_d &=& \hat{\mathbf{d}} + \frac{(\mathbf{I} - \hat{\mathbf{d}}\dhatb)\cdot(\mathbf{r}_2(s')-\mathbf{r}_1(s))}{d} + \mathcal{O}(d^{-2}). 
\end{eqnarray} {\color{black} Note that these binomial expansions are valid for any $d>L$, and higher accuracy can be obtained by including more terms in the series.} Therefore, we can expand the induced flow in Eq.~\eqref{eq:defn-induced-flow} as \begin{equation} u_{\infty,i}(\mathbf{r}_1(s)) = \sdint{\left(d^{-1} J_{ij}(\hat{\mathbf{d}}) + d^{-2}K_{ijp}(\hat{\mathbf{d}})(\mathbf{r}_2(s')-\mathbf{r}_1(s))_p + \mathcal{O}(d^{-3}) \right)(\mathbf{f}_2(s'))_j}, \label{eq:expn-induced-flow} \end{equation} where the second-rank tensor \begin{equation} J_{ij}(\hat{\mathbf{d}}) = \frac{\delta_{ij} + \hat{d}_i\hat{d}_j}{8\pi\mu} \label{eq:defn-J} \end{equation} represents the leading-order Stokeslet induced by the second filament, and the third-rank tensor \begin{equation} K_{ijp}(\hat{\mathbf{d}}) = \frac{\hat{d}_i\delta_{jp} + \hat{d}_j\delta_{ip} - \hat{d}_p\delta_{ij} - 3\hat{d}_i\hat{d}_j\hat{d}_p}{8\pi\mu} \label{eq:defn-K} \end{equation} represents higher-order moments of the force distribution along the second filament. \subsection{Leading-order dynamics} \label{sec:leading-order} The induced flow, Eq.~\eqref{eq:expn-induced-flow}, makes no contribution to Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(1)$. By using Eq.~\eqref{eq:defn-u-suffix-notation} to express the rigid-body motion of the filament, we find that the leading-order force distribution is given by \begin{equation} (\mathbf{f}_1^{(0)}(s))_i = (\mathbf{\Sigma}_1(s))_{ij}(\delta_{jk}+\varepsilon_{j,k-3,l}(\mathbf{r}_1(s))_l) (\mathbf{U}_1,\mathbf{\Omega}_1)_k.
\label{eq:result-f0} \end{equation} Then, by using Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the total force and torque exerted by the filament, and putting the result in the form of Eq.~\eqref{eq:F_T_first_helix}, we find that \begin{equation} S_{ij}^{(0)}(\mathbf{p}_1) = \sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l) (\mathbf{\Sigma}_1(s))_{km}(\delta_{mj}+\varepsilon_{j-3,nm}(\mathbf{r}_1(s))_n)}, \label{eq:result-S0} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed over from $1$ to $3$. Note that the integral depends implicitly on the orientation $\mathbf{p}_1$ of the filament through the filament centreline $\mathbf{r}_1$ and {\color{black} the tensor} $\mathbf{\Sigma}_1$. The self-induced resistance matrix $\mathbf{S}^{(0)}(\mathbf{p}_1)$ can be obtained, for any orientation $\mathbf{p}_1$ of the filament, by applying a change of basis to the resistance matrix expressed in the body frame of the filament, which we denote by \begin{equation} \mathbf{S}_0 = \begin{pmatrix} \mathbf{A} & \textbf{B} \\ \mathbf{B}^T & \mathbf{D} \end{pmatrix} \equiv \mathbf{S}^{(0)}(\mathbf{0}). \label{eq:defn-S0} \end{equation} If $\mathbf{Q}(\mathbf{p}_1)$ is the orthogonal matrix whose columns are the unit vectors $\{\mathbf{e}_1^{(1)},\mathbf{e}_2^{(1)},\mathbf{e}_3^{(1)}\}$ defined in Eqs.~\eqref{eq:bodyframe-A}-\eqref{eq:bodyframe-Z}, then the self-induced resistance matrix for orientation $\mathbf{p}_1$ is \begin{equation} \mathbf{S}^{(0)}(\mathbf{p}_1) = \begin{pmatrix} \mathbf{Q}(\mathbf{p}_1) & \mathbf{0} \\ \mathbf{0} &\mathbf{Q}(\mathbf{p}_1) \end{pmatrix} \begin{pmatrix} \mathbf{A} & \textbf{B} \\ \mathbf{B}^T & \mathbf{D} \end{pmatrix} \begin{pmatrix} \mathbf{Q}(\mathbf{p}_1)^T & \mathbf{0} \\ \mathbf{0} & \mathbf{Q}(\mathbf{p}_1)^T \end{pmatrix}, \label{eq:result-S0(p1)} \end{equation} where we applied the change of basis to each three-by-three block of the resistance matrix.
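The blockwise change of basis in Eq.~\eqref{eq:result-S0(p1)} preserves the symmetry and positive definiteness of the resistance matrix. Below is a minimal numerical sketch of this property (ours, assuming NumPy; the matrix \texttt{S0} is a generic symmetric positive-definite stand-in, not an actual helix resistance matrix):

```python
import numpy as np

def lab_frame_resistance(S0, Q):
    """Blockwise change of basis, cf. Eq. (result-S0(p1))."""
    R6 = np.zeros((6, 6))
    R6[:3, :3] = Q
    R6[3:, 3:] = Q
    return R6 @ S0 @ R6.T

# generic symmetric positive-definite 6x6 stand-in for the body-frame matrix S0
rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
S0 = M @ M.T + 6 * np.eye(6)

# example orientation change: rotation by angle th about the z-axis
th = 0.7
Q = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])

S = lab_frame_resistance(S0, Q)
# symmetry and positive definiteness survive the (orthogonal) change of basis
assert np.allclose(S, S.T)
assert np.all(np.linalg.eigvalsh(S) > 0)
```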
\subsection{First-order correction} Next, we analyse Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(d^{-1})$ using the expansion of the induced flow from Eq.~\eqref{eq:expn-induced-flow}. We find that the first-order correction to the force distribution is given by \begin{equation} (\mathbf{f}_1^{(1)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(\mathbf{f}_2^{(0)}(s'))_k}. \label{eq:expansion-f1} \end{equation} Then, substituting the leading-order force density from Eq.~\eqref{eq:result-f0}, we find that \begin{equation} (\mathbf{f}_1^{(1)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}J_{jk}(\hat{\mathbf{d}})\sdint{(\mathbf{\Sigma}_2(s'))_{kl}(\delta_{lm}+\varepsilon_{l,m-3,n}(\mathbf{r}_2(s'))_n)}(\mathbf{U}_2,\mathbf{\Omega}_2)_m. \label{eq:result-f1} \end{equation} Next, by using Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the total force and torque exerted by the filament, and putting the result in the form of Eq.~\eqref{eq:F_T_first_helix}, we find that \begin{equation} S_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = 0, \label{eq:result-S1} \end{equation} and \begin{multline} C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) =-\sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l)(\mathbf{\Sigma}_1(s))_{km}} \\ \times J_{mn}(\hat{\mathbf{d}})\sdint{(\mathbf{\Sigma}_2(s'))_{np}(\delta_{pj}+\varepsilon_{p,j-3,q}(\mathbf{r}_2(s'))_q)}. \label{eq:dervn-C1} \end{multline} We recognise from Eq.~\eqref{eq:result-S0} that these integrals are the first three columns and rows of the leading-order matrix for the first and second filament, respectively, so we can write the leading-order cross-interaction matrix as \begin{equation} C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) =-S_{ik}^{(0)}(\mathbf{p}_1)J_{kl}(\hat{\mathbf{d}})S_{lj}^{(0)}(\mathbf{p}_2), \label{eq:result-C1} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed over from $1$ to $3$.
We can read this expression from right to left to understand its physical interpretation. At leading order, the second filament induces a Stokeslet flow of strength $(\mathbf{S}^{(0)}(\mathbf{p}_2))_{lj}(\mathbf{U}_2,\mathbf{\Omega}_2)_j$ (with $l\in\{1,2,3\}, j\in\{1,2,...,6\}$), which gets carried over to the position of the first filament by the Oseen tensor $J_{kl}(\hat{\mathbf{d}})/d$. The first filament sees a uniform background flow at leading order and responds to it using its own self-induced resistance matrix $(\mathbf{S}^{(0)}(\mathbf{p}_1))_{ik}$ (with $i\in\{1,2,...,6\},k\in\{1,2,3\}$), as if it was translating with a uniform velocity in the opposite direction to the background flow, hence the minus sign. We note that directionality is lost at this order, because the tensor $J_{ij}(\hat{\mathbf{d}})$, defined in Eq.~\eqref{eq:defn-J}, is invariant under the transformation $\hat{\mathbf{d}} \mapsto - \hat{\mathbf{d}}$. All that matters at this order is the distance $d$ between the two filaments. Furthermore, $\mathbf{C}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)^T = \mathbf{C}^{(1)}(-\hat{\mathbf{d}},\mathbf{p}_2,\mathbf{p}_1)$, so the reciprocal theorem is satisfied at this order. The result can also be extended to non-identical filaments by incorporating information about the filament geometry. We can make this dependence explicit in our notation by writing $\mathbf{S}^{(0)}(\mathbf{p};\mathbf{g})$, where the vector parameter $\mathbf{g}$ encapsulates all information about the filament geometry. For the particular case of helical filaments, note from Eqs.~\eqref{eq:Acomponents-A}-\eqref{eq:Dcomponents-Z} that our dimensionless $S^{(0)}_{ij}$ depends explicitly on the helix angle $\psi$, the number of turns $N$, and implicitly on the slenderness parameter $\epsilon$ through the drag coefficients $c_\perp$ and $c_\parallel$, hence $\mathbf{g} = (\psi,N,\epsilon)$ for a helix. 
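The composition in Eq.~\eqref{eq:result-C1} and its reciprocal symmetry are straightforward to verify numerically. In the sketch below (ours, assuming NumPy; generic symmetric positive-definite matrices stand in for the leading-order resistance matrices), the Oseen tensor is embedded in the six-component index convention, acting only on the force/velocity components:

```python
import numpy as np

def oseen_6x6(dhat, mu=1.0):
    """Embed J_ij = (I + dhat dhat)/(8 pi mu) of Eq. (defn-J) in 6x6 form."""
    J6 = np.zeros((6, 6))
    J6[:3, :3] = (np.eye(3) + np.outer(dhat, dhat)) / (8 * np.pi * mu)
    return J6

def C1(S0_1, S0_2, dhat):
    """First-order cross-interaction matrix, cf. Eq. (result-C1)."""
    return -S0_1 @ oseen_6x6(dhat) @ S0_2

# generic stand-ins for S0(p1) and S0(p2)
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6)); S0_1 = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6)); S0_2 = B @ B.T + 6 * np.eye(6)
dhat = np.array([1.0, 0.0, 0.0])

# reciprocal theorem at this order: C1(dhat, p1, p2)^T = C1(-dhat, p2, p1)
assert np.allclose(C1(S0_1, S0_2, dhat).T, C1(S0_2, S0_1, -dhat))
# directionality is lost: J is even in dhat
assert np.allclose(oseen_6x6(dhat), oseen_6x6(-dhat))
```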
Note also that, in our derivation of the dimensionless $\mathbf{S}^{(0)}(\mathbf{p};\mathbf{g})$, we had rescaled lengths by the filament length, so we would need to add this information back in if we wanted to consider filaments of different lengths. Using tildes to denote dimensional quantities, we can write the leading-order self-induced resistance matrix as \begin{equation} \tilde{\mathbf{S}}^{(0)}(\mathbf{p};\mathbf{g},\tilde{L}) = \frac{\tilde{\mu}\tilde{L}}{2}\begin{pmatrix} \mathbf{I} & 0 \\ 0 & \mathbf{I}\tilde{L}/2 \end{pmatrix} \begin{pmatrix} \mathbf{Q}(\mathbf{p})\mathbf{A}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T & \mathbf{Q}(\mathbf{p})\textbf{B}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T \\ \mathbf{Q}(\mathbf{p})\mathbf{B}(\mathbf{g})^T\mathbf{Q}(\mathbf{p})^T & \mathbf{Q}(\mathbf{p})\mathbf{D}(\mathbf{g})\mathbf{Q}(\mathbf{p})^T \end{pmatrix} \begin{pmatrix} \mathbf{I} & 0 \\ 0 & \mathbf{I}\tilde{L}/2 \end{pmatrix}, \label{eq:S0-general} \end{equation} and also the dimensional cross-interaction matrix as \begin{equation} \tilde{C}^{(1)}_{ij}(\mathbf{d},\mathbf{p}_1,\mathbf{p}_2;\mathbf{g}_1,\mathbf{g}_2,\tilde{L}_1,\tilde{L}_2) = - \tilde{S}^{(0)}_{ip}(\mathbf{p}_1;\mathbf{g}_1,\tilde{L}_1)\frac{\left(\delta_{pq}+\hat{d}_p\hat{d}_q\right)}{8\pi \tilde{\mu} \tilde{d}}\tilde{S}^{(0)}_{qj}(\mathbf{p}_2;\mathbf{g}_2,\tilde{L}_2). \label{eq:C1-general} \end{equation} The results in Eqs.~\eqref{eq:S0-general} and \eqref{eq:C1-general} describe in full generality the far-field HIs between two filaments of arbitrary shape and orientation up to order $\mathcal{O}(\tilde{d}^{-1})$. \subsection{Second-order correction} We now begin to analyse Eq.~\eqref{eq:defn-force-density-RFT} at $\mathcal{O}(d^{-2})$ using the expansion of the induced flow from Eq.~\eqref{eq:expn-induced-flow}.
We find that the second-order correction to the force distribution is given by \begin{multline} (\mathbf{f}_1^{(2)}(s))_i = -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(\mathbf{f}_2^{(1)}(s'))_k} \\ - (\mathbf{\Sigma}_1(s))_{ij}\sdint{K_{jkp}(\hat{\mathbf{d}})(\mathbf{r}_2(s')-\mathbf{r}_1(s))_p(\mathbf{f}_2^{(0)}(s'))_k}. \label{eq:expansion-f2} \end{multline} The first of these terms will contribute to the self-induced resistance matrix because $\mathbf{f}_2^{(1)}$ is linear in the kinematics of the first filament, while the second of them will contribute to the cross-interaction matrix because $\mathbf{f}_2^{(0)}$ is linear in the kinematics of the second filament. After substituting the first-order force density from Eq.~\eqref{eq:result-f1} into Eq.~\eqref{eq:expansion-f2}, we find that there is a contribution to $\mathbf{f}_1^{(2)}(s)$ of the form \begin{equation} -(\mathbf{\Sigma}_1(s))_{ij}\sdint{J_{jk}(\hat{\mathbf{d}})(-\mathbf{\Sigma}_2(s'))_{kl}J_{lm}(\hat{\mathbf{d}})} \sddint{(\mathbf{\Sigma}_1(s''))_{mn}(\delta_{np}+\varepsilon_{n,p-3,q}(\mathbf{r}_1(s''))_q)}(\mathbf{U}_1,\mathbf{\Omega}_1)_p. \end{equation} Then, using Eqs.~\eqref{eq:defn-F-T-suffix-notation} and \eqref{eq:F_T_first_helix} to bring the result to its final form, we deduce that \begin{equation} S_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = S_{ik}^{(0)}(\mathbf{p}_1)J_{kl}(\hat{\mathbf{d}}) S_{lm}^{(0)}(\mathbf{p}_2) J_{mn}(\hat{\mathbf{d}}) S_{nj}^{(0)}(\mathbf{p}_1), \label{eq:result-S2} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but all others are summed from $1$ to $3$. Note that this clearly satisfies the reciprocal theorem because both $\mathbf{S}^{(0)}$ and the Oseen tensor are symmetric. 
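The reflection structure of Eq.~\eqref{eq:result-S2} can likewise be checked numerically. In the sketch below (ours, assuming NumPy, with generic stand-in matrices), the product is a congruence by symmetric matrices, hence symmetric, and it is even in $\hat{\mathbf{d}}$:

```python
import numpy as np

def oseen_6x6(dhat, mu=1.0):
    """Oseen tensor of Eq. (defn-J), embedded in the 6x6 index convention."""
    J6 = np.zeros((6, 6))
    J6[:3, :3] = (np.eye(3) + np.outer(dhat, dhat)) / (8 * np.pi * mu)
    return J6

# generic symmetric positive-definite stand-ins for S0(p1) and S0(p2)
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 6)); S0_1 = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6)); S0_2 = B @ B.T + 6 * np.eye(6)

dhat = np.array([0.0, 0.0, 1.0])
J6 = oseen_6x6(dhat)

# S2 = S0(p1) J S0(p2) J S0(p1), cf. Eq. (result-S2)
S2 = S0_1 @ J6 @ S0_2 @ J6 @ S0_1
assert np.allclose(S2, S2.T)                 # reciprocal theorem
S2_flipped = S0_1 @ oseen_6x6(-dhat) @ S0_2 @ oseen_6x6(-dhat) @ S0_1
assert np.allclose(S2, S2_flipped)           # even in dhat
```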
Physically, the result in Eq.~\eqref{eq:result-S2} expresses the fact that the Stokeslet field produced by the first filament propagates with an $\mathcal{O}(d^{-1})$ decay to the position of the second filament, where it produces a disturbance in the force. The $\mathcal{O}(d^{-1})$ perturbation in the force exerted by the second filament gets reflected back to the first filament with the same $\mathcal{O}(d^{-1})$ decay. This generates an $\mathcal{O}(d^{-2})$ disturbance in the dynamics of the first filament that is self-induced (i.e.~proportional to its own kinematics). Similarly, after substituting the leading-order force density from Eq.~\eqref{eq:result-f0} into Eq.~\eqref{eq:expansion-f2}, we find that there is a contribution to $\mathbf{f}_1^{(2)}(s)$ of the form \begin{multline} -(\mathbf{\Sigma}_1(s))_{ij}\sdint{K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_2(s'))_l (\mathbf{\Sigma}_2(s'))_{km}(\delta_{mn}+\varepsilon_{m,n-3,p}(\mathbf{r}_2(s'))_p)}(\mathbf{U}_2,\mathbf{\Omega}_2)_n \\ +(\mathbf{\Sigma}_1(s))_{ij}K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_1(s))_l\sdint{(\mathbf{\Sigma}_2(s'))_{km}(\delta_{mn}+\varepsilon_{m,n-3,p}(\mathbf{r}_2(s'))_p)}(\mathbf{U}_2,\mathbf{\Omega}_2)_n. \label{eq:contrib_C2} \end{multline} We introduce the notation \begin{equation} P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2) = \sdint{K_{ikl}(\hat{\mathbf{d}})(\mathbf{r}_2(s'))_l (\mathbf{\Sigma}_2(s'))_{km}(\delta_{mj}+\varepsilon_{m,j-3,n}(\mathbf{r}_2(s'))_n)} \label{eq:defn-P} \end{equation} for the second-rank tensor appearing in Eq.~\eqref{eq:contrib_C2}, and rewrite this contribution as \begin{equation} \left[-(\mathbf{\Sigma}_1(s))_{ij}P_{jn}(\hat{\mathbf{d}},\mathbf{p}_2) +(\mathbf{\Sigma}_1(s))_{ij}K_{jkl}(\hat{\mathbf{d}})(\mathbf{r}_1(s))_l S^{(0)}_{kn}(\mathbf{p}_2)\right](\mathbf{U}_2,\mathbf{\Omega}_2)_n \end{equation} with the help of Eq.~\eqref{eq:result-S0}. 
Finally, we integrate the force density as per Eq.~\eqref{eq:defn-F-T-suffix-notation} to find the correction to the total force and torque due to the kinematics of the second filament. Using the fact that $K_{jkl}(\hat{\mathbf{d}}) = K_{kjl}(\hat{\mathbf{d}})$ (follows directly from the definition in Eq.~\eqref{eq:defn-K}), we deduce that the $\mathcal{O}(d^{-2})$ correction to the cross-interaction matrix is \begin{equation} C_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2) = -S_{ik}^{(0)}(\mathbf{p}_1)P_{kj}(\hat{\mathbf{d}},\mathbf{p}_2) + P^T_{ik}(\hat{\mathbf{d}},\mathbf{p}_1)S_{kj}^{(0)}(\mathbf{p}_2), \label{eq:result-C2} \end{equation} where the free indices $i$ and $j$ run from $1$ to $6$, but $k$ is summed from $1$ to $3$. Note that this also satisfies the reciprocal theorem, according to which $\mathbf{C}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)^T = \mathbf{C}(-\hat{\mathbf{d}},\mathbf{p}_2,\mathbf{p}_1)$ because $P_{ij}(-\hat{\mathbf{d}},\mathbf{p}_2)=-P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2)$ (follows directly from the definitions of $K_{ijp}$ and $P_{ij}$ in Eqs.~\eqref{eq:defn-K} and \eqref{eq:defn-P}, respectively). The final result for $C_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$, given by Eq.~\eqref{eq:result-C2}, involves a new quantity that we have not calculated explicitly yet -- the tensor $P_{ij}$, defined in Eq.~\eqref{eq:defn-P}. In contrast, the expressions for $C_{ij}^{(1)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$ and $S_{ij}^{(2)}(\hat{\mathbf{d}},\mathbf{p}_1,\mathbf{p}_2)$ (Eqs.~\eqref{eq:result-C1} and \eqref{eq:result-S2}, respectively) have the advantage that they involve only the leading-order resistance matrices $S_{ij}^{(0)}(\mathbf{p}_1)$ and $S_{ij}^{(0)}(\mathbf{p}_2)$. These can be easily calculated {\color{black} from RFT or SBT} since they are nothing more than the resistance matrix for an isolated filament. 
Our final task is to show that the tensor $P_{ij}(\hat{\mathbf{d}},\mathbf{p}_1)$ can also be calculated easily from the leading-order resistance matrix $S_{ij}^{(0)}(\mathbf{p}_1)$ and two minor follow-up calculations. \subsection{Force moments for second-order correction} The tensor $P_{ij}$ defined in Eq.~\eqref{eq:defn-P} is constructed in a similar way to the last three rows of the leading-order resistance matrix from Eq.~\eqref{eq:result-S0}. If we introduce the quantity \begin{equation} M_{lkj}(\mathbf{p}_2) = \sint{(\mathbf{r}_2(s))_l (\mathbf{\Sigma}_2(s))_{km}(\delta_{mj}+\varepsilon_{j-3,nm}(\mathbf{r}_2(s))_n)}, \label{eq:defn-M} \end{equation} which represents force moments along the centreline of a filament with orientation $\mathbf{p}_2$, then what we want to compute is \begin{equation} P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2) = K_{ikl}(\hat{\mathbf{d}})M_{lkj}(\mathbf{p}_2), \end{equation} but we already have an expression for the last three rows ($4\leq i \leq 6$) of the resistance matrix \begin{equation} S_{ij}^{(0)}(\mathbf{p}_2) = \varepsilon_{i-3,lk}M_{lkj}(\mathbf{p}_2), \end{equation} in the laboratory frame, Eq.~\eqref{eq:result-S0(p1)}. So far we have assumed that the laboratory and interaction frame are identical, and we have only talked about changing basis from the body frame to the laboratory frame, Eq.~\eqref{eq:result-S0(p1)}. This was convenient because $S_{ij}^{(0)}(\mathbf{p}_2)$ has a simple representation in the body frame of the second filament, since the orientation of the filament is $\mathbf{p}_2 = \mathbf{0}$ relative to this frame. But the natural frame in which to describe the tensor $K_{ikl}(\hat{\mathbf{d}})$ is the interaction frame where $\hat{\mathbf{d}} = \mathbf{e}_x^{(1\to2)}$, as shown in Fig.~\ref{fig:setup} (b). 
In this frame, the tensor $K_{ijp}(\hat{\mathbf{d}})$ defined in Eq.~\eqref{eq:defn-K} has components \begin{equation} K_{1kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} -2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, ~ K_{2kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, ~ K_{3kl}(\mathbf{e}_x^{(1\to2)}) = \frac{1}{8\pi}\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}. \end{equation} Hence, the tensor $P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2)$ can be written in the interaction frame as \begin{multline} P_{ij}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi}\delta_{i1}(-2M_{11j}(\mathbf{p}_2')+M_{22j}(\mathbf{p}_2')+M_{33j}(\mathbf{p}_2')) \\ + \frac{1}{8\pi}\delta_{i2}(-M_{12j}(\mathbf{p}_2')+M_{21j}(\mathbf{p}_2')) + \frac{1}{8\pi}\delta_{i3}(-M_{13j}(\mathbf{p}_2')+M_{31j}(\mathbf{p}_2')), \label{eq:dervn-Pij} \end{multline} whereas the last three rows ($4\leq i \leq 6$) of the resistance matrix are \begin{multline} S_{ij}^{(0)}(\mathbf{p}_2') = \delta_{i4}(M_{23j}(\mathbf{p}_2')-M_{32j}(\mathbf{p}_2')) \\ + \delta_{i5}(-M_{13j}(\mathbf{p}_2')+M_{31j}(\mathbf{p}_2')) + \delta_{i6}(M_{12j}(\mathbf{p}_2')-M_{21j}(\mathbf{p}_2')). \label{eq:dervn-Sij} \end{multline} Note that we have used the notation $\mathbf{p}_2'$ to indicate the orientation of the filament relative to the interaction frame, so the tensors $\mathbf{M}(\mathbf{p}_2')$ and $\mathbf{S}^{(0)}(\mathbf{p}_2')$ are also to be expressed in these coordinates. By comparing the two expressions in Eqs.~\eqref{eq:dervn-Pij} and \eqref{eq:dervn-Sij}, we deduce that \begin{equation} P_{2j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = -\frac{S_{6j}^{(0)}(\mathbf{p}_2')}{8\pi}, \quad P_{3j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{S_{5j}^{(0)}(\mathbf{p}_2')}{8\pi}, \label{eq:result-P-tworows} \end{equation} so we get the last two rows of $P_{ij}$ for free.
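These component matrices follow directly from Eq.~\eqref{eq:defn-K} with $\hat{\mathbf{d}} = \mathbf{e}_x^{(1\to2)}$, as the following numerical sketch (ours, assuming NumPy) confirms; the final assertion checks that $K_{ijp}$ is odd in $\hat{\mathbf{d}}$, the property underlying $P_{ij}(-\hat{\mathbf{d}},\mathbf{p}_2) = -P_{ij}(\hat{\mathbf{d}},\mathbf{p}_2)$:

```python
import numpy as np

def K_tensor(dhat, mu=1.0):
    """Third-rank tensor K_ijp of Eq. (defn-K)."""
    I = np.eye(3)
    K = (np.einsum('i,jp->ijp', dhat, I)
         + np.einsum('j,ip->ijp', dhat, I)
         - np.einsum('p,ij->ijp', dhat, I)
         - 3 * np.einsum('i,j,p->ijp', dhat, dhat, dhat))
    return K / (8 * np.pi * mu)

ex = np.array([1.0, 0.0, 0.0])        # interaction-frame separation direction
K = K_tensor(ex)
pref = 1 / (8 * np.pi)

# component matrices K_{1kl}, K_{2kl}, K_{3kl} quoted in the text
assert np.allclose(K[0] / pref, np.diag([-2.0, 1.0, 1.0]))
assert np.allclose(K[1] / pref, [[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
assert np.allclose(K[2] / pref, [[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
# K is odd in dhat
assert np.allclose(K_tensor(-ex), -K)
```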
To complete the top row of $P_{ij}$ we simply need to calculate the quantity \begin{equation} P_{1j}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi}(-2M_{11j}(\mathbf{p}_2')+M_{22j}(\mathbf{p}_2')+M_{33j}(\mathbf{p}_2')), \label{eq:dervn-Pij-1row} \end{equation} which is more easily calculated in the body frame of the filament and then transferred to the interaction frame by a change of basis. {\color{black} Everything we have done so far is valid for filaments of arbitrary shape. Below, we go into more detail about the evaluation of the new row $P_{1j}$ for helical filaments, which will be used later for the validation and application of our theory. In the body frame of a helical filament, where $\mathbf{p}_2' \to \mathbf{0}$, we denote the right-hand side of Eq.~\eqref{eq:dervn-Pij-1row} by} \begin{equation} (\mathbf{m}_0)_j = -2M_{11j}(\mathbf{0})+M_{22j}(\mathbf{0})+M_{33j}(\mathbf{0}). \label{eq:defn-m0} \end{equation} {\color{black} The helical centreline introduced in Eq.~\eqref{eq:centreline} is symmetric under a rotation by angle $\pi$ around the unit vector $\mathbf{e}_1$. Due to this symmetry, the vector $\mathbf{m}_0$ has vanishing components along the $\mathbf{e}_2$ and $\mathbf{e}_3$ directions, regardless of the method (RFT or SBT) by which we choose to evaluate it, meaning that} \begin{equation} (\mathbf{m}_0)_{i} = (\mathcal{M}_{1} \mathbf{e}_1)_i, ~ (\mathbf{m}_0)_{i+3} = (\mathcal{M}_{4} \mathbf{e}_1)_i, \label{eq:dervn-m0} \end{equation} for index $i = 1,2,3$. 
Hence, when we move this result to the interaction frame of two helices, we obtain the final result for the matrix $\mathbf{P}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2')$ \begin{equation} \mathbf{P}(\mathbf{e}_x^{(1\to2)},\mathbf{p}_2') = \frac{1}{8\pi} \begin{pmatrix} \mathcal{M}_{1} \alpha(\mathbf{p}_2') & \mathcal{M}_{1} \beta(\mathbf{p}_2') & \mathcal{M}_{1} \gamma(\mathbf{p}_2') & \mathcal{M}_{4} \alpha(\mathbf{p}_2') & \mathcal{M}_{4} \beta(\mathbf{p}_2') & \mathcal{M}_{4} \gamma(\mathbf{p}_2') \\ -S_{61}^{(0)}(\mathbf{p}_2') & -S_{62}^{(0)}(\mathbf{p}_2') & -S_{63}^{(0)}(\mathbf{p}_2') & -S_{64}^{(0)}(\mathbf{p}_2') & -S_{65}^{(0)}(\mathbf{p}_2') & -S_{66}^{(0)}(\mathbf{p}_2') \\ S_{51}^{(0)}(\mathbf{p}_2') & S_{52}^{(0)}(\mathbf{p}_2') & S_{53}^{(0)}(\mathbf{p}_2') & S_{54}^{(0)}(\mathbf{p}_2') & S_{55}^{(0)}(\mathbf{p}_2') & S_{56}^{(0)}(\mathbf{p}_2') \end{pmatrix}, \label{eq:result-Pmatrix} \end{equation} where $\alpha(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_x^{(1\to2)}$, $\beta(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_y^{(1\to2)}$ and $\gamma(\mathbf{p}_2') = \mathbf{e}_1^{(2)}\cdot\mathbf{e}_z^{(1\to2)}$ are the components of $\mathbf{e}_1^{(2)}$ relative to the interaction frame of filaments $1$ and $2$. If the interaction frame does not coincide with the laboratory frame (e.g.~if there are more than two filaments), this result would have to be moved to the laboratory frame by a change of basis on each three-by-three block. {\color{black} \subsection{Evaluating coefficients in the series expansion} \label{sec:evalcoeff} The first and second-order coefficients in the series expansion only require the leading-order resistance matrix, $\mathbf{S}^{(0)}$, and the force moment, $\mathbf{m}_0$, which themselves only depend on the shape of the filament, $\mathbf{r}(s)$, and the drag tensor, $\mathbf{\Sigma}(s)$. We now explain how to evaluate these coefficients using both resistive-force theory (RFT) and slender-body theory (SBT). 
The former has the advantage of being analytically tractable but only logarithmically accurate, while the latter is algebraically correct but requires computations. In RFT \cite{Hancock1953,Gray1955,Lighthill1996_helical}, the drag tensor depends only on the local tangent to the filament,} \begin{equation} \mathbf{\Sigma}_{\mathrm{RFT}}(s) = c_\perp[\mathbf{I} -\hat{\mathbf{t}}(s)\hat{\mathbf{t}}(s)]+c_\parallel\hat{\mathbf{t}}(s)\hat{\mathbf{t}}(s), \label{eq:defn-Sigma} \end{equation} and quantifies the {\color{black}anisotropic} drag on the filament through the perpendicular, $c_\perp$, and parallel, $c_\parallel$, drag coefficients \begin{equation} c_\perp = \frac{4\pi\mu}{\ln(2/\epsilon)+1/2}, \quad c_\parallel = \frac{2\pi\mu}{\ln(2/\epsilon)-1/2}. \end{equation} Note that, for clarity, we have included the dimensionless viscosity $\mu=1$ in the above definition of the drag coefficients. For the special case of a helical filament, we {\color{black}use RFT to derive} analytical expressions for $\mathbf{S}_0$ in Appendix \ref{app:RFT} and for $\mathbf{m}_0$ in Appendix \ref{app:forcemoments_RFT}. {\color{black} In SBT \cite{Cox1970,Lighthill1976,Johnson1980}, on the other hand, the relationship between force density and velocity is non-local, so we cannot express the drag tensor as a local object. The value of $\mathbf{\Sigma}_{\mathrm{SBT}}(s)$ at each point $s$ along the centreline depends on the specifics of the motion relative to the shape of the filament. However, we do not need to know the general form of $\mathbf{\Sigma}_{\mathrm{SBT}}(s)$ in order to evaluate the coefficients in our asymptotic series expansion using SBT. An inspection of Eqs.~\eqref{eq:result-S0} and \eqref{eq:defn-P} reveals that the drag tensor always appears contracted with the six modes of rigid-body motion that are available to our rigid filaments, in the form $\Sigma_{ik}(s)(\delta_{kj}+\varepsilon_{j-3,lk}r_l(s))$. 
Therefore, we only need to know the SBT drag tensor as it pertains to rigid-body motion, \begin{equation} \mathbf{\Sigma}_{\mathrm{SBT}}(s)\cdot (\mathbf{U} + \mathbf{\Omega}\times\mathbf{r}(s)) \equiv \mathbf{f}_{\mathrm{SBT}}(s;\mathbf{U},\mathbf{\Omega}), \end{equation} where $\mathbf{f}_{\mathrm{SBT}}(s;\mathbf{U},\mathbf{\Omega})$ is the SBT force density along a filament with kinematics $(\mathbf{U},\mathbf{\Omega})$. By considering each mode of rigid-body motion individually, we can write \begin{equation} \Sigma_{ik}(s)(\delta_{kj}+\varepsilon_{j-3,lk}r_l(s)) \equiv (\mathbf{f}^{(j)}_{\mathrm{SBT}}(s))_i, \label{eq:defn-fSBT} \end{equation} where $\mathbf{f}^{(j)}_{\mathrm{SBT}}(s)$ is now the force density computed from SBT for the $j$th mode of rigid body motion ($j=1,2,3$ for translations, $j=4,5,6$ for rotations). From Eqs.~\eqref{eq:result-S0} and \eqref{eq:defn-fSBT}, we get the leading-order resistance matrix, $\mathbf{S}^{(0)}$, from SBT \begin{equation} (\mathbf{S}^{(0)}_{\mathrm{SBT}})_{ij} = \sint{(\delta_{ik}+\varepsilon_{i-3,lk}(\mathbf{r}_1(s))_l)(\mathbf{f}^{(j)}_{\mathrm{SBT}}(s))_k}. \end{equation} Similarly, from Eqs.~\eqref{eq:defn-M}, \eqref{eq:defn-m0} and \eqref{eq:defn-fSBT}, we find the SBT equivalent of $\mathbf{m}_0$ as \begin{equation} (\mathbf{m}_0^{\mathrm{SBT}})_j = \sint{\mathbf{r}(s) \cdot (\mathbf{I} -3\mathbf{e}_x^{(1 \to 2)}\mathbf{e}_x^{(1 \to 2)})\cdot \mathbf{f}^{(j)}_{\mathrm{SBT}}(s)}. \label{eq:m0-SBT} \end{equation} Evaluating the force density $\mathbf{f}^{(j)}_{\mathrm{SBT}}(s)$ does require a numerical computation but for a rigid filament this only needs to be done once, in the body frame of the filament, and then modified with a change of basis if the filament changes orientation over time. The SBT computation consists of solving Eq.~\eqref{eq:COMP-method} numerically, exactly as described in Section \ref{sec:comp-method}, but without the interaction term $\mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}]$. 
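For the RFT option, the local drag tensor of Eq.~\eqref{eq:defn-Sigma} is simple enough to assemble and sanity-check directly (our sketch, assuming NumPy and the dimensionless viscosity $\mu = 1$):

```python
import numpy as np

# RFT drag coefficients for a slender filament, cf. Section (evalcoeff)
eps_slender = 0.01
c_perp = 4 * np.pi / (np.log(2 / eps_slender) + 0.5)
c_par = 2 * np.pi / (np.log(2 / eps_slender) - 0.5)

def Sigma_RFT(that):
    """Local drag tensor of Eq. (defn-Sigma) for unit tangent that."""
    tt = np.outer(that, that)
    return c_perp * (np.eye(3) - tt) + c_par * tt

that = np.array([0.0, 0.6, 0.8])      # an example unit tangent
S = Sigma_RFT(that)

# eigen-structure: drag c_par along the tangent, c_perp in the normal plane
assert np.allclose(S @ that, c_par * that)
n = np.array([1.0, 0.0, 0.0])         # a direction normal to the tangent
assert np.allclose(S @ n, c_perp * n)
# drag anisotropy: perpendicular drag exceeds parallel drag
assert c_perp > c_par
```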
In the following sections, when we refer to the asymptotic theory with RFT or SBT coefficients, we mean that we have used the series expansion for the extended resistance matrix from Eqs.~\eqref{eq:expn-S} and \eqref{eq:expn-C}, with coefficients up to second order given by Eqs.~\eqref{eq:result-C0},\eqref{eq:result-S0}, \eqref{eq:result-S1},\eqref{eq:result-C1},\eqref{eq:result-S2} and \eqref{eq:result-C2}, but these coefficients have been evaluated either analytically with RFT or computationally with SBT. The RFT calculations for the matrix $\mathbf{S}^{(0)}$ and the vector $\mathbf{m}_0$ are given in Appendices \ref{app:RFT} and \ref{app:forcemoments_RFT}, respectively, while the computational method for SBT is described in Section \ref{sec:comp-method} (except that the interaction term $\mathcal{J}$ is not included in the SBT computation for a single filament).} \newpage \section{Validation of asymptotic model} \label{sec:validation} We will now verify {\color{black} the asymptotic theory with RFT/SBT coefficients} against numerical simulations {\color{black} based on SBT}. In this section, we focus on filaments with a helical centreline, which are very common in microscopic scale flows (e.g.~the helical flagellar filaments of bacteria, helical microbots actuated by external magnetic fields, elongated microorganisms with a spiral body shape). \subsection{Computational method for hydrodynamic interactions} \label{sec:comp-method} In order to validate our asymptotic model, we implement Johnson's slender-body theory \cite{Johnson1980,thesisKoens} with additional interactions between the filaments \cite{Tornberg2004}. 
In our computational method, we replace Eq.~\eqref{eq:defn-force-density-RFT} with the following relationship between the force density and velocity along the filament centreline, \begin{equation} 8\pi\mu\mathbf{u}(\mathbf{r}_1(s)) = \mathcal{L}[\mathbf{f}_1(s)] + \mathcal{K}[\mathbf{f}_1(s')] + \mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}], \label{eq:COMP-method} \end{equation} where the first operator represents local effects \begin{equation} \mathcal{L}[\mathbf{f}_1(s)] = \left[2\left(\ln\left(\frac{2}{\epsilon}\right)+\frac{1}{2}\right)\mathbf{I} + 2\left(\ln\left(\frac{2}{\epsilon}\right)-\frac{3}{2}\right)\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)\right]\cdot \mathbf{f}_1(s), \label{eq:COMP-local} \end{equation} and the second operator represents non-local effects \begin{multline} \mathcal{K}[\mathbf{f}_1(s')] = \sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_0(s,s')\hat{\mathbf{R}}_0(s,s')}{|\mathbf{R}_0(s,s')|}-\frac{\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)}{|s'-s|}\right]\cdot \mathbf{f}_1(s')} \\ + \left(\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)\right)\cdot\sdint{\frac{\mathbf{f}_1(s')-\mathbf{f}_1(s)}{|s'-s|}}, \label{eq:COMP-nonlocal} \end{multline} where $\mathbf{R}_0(s,s') = \mathbf{r}_1(s)-\mathbf{r}_1(s')$, and we have split the terms in such a way that both integrals have a removable singularity at $s'=s$. Finally, the third operator represents interactions between the two filaments {\color{black} as previously modelled by Tornberg and Shelley \cite{Tornberg2004}}, \begin{equation} \mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}] = \sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} + \frac{\epsilon^2}{2}\frac{\mathbf{I}-3\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|^3}\right]\cdot\mathbf{f}_2(s')}, \label{eq:COMP-interaction} \end{equation} where $\mathbf{R}_d(s,s') = \mathbf{d} +\mathbf{r}_2(s')-\mathbf{r}_1(s)$. 
{\color{black} In our computational method, which was implemented for purposes beyond the present study, we choose to include the source dipole term that was left out of our asymptotic theory in Eq.~\eqref{eq:defn-induced-flow}; it was omitted there because it would have contributed to the asymptotic series expansion only at order $\mathcal{O}(d^{-3})$. Note that we have used the same prefactor of $1/2$ for the dipole term as in \cite{Tornberg2004}, while a more recent study based on the Rotne-Prager-Yamakawa kernel and matched asymptotics uses a larger prefactor of $e^3/24$ \cite{Maxian2021}.} {\color{black} We solve Eqs.~\eqref{eq:COMP-method}-\eqref{eq:COMP-interaction} numerically using a spectral method based on Legendre polynomials as in Ref.~\cite{thesisKoens}. Other studies have chosen to solve these integral equations by regularizing the integral operator $\mathcal{K}$ and approximating its arguments with piecewise polynomials \cite{Tornberg2004}, or more recently using a spectral method based on Chebyshev polynomials \cite{Maxian2021}. In the present study, the choice of Legendre polynomials as a set of basis functions is motivated by their being eigenfunctions of the second integral in the non-local operator $\mathcal{K}$, meaning that \begin{equation} \sdint{\frac{P_n(s')-P_n(s)}{|s'-s|}} = E_n P_n(s), \end{equation} with eigenvalues $E_0=0$ and \begin{equation} E_n = -2\sum_{j=1}^{n}\frac{1}{j}, \end{equation} for $n>0$ \cite{thesisGotz}. We discretize the force density and velocity along the filaments as \begin{equation} \mathbf{u}(\mathbf{r}_k(s)) = \sum_{n=0}^\infty \mathbf{u}_k^{(n)}P_n(s), \quad \mathbf{f}_k(s) = \sum_{n=0}^\infty \mathbf{f}_k^{(n)}P_n(s), \end{equation} where the velocity coefficients $\mathbf{u}_k^{(n)}$ are known from the prescribed kinematics, and the force coefficients $\mathbf{f}_k^{(n)}$ must be solved for.
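The eigenfunction property above is straightforward to verify with off-the-shelf quadrature. The following illustrative Python check (independent of our MATLAB implementation) splits the integration domain at $s'=s$, so that the removable singularity sits on a boundary:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def E(n):
    """Eigenvalues E_0 = 0 and E_n = -2 sum_{j=1}^{n} 1/j for n > 0."""
    return -2.0 * sum(1.0 / j for j in range(1, n + 1))

def second_integral(n, s):
    """int_{-1}^{1} (P_n(s') - P_n(s)) / |s' - s| ds', computed by splitting
    the domain at s' = s so the removable singularity sits on a boundary."""
    def integrand(sp):
        return (eval_legendre(n, sp) - eval_legendre(n, s)) / abs(sp - s)
    left, _ = quad(integrand, -1.0, s)
    right, _ = quad(integrand, s, 1.0)
    return left + right
```

For every mode $n$ and sample point $s$, the quadrature result agrees with $E_n P_n(s)$ to the tolerance of the integrator.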
After projecting Eq.~\eqref{eq:COMP-method} onto the space of Legendre polynomials and making use of the orthogonality condition \begin{equation} \sint{P_n(s)P_m(s)} = \frac{2\delta_{mn}}{2n+1}, \end{equation} we recover the following system of equations relating the velocity and the force coefficients \begin{multline} 8\pi\mu \mathbf{u}_1^{(n)} = \left[2\left(\ln\left(\frac{2}{\epsilon}\right)+\frac{1}{2}\right) + E_n \right] \mathbf{f}_1^{(n)} \\ + \frac{2n+1}{2} \sum_{m=0}^{\infty} \Bigg[ \left[2\left(\ln\left(\frac{2}{\epsilon}\right)-\frac{3}{2}\right) + E_m \right]\mathbf{M}_{\parallel}^{(n,m)}\mathbf{f}_1^{(m)} + \mathbf{M}_{0}^{(n,m)}\mathbf{f}_1^{(m)} + \mathbf{M}_{d}^{(n,m)}\mathbf{f}_2^{(m)}\Bigg], \label{eq:COMP-method-projected} \end{multline} where the matrices $\mathbf{M}_{\parallel}^{(n,m)}$, $\mathbf{M}_{0}^{(n,m)}$ and $\mathbf{M}_{d}^{(n,m)}$ are given by \begin{eqnarray} \mathbf{M}_{\parallel}^{(n,m)} &=& \sint{\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)P_n(s)P_m(s)}, \\ \mathbf{M}_{0}^{(n,m)} &=& \sint{\sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_0(s,s')\hat{\mathbf{R}}_0(s,s')}{|\mathbf{R}_0(s,s')|}-\frac{\mathbf{I}+\hat{\mathbf{t}}_1(s)\hat{\mathbf{t}}_1(s)}{|s'-s|}\right]P_n(s)P_m(s')}}, \\ \mathbf{M}_{d}^{(n,m)} &=& \sint{\sdint{\left[\frac{\mathbf{I}+\hat{\mathbf{R}}_d(s,s')\hat{\mathbf{R}}_d(s,s')}{|\mathbf{R}_d(s,s')|} + \frac{\epsilon^2}{2}\frac{\mathbf{I}-3\hat{\mathbf{R}}_d\hat{\mathbf{R}}_d}{|\mathbf{R}_d(s,s')|^3}\right]P_n(s)P_m(s')}}. \end{eqnarray} The second of these matrices involves a removable singularity at $s'=s$, but the quadrature integration methods readily available in MATLAB can evaluate this integral accurately so long as the singular points lie on the boundaries of the integration domain. Therefore, when computing the matrices $\mathbf{M}_{0}^{(n,m)}$ in MATLAB we split the double integral into two parts - $s\in[-1,+1]$, $s'\in[-1,s]$ and $s\in[-1,+1]$, $s'\in[s,+1]$. 
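The same domain splitting can be reproduced in other environments. The following hypothetical Python analogue evaluates the scalar projection $\int_{-1}^{1} P_n(s)\int_{-1}^{1} (P_m(s')-P_m(s))/|s'-s|\,\mathrm{d}s'\,\mathrm{d}s$ with SciPy's `dblquad`, splitting the inner domain at $s'=s$ exactly as described above; by the eigenfunction property and Legendre orthogonality, the result should equal $2E_m\delta_{nm}/(2m+1)$.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import eval_legendre

def projected_singular_integral(n, m):
    """int_{-1}^{1} P_n(s) int_{-1}^{1} (P_m(s') - P_m(s)) / |s' - s| ds' ds,
    with the inner domain split at s' = s so that the singular points lie on
    the boundaries of the two integration subdomains."""
    def f(sp, s):
        if sp == s:  # removable singularity; interior quadrature nodes avoid it
            return 0.0
        return eval_legendre(n, s) * (eval_legendre(m, sp)
                                      - eval_legendre(m, s)) / abs(sp - s)
    lower, _ = dblquad(f, -1.0, 1.0, lambda s: -1.0, lambda s: s)
    upper, _ = dblquad(f, -1.0, 1.0, lambda s: s, lambda s: 1.0)
    return lower + upper
```

For instance, $n=m=1$ gives $2E_1/3 = -4/3$, while mixed modes vanish by orthogonality.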
The infinite system of linear equations from Eq.~\eqref{eq:COMP-method-projected} is truncated to $m \leq N_{\mathrm{Legendre}}$ modes and inverted numerically, in order to find the force density coefficients $\mathbf{f}_1^{(k)}$ in terms of the velocity coefficients $\mathbf{u}_1^{(k)}$, which themselves are linearly dependent on the filament kinematics $(\mathbf{U}_k,\mathbf{\Omega}_k)$. The force density is then integrated along the filaments to find the extended resistance matrix that relates filament kinematics and dynamics. We implement this algorithm in MATLAB and validate it using the tests described in Appendix \ref{app:comptests}.} For each set of parameters $(N,\psi,\epsilon)$ describing the geometry of the helical filament, we vary the number of Legendre modes in our truncation until the numerical solution for an isolated helix settles to within 1\% error. We then make the reasonable assumption that the number of Legendre modes determined from this single-helix self-convergence test is sufficient to obtain the same level of accuracy in our double-helix simulations as well. In general, we find that the required number of Legendre modes increases with the number of helical turns of the filament, because we must be able to capture variations in the force density and filament velocity which have the same wavenumber as the filament centreline. For most simulations presented in this study it was sufficient to use $N_{\mathrm{Legendre}} = 15$, because the helices have a small number of helical turns. \subsection{Relative errors} In the absence of an exact solution, we use the numerical solution from SBT as a reference value against which to {\color{black} validate} our asymptotic model. 
{\color{black} In the previous section, we derived a series expansion for the extended resistance matrix, $\mathbf{R}$, in the form \begin{equation} \mathbf{R} = \mathbf{R}^{(0)} + d^{-1}\mathbf{R}^{(1)} + d^{-2}\mathbf{R}^{(2)} + \mathcal{O}(d^{-3}), \label{eq:expn-R} \end{equation} up to and including second-order terms. We wish to compare this expansion of the resistance matrix with the numerical solution, $\tilde{\mathbf{R}}$, of the fully-coupled integral equations described in Section \ref{sec:comp-method}. However, we cannot compare the matrices $\mathbf{R}$ and $\tilde{\mathbf{R}}$ component-wise, because this would depend on the basis in which we represent the matrices. One can always choose a vector basis in which some component of the ``true'' solution $\tilde{\mathbf{R}}$ is zero, relative to which our approximate solution $\mathbf{R}$ would have an infinite relative error. Therefore, we need to think of the extended resistance matrices as linear operators between the space of filament kinematics and the space of filament dynamics, and define an error for the operator as a whole in a way that is basis-independent. A standard way to do this is to use an operator norm.} Suppose we have some given kinematics $\mathbf{x}$ (the linear and angular velocities of both filaments, so a vector with twelve components) and we want to compute the dynamics $\mathbf{y}$. Then the error in $\mathbf{y}$ is $\Delta \mathbf{y} = \mathbf{R}\mathbf{x} - \tilde{\mathbf{R}}\mathbf{x}$. We define the ``relative error'' in the dynamics to be \begin{equation} E_{\mathrm{dyn}} \equiv \sup_{\mathbf{x}}\left\{ \frac{||\tilde{\mathbf{R}}\mathbf{x} - \mathbf{R}\mathbf{x}||_p}{||\tilde{\mathbf{R}}\mathbf{x}||_p} \right\} = \sup_{\mathbf{y}}\left\{ \frac{||(\mathbf{I} - \mathbf{R}\tilde{\mathbf{R}}^{-1})\mathbf{y}||_p}{||\mathbf{y}||_p} \right\}, \label{eq:rel_error_dynamics} \end{equation} in other words the operator norm of $\mathbf{I} - \mathbf{R}\tilde{\mathbf{R}}^{-1}$.
{\color{black} Note that taking the supremum over the entire space of filament kinematics is important, so that the value we compute for the relative error is not dependent on an arbitrary choice of filament kinematics.} Similarly, we can define the relative error in the kinematics as \begin{equation} E_{\mathrm{kin}} \equiv \sup_{\mathbf{y}}\left\{ \frac{||\tilde{\mathbf{R}}^{-1}\mathbf{y} - \mathbf{R}^{-1}\mathbf{y}||_p}{||\tilde{\mathbf{R}}^{-1}\mathbf{y}||_p} \right\} = \sup_{\mathbf{x}}\left\{ \frac{||(\mathbf{I} - \mathbf{R}^{-1}\tilde{\mathbf{R}})\mathbf{x}||_p}{||\mathbf{x}||_p} \right\}, \label{eq:rel_error_kinematics} \end{equation} so the operator norm of $\mathbf{I} - \mathbf{R}^{-1}\tilde{\mathbf{R}}$. Here again, {\color{black} taking the supremum is important, so that the relative error we compute does not depend on an arbitrary choice of filament dynamics}. \begin{figure} \landscapetrim{17cm}{10cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_1.pdf} \caption{Relative error in (a) helix dynamics and (b) helix kinematics, as defined in Eqs.~\eqref{eq:rel_error_dynamics} and \eqref{eq:rel_error_kinematics} respectively, with $p=2$. {\color{black} As we increase the helix separation, $d$, the asymptotic theory with SBT coefficients} converges to the numerical solution, and the error decays as expected {\color{black} with each higher order included in the theory}. Parameter specification: helices have configurations $(\theta_1,\chi_1,\phi_1) = (0,0,\pi/6)$ and $(\theta_2,\chi_2,\phi_2) = (0,0,2\pi/3)$, and $N=2.75$ helical turns. Helix angle, $\psi = 0.5$ rad, and filament slenderness, $\epsilon = 10^{-2}$, are representative of bacterial flagella.} \label{fig:matrix_errors} \end{figure} In Fig.~\ref{fig:matrix_errors} (a) and (b) we compare the relative errors, {\color{black} defined with a $p=2$ norm}, for different {\color{black} orders in our asymptotic theory with SBT coefficients.
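In practice, with $p=2$ each operator norm is simply the largest singular value of the matrix in question, so both error measures take only a few lines of linear algebra to evaluate. A minimal Python sketch (the function names are ours):

```python
import numpy as np

def relative_error_dynamics(R_approx, R_ref, p=2):
    """E_dyn: operator p-norm of I - R_approx @ inv(R_ref); for p = 2 this is
    the largest singular value, i.e. the worst case over all kinematics."""
    I = np.eye(R_ref.shape[0])
    return np.linalg.norm(I - R_approx @ np.linalg.inv(R_ref), p)

def relative_error_kinematics(R_approx, R_ref, p=2):
    """E_kin: operator p-norm of I - inv(R_approx) @ R_ref."""
    I = np.eye(R_ref.shape[0])
    return np.linalg.norm(I - np.linalg.inv(R_approx) @ R_ref, p)
```

Both measures are invariant under an orthogonal change of basis of the kinematics and dynamics, which is precisely the basis-independence argued for above.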
If our asymptotic series expansion up to $\mathcal{O}(d^{-m})$ terms was calculated correctly, then we would expect the relative error to decay like $ d^{-(m+1)}$, the order of the first neglected terms. This is confirmed by the slopes of our log-log plots, which validate our asymptotic series expansion up to $\mathcal{O}(d^{-2})$. Note that the comparison is only meaningful between the computations and the asymptotic theory with SBT coefficients. This is an unavoidable consequence of our choice to implement the computational method based on SBT. The asymptotic theory with RFT coefficients differs at leading order from the numerical solution based on SBT, and so we would not be able to observe convergence unless we implemented a different computational method based on RFT. The results presented in Fig.~\ref{fig:matrix_errors} (a) and (b) serve to validate the asymptotic series expansion in itself, regardless of the method (RFT or SBT) by which we choose to calculate the leading-order resistance matrix, $\mathbf{S}^{(0)}$, and the force moment, $\mathbf{m}_0$. } {\color{black} Furthermore, by examining the size of the relative error, we deduce that the asymptotic theory can be useful for any $d>L$, which is the regime of validity for our binomial expansion of the Oseen tensor. When the filaments are parallel and orthogonal to the line that connects their centres, we observe that our asymptotic theory with SBT coefficients can achieve 99\% accuracy for $d/L > 1.4$. This accuracy is achieved by the asymptotic solution up to and including $\mathcal{O}(d^{-2})$ terms. Higher accuracy could be obtained either by including more terms in the asymptotic series expansion, or by increasing the distance between the filaments. Based on further results presented in this study, where we also vary the phase difference between filaments, we believe this accuracy estimate to be representative of any parallel configuration of two filaments with this particular helical geometry. 
A broader numerical investigation would be necessary to determine the accuracy of our method for rigid filaments of arbitrary geometry and non-parallel configurations.} \subsection{Time evolution of forces and torques} {\color{black} The main purpose of the asymptotic theory presented in this paper is to provide a systematic method to calculate analytically the specific HIs between two filaments. When carrying out calculations by hand, we are interested in finding relative patterns more than in calculating accurate absolute values, which is the purpose of numerical schemes. With this perspective in mind, we propose to validate the asymptotic theory with RFT coefficients by looking at the time variation of hydrodynamically-induced forces and torques. We consider the case of two slender helices rotating in parallel with the same angular velocity}. Back in Fig.~\ref{fig:matrix_errors}, we examined the relative error for a fixed orientation of the helices, and we varied the distance between the filaments to see how the error decays: a quantitative {\color{black} validation} of our asymptotic model. In Fig.~\ref{fig:results_time_evolution}, however, we fix the distance between the helical filaments and we let time flow, and the orientation of the filaments along with it, to look for patterns over time: a qualitative {\color{black} validation} of our asymptotic model. {\color{black} Because the helices are vertical, their body-fixed axis $\mathbf{e}_3$ is parallel to the laboratory frame $\mathbf{e}_z$. Hence, the phase angle $\phi$ around $\mathbf{e}_z$ and the spin angle $\chi$ around $\mathbf{e}_3$, as defined in Eqs.~\eqref{eq:bodyframe-A}-\eqref{eq:bodyframe-Z}, are interchangeable.
Without loss of generality, we can describe the configuration of the filaments from Figs.~\ref{fig:results_time_evolution} and \ref{fig:results_compareorders} as $(\theta_1,\chi_1,\phi_1) = (0,0,\Omega t)$ and $(\theta_2,\chi_2,\phi_2) = (0,0,\Omega t+\Delta\phi)$.} \begin{figure} \centering \portraittrim{17cm}{20.9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=14cm]{figure_2a.pdf} \caption{Comparison between {\color{black} computations and the asymptotic theory with RFT/SBT coefficients}, by means of the time evolution of forces and torques induced by the second (rightmost) filament on the first (leftmost). The helices are vertical {\color{black} ($\theta=0$)} and rotating with constant angular velocity $\Omega\mathbf{e}_z$. We fix the phase difference $\Delta\phi = \pi/2$ between them, and a horizontal distance equal to the integrated filament length (a-f) or ten times larger (g-l). The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:results_time_evolution} \end{figure} \begin{figure} \centering \portraittrim{17cm}{20.9cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=14cm]{figure_2b.pdf} \caption{Comparison between {\color{black} computations and the asymptotic theory with SBT coefficients} to $\mathcal{O}(d^{-1})$ and $\mathcal{O}(d^{-2})$, by means of the time evolution of forces and torques induced by the second (rightmost) filament on the first (leftmost). The helices are vertical {\color{black} ($\theta=0$)} and rotating with constant angular velocity $\Omega\mathbf{e}_z$. We impose the phase difference $\Delta\phi = \pi/2$ between them, and a horizontal distance equal to the integrated filament length (a-f) or ten times larger (g-l). 
The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:results_compareorders} \end{figure} {\color{black} Our asymptotic theory with both RFT and SBT coefficients} captures the qualitative features of the interaction even for smaller helix separations Fig.~\ref{fig:results_time_evolution} (a)-(f), with the agreement becoming quantitative at larger separations Fig.~\ref{fig:results_time_evolution} (g)-(l). This indicates that our {\color{black}asymptotic series expansion can be used to derive meaningful analytical expressions for the HIs between filaments separated by a distance greater than their contour length, as later demonstrated in Section \ref{sec:application}.} We also provide a direct comparison between the {\color{black} asymptotic theory with SBT coefficients} at $\mathcal{O}(d^{-1})$ and $\mathcal{O}(d^{-2})$, in Fig.~\ref{fig:results_compareorders}. These plots provide clearer visual evidence that higher-order corrections improve the fidelity of the asymptotic solution, as opposed to Fig.~\ref{fig:matrix_errors} where the evidence {\color{black} spanned a wider range of kinematic conditions, but was presented in a more condensed format}. \section{Application to helical pumps} \label{sec:application} To demonstrate the usefulness of our asymptotic theory, we now apply and extend our analytical calculations to the interaction of rotating helical pumps. This particular application of our theory is motivated by previous theoretical and experimental studies of helical micropumps \cite{Darnton2004,Kim2008,Martindale2017,Dauparas2018,Buchmann2018}. Experimentally, these systems often take the form of bacterial carpets or forests, where the bacteria are stuck to a substrate while their helical flagellar filaments are free to rotate and pump fluid around. 
\subsection{Problem specification} We consider two parallel identical helices, rotating with constant angular velocity $\tilde{\Omega}$, as illustrated in Fig.~\ref{fig:mean_FT}. {\color{black} We may choose the laboratory frame so that the filaments are parallel to the $z$-axis and, therefore, the tilt angle $\theta$ is identically zero. When $\theta = 0$, the angles $\phi$ and $\chi$ can be used interchangeably to refer to the rotation of the filament about its own axis, because the body-fixed axis $\mathbf{e}_3$ is parallel to $\mathbf{e}_z$. Without loss of generality, we describe the configuration of the filaments using the angle $\chi=0$ and a varying phase $\phi$.} Because they are driven at constant angular velocity, the helices maintain a fixed phase difference $\phi_2-\phi_1 = \Delta\phi$. If we rescale time by $\tilde{\Omega}^{-1}$, such that $\Omega=1$ in dimensionless terms, then \begin{equation} \phi_1 = t, \quad \phi_2=t + \Delta\phi. \end{equation} Since the helices are held in place, they exert a net force on the fluid, which is pumped in the positive $z$ direction for left-handed helices rotating clockwise. To characterise the net long-term effect of the helical pumps, we need to consider the time-averaged forces and torques exerted by the rotating filaments on the fluid, so we define the mean \begin{equation} \mean{Y} = \frac{1}{2\pi}\int_{0}^{2\pi}Y(t) \mathrm{d}t, \end{equation} for any time-varying quantity $Y$ that we are interested in. We may also want to look at the oscillations of this quantity around its mean value, so we define the variance over time as \begin{equation} \var{Y} = \frac{1}{2\pi}\int_{0}^{2\pi}(Y(t)-\mean{Y})^2\mathrm{d}t. 
\end{equation} \begin{figure} \landscapetrim{17cm}{15cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_3a.pdf} \caption{Average forces and torques exerted by the leftmost helix due to the presence of a second parallel helix rotating at a distance $d$ to the right, with fixed phase difference $\Delta\phi = \pi/4$. The data points come from SBT simulations including HIs. The power law triangles indicate that the average forces and torques along the axis of the helix (c,f) are an $\mathcal{O}(d^{-1})$ effect, while the other forces and torques (a,b,d,e) are an $\mathcal{O}(d^{-2})$ effect. Simulation parameters: $\psi = 0.5043$ rad, $\epsilon =0.0038$, $N=2.5$ helical turns.} \label{fig:mean_FT} \end{figure} Because our focus is on the HIs between helical pumps, we need to compare the effect of a helical pump when it is part of an ensemble, to what it otherwise would be if the helical pump was operating on its own. If $Y(t;d)$ is a force or torque exerted by a helical pump when there is second helical pump operating at distance $d$ away, then we define \begin{equation} Y_\infty (t) = \lim_{d\to\infty}Y(t;d), \end{equation} which is the force or torque that the same helical pump would exert in isolation. For our asymptotic theory, this corresponds to the leading-order terms in Section \ref{sec:leading-order}. For our computational method, this corresponds to the numerical solution of Eq.~\eqref{eq:COMP-method} without the interaction term $\mathcal{J}[\mathbf{f}_2(s'),\mathbf{d}]$. In the next sections, we will look at differences of the form $\mean{Y} - \mean{Y_\infty}$ to understand if HIs increase or decrease the net effect of the helical pumps on the fluid, and differences of the form $\var{Y} - \var{Y_\infty}$ to investigate whether HIs make the pumping fluctuate more or less over time. 
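In practice, $\mean{Y}$ and $\var{Y}$ are estimated from uniform samples of $Y$ over one period; for a trigonometric polynomial, the uniform-grid average is exact as soon as the number of samples exceeds the highest harmonic present. A minimal Python sketch of this kind of periodic averaging (the names are ours):

```python
import numpy as np

def period_mean(samples):
    """<Y> estimated from uniform samples of Y(t) over one full period."""
    return np.mean(samples, axis=0)

def period_var(samples):
    """Var(Y) = <(Y - <Y>)^2> estimated from the same uniform samples."""
    return np.mean((samples - period_mean(samples)) ** 2, axis=0)
```

For example, $Y(t) = a + b\cos t$ sampled twelve times per period returns $\mean{Y}=a$ and $\var{Y}=b^2/2$ exactly.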
\subsection{Computational results} In our simulations, we sample the forces and torques exerted by two helical pumps at twelve regular intervals over one period of rotation, i.e.~$0 \leq \Omega t\leq 2\pi$. The time-averaged forces and torques obtained in this way are shown in Fig.~\ref{fig:mean_FT}, while their variances over time are shown in Fig.~\ref{fig:var_FT}, both for a given phase difference $\Delta\phi = \pi/4$ and varying inter-filament distance. The geometry of the helices was chosen to be representative of bacterial flagella: helix angle, $\psi = 0.5043$ rad, filament slenderness, $\epsilon =0.0038$, and $N=2.5$ helical turns. \begin{figure} \landscapetrim{17cm}{15cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_3b.pdf} \caption{Variance over time in the forces and torques exerted by the leftmost helix due to the presence of a second parallel helix rotating at a distance $d$ to the right, with fixed phase difference $\Delta\phi = \pi/4$. The data points come from SBT simulations including HIs. The power law triangles indicate that the variances in force and torque along the axis of the helix (c,f) are an $\mathcal{O}(d^{-2})$ effect, while the other forces and torques (a,b,d,e) are an $\mathcal{O}(d^{-1})$ effect. Simulation parameters: $\psi = 0.5043$ rad, $\epsilon =0.0038$, $N=2.5$ helical turns.} \label{fig:var_FT} \end{figure} We will now seek to interpret the trends observed in these computations using our asymptotic theory. Specifically, we want to understand why the interaction between the filaments alters the time average of $F_z$ and $T_z$ by $\mathcal{O}(d^{-1})$, but their fluctuation over time by $\mathcal{O}(d^{-2})$. Meanwhile, for the forces and torques in the $x$ and $y$ direction, we want to understand why the time average changes by $\mathcal{O}(d^{-2})$ due to inter-filament interaction, but their fluctuation over time changes by $\mathcal{O}(d^{-1})$. 
\subsection{Asymptotic theory} We start by computing the intrinsic resistance matrix $\mathbf{S}^{(0)}(0,0,\phi)$ for a vertical helix with arbitrary phase $\phi$, which we will denote from now on simply as $\mathbf{S}^{(0)}(\phi)$. We need to apply the change of basis from Eqs.~\eqref{eq:result-S0(p1)} with the orthogonal matrix \begin{equation} \mathbf{Q}(0,0,\phi) = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix}. \end{equation} Because the filament is symmetric under a rotation by angle $\pi$ around the first vector ($\mathbf{e}_1$) in the body frame basis, the resistance matrix expressed in the body frame has the structure \begin{equation} \mathbf{S}_0 = \begin{pmatrix} A_{11} & 0 & 0 & B_{11} & 0 & 0 \\ 0 & A_{22} & A_{23} & 0 & B_{22} & B_{23}\\ 0 & A_{32} & A_{33} & 0 & B_{32} & B_{33}\\ B_{11} & 0 & 0 & D_{11} & 0 & 0 \\ 0 & B_{22} & B_{32} & 0 & D_{22} & D_{23}\\ 0 & B_{23} & B_{33} & 0 & D_{32} & D_{33}\\ \end{pmatrix}, \end{equation} noting that $A_{23} = A_{32}$ and $D_{23} = D_{32}$ because the resistance matrix is symmetric. 
Hence, after a rotation by angle $\phi$, the matrix can be written as \begin{equation} \mathbf{S}^{(0)}(\phi) = \begin{pmatrix} \mathbf{A}(\phi) & \mathbf{B}(\phi) \\ \mathbf{B}(\phi)^T & \mathbf{D}(\phi) \end{pmatrix}, \label{eq:S_structure} \end{equation} where the matrices $\mathbf{A}(\phi)$, $\mathbf{B}(\phi)$ and $\mathbf{D}(\phi)$ have the same structure with respect to $\phi$, that is \begin{equation} \mathbf{A}(\phi) = \begin{pmatrix} A_0 + \Delta A \cos(2\phi) & \Delta A \sin(2\phi) & -A_{23}\sin(\phi) \\ \Delta A \sin(2\phi) & A_0 - \Delta A \cos(2\phi) & A_{23}\cos(\phi) \\ -A_{32}\sin(\phi) & A_{32}\cos(\phi) & A_{33} \end{pmatrix}, \label{eq:Aphi_structure} \end{equation} where we define $A_0 = (A_{11} + A_{22})/2$ and $\Delta A = (A_{11}-A_{22})/2$, and similarly for $\mathbf{B}(\phi)$ and $\mathbf{D}(\phi)$ but with $A_{ij} \mapsto B_{ij}$ and $A_{ij} \mapsto D_{ij}$ respectively. Without loss of generality, we may choose our laboratory frame to coincide with the interaction frame of the two filaments, so the directed distance between the two helices is $\mathbf{d} = d\mathbf{e}_x$. From Eqs.~\eqref{eq:defn-J} and \eqref{eq:result-C1}, we can write \begin{equation} C^{(1)}_{ij} (\phi_1,\phi_2) = -\frac{1}{8\pi}\left(2S^{(0)}_{i1}(\phi_1)S^{(0)}_{1j}(\phi_2)+S^{(0)}_{i2}(\phi_1)S^{(0)}_{2j}(\phi_2)+S^{(0)}_{i3}(\phi_1)S^{(0)}_{3j}(\phi_2) \right), \label{eq:appln-C1} \end{equation} and then replace the expressions for the elements of $\mathbf{S}(\phi)$ from Eqs.~\eqref{eq:S_structure}-\eqref{eq:Aphi_structure}. Furthermore, from Eq.~\eqref{eq:result-Pmatrix} we derive the matrix \begin{equation} \mathbf{P}(\phi) = \begin{pmatrix} \mathbf{G}(\phi) & \mathbf{H}(\phi) \end{pmatrix}, \label{eq:P_structure} \end{equation} where the matrices $\mathbf{G}(\phi)$ and $\mathbf{H}(\phi)$ have the same structure with respect to the phase $\phi$. 
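The claimed $\phi$-dependence is easy to verify: conjugating a body-frame block with the sparsity pattern above by $\mathbf{Q}(0,0,\phi)$ must reproduce Eq.~\eqref{eq:Aphi_structure} entry by entry. A short illustrative Python check for the symmetric blocks (where $A_{32}=A_{23}$, and likewise for $\mathbf{D}$):

```python
import numpy as np

def Q(phi):
    """Rotation by phi about e_z, i.e. Q(0, 0, phi) in the text."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def A_of_phi(phi, A11, A22, A23, A33):
    """Claimed structure of A(phi), with A0 = (A11+A22)/2, dA = (A11-A22)/2."""
    A0, dA = 0.5 * (A11 + A22), 0.5 * (A11 - A22)
    return np.array([
        [A0 + dA * np.cos(2 * phi), dA * np.sin(2 * phi), -A23 * np.sin(phi)],
        [dA * np.sin(2 * phi), A0 - dA * np.cos(2 * phi), A23 * np.cos(phi)],
        [-A23 * np.sin(phi), A23 * np.cos(phi), A33]])
```

The rotated block $\mathbf{Q}(\phi)\mathbf{A}\mathbf{Q}(\phi)^T$ matches the claimed structure for arbitrary coefficients and phases.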
Because $\mathbf{e}_1 = \cos\phi\mathbf{e}_x + \sin\phi \mathbf{e}_y$, we have \begin{equation} \mathbf{G}(\phi) = \frac{1}{8\pi}\begin{pmatrix} \mathcal{M}_1 \cos\phi & \mathcal{M}_1 \sin\phi & 0 \\ B_{23}\sin(\phi) & -B_{23}\cos(\phi) & -B_{33} \\ \Delta B \sin(2\phi) & B_0 - \Delta B \cos(2\phi) & B_{32}\cos(\phi) \end{pmatrix}, \label{eq:Gphi_structure} \end{equation} and similarly for $\mathbf{H}(\phi)$ but with $B_{ij} \mapsto D_{ij}$ and $\mathcal{M}_1 \mapsto \mathcal{M}_4$. We are now ready to evaluate the mean forces and torques, and their fluctuations over time, for the specific case of constant rotation about the helical axis $\mathbf{e}_3 = \mathbf{e}_z$. The two helical pumps rotate with constant angular velocities $\mathbf{\Omega}_1 = \mathbf{e}_z$ and $\mathbf{\Omega}_2 = \mathbf{e}_z$, since $\Omega = 1$ in our chosen units of time. Therefore, the forces and torques exerted by the first filament are \begin{equation} \begin{pmatrix} \mathbf{F}_1 \\ \mathbf{T}_1 \end{pmatrix}_{\hspace{-.15cm}i} = S^{(0)}_{i6}(t) + \frac{C^{(1)}_{i6}(t,t+\Delta\phi)}{d} + \frac{S^{(2)}_{i6}(t,t+\Delta\phi) + C^{(2)}_{i6}(t,t+\Delta\phi)}{d^2} + \mathcal{O}(d^{-3}), \label{eq:appln-FT} \end{equation} where we have substituted the phases $\phi_1 = t,~\phi_2 = t + \Delta\phi$. {\color{black} \subsection{Forces and torques parallel to axis of rotation}} We begin by looking at the force exerted by the leftmost filament along its helical axis, $\mathbf{e}_3 = \mathbf{e}_z$. From Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-FT}, we see that \begin{equation} F_z(t) = B_{33}+d^{-1}C^{(1)}_{36}(t,t+\Delta\phi) + \mathcal{O}(d^{-2}), \end{equation} which is constant at leading order with $\mean{F_z^\infty} = B_{33}$. 
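Averaging the first-order corrections term by term only requires the elementary identities $\langle\sin t\,\sin(t+\Delta\phi)\rangle = \langle\cos t\,\cos(t+\Delta\phi)\rangle = \tfrac{1}{2}\cos\Delta\phi$, which is where the factor $\tfrac{3}{2}\cos(\Delta\phi)$ below comes from. A quick numerical sanity check in Python:

```python
import numpy as np

def time_average(f, n=4096):
    """(1/2 pi) int_0^{2 pi} f(t) dt by uniform sampling, exact for
    trigonometric polynomials of degree below n/2."""
    t = np.arange(n) * (2.0 * np.pi / n)
    return np.mean(f(t))
```

Applied to $2\sin t\sin(t+\Delta\phi)+\cos t\cos(t+\Delta\phi)$, the average equals $\tfrac{3}{2}\cos\Delta\phi$ for any phase difference.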
The first-order correction, given by Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-C1}, will be \begin{equation} C^{(1)}_{36} = -\frac{1}{8\pi }\left[A_{33}B_{33}+A_{23}B_{23}\left(2\sin(t)\sin(t+\Delta\phi)+\cos(t)\cos(t+\Delta\phi)\right)\right], \label{eq:C36} \end{equation} which has a non-zero time-average. Hence, the mean thrust provided by the helical pump is \begin{equation} \mean{F_z} - \mean{F_z^\infty} = -\frac{1}{8\pi d} \left(A_{33}B_{33}+\frac{3}{2}A_{23}B_{23}\cos(\Delta\phi)\right) + \mathcal{O}(d^{-2}), \label{eq:result-meanFz} \end{equation} so indeed the interaction between the filaments changes the mean thrust by $\mathcal{O}(d^{-1})$, as seen in the computations. {\color{black} Note that the result in Eq.~\eqref{eq:result-meanFz} is independent of the method (RFT or SBT) by which we choose to evaluate the coefficients $A_{33}, B_{33}, A_{23}$ and $B_{23}$. In Fig.~\ref{fig:helices_time_average} (e), we examine how the $\mathcal{O}(d^{-1})$ change in thrust depends on the phase difference between the filaments}. The {\color{black} asymptotic theory with SBT coefficients} provides perfect quantitative agreement in the limit of large $d$, while the {\color{black} asymptotic theory with RFT coefficients} has an approximate error of 5\% but captures all qualitative features. \begin{figure} \landscapetrim{17.25cm}{16cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_456.pdf} \caption{Average forces (a,c,e) and torques (b,d,f) due to HIs between the helices, as a function of the phase difference between filaments. The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella.
The helices have $N=2.5$ helical turns.} \label{fig:helices_time_average} \end{figure} Because $F_z$ is constant at leading order, i.e.~$\var{F_z^\infty} = 0$, its variance over time will be given by \begin{equation} \var{F_z} - \var{F_z^\infty} = \frac{1}{ d^2}\left(\mean{C^{(1)}_{36}(t,t+\Delta\phi)^2} - \mean{C^{(1)}_{36}(t,t+\Delta\phi)}^2 \right) + \mathcal{O}(d^{-3}), \end{equation} which is indeed an $\mathcal{O}(d^{-2})$ effect as seen in computations. This is shown in Fig.~\ref{fig:helices_variances_over_time} (e), where we look at how this $\mathcal{O}(d^{-2})$ effect depends on the phase difference between the filaments. Once again, the {\color{black} asymptotic theory with SBT coefficients} provides quantitative agreement, while the {\color{black}theory with RFT coefficients captures the correct shape and order of magnitude}. Moving on to the torque exerted by the leftmost filament along its helical axis, we can derive in a similar way expressions for the time-average \begin{equation} \mean{T_z} - \mean{T_z^\infty} = -\frac{1}{8\pi d} \left(B_{33}^2+\frac{3}{2}B_{23}^2\cos(\Delta\phi)\right) + \mathcal{O}(d^{-2}), \label{eq:result-meanTz} \end{equation} and the fluctuation over time \begin{equation} \var{T_z} - \var{T_z^\infty} = \frac{1}{ d^2}\left(\mean{C^{(1)}_{66}(t,t+\Delta\phi)^2} - \mean{C^{(1)}_{66}(t,t+\Delta\phi)}^2 \right) + \mathcal{O}(d^{-3}), \label{eq:result-varTz} \end{equation} which are compared against computations in Figs.~\ref{fig:helices_time_average} (f) and \ref{fig:helices_variances_over_time} (f), respectively. \begin{figure} \landscapetrim{17.25cm}{16cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_789.pdf} \caption{Variance in forces (a,b,e) and torques (c,d,f) due to HIs between the helices, as a function of the phase difference between filaments.
The helix angle, $\psi = 0.5043$ rad, and filament slenderness, $\epsilon =0.0038$, were chosen as representative of bacterial flagella. The helices have $N=2.5$ helical turns.} \label{fig:helices_variances_over_time} \end{figure} \vspace{\baselineskip} {\color{black} \subsection{Forces and torques perpendicular to axis of rotation}} Next, we evaluate the forces and torques perpendicular to the filament axis, starting with $F_x$. From Eqs.~\eqref{eq:S_structure},\eqref{eq:Aphi_structure} and \eqref{eq:appln-FT}, we see that \begin{multline} F_x(t) = -B_{23}\sin(t) + d^{-1}C^{(1)}_{16}(t,t+\Delta\phi) + \\ d^{-2}(S^{(2)}_{16}(t,t+\Delta\phi) + C^{(2)}_{16}(t,t+\Delta\phi)) + \mathcal{O}(d^{-3}), \end{multline} which averages out to zero at leading order, i.e.~$\mean{F_x^\infty} = 0$. The first-order correction, \begin{multline} C^{(1)}_{16} = -\frac{1}{8\pi }\left[-A_{23}B_{33}\sin(t) - 2A_0 B_{23}\sin(t+\Delta\phi) \right. \\ - \left. \Delta A B_{23}\left(2\cos(2t)\sin(t+\Delta\phi)-\sin(2t)\cos(t+\Delta\phi)\right) \right], \label{eq:C16} \end{multline} also averages out to zero, so the mean of $F_x$ is an $\mathcal{O}(d^{-2})$ effect as seen in Fig.~\ref{fig:mean_FT} (a). Using Eqs.~\eqref{eq:result-S2},\eqref{eq:S_structure} and \eqref{eq:Aphi_structure}, we obtain that \begin{equation} \langle S^{(2)}_{16}(t,t+\Delta\phi) \rangle = 0. \end{equation} Then, by using Eqs.~\eqref{eq:result-C2},\eqref{eq:S_structure},\eqref{eq:Aphi_structure},\eqref{eq:P_structure} and \eqref{eq:Gphi_structure}, we get that \begin{equation} \langle C^{(2)}_{16}(t,t+\Delta\phi) \rangle = -\frac{1}{16\pi}\left(A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1\right)\sin(\Delta\phi), \label{eq:C2_16} \end{equation} and hence \begin{equation} \mean{F_x} - \mean{F_x^\infty} = -\frac{1}{16\pi d^2}\left(A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1\right)\sin(\Delta\phi).
\label{eq:result-meanFx} \end{equation} Because the time-average of $F_x$ is only $\mathcal{O}(d^{-2})$, we deduce that the variance over time is \begin{equation} \var{F_x} = \mean{(-B_{23}\sin(t)+d^{-1}C^{(1)}_{16}(t,t+\Delta\phi) + \mathcal{O}(d^{-2}))^2}. \end{equation} Since $F_x$ oscillates at leading order with variance $\var{F_x^\infty} = B_{23}^2/2$, the variance due to HIs is given by \begin{equation} \var{F_x} - \var{F_x^\infty}= -\frac{2B_{23}}{d}\mean{\sin(t)C^{(1)}_{16}(t,t+\Delta\phi)} + \mathcal{O}(d^{-2}), \end{equation} so indeed an $\mathcal{O}(d^{-1})$ effect as seen in Fig.~\ref{fig:var_FT} (a). Using Eq.~\eqref{eq:C16}, we arrive at the final result \begin{equation} \var{F_x} - \var{F_x^\infty}= -\frac{B_{23}}{8\pi d}\left(A_{23}B_{33} + 2A_0 B_{23}\cos(\Delta\phi) + \frac{1}{2} \Delta A B_{23}\cos(\Delta\phi) \right) + \mathcal{O}(d^{-2}). \label{eq:result-varFx} \end{equation} The analytical expressions from Eqs.~\eqref{eq:result-meanFx} and \eqref{eq:result-varFx} are compared against computational results in Fig.~\ref{fig:helices_time_average} (a) and \ref{fig:helices_variances_over_time} (a), respectively. As above, we have quantitative agreement between computations and the {\color{black} asymptotic theory with SBT coefficients} in the limit $d\to\infty$, and qualitative agreement with the {\color{black} asymptotic theory with RFT coefficients}.
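All of the time-averages above reduce to elementary trigonometric integrals over one period. As an illustrative numerical check (plain Python, not part of the derivation; the amplitude is an arbitrary placeholder), the sketch below verifies the identity $\langle 2\sin(t)\sin(t+\Delta\phi)+\cos(t)\cos(t+\Delta\phi)\rangle = \frac{3}{2}\cos(\Delta\phi)$ behind Eq.~\eqref{eq:result-meanFz}, and the leading-order variance of an oscillation $-B_{23}\sin(t)$:

```python
import math

def time_average(f, n=4096):
    """Average a 2*pi-periodic function over one period (midpoint rule,
    which is exact here because the integrands are finite Fourier series)."""
    h = 2 * math.pi / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h / (2 * math.pi)

dphi = 0.7   # arbitrary phase difference
B23 = 1.3    # arbitrary placeholder amplitude

# <2 sin(t) sin(t + dphi) + cos(t) cos(t + dphi)> = (3/2) cos(dphi)
avg = time_average(lambda t: 2 * math.sin(t) * math.sin(t + dphi)
                   + math.cos(t) * math.cos(t + dphi))
assert abs(avg - 1.5 * math.cos(dphi)) < 1e-9

# the variance of the leading-order oscillation -B23 sin(t) is B23^2 / 2
mean = time_average(lambda t: -B23 * math.sin(t))
var = time_average(lambda t: (-B23 * math.sin(t) - mean) ** 2)
assert abs(var - B23 ** 2 / 2) < 1e-9
```

The same two identities account for every factor of $\frac{3}{2}\cos(\Delta\phi)$ and every leading-order variance quoted in this section.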
Just as we have done for $F_x$, we may compute the time-average of the other transverse forces and torques to $\mathcal{O}(d^{-2})$, \begin{eqnarray} \mean{F_y} - \mean{F_y^\infty} &=& \frac{1}{16\pi d^2}\left(2(A_0 D_{33}+B_0 B_{33}) - (A_{23}D_{23} + B_{23}^2 + B_{23}\mathcal{M}_1)\cos(\Delta\phi)\right), \label{eq:result-meanFy} \\ \mean{T_x} - \mean{T_x^\infty} &=& -\frac{1}{16\pi d^2}\left(B_{23}D_{23} + B_{23}D_{23} + B_{23}\mathcal{M}_4\right)\sin(\Delta\phi), \label{eq:result-meanTx} \\ \mean{T_y} - \mean{T_y^\infty} &=& \frac{1}{16\pi d^2}\left(2(B_0 D_{33}+D_0 B_{33}) - (B_{23}D_{23} + B_{23}D_{23} + B_{23}\mathcal{M}_4)\cos(\Delta\phi)\right). \label{eq:result-meanTy} \end{eqnarray} Similarly, we can derive the fluctuations over time to $\mathcal{O}(d^{-1})$, \begin{eqnarray} \var{F_y} - \var{F_y^\infty} &=& -\frac{B_{23}}{8\pi d}\left(A_{23}B_{33} + \phantom{2}A_0 B_{23}\cos(\Delta\phi) - \frac{3}{2} \Delta A B_{23}\cos(\Delta\phi)\right), \label{eq:result-varFy} \\ \var{T_x} - \var{T_x^\infty} &=& -\frac{D_{23}}{8\pi d}\left(B_{32}B_{33} + 2B_0 B_{23}\cos(\Delta\phi) + \frac{1}{2} \Delta B B_{23}\cos(\Delta\phi)\right), \label{eq:result-varTx} \\ \var{T_y} - \var{T_y^\infty} &=& -\frac{D_{23}}{8\pi d}\left(B_{32}B_{33} + \phantom{2}B_0 B_{23}\cos(\Delta\phi) - \frac{3}{2} \Delta B B_{23}\cos(\Delta\phi)\right). \label{eq:result-varTy} \end{eqnarray} The analytical expressions from Eqs.~\eqref{eq:result-meanFy}-\eqref{eq:result-varTy} are compared against computational results in Fig.~\ref{fig:helices_time_average} (b)-(d) and \ref{fig:helices_variances_over_time} (b)-(d). \vspace{\baselineskip} {\color{black} \subsection{Deducing the dynamics of the second filament}} \label{sec:deducing-second} We remind the reader that the forces and torques plotted in Fig.~\ref{fig:helices_time_average} are those exerted \textit{on} the fluid \textit{by} the leftmost filament -- see Fig.~\ref{fig:interpretation} (a).
Relative to this, the rightmost filament is in the positive $x$ direction, and accordingly we have taken $\hat{\mathbf{d}}=\mathbf{e}_x$ in our calculation of second-order corrections from Eqs.~\eqref{eq:result-meanFx}, \eqref{eq:result-meanFy}-\eqref{eq:result-meanTy}. To obtain the forces and torques exerted by the rightmost filament, we can rotate our coordinate system by an angle $\pi$ about the $z$-axis. First of all, this swaps the filaments around and, hence, reverses the sign of the phase difference. It also changes the signs of all $x$ and $y$ components, but not the $z$ components. {\color{black} Hence, the average dynamics of the second filament satisfy the relations $-\Gamma^{(2)}_{x,y}(\Delta\phi) = \Gamma^{(1)}_{x,y}(-\Delta\phi)$ and $ \Gamma^{(2)}_{z}(\Delta\phi) = \Gamma^{(1)}_{z}(-\Delta\phi)$, where $\Gamma^{(k)}$ is a placeholder for the time-averaged force or torque exerted by the $k$th filament on the fluid.} Because $\langle F_x \rangle$ and $\langle T_x \rangle$ depend on the sine of the phase difference (see Eqs.~\eqref{eq:result-meanFx} and \eqref{eq:result-meanTx}), the rightmost helix exerts the same average force $\langle F_x \rangle$ and torque $\langle T_x \rangle$ as the leftmost helix. Meanwhile, for $\langle F_y \rangle$ and $\langle T_y \rangle$, which depend on the cosine of the phase difference (see Eqs.~\eqref{eq:result-meanFy} and \eqref{eq:result-meanTy}), the rightmost helix exerts an equal and opposite average force and torque to the leftmost helix. Finally, the average $\langle F_z \rangle$ and $\langle T_z \rangle$ are the same for the two helices, because the two quantities depend on the cosine of the phase difference (see Eqs.~\eqref{eq:result-meanFz} and \eqref{eq:result-meanTz}), and the sign of $z$ components has not changed {\color{black} due to the rotation}. 
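These symmetry relations can be made concrete with a toy numerical check. In the sketch below (plain Python; the amplitudes are arbitrary placeholders, not coefficients from this paper), a sine-type average such as $\langle F_x \rangle$ comes out identical for the two filaments, while a cosine-type average such as $\langle F_y \rangle$ comes out equal and opposite:

```python
import math

# Hypothetical time-averaged transverse forces on the first filament,
# with the sine / cosine dependence on the phase difference noted above.
a, b, c = 0.8, -1.1, 0.4   # arbitrary illustrative amplitudes

def mean_Fx_1(dphi):       # sine-type dependence, cf. Eq. (result-meanFx)
    return a * math.sin(dphi)

def mean_Fy_1(dphi):       # cosine-type dependence, cf. Eq. (result-meanFy)
    return b + c * math.cos(dphi)

# Rotating the frame by pi about z swaps the filaments (dphi -> -dphi)
# and flips the sign of the x and y components.
def mean_Fx_2(dphi):
    return -mean_Fx_1(-dphi)

def mean_Fy_2(dphi):
    return -mean_Fy_1(-dphi)

for dphi in (0.0, 0.3, 1.9, math.pi):
    assert math.isclose(mean_Fx_2(dphi), mean_Fx_1(dphi))    # same force
    assert math.isclose(mean_Fy_2(dphi), -mean_Fy_1(dphi))   # equal, opposite
```

The parity of each coefficient in $\Delta\phi$ is therefore all that is needed to deduce the dynamics of the second filament from those of the first.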
\vspace{\baselineskip} \subsection{Interpretation of results} \label{sec:application-interpretation} {\color{black} We now provide some physical interpretation for the earlier computational results. \subsubsection*{Deficit in pumping force}} \begin{figure} \landscapetrim{17cm}{13cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_new.pdf} \caption{\color{black} (Not to scale) Physical mechanism for the reduction in pumping force due to HIs. Top panels illustrate the local velocity of the filament relative to the surrounding fluid. Lower panels show the periodic force density along the filament, rendered at points along a horizontal projection of the centreline. The total force and torque exerted by the helical pump are obtained by integrating the force density around the circle as many times as needed. (a) Due to the anisotropic drag on the slender filament, a rotating helix exerts a net force along its axis of rotation, $\mathbf{e}_3^{(1)}$. If the helix does not have an integer number of turns, there is also a net component of the force along the $\mathbf{e}_2^{(1)}$ direction, due to a ``surplus" of filament on one side (indicated by a thick orange arc on the circular projection of the centreline). (b) Changes to the force density along the second filament due to the $\mathbf{e}_3^{(1)}$ component of the force exerted by the first filament on the fluid. (c) Likewise for the $\mathbf{e}_2^{(1)}$ component of the force.} \label{fig:physicalmechanism} \end{figure} {\color{black} Since the main purpose of the helical pumps is to push fluid along their axes, we start by explaining how HIs affect the vertical \color{black} pumping force, $\langle F_z \rangle$. The leading-order dynamics of a rotating helical pump are illustrated in Fig.~\ref{fig:physicalmechanism} (a) using a local description of the problem (i.e.~no end effects). 
The local velocity of the centreline relative to the fluid is shown at various points along the filament. At one of these points we decompose the velocity into the directions tangent and perpendicular to the filament. Because the perpendicular drag coefficient on a slender rod is higher, by roughly a factor of two, than the parallel drag coefficient, this gives rise to a leading-order viscous drag on the filament, $-\mathbf{f}_1^{(0)}(s)$, that has a negative vertical component. Below the three-dimensional picture of the filament, we draw the projection of the filament centreline onto the horizontal plane. At each point on this circular projection, we show the corresponding force density exerted by the filament on the fluid, $\mathbf{f}_1^{(0)}(s)$, decomposed into vertical and horizontal components. Notice that the force density simply rotates about the axis $\mathbf{e}_3^{(1)}=\mathbf{e}_z$ as we rotate around the circle, due to the rotational symmetry of the system. The total force and torque exerted by the helical pump are obtained by integrating the force density along the entirety of the filament, or equivalently by integrating around the circular projection as many times as needed. For left-handed helices rotating counter-clockwise, the vertical components of the force density are positive, so the helical pump exerts a net positive force in the $\mathbf{e}_3^{(1)}$ direction. The fluid is pumped vertically upwards. By integrating the horizontal components of the force density, we also obtain a net counter-clockwise torque that must be applied to the helical pump to keep it rotating. Furthermore, if the helical filament does not have an integer number of turns, there will be a surplus of filament on one side, indicated by a thick orange line on the circular projection. This means that the helical pump also exerts a net horizontal force on the fluid along the $\mathbf{e}_2^{(1)}$ direction.
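The mechanism just described can be reproduced with a few lines of resistive-force theory. The following sketch (plain Python; the drag coefficients and helix dimensions are illustrative values, not those used in our computations) confirms that a left-handed helix rotating counter-clockwise exerts a net positive axial force on the fluid, and that the thrust vanishes when the perpendicular and parallel drag coefficients are set equal:

```python
import math

def axial_thrust(xi_par, xi_perp, R=1.0, pitch=0.5, turns=2.5, n=4000):
    """Net z-force exerted on the fluid by a left-handed helix rotating
    counter-clockwise at unit rate about e_z, by resistive-force theory:
    f = xi_par (t.u) t + xi_perp (u - (t.u) t), integrated in arclength."""
    Fz = 0.0
    dtheta = 2 * math.pi * turns / n
    for k in range(n):
        th = (k + 0.5) * dtheta
        # left-handed centreline r = (R cos th, -R sin th, pitch * th)
        x, y = R * math.cos(th), -R * math.sin(th)
        # unit tangent from dr/dth
        tx, ty, tz = -R * math.sin(th), -R * math.cos(th), pitch
        norm = math.sqrt(tx * tx + ty * ty + tz * tz)
        tx, ty, tz = tx / norm, ty / norm, tz / norm
        # local velocity u = e_z x r (rotation about the helical axis)
        ux, uy, uz = -y, x, 0.0
        tu = tx * ux + ty * uy + tz * uz
        fz = xi_par * tu * tz + xi_perp * (uz - tu * tz)
        Fz += fz * norm * dtheta    # ds = |dr/dth| dth
    return Fz

# anisotropic drag (perpendicular roughly twice parallel): net thrust
assert axial_thrust(xi_par=1.0, xi_perp=2.0) > 0.0
# isotropic drag: the thrust vanishes identically
assert abs(axial_thrust(xi_par=1.0, xi_perp=1.0)) < 1e-9
```

In the isotropic case the vertical force density vanishes pointwise, because the local velocity is purely horizontal; this isolates drag anisotropy as the origin of the thrust.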
In Fig.~\ref{fig:physicalmechanism} (b) and (c) we explain how the $\mathbf{e}_3^{(1)}$ and the $\mathbf{e}_2^{(1)}$ components of the leading-order force exerted by the first helical pump, respectively, affect the pumping force exerted by the second helical pump. Firstly, the $\mathbf{e}_3^{(1)}$ component of the pumping force exerted by one helical pump on the fluid leads to an upward vertical flow at the position of the other helical pump. This flow is uniform to leading-order in the distance between the filaments. Therefore, the second filament appears to be moving in the negative vertical direction relative to the fluid, with velocity $-\mathbf{u}_\infty(\mathbf{r}_2(s))$, as indicated at various points along the filament in Fig.~\ref{fig:physicalmechanism} (b). Following the same procedure as above, we can determine the local force density along the second filament and depict it along the horizontal projection of the centreline. The first-order change in the force density, $\mathbf{f}_2^{(1)}(s)$, has negative vertical components, because the second filament appears to be moving downward with respect to the background flow. When integrated along the filament, this leads to a deficit in pumping force due to the HIs between the helical pumps. This is confirmed by the negative sign in Fig.~\ref{fig:helices_time_average} (e). Note that this effect is independent of the phase difference between the filaments, because the force density has a constant vertical component along the entire filament, due to rotational symmetry. By integrating the horizontal components of the force density, we also deduce that HIs lead to a deficit in the torque exerted by the helical pumps, as seen in Fig.~\ref{fig:helices_time_average} (f) as well. Hence, less power is needed to actuate two helical pumps with the same angular velocity, if they are rotating in parallel. 
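The thrust deficit can be quantified within the same resistive-force picture: a filament sitting in a uniform upward flow $U\mathbf{e}_z$ moves at $-U\mathbf{e}_z$ relative to the fluid, so the first-order change in the vertical force it exerts on the fluid is $-\int \left[\xi_\parallel \hat{t}_z^2 + \xi_\perp (1-\hat{t}_z^2)\right] U \,\mathrm{d}s$, which is negative for any filament shape. A minimal sketch (plain Python, illustrative parameters):

```python
import math

def thrust_change_uniform_flow(U, xi_par=1.0, xi_perp=2.0,
                               R=1.0, pitch=0.5, turns=2.5, n=4000):
    """First-order change in the z-force exerted on the fluid by a helix
    placed in a uniform background flow U e_z: the filament moves at
    -U e_z relative to the fluid, so (by resistive-force theory)
    f_z^(1) = -(xi_par tz^2 + xi_perp (1 - tz^2)) U at each point."""
    dFz = 0.0
    dtheta = 2 * math.pi * turns / n
    for k in range(n):
        th = (k + 0.5) * dtheta
        tx, ty, tz = -R * math.sin(th), -R * math.cos(th), pitch
        norm = math.sqrt(tx * tx + ty * ty + tz * tz)
        tz_hat = tz / norm
        dFz -= (xi_par * tz_hat ** 2 + xi_perp * (1 - tz_hat ** 2)) * U \
               * norm * dtheta
    return dFz

# an upward entrained flow always reduces the pumping force
assert thrust_change_uniform_flow(U=0.1) < 0.0
```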
Secondly, the $\mathbf{e}_2^{(1)}$ component of the leading-order force exerted by the first helical pump generates a horizontal flow at the position of the second helical pump, which is again depicted at various points along the filament in Fig.~\ref{fig:physicalmechanism} (c). Because the flow is horizontal, we no longer have rotational symmetry so the force density is variable along the filament. Note that we only depict the vertical components of the force in the lower panels of Fig.~\ref{fig:physicalmechanism} (c), to avoid overcrowding the diagram. Unlike Figs.~\ref{fig:physicalmechanism} (a) and (b), where the force density simply rotates around the vertical axis as we go around the centreline, in Fig.~\ref{fig:physicalmechanism} (c) we observe that the vertical component of the force density depends on the alignment of the tangent vector and the direction of the flow. Where the velocity of the filament relative to the background flow, $-\mathbf{u}_\infty(\mathbf{r}_2(s))$, has a positive (or negative) component in the direction of the local tangent, the force density has a positive (or negative) vertical component. Hence, this particular contribution of HIs to the pumping force will depend on the phase difference between the two helical pumps. If the two are in-phase, $\Delta\phi =0$ and $\mathbf{e}_2^{(2)} = \mathbf{e}_2^{(1)}$, there is a surplus of negative vertical force as we integrate along the centreline. If the pumps are anti-phase, $\Delta\phi =\pi$ and $\mathbf{e}_2^{(2)} = -\mathbf{e}_2^{(1)}$, there is a surplus of positive vertical force instead. This dependence on the phase difference is confirmed by Fig.~\ref{fig:helices_time_average} (e), where the deficit in pumping force is greater when the filaments are in-phase than anti-phase. 
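The role of the surplus arc can likewise be checked within resistive-force theory. For a horizontal background flow $U\mathbf{e}_y$, the first-order change in the vertical force density is $(\xi_\perp - \xi_\parallel) U \hat{t}_y \hat{t}_z$, which integrates to zero over an integer number of helical turns but not over a half-integer number, where its sign depends on the orientation of the surplus arc and hence on the phase of the helix. A sketch with illustrative parameters (plain Python):

```python
import math

def dFz_horizontal_flow(phase, U=0.1, xi_par=1.0, xi_perp=2.0,
                        R=1.0, pitch=0.5, turns=2.5, n=4000):
    """First-order change in the z-force on the fluid from a left-handed
    helix in a horizontal background flow U e_y; the helix centreline
    starts at azimuthal angle `phase`. By resistive-force theory,
    f_z^(1) = (xi_perp - xi_par) U ty tz along the filament."""
    dFz = 0.0
    dtheta = 2 * math.pi * turns / n
    for k in range(n):
        th = phase + (k + 0.5) * dtheta
        tx, ty, tz = -R * math.sin(th), -R * math.cos(th), pitch
        norm = math.sqrt(tx * tx + ty * ty + tz * tz)
        ty_hat, tz_hat = ty / norm, tz / norm
        dFz += (xi_perp - xi_par) * U * ty_hat * tz_hat * norm * dtheta
    return dFz

# an integer number of turns: no vertical force from a horizontal flow
assert abs(dFz_horizontal_flow(0.7, turns=3.0)) < 1e-9

# half-integer turns: the contribution is non-zero and its sign flips
# when the surplus arc is rotated by pi (in-phase vs anti-phase neighbour)
a = dFz_horizontal_flow(math.pi / 2)
b = dFz_horizontal_flow(math.pi / 2 + math.pi)
assert abs(a) > 1e-3 and a * b < 0
```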
It is important to emphasise that the dominant effect here comes from the flow discussed in Fig.~\ref{fig:physicalmechanism} (b), which is a result of integrating a constant force along the entire length of the filament. The effect described in Fig.~\ref{fig:physicalmechanism} (c) is a correction that comes from integrating forces along just a fraction of the filament, if the helix deviates from an integer number of turns. Regardless of the phase difference between the helical pumps, each of them will pump fluid with less force when they are interacting, because each filament tries to push fluid that has already been entrained by the other pump. The deficit is greatest when the filaments are in-phase, because they entrain the fluid in the same direction both vertically and horizontally, whereas filaments that are anti-phase will work against each other in the horizontal plane (Fig.~\ref{fig:physicalmechanism} (c)).} {\color{black} \subsubsection*{Fluctuations over time}} Another question to consider is whether HIs dampen or enhance fluctuations in the dynamics of the helical pumps. The results in Fig.~\ref{fig:helices_variances_over_time} suggest that HIs tend to increase the variances over time for most forces and torques. The only exceptions we observe, for this set of parameters, are the forces $F_x$ and $F_y$ when $|\Delta\phi|<\pi/2$ and the torque $T_x$ in a small interval around $\Delta\phi=\pi$. {\color{black} \subsubsection*{Attraction vs.~repulsion}} We have so far considered the average forces and torques exerted by the filaments on the fluid while they are held in place, except for rotating about the vertical axis. It is also important to consider what would happen to the helices if they were not held in place, but free to move in response to the forces and torques exerted on them by the fluid. Note that the time averages we previously computed assumed that the helices remain vertical. 
However, we may still use these results to get a sense for what happens in the early stages, when the axes of the helices are still close to vertical. In Fig.~\ref{fig:interpretation} (b) and (c) we show the horizontal components of the average force exerted by the fluid on {\color{black} two left-handed filaments rotating counter-clockwise. The relative directions of the forces and torques on the two helices were established in Section \hyperref[sec:deducing-second]{IV F}}. The first observation is that, at second order, there is no net attraction or repulsion between the helices. Previous theoretical work had ruled out the possibility of attraction or repulsion between two helices rotating with zero phase difference, based on symmetry arguments \cite{Kim2004b}. Our findings add to that observation by excluding any net attraction or repulsion between helices rotating with any phase difference, so long as they are parallel. Instead, we discover a net migration to one side, because the two filaments experience the same force along the $x$ direction -- Fig.~\ref{fig:interpretation} (b). The direction of migration depends on the sine of the phase difference, so it is not a consistent behaviour. On the other hand, the helices will be swirled around by the fluid in the counter-clockwise direction, because they experience equal and opposite forces along the $y$ direction -- Fig.~\ref{fig:interpretation} (c). The direction of the swirl is consistent with the individual rotation of the helices, and this effect is persistent across all phase differences, as demonstrated by Fig.~\ref{fig:helices_time_average} (c). Note from Fig.~\ref{fig:helices_time_average} (a)-(d) that the sign of $\langle T_x \rangle$ is the same as $\langle F_x \rangle$, likewise for $\langle T_y \rangle$ and $\langle F_y \rangle$. Hence, the arrows in Fig.~\ref{fig:interpretation} (b) and (c) could equally well represent the horizontal components of the torques exerted by the fluid on the filaments. 
The key observation here is that, due to equal and opposite average torques along $y$, the helices would initially experience a splaying out effect where the fluid pushes the tips of the helical pumps apart (the tips being {\color{black} the ends pointing in the same direction as the angular velocity}) and brings their bases together. \subsection{Outlook: circular array of helical pumps} \begin{figure} \landscapetrim{17cm}{8cm} \includegraphics[trim={{.5\cutwidth} {.5\cutheight} {.5\cutwidth} {.5\cutheight}},clip,width=17cm]{figure_10.pdf} \caption{Basic principles of HIs between helical pumps. (a) Minimal setup with two helical pumps rotating with constant angular velocity around their axes. (b) There is no net attraction or repulsion between the two rotating helices (cf.~symmetry arguments for zero phase difference in Ref.~\cite{Kim2004b}), but rather a sideways migration whose sign depends on the phase difference. (c) There is a persistent (i.e.~independent of phase difference) swirling effect in the same direction as the rotation of the helices. (d) A ring of helical pumps would initially experience counter-clockwise swirling (due to the forces $-\langle F_y \rangle$ exerted by the fluid) and outward splaying of the tips (due to the torques $-\langle T_y \rangle$ exerted by the fluid).} \label{fig:interpretation} \end{figure} Once we understand the basic principles of pairwise HIs between helical pumps, it is natural to consider ensembles with more than two helical pumps. The simplest example is a ring of regularly spaced helical pumps, illustrated from the top in Fig.~\ref{fig:interpretation} (d). For simplicity, let us consider a ring of sufficiently large radius that the dominant HIs come from the nearest neighbours only. We expect the dominant contribution to the horizontal force to come from $\langle F_y \rangle$, which is two orders of magnitude larger than $\langle F_x \rangle$ -- cf.~Fig.~\ref{fig:helices_time_average} (a) and (c).
The effects of $\langle F_y \rangle$ are also consistent, compared to $\langle F_x \rangle$ which depends strongly on the phase difference. Consequently, we focus on the force components perpendicular to the distance between nearest neighbours, depicted in Fig.~\ref{fig:interpretation} (c). By adding the contributions from the left nearest neighbour (L) and the right nearest neighbour (R), we find that the net effect is a force along the circumference of the ring. Therefore, the ring of helical pumps experiences a tendency towards counter-clockwise swirling about the centre. If instead of forces we consider the torques $\langle T_y \rangle$, which are likewise dominant over $\langle T_x \rangle$, we find once again that there is a net torque along the circumference of the ring. This means that the tips of the helical pumps have a tendency to spread out and away from the centre of the ring. Note that the sign of these two hydrodynamic effects (swirling and splaying) would stay the same if we include more than nearest neighbour interactions, due to the symmetry of the system. \section{Discussion} \label{sec:discussion} In this paper, we have considered the problem of HIs between slender filaments in viscous fluids. We have approached the topic theoretically, focusing on the case of two interacting rigid filaments whose dynamics can be described by an extended resistance matrix, Eq.~\eqref{eq:defn-resistance-matrix}. We have solved for the extended resistance matrix and the force distribution along two arbitrarily-shaped filaments as series expansions in inverse powers of the distance between the filaments, up to second-order corrections.
Our asymptotic results from Section \ref{sec:model} are valid {\color{black}in the limit of small aspect ratio, $\epsilon\ll 1$, and in the regime, $d>L$, where the inter-filament separation is greater than the contour length of the filament.} {\color{black}Although HIs decrease in magnitude with increasing distance between the filaments, they continue to play an important role in physical mechanisms such as synchronisation and self-organisation. This provides a strong motivation for developing an analytical theory of HIs to advance our fundamental understanding of such phenomena. While other studies have dealt with the limit $d\ll L$, here we have chosen to focus on the regime $d>L$, which can provide just as many valuable physical insights.} {\color{black} We have evaluated the coefficients in the asymptotic series expansion using both resistive-force theory (RFT) and slender-body theory (SBT), and validated our asymptotic theory against numerical simulations in Section \ref{sec:validation}.} In the final part, Section \ref{sec:application}, motivated by bacterial microfluidic pumps \cite{Darnton2004,Kim2008,Martindale2017,Dauparas2018}, we have demonstrated the usefulness of our asymptotic theory by applying it to the interaction of two rotating helical pumps. Here, we have identified the dependence of forces and torques on the distance and phase difference between the helices, which is illustrated in Figs.~\ref{fig:helices_time_average} and \ref{fig:helices_variances_over_time} and made explicit in Eqs.~\eqref{eq:result-meanFz}-\eqref{eq:result-varTz}, \eqref{eq:result-meanFx}, \eqref{eq:result-varFx}-\eqref{eq:result-varTy}. The analytical expressions are also implicitly dependent on the helix geometry through the components $A_{ij}, B_{ij}, D_{ij}$ of the single-helix resistance matrix, which are given in Appendix \ref{app:RFT}, and the force moments $\mathcal{M}_i$ from Appendix \ref{app:forcemoments_RFT}.
Our theory provides us with new physical understanding of the HIs between helical pumps. We find that the pumping force exerted by each rotating helix is reduced due to HIs, and the reduction is greatest when the helical pumps are rotating in phase with each other. Similarly, the torque required to rotate the two helical pumps is lowest when they are in-phase and greatest when they are antiphase, as the helices are working against each other in the latter case. Because we include second-order corrections in our calculation of the average forces and torques acting on the helical pumps, we are able to determine that there is no net attraction or repulsion between the filaments, but rather a sideways migration whose sign depends on the phase difference. However, we identify two persistent hydrodynamic effects which are independent of the phase difference: a swirl in the direction of rotation of the helices and a splaying out at the tips of the helical pumps (i.e.~the ends pointing in the same direction as the angular velocity). We believe that these effects are consistent with the behaviour observed by Kim and co-authors {\color{black}in the initial stage (i.e. when the filaments are still nearly parallel) of their} macroscopic-scale experiments of flagellar bundling \cite{Kim2003}, despite the fact that our theory is intended for \textcolor{black}{$d > L$} while the experiments were carried out in the \textcolor{black}{$d < L$} regime. {\color{black} This suggests that there may be fundamental similarities in the HIs between helical filaments across different regimes of separation. Without further investigation, it is not possible to quantify in which ways the HIs between bacterial flagella within a bundle ($d<L$) are qualitatively different from the HIs between flagellar filaments that are further apart ($d>L$). 
Our theory provides a starting point to investigate these questions further, analytically.} {\color{black} The primary purpose of our asymptotic theory is to provide a method to calculate, analytically, the specific HIs between two rigid filaments, as opposed to previous theoretical studies which focus on the bulk properties of suspensions of fibers \cite{Shaqfeh1990,Mackaplow1996}. The asymptotic theory with RFT coefficients is suitable for this purpose, since all the coefficients have closed-form solutions provided in Appendices \ref{app:RFT} and \ref{app:forcemoments_RFT}. The asymptotic theory with SBT coefficients can provide a quantitative improvement on some of these results, since SBT calculates the force density along the filament with algebraic accuracy, but the ultimate goal of the asymptotic theory is to capture the qualitative features of HIs such as the dependence on filament geometry and relative configuration. A secondary use of the asymptotic theory could be to speed up the simulation of long time-evolution problems governed by HIs or, in special cases, to provide a way to integrate the equations of motion by hand. The reduction in computation time would come from removing the need to recompute the interaction term $\mathcal{J}$ (see Section \ref{sec:comp-method}) at each time step, as the relative orientation of the two filaments changes. Our asymptotic series expansion provides expressions for the HIs between filaments in terms of the resistance matrix of a single filament, which can be precomputed (either by evaluating the analytical expressions from RFT, or by numerically solving the integral equations of SBT for a single filament) and updated at each time step using a rigid-body rotation to reflect changes in filament orientation. This relies on the filaments being rigid so that the shape of their centreline does not change over time.
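As an illustration of the rigid-body update (a generic sketch in plain Python, independent of the code used for our computations): if $\mathbf{A}$ is one $3\times3$ block of the precomputed single-filament resistance matrix and $\mathbf{Q}$ is the rotation matrix describing the filament's current orientation, the updated block is $\mathbf{Q}\mathbf{A}\mathbf{Q}^{\mathsf{T}}$. For example, a rotation about the helical axis leaves the axial-axial entry unchanged:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate_block(Q, M):
    """Rigid-body update of a 3x3 resistance block: M -> Q M Q^T."""
    QT = [[Q[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(Q, M), QT)

def Qz(a):
    """Rotation by angle a about the z (helical) axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# Placeholder symmetric block standing in for a precomputed single-helix
# resistance block (illustrative numbers only).
A = [[2.0, 0.1, 0.0], [0.1, 2.0, 0.3], [0.0, 0.3, 1.5]]

A_rot = rotate_block(Qz(0.8), A)
assert abs(A_rot[2][2] - A[2][2]) < 1e-9      # axial entry invariant
trace = sum(A[i][i] for i in range(3))
trace_rot = sum(A_rot[i][i] for i in range(3))
assert abs(trace - trace_rot) < 1e-9          # trace invariant
```

A full implementation would update all of the $3\times3$ blocks and the corresponding force moments in the same way at each time step.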
{\color{black} However, we reiterate that the main purpose of our asymptotic theory is to provide a way to evaluate the HIs between filaments analytically, and not to challenge well-established computational methods.} For the simulation of flexible fibers, there exist specialised computational methods that can handle large numbers of filaments with HIs efficiently \cite{Tornberg2004,Maxian2021}.} One advantage of the current asymptotic theory is the compactness of the final results in Eqs.~\eqref{eq:result-C1}, \eqref{eq:result-S2}, and \eqref{eq:result-C2}, which means they can be used to develop analytical models for certain hydrodynamic phenomena that have only been studied computationally until now. Another advantage is that the results of Eqs.~\eqref{eq:result-C1}, \eqref{eq:result-S2}, and \eqref{eq:result-C2} are valid for arbitrary filament shapes, in contrast to other theories of HIs which require a small-amplitude assumption for the shape of the filament. However, no theory is without its limitations. {\color{black} One important restriction is that, within the current setup, our asymptotic theory can only handle filaments in an infinite fluid domain. Further work would be needed to account for external surfaces, such as the cell body of the organism to which the filaments might be attached.} {\color{black} Just as important is the fact that our asymptotic theory, in its current state,} can only fully describe the interaction of rigid filaments. A possible extension is to refine the series expansions for the force distributions from Eqs.~\eqref{eq:expansion-f1} and \eqref{eq:expansion-f2}, which are valid for any type of filament, in order to obtain a comprehensive theory for HIs between flexible filaments as well. We also note that we have neglected HIs due to moment distributions along the centrelines of the filaments.
This is because such contributions would scale like $\epsilon^2/d^2$ and would always be smaller than the second-order corrections from the force distributions, which scale like $\log(\epsilon)/d^2$ and are the final terms included in our asymptotic theory. We have also considered the interactions between multiple slender filaments but only in a qualitative way, when discussing the physics of HIs in a circular array of helical pumps. Our asymptotic theory can be easily extended to include HIs between more than two filaments, because it is based on the method of reflections. With this approach, $j$th-order corrections to the extended resistance matrix come from hydrodynamic effects that have reflected $j$ times between the filament that induces the flow and the filament that feels its effect. The only complication comes from the fact that, in a collection of $N>2$ filaments, there is no single expansion parameter. Instead, there are $\frac{1}{2}N(N-1)$ pairwise distances between the filaments. Hence, the order in which corrections appear in the series expansion must be considered carefully, {\color{black}unless the filaments are so far apart that it is sufficient to consider first-order corrections due to pairwise interactions}. There are many possible applications for the theoretical results presented in this paper, beyond the case of helical pumps discussed in Section \ref{sec:application}. Our asymptotic theory can be used to investigate the collective swimming of elongated microorganisms like the \textit{Spirochaetes} and \textit{Spiroplasma}, as well as some artificial micro-swimmers (e.g.~helical micromachines actuated by an external magnetic field). {\color{black} Amongst all moving appendages in the microscopic world, the closest to being rigid are the bacterial flagellum and nodal cilia, which makes them more suitable for applications of our asymptotic theory. 
Although the distance between flagellar filaments within a bundle is less than their contour length, there are other situations in which bacterial flagella interact on a larger length scale, making these problems directly relevant to our asymptotic theory. Examples include the HIs between filaments at either pole of an amphitrichous bacterium or filaments belonging to different cells in a sparse bacterial carpet or swarm.} Following an extension of our theory to the case of flexible filaments, as discussed before, one could also examine the HIs between eukaryotic cilia and flagella, or between fluctuating polymeric filaments in the cytoplasm, such as actin filaments and microtubules. Another, more technical, avenue for future research will be to bridge the gap between near-field {\color{black}($d\ll L$)} theories of HIs \cite{Man2016} {\color{black} and the present study ($d > L$)}. \section*{Acknowledgements} We gratefully acknowledge funding from the George and Lillian Schiff Fund through the University of Cambridge (studentship supporting M.T.C.) and the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement 682754 to E.L.).
\section{Introduction} This document serves as an example submission. It illustrates the format we expect authors to follow when submitting a paper to ECCV. At the same time, it gives details on various aspects of paper submission, including preservation of anonymity and how to deal with dual submissions, so we advise authors to read this document carefully. \section{Initial Submission} \subsection{Language} All manuscripts must be in English. \subsection{Paper length} Papers submitted for review should be complete. The length should match that intended for final publication. Papers accepted for the conference will be allocated 14 pages (plus additional pages for references) in the proceedings. Note that the allocated 14 pages do not include the references. The reason for this policy is that we do not want authors to omit references for sake of space limitations. Papers with more than 14 pages (excluding references) will be rejected without review. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Do not use the TIMES, or any other font than the default. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in 14 pages if it is reviewed in 16. \subsection{Paper ID} It is imperative that the paper ID is mentioned on each page of the manuscript. The paper ID is a number automatically assigned to your submission when registering your paper submission on the submission site. All lines should be numbered in the initial submission, as in this example document. This makes reviewing more efficient, because reviewers can refer to a line on a page. Line numbering is removed in the camera-ready. \subsection{Mathematics} Please number all of your sections and displayed equations. 
Again, this makes reviewing more efficient, because reviewers can refer to a line on a page. Also, it is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the line numbering will not be present in the final copy, so is not an alternative to equation numbers). Some authors might benefit from reading Mermin's description of how to write mathematics: \url{www.pamitc.org/documents/mermin.pdf}. \section{Policies} To avoid confusion, in case of discrepancies between policies mentioned here and those in the ECCV 2022 webpage, the web page is the one that is updated regularly and its policies shall overrule those appearing here. \subsection{Review Process} By submitting a paper to ECCV, the authors agree to the review process and understand that papers are processed by the Toronto system to match each manuscript to the best possible chairs and reviewers. \subsection{Confidentiality} The review process of ECCV is confidential. Reviewers are volunteers not part of the ECCV organisation and their efforts are greatly appreciated. The standard practice of keeping all information confidential during the review is part of the standard communication to all reviewers. Misuse of confidential information is a severe professional failure and appropriate measures will be taken when brought to the attention of ECCV organizers. It should be noted, however, that the organisation of ECCV is not and cannot be held responsible for the consequences when reviewers break confidentiality. Accepted papers will be published by Springer (with appropriate copyrights) electronically up to three weeks prior to the main conference. 
Please make sure to discuss this issue with your legal advisors as it pertains to public disclosure of the contents of the papers submitted. \subsection{Dual and Double Submissions} By submitting a manuscript to ECCV 2022, authors acknowledge that it has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue including journal, conference, or workshop. Furthermore, no paper substantially similar in content has been or will be submitted to a journal, another conference or workshop during the review period (March 07, 2022 – July 3, 2022). The authors also attest that they did not submit substantially similar submissions to ECCV 2022. Violation of any of these conditions will lead to rejection and the violation will be reported to the other venue or journal, which will typically lead to rejection there as well. The goals of the dual submission policy are (i) to have exciting new work be published for the first time at ECCV 2022, and (ii) to avoid duplicating the efforts of the reviewers. Therefore, all papers under review are checked for dual submissions and this is not allowed, independent of the page size of submissions. For already published papers, our policy is based upon the following particular definition of ``publication''. A publication, for the purposes of the dual submission policy, is defined to be a written work longer than four pages that was submitted for review by peers for either acceptance or rejection, and, after review, was accepted. In particular, this definition of publication does not depend upon whether such an accepted written work appears in a formal proceedings or whether the organizers declare that such work ``counts as a publication''. An arXiv.org paper does not count as a publication because it was not peer-reviewed for acceptance. The same is true for university technical reports. 
However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in a proceedings, if their length is more than 4 pages including citations. Given this definition, any submission to ECCV 2022 should not have substantial overlap with prior publications or other concurrent submissions. As a rule of thumb, the ECCV 2022 submission should contain no more than 20 percent of material from previous publications. \subsection{Requirements for publication} Publication of the paper in the ECCV 2022 proceedings of Springer requires that at least one of the authors registers for the conference and present the paper there. It also requires that a camera-ready version that satisfies all formatting requirements is submitted before the camera-ready deadline. \subsection{Double blind review} \label{sec:blind} ECCV reviewing is double blind, in that authors do not know the names of the area chair/reviewers of their papers, and the area chairs/reviewers cannot, beyond reasonable doubt, infer the names of the authors from the submission and the additional material. Avoid providing links to websites that identify the authors. Violation of any of these guidelines may lead to rejection without review. If you need to cite a different paper of yours that is being submitted concurrently to ECCV, the authors should (1) cite these papers, (2) argue in the body of your paper why your ECCV paper is non trivially different from these concurrent submissions, and (3) include anonymized versions of those papers in the supplemental material. Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work. In fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for technical reports). 
Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith, it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an excellent paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L. and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{detr} as additional material and cite it as \begin{quote} 1. Authors. ``The frobnicatable foo filter'', BMVC 2014 Submission ID 324, Supplied as additional material {\tt bmvc14.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. 
Thus, you may say in the body of the paper ``further details may be found in~\cite{detr}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the ECCV audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. \\ For sake of anonymity, it's recommended to omit acknowledgements in your review copy. 
They can be added later when you prepare the final copy. \section{Manuscript Preparation} This is an edited version of Springer LNCS instructions adapted for ECCV 2022 first paper submission. You are strongly encouraged to use \LaTeX2$_\varepsilon$ for the preparation of your camera-ready manuscript together with the corresponding Springer class file \verb+llncs.cls+. We would like to stress that the class/style files and the template should not be manipulated and that the guidelines regarding font sizes and format should be adhered to. This is to ensure that the end product is as homogeneous as possible. \subsection{Printing Area} The printing area is $122 \; \mbox{mm} \times 193 \; \mbox{mm}$. The text should be justified to occupy the full line width, so that the right margin is not ragged, with words hyphenated as appropriate. Please fill pages so that the length of the text is no less than 180~mm. \subsection{Layout, Typeface, Font Sizes, and Numbering} Use 10-point type for the name(s) of the author(s) and 9-point type for the address(es) and the abstract. For the main text, please use 10-point type and single-line spacing. We recommend using Computer Modern Roman (CM) fonts, which is the default font in this template. Italic type may be used to emphasize words in running text. Bold type and underlining should be avoided. With these sizes, the interline distance should be set so that some 45 lines occur on a full-text page. \subsubsection{Headings.} Headings should be capitalized (i.e., nouns, verbs, and all other words except articles, prepositions, and conjunctions should be set with an initial capital) and should, with the exception of the title, be aligned to the left. Words joined by a hyphen are subject to a special rule. If the first word can stand alone, the second word should be capitalized. The font sizes are given in Table~\ref{table:headings}. \setlength{\tabcolsep}{4pt} \begin{table} \begin{center} \caption{Font sizes of headings. 
Table captions should always be positioned {\it above} the tables. The final sentence of a table caption should end without a full stop} \label{table:headings} \begin{tabular}{lll} \hline\noalign{\smallskip} Heading level & Example & Font size and style\\ \noalign{\smallskip} \hline \noalign{\smallskip} Title (centered) & {\Large \bf Lecture Notes \dots} & 14 point, bold\\ 1st-level heading & {\large \bf 1 Introduction} & 12 point, bold\\ 2nd-level heading & {\bf 2.1 Printing Area} & 10 point, bold\\ 3rd-level heading & {\bf Headings.} Text follows \dots & 10 point, bold \\ 4th-level heading & {\it Remark.} Text follows \dots & 10 point, italic\\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{1.4pt} Here are some examples of headings: ``Criteria to Disprove Context-Freeness of Collage Languages'', ``On Correcting the Intrusion of Tracing Non-deterministic Programs by Software'', ``A User-Friendly and Extendable Data Distribution System'', ``Multi-flip Networks: Parallelizing GenSAT'', ``Self-determinations of Man''. \subsubsection{Lemmas, Propositions, and Theorems.} The numbers accorded to lemmas, propositions, and theorems etc. should appear in consecutive order, starting with the number 1, and not, for example, with the number 11. \subsection{Figures and Photographs} \label{sect:figures} Please produce your figures electronically and integrate them into your text file. For \LaTeX\ users we recommend using package \verb+graphicx+ or the style files \verb+psfig+ or \verb+epsf+. Check that in line drawings, lines are not interrupted and have constant width. Grids and details within the figures must be clearly readable and may not be written one on top of the other. Line drawings should have a resolution of at least 800 dpi (preferably 1200 dpi). For digital halftones 300 dpi is usually sufficient. The lettering in figures should have a height of 2~mm (10-point type). Figures should be scaled up or down accordingly. 
Please do not use any absolute coordinates in figures. Figures should be numbered and should have a caption which should always be positioned {\it under} the figures, in contrast to the caption belonging to a table, which should always appear {\it above} the table. Please center the captions between the margins and set them in 9-point type (Fig.~\ref{fig:example} shows an example). The distance between text and figure should be about 8~mm, the distance between figure and caption about 5~mm. \begin{figure} \centering \includegraphics[height=6.5cm]{eijkel2} \caption{One kernel at $x_s$ ({\it dotted kernel}) or two kernels at $x_i$ and $x_j$ ({\it left and right}) lead to the same summed estimate at $x_s$. This shows a figure consisting of different types of lines. Elements of the figure described in the caption should be set in italics, in parentheses, as shown in this sample caption. The last sentence of a figure caption should generally end without a full stop} \label{fig:example} \end{figure} If possible (e.g. if you use \LaTeX) please define figures as floating objects. \LaTeX\ users, please avoid using the location parameter ``h'' for ``here''. If you have to insert a pagebreak before a figure, please ensure that the previous page is completely filled. \subsection{Formulas} Displayed equations or formulas are centered and set on a separate line (with an extra line or halfline space above and below). Displayed expressions should be numbered for reference. The numbers should be consecutive within the contribution, with numbers enclosed in parentheses and set on the right margin. For example, \begin{align} \psi (u) & = \int_{0}^{T} \left[\frac{1}{2} \left(\Lambda_{0}^{-1} u,u\right) + N^{\ast} (-u)\right] dt \; \\ & = 0 ? \end{align} Please punctuate a displayed equation in the same way as ordinary text but with a small space before the end punctuation. 
\subsection{Footnotes} The superscript numeral used to refer to a footnote appears in the text either directly after the word to be discussed or, in relation to a phrase or a sentence, following the punctuation sign (comma, semicolon, or full stop). Footnotes should appear at the bottom of the normal text area, with a line of about 2~cm in \TeX\ and about 5~cm in Word set immediately above them.\footnote{The footnote numeral is set flush left and the text follows with the usual word spacing. Second and subsequent lines are indented. Footnotes should end with a full stop.} \subsection{Program Code} Program listings or program commands in the text are normally set in typewriter font, e.g., CMTT10 or Courier. \noindent {\it Example of a Computer Program}
\begin{verbatim}
C, M := SOLOv2(img)
\end{verbatim}
\begin{verbatim}
program Inflation (Output)
  {Assuming annual inflation rates of 7%, 8%, and 10% over 10 years};
  const MaxYears = 10;
  var Year: 0..MaxYears;
      Factor1, Factor2, Factor3: Real;
  begin
    Year := 0;
    Factor1 := 1.0; Factor2 := 1.0; Factor3 := 1.0;
    WriteLn('Year  Factor1  Factor2  Factor3');
    repeat
      Year := Year + 1;
      Factor1 := Factor1 * 1.07;
      Factor2 := Factor2 * 1.08;
      Factor3 := Factor3 * 1.10;
      WriteLn(Year:5,Factor1:7:3,Factor2:7:3,Factor3:7:3)
    until Year = MaxYears
  end.
\end{verbatim}
\noindent {\small (Example from Jensen K., Wirth N. (1991) Pascal user manual and report. Springer, New York)} \subsection{Citations} The list of references is headed ``References'' and is not assigned a number in the decimal system of headings. The list should be set in small print and placed at the end of your contribution, in front of the appendix, if one exists. Please do not insert a pagebreak before the list of references if the page is not completely filled. An example is given at the end of this information sheet.
For citations in the text please use square brackets and consecutive numbers: \section{Submitting a Camera-Ready for an Accepted Paper} \subsection{Converting Initial Submission to Camera-Ready} To convert a submission file into a camera-ready for an accepted paper: \begin{enumerate} \item First comment out \begin{verbatim} \usepackage{ruler} \end{verbatim} and the line that follows it. \item The anonymous title part should be removed or commented out, and a proper author block should be inserted, for which a skeleton is provided in a commented-out version. These are marked in the source file as \begin{verbatim} \end{verbatim} and \begin{verbatim} \end{verbatim} \item Please write out author names in full in the paper, i.e. full given and family names. If any authors have names that can be parsed into FirstName LastName in multiple ways, please include the correct parsing in a comment to the editors, below the \begin{verbatim}\author{}\end{verbatim} field. \item Make sure you have inserted the proper Acknowledgments. \end{enumerate} \subsection{Preparing the Submission Package} We need all the source files (LaTeX files, style files, special fonts, figures, bib-files) that are required to compile papers, as well as the camera ready PDF. For each paper, one ZIP-file called XXXX.ZIP (where XXXX is the zero-padded, four-digit paper ID) has to be prepared and submitted via the ECCV 2022 Submission Website, using the password you received with your initial registration on that site. The size of the ZIP-file may not exceed the limit of 60 MByte. The ZIP-file has to contain the following: \begin{enumerate} \item All source files, e.g. LaTeX2e files for the text, PS/EPS or PDF/JPG files for all figures. \item PDF file named ``XXXX.pdf" that has been produced by the submitted source, where XXXX is the four-digit paper ID (zero-padded if necessary). For example, if your paper ID is 24, the filename must be 0024.pdf. 
This PDF will be used as a reference and has to exactly match the output of the compilation. \item PDF file named ``XXXX-copyright.PDF": a scanned version of the signed copyright form (see ECCV 2022 Website, Camera Ready Guidelines for the correct form to use). \item If you wish to provide supplementary material, the file name must be in the form XXXX-supp.pdf or XXXX-supp.zip, where XXXX is the zero-padded, four-digit paper ID as used in the previous step. Upload your supplemental file on the ``File Upload" page as a single PDF or ZIP file of 100 MB in size or less. Only PDF and ZIP files are allowed for supplementary material. You can put anything in this file – movies, code, additional results, accompanying technical reports–anything that may make your paper more useful to readers. If your supplementary material includes video or image data, you are advised to use common codecs and file formats. This will make the material viewable by the largest number of readers (a desirable outcome). ECCV encourages authors to submit videos using an MP4 codec such as DivX contained in an AVI. Also, please submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded. Authors should refer to the contents of the supplementary material appropriately in the paper. \end{enumerate} Check that the upload of your file (or files) was successful either by matching the file length to that on your computer, or by using the download options that will appear after you have uploaded. Please ensure that you upload the correct camera-ready PDF–renamed to XXXX.pdf as described in the previous step as your camera-ready submission. Every year there is at least one author who accidentally submits the wrong PDF as their camera-ready submission. Further considerations for preparing the camera-ready package: \begin{enumerate} \item Make sure to include any further style files and fonts you may have used. 
\item References are to be supplied as BBL files to avoid omission of data while conversion from BIB to BBL. \item Please do not send any older versions of papers. There should be one set of source files and one XXXX.pdf file per paper. Our typesetters require the author-created pdfs in order to check the proper representation of symbols, figures, etc. \item Please remove unnecessary files (such as eijkel2.pdf and eijkel2.eps) from the source folder. \item You may use sub-directories. \item Make sure to use relative paths for referencing files. \item Make sure the source you submit compiles. \end{enumerate} Springer is the first publisher to implement the ORCID identifier for proceedings, ultimately providing authors with a digital identifier that distinguishes them from every other researcher. ORCID (Open Researcher and Contributor ID) hosts a registry of unique researcher identifiers and a transparent method of linking research activities to these identifiers. This is achieved through embedding ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications and patent applications. \subsection{Most Frequently Encountered Issues} Please kindly use the checklist below to deal with some of the most frequently encountered issues in ECCV submissions. {\bf FILES:} \begin{itemize} \item My submission package contains ONE compiled pdf file for the camera-ready version to go on Springerlink. \item I have ensured that the submission package has all the additional files necessary for compiling the pdf on a standard LaTeX distribution. \item I have used the correct copyright form (with editor names pre-printed), and a signed pdf is included in the zip file with the correct file name. \end{itemize} {\bf CONTENT:} \begin{itemize} \item I have removed all \verb| \vspace| and \verb|\hspace| commands from my paper. 
\item I have not used \verb|\thanks| or \verb|\footnote| commands and symbols for corresponding authors in the title (which is processed with scripts) and (optionally) used an Acknowledgement section for all the acknowledgments, at the end of the paper. \item I have not used \verb|\cite| command in the abstract. \item I have read the Springer author guidelines, and complied with them, including the point on providing full information on editors and publishers for each reference in the paper (Author Guidelines – Section 2.8). \item I have entered a correct \verb|\titlerunning{}| command and selected a meaningful short name for the paper. \item I have entered \verb|\index{Lastname,Firstname}| commands for names that are longer than two words. \item I have used the same name spelling in all my papers accepted to ECCV and ECCV Workshops. \item I have inserted the ORCID identifiers of the authors in the paper header (see http://bit.ly/2H5xBpN for more information). \item I have not decreased the font size of any part of the paper (except tables) to fit into 14 pages, I understand Springer editors will remove such commands. \end{itemize} {\bf SUBMISSION:} \begin{itemize} \item All author names, titles, and contact author information are correctly entered in the submission site. \item The corresponding author e-mail is given. \item At least one author has registered by the camera ready deadline. \end{itemize} \section{Conclusions} The paper ends with a conclusion. \clearpage\mbox{}Page \thepage\ of the manuscript. \clearpage\mbox{}Page \thepage\ of the manuscript. This is the last page of the manuscript. \par\vfill\par Now we have reached the maximum size of the ECCV 2022 submission (excluding references). References should start immediately after the main text, but can continue on p.15 if needed. 
\clearpage \section{Behaviour of Duplicate Confusion} For any two predictions $a$ and $b$ and associated confidences $p_a$ and $p_b$ where the connectivity between them is not restricted by the confidence of an intermediary prediction (i.e. $c_{ab} = \min(p_a, p_b)$), the duplicate confusion (DC) associated with these predictions is nondecreasing in $p_a$ and $p_b$. Suppose, without loss of generality, that $p_a > p_b$:
\begin{align*}
\mathrm{DC} &= \frac{1}{2} \sum_i^2 \sum_{j \ne i}^2 p_j \frac{c_{ij}}{p_i} \\
&= \frac{1}{2}\left( p_a \frac{\min(p_a, p_b)}{p_b} + p_b \frac{\min(p_a, p_b)}{p_a} \right) \\
&= \frac{1}{2}\left( p_a + \frac{p_b^2}{p_a} \right)
\end{align*}
The derivatives of the DC with respect to $p_a$ and $p_b$ are thus
\begin{align*}
\frac{\partial \mathrm{DC}}{\partial p_a} &= \frac{1}{2}\left( 1 - \frac{p_b^2}{p_a^2}\right) \\
\frac{\partial \mathrm{DC}}{\partial p_b} &= \frac{p_b}{p_a}
\end{align*}
which are positive on the ranges $[p_b, 1]$ and $(0, 1]$, respectively. Since we assumed that $p_a > p_b$, we thus conclude that the DC for any two predictions with unrestricted connectivity is nondecreasing in $p_a$ and $p_b$. If, instead, there is some other prediction $c$ that bottlenecks the connectivity between $a$ and $b$ such that $c_{ab} = p_c$ (and thus $p_c \le p_a$ and $p_c \le p_b$), we can decompose the DC into three components: the DC between $a$ and $b$, $a$ and $c$, and $b$ and $c$ respectively:
\begin{equation*}
\mathrm{DC} = \frac{p_c}{3}\left(\frac{p_a}{p_b} + \frac{p_b}{p_a}\right) + \frac{1}{3}\left(p_a + \frac{p_c^2}{p_a}\right) + \frac{1}{3}\left(p_b + \frac{p_c^2}{p_b}\right)
\end{equation*}
While the above is nondecreasing in $p_c$, this is no longer the case for $p_a$ and $p_b$. Specifically, the first term (associated with the DC between $a$ and $b$) in the above equation is \textit{nonincreasing} in $p_a$ and $p_b$.
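The monotonicity of the unrestricted two-prediction case is easy to confirm numerically; the following sketch (with hypothetical confidence values) checks the closed form against finite differences:

```python
# Sanity check of DC = (p_a + p_b**2 / p_a) / 2 for two predictions with
# unrestricted connectivity (c_ab = min(p_a, p_b)), assuming p_a > p_b.

def duplicate_confusion(p_a, p_b):
    return 0.5 * (p_a + p_b ** 2 / p_a)

eps = 1e-6
for p_a, p_b in [(0.9, 0.3), (0.6, 0.5), (0.99, 0.01)]:
    base = duplicate_confusion(p_a, p_b)
    # Nondecreasing in both confidences.
    assert duplicate_confusion(p_a + eps, p_b) >= base
    assert duplicate_confusion(p_a, p_b + eps) >= base
print(duplicate_confusion(1.0, 0.5))   # → 0.625
```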
Since $c_{ab}$ is not dependent on $p_a$ or $p_b$, $a$ and $b$ become increasingly bottlenecked by $p_c$ as $p_a$ and $p_b$ increase. As $a$ and $b$ become increasingly unable to fully explain each other, their DC decreases. At the same time, $c$ becomes increasingly explainable by $a$ and $b$ and thus the DC increases. In combination, the total DC decreases with respect to $p_a$ and $p_b$ when they are approximately equal to $p_c$ and increases when they are larger. For more complicated connectivity graphs, the behaviour of the duplicate confusion follows a similar pattern to that described above. \section{Implementation of Contrastive Delaunay Flow and Semantic Sorting} In Section \ref{sec:cdfalgo}, we motivate and propose the use of Contrastive Delaunay Flow and show its effectiveness in amplifying feature differences between similar instances. Pseudocode for the method is presented in Alg.~\ref{alg:cdf}.
\begin{algorithm}
\caption{Pseudocode for constructing CDF for class $c$, given set of ground truth instances $S_k$ with centers $t_k$ and category $c$}\label{alg:cdf}
\KwData{$\{ S_{k} \}_{k = 1\ldots N}, \{ t_k \}_{k = 1\ldots N}, N$}
\KwResult{Flow $\mathbf{f}_c$ characterizing CDF for class $c$}
// Initialize with center flow
\For{$k = 1 \ldots N$}{
$ \mathbf{f}_c[u] \leftarrow \frac{t_k - u}{\| t_k - u \|} \quad , \forall u \in S_k$
}
// No graph in case of single instance
\If{$N == 1$}{return $\mathbf{f}_c$}
// Construct graph as set of directed edges
\eIf{$N == 2$}{$\mathcal{G} \leftarrow \{\{0,1\}, \{1,0\}\}$}{
$\mathcal{G} \leftarrow \text{Delaunay}(t_1, t_2 \ldots t_N)$}
// Construct flow
\For{$(i, j) \in \mathcal{G}$}{
$\hat{f}_{ij} \leftarrow \frac{t_i - t_j}{\|t_i - t_j \|}$
$\mathbf{f}_c[u] \leftarrow \mathbf{f}_c[u] + \mathbb{I}((t_i - u)^T \hat{f}_{ij} > 0) \hat{f}_{ij} , \forall u \in S_i $
}
$\mathbf{f}_c[u] = \frac{\mathbf{f}_c[u]}{\|\mathbf{f}_c[u]\| + \epsilon} \quad \forall u$
return $\mathbf{f}_c$
\end{algorithm}
Moreover, Semantic
Sorting and NMS are also shown to be effective methods for resolving both intra-class hedging (counting) and inter-class hedging (naming) errors. A pseudocode of Semantic Sorting and NMS is presented in Alg. \ref{alg:semnms}. \begin{algorithm} \caption{Pseudocode for semantic sorting and NMS, given instances $S_k$ with category $c_k$ and confidence $p_k$, and semantic masks $M$}\label{alg:semnms} \KwData{$\{ S_{k}, c_{k}, p_{k}\}_{k = 1\ldots N}$, $\{ M_c \}_{c = 1\ldots C}$} \KwResult{Boolean array $keep$ indicating preservation of instances} \For{$k = 1 \ldots N$} { $pr \leftarrow \text{precision}(S_k, M_{c_k} ) $; \\ $iou \leftarrow \text{computeIoU}(S_k, M_{c_k} ) $; \\ $p_k \leftarrow p_k + pr + (1 - iou) $; \\ } $(S, c, p) = \text{sort}(S, c, p); \quad$ // sort by decreasing $p$ \\ $keep \leftarrow [True]\times N$; \\ \For{$k = 1 \ldots N$}{ $overlap \leftarrow \text{precision}(S_k, M_{c_k})$; \\ \eIf{$overlap \ge thr$}{ $keep_k = True$; \\ $M_{c_k} = M_{c_k} \backslash S_k$ \\ }{ $keep_k = False$} } \end{algorithm} \section{Separation of instance features using CDF} In this section, we show more qualitative examples of similarity in the \textit{mask features} leading to merged predictions in the synthetic dataset (Tab. \ref{tab:synthetic}). The CDF represents an explicit output that is contextual, and leads to better instance merging resolution, which tackles the merging problem due to redundant query features, and lack of distinguishing features. Examples are shown in Fig. \ref{fig:heatmaps-supp}. \begin{figure}[t!] \centering \includegraphics[width=0.85\linewidth]{images/heatmaps-supp.png.jpg} \caption{\textbf{More examples of instance merging problem in Synthetic dataset} Each row shows examples of \textit{mask feature} similarity in SOLOv2 and our method. Fig.(a) shows input image patches with similar objects. 
Fig.(b) shows the pixel-wise cosine similarity of the mask features with the mask feature of the pixel marked {\color{red} $\times$} in Fig.(a), along with predictions (shown as bounding boxes). Note that instance merging is rampant among instances that have similar orientation, regardless of their spatial proximity or the instances around them. Fig.(c) shows our method with CDF, which amplifies feature differences (due to contrasting Delaunay neighbors) and resolves the merged predictions } \label{fig:heatmaps-supp} \end{figure} \section{Shortcoming of AP for measuring hedging errors} In this section, we explore some real examples from the COCO validation dataset in terms of hedging errors and their effect on mAP. This shortcoming occurs due to low-confidence false positives that are not explicitly pruned in a post-processing step like NMS (or by thresholding low-confidence predictions after a `soft' NMS). These low-confidence predictions accumulate at the tail end of the precision-recall curve and do not negatively contribute to the mAP metric. Some examples are shown in Fig. \ref{fig:ap-ind-supp}. This is also reflected in Tabs. \ref{tab:ablation}, \ref{tab:cocoresult}, where hedging improves mAP but at the cost of worsening all other metrics (F1-score, LRP, NE, DC). More qualitative results are shown in Figs. \ref{fig:coco-solo-v-ours-1}, \ref{fig:coco-solo-v-ours-2}, \ref{fig:coco-solo-v-ours-3}. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/ap-ind-supp.png.jpg} \caption{\textbf{Shortcoming of AP in resolving the \textit{hedging problem}}: (a) shows the prediction of the SOLOv2 model with Matrix NMS, (b) shows the corresponding P/R curve. (c) shows the prediction with the same network but with Mask NMS, (d) shows the corresponding P/R curve. Note that despite severe hedging (overcounting) in the first case, the AP scores are the same for both cases. 
However, the two exhibit drastically different qualitative behavior, showing that AP is not an adequate metric for evaluating the \textit{hedging problem} } \label{fig:ap-ind-supp} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.001.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting, and Semantic NMS} \label{fig:coco-solo-v-ours-1} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.002.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting, and Semantic NMS} \label{fig:coco-solo-v-ours-2} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.003.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting, and Semantic NMS} \label{fig:coco-solo-v-ours-3} \end{figure*} \section{Conclusion} We highlight two important problems in instance segmentation, namely \textit{merging} and \textit{hedging}. We analyze the ways in which intra- and inter-class hedging errors can increase mAP, and propose metrics that isolate these errors. To address \textit{merging}, we learn a contrastive flow that encourages each pixel to learn a flow dependent on the relative positions of the instances around it. To address \textit{hedging}, we propose a semantic sorting mechanism that re-ranks instances and prunes duplicates, leading to better resolution of both inter- and intra-class hedging. Empirically, we show that many top-down instance segmentation methods suffer from these errors even when they have high mAP. 
Experiments on the COCO dataset show better resolution of the merging and hedging errors by our method compared to other SOTA algorithms. \section{Experiments} \textbf{Implementation}. Given an input image $I \in \mathbb{R}^{3 \times H \times W}$, the FPN backbone generates a list of $[F \times \frac{H}{k} \times \frac{W}{k} ]$ feature maps (where $k$ is the pyramid level), which feed into the category, kernel, and mask feature prediction branches to give $\mathbb{R}^{S_k \times S_k \times C}$, $\mathbb{R}^{S_k \times S_k \times E}$, and $\mathbb{R}^{H' \times W' \times E}$ dimensional outputs respectively, where $H', W' = \frac{H}{4}, \frac{W}{4}$, $C$: semantic classes, $S_k$: grid size, $E$: feature maps. Then, one 1$\times$1 convolution and two 1$\times$1 convolutions with GroupNorm~\cite{groupnorm} and ReLU are applied to the \textit{mask features} to output vector flow $\in \mathbb{R}^{H' \times W' \times 2C}$ and semantic segmentation $\in \mathbb{R}^{H' \times W' \times C}$ predictions respectively. \subsection{Ablation study on instance separation} \paragraph{Synthetic dataset} To isolate the \textit{merging problem}, we construct a synthetic dataset of 20 identical nails placed randomly in each image. Each image is of size $394\times394$, and the nail locations are sampled from a truncated normal distribution around the image center. The training and validation sets consist of 2000 and 500 images respectively. The main challenges of this dataset are instance clutter and the lack of distinct appearance between instances, which can lead to instance merging. \paragraph{Performance with CDF} The ablation is shown in Table \ref{tab:synthetic}. Even in this simple scenario, SOLOv2 suffers from severe overcounting and instance merging. Note that explicit coordinates (CoordConv) do not improve AP, F1, or LRP, indicating no resolution of hedging and masking. Adding CDF improves all results significantly. 
The CDF forces different instances to learn different mask features in order to predict a different flow for each instance, where the flow is a function of its Delaunay triangulation. An example is shown in Tab. \ref{tab:synthetic}. Nails with similar local appearance have a very high cosine similarity to each other in the mask feature space; a kernel feature, when convolved with these mask features, ends up masking both instances. Our method explicitly reduces mask feature similarity in order to predict the CDF, which is a function of the relative positions of the neighbors. This results in a drastic reduction of merged instance predictions compared to the baseline. \input{tab/cocoresult} \subsection{Ablation on coco-minitrain} Next, we investigate the effectiveness of CDF and semantic sorting in improving hedging (intra- and inter-class) and merging. We ablate on the CDF (\cmark/\xmark), the semantic segmentation module (\cmark/\xmark), and the NMS type ({M}atrix/ {m}ask/ {S}emantic). We perform ablations on the coco-minitrain \cite{houghnet} dataset instead of the COCO-train-2017 set, owing to its similar data statistics to the full training set and the reduced cost of running ablations. All hyperparameters for SOLOv2 follow the experimental setup of \cite{solov2}. The results are in \textbf{Table \ref{tab:ablation}}. For a given NMS method (mask/matrix/semantic), adding CDF increases mAP and masking performance over its counterpart without CDF, showing the effectiveness of the CDF in providing reliable context. The boundary IoU metric shows that true positives now have better contour quality compared to the baseline, and LRP{\tiny{Loc}} shows that CDF helps in better localization, leading to better masks. Meanwhile, using Semantic NMS provides at least an 86.8$\%$ decrease in duplicate confusion and a 15.4$\%$ increase in F1 score compared to Matrix and Mask NMS. 
Using Semantic NMS leads to much better DC, F1, LRP{\tiny{FP}}, and NE, showing better resolution of both inter-class and intra-class hedge-predictions. \subsection{Result on COCO-val-2017} We train our full method on the COCO \cite{coco} training set; results on COCO-val-2017 are shown in Table \ref{tab:cocoresult}. To contrast the effect of Semantic NMS, we also evaluate our method with Matrix NMS. Methods like QueryInst \cite{instancequery} use a fixed number of queries (e.g. 100) and produce predictions for each query without performing any NMS. The tail-end behavior of these queries is therefore undefined. This leads to QueryInst having the highest mAP values, but the poorest performance in terms of F1, bIoU (owing to memorization of templates), LRP, and NE (due to FPs from other classes). A higher LRP{\tiny{FP}} indicates more intra-class hedging, while a higher DC indicates strong connectivity among the hedged predictions. However, since the predictions produced by QueryInst communicate with each other and self-separate, QueryInst manages to achieve the second-best performance in terms of duplicate confusion. In general, different algorithms are performant along different dimensions, with HTC \cite{htc} being better at localization and MaskRCNN \cite{maskrcnn} being better at F1 and LRP. Inter-class hedging error is high in other state-of-the-art models because the classification and segmentation branches operate independently and can output multiple classes for the same instance. MaskRCNN uses the same RoIAligned boxes for classification and segmentation, essentially entangling their representations. Furthermore, MaskRCNN chooses one category for each prediction, leading to less dithering among classes. Although MaskRCNN uses NMS, it has high DC, which means the connectivity of its hedges is very high, although the actual quantity of hedges is low (as denoted by F1 and LRP{\tiny{FP}}). Our method is based on SOLOv2, which has independent category and mask branches. 
However, our semantic sorting and NMS help close the gap between category and instance predictions, resolving the naming problem, and we perform close to MaskRCNN in naming. \section{Introduction} \label{sec:intro} \input{fig/namemaskcount} Top-down instance segmentation methods suffer from two problems -- \textit{(instance) merging} and \textit{hedging}. \textit{Merging} refers to the problem of masking multiple similar objects as a single instance. This occurs in the query-key paradigm, where a query feature generates a mask by selecting mask features. Since mask features are similar for similar instances, query features have no way to distinguish these instances, which leads to the instance merging problem. \textit{Hedging} refers to the problem of predicting multiple copies of the same instance with slight variations in localization and/or class. Hedging can be intra-class (different masks for the same instance - \textit{counting}) or inter-class (predicting the same mask with multiple classes - \textit{naming}). Successful instance segmentation involves the integration of the category and localization branches of visual perception to solve these problems. { Popular approaches are dominated by top-down methods, where the network regresses a bounding box, mask, and category. Mask-RCNN \cite{maskrcnn} approaches it as a two-stage problem: localize the object, then predict the associated instance segmentation mask. SOLO \cite{solo,solov2} builds on an anchor-free framework and directly regresses an object segmentation using a spatial grid as a probe. More recent work based on Transformers \cite{detr} explicitly learns a query in the network memory, then refines this prediction. Despite their differences, these architectures share similar types of errors: 1) instance merging of similar objects, and 2) excessive hedging within and across classes. The \textit{instance merging} problem occurs when the network segments two similar objects as one instance. 
In analyzing why networks with widely varied designs all make these systematic errors, we make an unusual observation: one can improve mAP by substantially increasing overcounting. Specifically, mAP can be `gamed' by hedging bets on low-confidence predictions to match a ground truth. The hedging becomes more prominent as we move away from traditional NMS to softer or implicit variants \cite{solov2,instancequery}. Overcounting in instance segmentation can be traced to the behavior of the precision-recall (P/R) curve at its tail end (high recall range). We note that mAP discounts tail-end performance and encourages overcounting with duplicates (Fig. \ref{fig:illustrative}, more examples in the Supplementary Material). NMS methods that are soft \cite{solov2} or implicit \cite{detr,instancequery} tend to keep low-confidence predictions, which end up at the tail end of the P/R curve, hence increasing mAP but worsening the hedging problem. This provides a trivial spatial dithering scheme for increasing mAP by overcounting, which we observe in state-of-the-art top-down instance segmentation methods due to near-identical queries. Addressing this is important for many practical counting problems, such as medical applications \cite{nuclei}, crowd detection \cite{crowddet}, or industrial applications where counting is critical. The current pre-NMS ranking scores are mainly predicted by an independent category branch that is often miscalibrated \cite{guocal,longtailcls,calobjdet} and does not reflect instance mask quality. \cite{longtailcls} highlights that inaccurate object proposal classification can lead to a drastic drop in mask AP for rare classes. Moreover, implementations of modern instance segmentation methods allow predicting multiple classes for the same instance, exacerbating the \textit{inter-class hedging} problem. 
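The loophole described above can be reproduced with a toy computation. The sketch below uses a simplified, non-interpolated AP (an assumption for brevity; COCO uses 101-point interpolation, which only strengthens the effect) and mirrors the example of Fig. \ref{fig:illustrative}: appending low-confidence duplicates never lowers AP, and a single accidental match raises it.

```python
def average_precision(hits, n_gt):
    """Simplified (non-interpolated) AP. `hits` flags each prediction,
    ordered by decreasing confidence, as a true positive or not."""
    tp, fp, ap = 0, 0, 0.0
    for is_tp in hits:
        if is_tp:
            tp += 1
            ap += (tp / (tp + fp)) / n_gt  # precision at this recall step
        else:
            fp += 1
    return ap

# 4 ground truths; 3 confident true positives and 1 false positive
base = [True, True, True, False]
print(average_precision(base, n_gt=4))    # -> 0.75

# Hedging: append low-confidence dithered duplicates; the last one
# accidentally matches the 4th ground truth
hedged = base + [False, False, False, True]
print(average_precision(hedged, n_gt=4))  # -> 0.875
```

The extra false positives sit at the tail of the P/R curve and cost nothing under this metric, while the accidental match lifts AP from 0.75 to 0.875, matching the figure.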
To remove this loophole in mAP-based evaluation, we develop a new metric to quantify the amount of hedging, based on graph analysis of the proposed detection/segmentation instance space, and apply it both within classes (counting) and across classes (naming). \begin{figure}[t!] \centering \includegraphics[width=0.96\linewidth]{images/illustrative.pdf} \caption{\textbf{Illustration of counting and naming errors that increase mAP}: \textbf{(a)} Given ground truths G1-G4, predictions D1-D4 produce an mAP of 0.75 (D4 does not match any ground truth because of low IoU). Dithering predictions D1-D4 to produce detections D5-D12 results in an accidental match of D8 with G2, leading to an mAP of 0.875. \textbf{(b)} In the bottom example with three ground truths, a sheep is misclassified as a cat. Copying the same predictions from the left and dithering the classification label to produce extra predictions leads to a new match, increasing mAP. } \label{fig:illustrative} \end{figure} The new metric allows us to explore algorithm designs that explicitly target the \textit{hedging} errors. Top-down instance segmentation methods tend to `pool' similar-looking instances together into a single mask. This is because similar instances have similar features, and a query feature cannot distinguish between these instances. We refer to this problem as \textit{instance merging}, and we conjecture it is a major contributor to the \textit{hedging} problem. We notice that instance merging is similar to a problem in human vision: visual crowding \cite{ruthcontextcrowd,ruthpooling}. A human can solve this problem by shifting their gaze and attention to the area of crowding. Inspired by this, we implement a feedback process that first uses semantic segmentation to group pixels of all similar objects into one category. 
To resolve this merging, we incorporate bottom-up flow-based feedback that actively pulls pixels within an instance closer and pushes those across different instances farther apart. We implement this by training a contrastive instance flow field, constructed as the sum of a flow field towards the center of each object and a flow repelling nearby instances, ensuring that nearby objects are separable. The pixel-wise contrastive instance flow is reminiscent of bottom-up grouping-based methods \cite{mcg,ssap,sgn,assocembed,seminstdiscrim}. However, there is a critical distinction: our flow's direction also depends on the positions of nearby crowding objects. This dependence encodes the relative positions of crowding objects, separating their features and thus eliminating instance merging in the top-down prediction. Semantic segmentation can alleviate the \textit{hedging} errors: the overlap and consistency between instance and semantic segmentation are used to re-rank mask proposals, and the semantic label is used to remove incorrectly named objects. } \section{Related works} \label{sec:related} \paragraph{Instance segmentation} Instance segmentation is often viewed as a localization task for object detection plus pixel-wise classification to segment the object masks. Among such ``detect then segment'' strategies is FCIS \cite{fcis}, the first end-to-end fully convolutional work that considers position-sensitive score maps as mask proposals. The score maps are then assembled to produce class-agnostic instance masks and category likelihoods. Along the same line is MaskRCNN \cite{maskrcnn}, a two-stage detector that predicts masks from proposed boxes after an RoIAlign operation on the feature maps. Moving away from box-based object detection, SOLO \cite{solov2} and CondInst \cite{condinst} take an anchor-free approach and use a position-sensitive \textit{query} to extract object masks directly from the feature map. 
The use of dynamic convolution in SOLOv2 is related to transformer-based approaches through works like \cite{knet}, where dynamic kernels are learnt from grouped features, similar to learning from queries in transformers. In SOLOv2, kernels are learnt from features at spatial grid centers. QueryInst \cite{instancequery} is another query-based object detection framework that links mask features and objects in a one-to-one correspondence across multiple stages. HTC \cite{htc} is a cascade-based approach that uses semantic segmentation to refine its instance predictions. \paragraph{Evaluation of Detection and Segmentation Methods} The mean average precision (mAP) is a commonly used evaluation metric for object detection, which is also adopted for image segmentation. Existing works have pointed out several shortcomings of the mAP metric for object detection. \cite{mapproblem} show that mAP can be increased by introducing a nonsensical ranking among classes, and propose to make it truly class independent. LRP \cite{lrp} highlights two major issues with mAP: 1) different detectors with different P/R curves (implying different problems) can have the same mAP, and 2) mAP is not sufficient to quantify localization. Inspired by Panoptic Segmentation, it explicitly penalizes false positive and false negative detections, in addition to localization error. The TIDE \cite{tide} framework instead identifies and decomposes the error (1 - mAP) into its constituent components to analyse where a detector is failing. \paragraph{Instance Merging} In top-down prediction methods, a few query points (often object centers) are responsible for predicting the whole object shape, which makes them prone to the instance merging problem. In contrast, bottom-up approaches focus on grouping pixels into an instance. 
These approaches, including Hough voting \cite{houghforest,implicitshape}, pixel affinity \cite{adaptiveaffinity,affinitycnn}, watershed methods \cite{watershed}, and pixel embedding \cite{assocembed,partspixels,recurrentinstgroup}, can be thought of as `flow' based: each pixel directly or indirectly learns to flow towards the object center. The flow is category-agnostic, making it easier to learn and more generalizable. However, flow-based methods are more error-prone than top-down methods. \textit{Instance separation in crowding via flow}: In human vision, there exists a perceptual difficulty called crowding \cite{ruthcontextcrowd,ruthvisualawareness,ruthpooling}, which is correlated with change blindness. Experimentally, we notice that many instance segmentation systems often group two similar nearby objects as one object. Very few works explicitly address the problem of instance separation in crowded pixel space. SOLOv2 \cite{solov2} adds position coordinates to the convolutional mask features; however, position information is degraded in further layers. \cite{novotnysemi} point out that crowding is due to the inherent shift-invariant nature of convolution and propose an instance coloring approach, defining a semi-convolutional operator to mix data from a convolutional network with the global location of the pixel. The embeddings regress to unconstrained representative points in each instance; this can be thought of as a center regression flow but with a semi-convolutional architecture. Other works propose a discriminative orientation mask to distinguish between foreground and background pixels \cite{orienmask}, or a loss function that enforces cluster-and-contrast between embeddings of the same and different instances \cite{seminstdiscrim}. \begin{figure}[t!] 
\centering \includegraphics[width=0.95\linewidth]{images/namingerror.pdf} \caption{ \textbf{Illustration of naming error}: Given ground truths G1-G3, the labels of the ground truths and detections (inside dashed boxes) are hidden. Each detection is matched to a ground truth based on its localization in a class-agnostic manner. After matching, the labels are revealed, and the naming error is calculated as the average number of predictions whose labels do not match their corresponding ground truth. } \label{fig:namingerror} \end{figure} \section{Resolving merging and hedging} In this section, we propose methods to resolve the merging and hedging errors. Merging errors trace back to the network's inability to distinguish between instances. To resolve this, we propose a contrastive flow field that encodes the relative positioning among instances of a given class. To alleviate the hedging of the segmentation network, we propose a Semantic Sorting and NMS procedure that resolves intra-class and inter-class hedge-predictions by sequentially measuring the overlap with the semantic segmentation module. \input{fig/flow} \subsection{Contrastive Delaunay Flow (CDF)} \label{sec:cdfalgo} {A commonly used vector flow in instance segmentation is the center flow. The idea is simple -- each foreground pixel tries to regress to its instance center. { However, the center flow does not capture the relative orientation between instances of the same class and therefore does not solve instance merging: for multiple instances with similar appearance, the flow vectors are the same and can be predicted from appearance features alone. } An example is illustrated in Fig.\ref{fig:cdf}. 
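A minimal numpy illustration of this ambiguity (a hypothetical toy example of our own, not the paper's data): for two translated copies of the same shape, the per-pixel center-flow targets are identical, so they are predictable from appearance alone.

```python
import numpy as np

def center_flow(mask, center, eps=1e-6):
    """Unit vectors pointing from each foreground pixel to the instance center."""
    pix = np.argwhere(mask).astype(float)
    d = center - pix
    return d / (np.linalg.norm(d, axis=-1, keepdims=True) + eps)

grid = np.zeros((10, 10), dtype=bool)
a, b = grid.copy(), grid.copy()
a[2:5, 1:4] = True                      # instance A
b[2:5, 6:9] = True                      # identical instance B, translated
flow_a = center_flow(a, np.array([3.0, 2.0]))
flow_b = center_flow(b, np.array([3.0, 7.0]))
print(np.allclose(flow_a, flow_b))      # -> True: no relative context in the target
```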
Moreover, the magnitude of the center-flow vector field varies significantly, especially for large objects, making it difficult to regress. This motivates the need for a flow field that captures relationships between objects and is easy to learn. The CDF addresses both problems. } \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{images/modelarch.pdf} \caption{\textbf{Contrastive Flow and Semantic Sorting}: We illustrate our approach to solving the counting, naming, and masking problems. First, the mask features in SOLOv2 are used to predict a per-pixel flow and semantic segmentation. This is used for contrastive flow feedback to produce better masks, while semantic sorting tackles the counting and naming problems. In the second case, Semantic NMS prunes hedged predictions of the same object. In the third case, Semantic NMS prunes the duplicate prediction with incorrect class due to the lack of a corresponding semantic mask } \label{fig:modelarch} \end{figure} The CDF has two key properties. First, it consists of a unit vector at each foreground pixel, which characterizes the interactions with neighboring instances. Second, it is easier to regress because the model only has to learn a direction but not a magnitude. The direction is a function of the relative location to the instance center and the sum of the repelling forces of other instances. Therefore, learning this direction amounts to learning different \textit{mask features} that encode not only local appearance, but also relative orientation with respect to other instances. To incorporate both intra- and inter-instance context, the flow field is constructed for each class as follows: \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{images/cdf-diagram.pdf} \caption{\textbf{Examples of Contrastive Delaunay Flow}: Figure (a) shows the center flow (red) and CDF (green). The center flow for each instance is the same, which doesn't provide any contextual information. 
The CDF for each instance is different, providing different contexts to instances with identical appearance. Figures (b, c, d) show the learnt CDF (overlaid with the learnt semantic segmentation in yellow), which exhibits contrastive repulsion in scenarios with clutter and occlusion } \label{fig:cdf} \end{figure} \begin{itemize} \item First, for each pixel within an instance, we initialize the vector to be a unit vector pointing towards the center of the instance. For a given instance $S_k$ with center $c_k$ and any pixel $u \in S_k$, the vector at pixel $u$ is initialized as $\bf{f}(u) = \frac{c_k - u}{\| c_k - u \|}$. \item Second, we compute the \textit{Delaunay triangulation} graph \cite{delaunay} \({\bf{\mathcal{G}} = \{\mathcal{\bf{V, E}}\}}\) of all instance centers. Since calculating interactions between all pairs is both expensive and difficult to learn, the Delaunay triangulation allows us to efficiently encode the relationships and relative orientations between objects. For each edge $(S_i, S_j)$ in the graph $\bf{\mathcal{G}}$, we compute a unit repulsive force $\mathbf{f_{ij} = \frac{c_i - c_j}{\|c_i - c_j\|}}$ and $\mathbf{f_{ji} = -f_{ij}}$. These forces are then added to pixels in instances $S_i$ and $S_j$ respectively. To break symmetry, we add these forces only to pixels that are ``facing'' the neighboring instance. Finally, the vector field is normalized to unit norm. \end{itemize} Instead of using the flow field in a bottom-up manner during inference, we use the supervision from the CDF to amplify differences in \textit{mask features} during training, thereby correcting the instances in the top-down mask prediction process directly. This is similar to \cite{novotnysemi,orienmask,crossimagepixelcont}, where bottom-up grouping supervision is used to improve top-down predictions; however, our choice of flow provides both \textit{discriminative} and \textit{structural} guidance in doing so. 
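The construction above can be sketched in numpy as follows. This is a minimal illustration with our own helper names; SciPy's Delaunay triangulation stands in for the triangulation step, and the facing test follows the indicator $\mathbb{I}((t_i - u)^T \hat{f}_{ij} > 0)$.

```python
import numpy as np

def contrastive_delaunay_flow(masks, centers, eps=1e-6):
    """Sketch of the CDF target for one class.
    masks: list of (H, W) boolean arrays; centers: (N, 2) array of (row, col)."""
    h, w = masks[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    flow = np.zeros((h, w, 2))
    # Step 1: center flow -- each foreground pixel points at its instance center
    for m, c in zip(masks, centers):
        d = c - coords[m]
        flow[m] = d / (np.linalg.norm(d, axis=-1, keepdims=True) + eps)
    n = len(masks)
    if n >= 2:
        # Step 2: directed edges between instance centers
        if n == 2:
            edges = {(0, 1), (1, 0)}
        else:
            from scipy.spatial import Delaunay  # only needed for 3+ instances
            edges = set()
            for simplex in Delaunay(centers).simplices:
                for i in simplex:
                    for j in simplex:
                        if i != j:
                            edges.add((int(i), int(j)))
        # Step 3: unit repulsion along each edge, added to "facing" pixels only
        for i, j in edges:
            f = centers[i] - centers[j]
            f = f / (np.linalg.norm(f) + eps)
            idx = np.argwhere(masks[i])
            facing = (centers[i] - idx.astype(float)) @ f > 0
            sel = idx[facing]
            flow[sel[:, 0], sel[:, 1]] += f
    # Step 4: renormalize to unit vectors (background stays zero)
    return flow / (np.linalg.norm(flow, axis=-1, keepdims=True) + eps)
```

For two identical, translated instances, the resulting per-pixel targets differ between the instances, which is exactly the context the pure center flow lacks.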
\input{tab/ablation} \subsection{Semantic Sorting} To detect and remove hedge-predictions, we need a `verification' mechanism for each predicted instance along both the spatial and class dimensions. To this end, we add semantic segmentation as a lightweight module built on top of the instance mask features. This serves two purposes. First, it helps to re-rank instances based on their degree of `agreement' with the corresponding class of the semantic mask. This prevents instances with poor masks but high confidence from suppressing high-quality predictions. Second, it prevents \textit{hedging} by keeping only those instances which have a significant `overlap' with the semantic mask, and subtracting each kept instance from the semantic map. Duplicate predictions have little overlap with the remaining semantic mask and will be removed. \paragraph{Calibrated pre-NMS re-ranking} Since the confidence predictions are unreliable, we use semantic segmentation as an additional signal to calibrate the quality of an instance prediction. Moreover, once we have a notion of `agreement' between an instance and a semantic mask, it is easy to rank instances in order of their mask quality. We re-rank instances based on the following factors: \textbf{(i) Precision}: an instance with high precision w.r.t. the semantic mask is a good instance. \textbf{(ii) IoU}: for instances with similar class scores and precision, we prefer smaller instances, to prevent merged instances from appearing first in the sorted order. \textbf{(iii) Category score}: this score is predicted by the category branch. Detailed pseudocode can be found in the Supplementary Material. \paragraph{Semantic NMS} Once we have a pre-NMS scoring that is more indicative of segmentation quality, we propose a Semantic NMS that uses the semantic mask for suppression. 
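The re-ranking described above, together with the suppression pass, can be sketched as follows (a minimal numpy version; the function name and the 0.5 precision threshold are our own assumptions).

```python
import numpy as np

def semantic_sort_and_nms(masks, classes, scores, sem_masks, thr=0.5):
    """Sketch: re-rank instances by agreement with the semantic mask, then keep
    each instance only while it still overlaps the remaining semantic mask.
    masks: (N, H, W) bool; classes: length-N class ids; sem_masks: id -> (H, W) bool."""
    sem = {c: m.copy() for c, m in sem_masks.items()}
    n = len(masks)
    rank = np.empty(n)
    for k in range(n):
        inter = np.logical_and(masks[k], sem[classes[k]]).sum()
        union = np.logical_or(masks[k], sem[classes[k]]).sum()
        prec = inter / max(masks[k].sum(), 1)
        iou = inter / max(union, 1)
        # category score + semantic precision + bias towards smaller masks
        rank[k] = scores[k] + prec + (1.0 - iou)
    keep = np.zeros(n, dtype=bool)
    for k in np.argsort(-rank):                 # decreasing re-ranked score
        inter = np.logical_and(masks[k], sem[classes[k]]).sum()
        if inter / max(masks[k].sum(), 1) >= thr:
            keep[k] = True
            sem[classes[k]] &= ~masks[k]        # subtract the kept instance
    return keep
```

A duplicate mask finds its semantic support already consumed by the higher-ranked copy, fails the precision test, and is suppressed.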
In order of confidence, if an instance exceeds a minimum precision threshold with respect to the semantic mask, the instance is preserved and is subtracted from the semantic mask; otherwise, the instance is suppressed (discarded). This ensures that each preserved instance has sufficient agreement with the semantic mask. A duplicate instance with lower confidence will not satisfy the precision threshold, since the previous instance has been subtracted from the semantic mask, leaving no overlap for the duplicate mask. A single pass over the instance masks ensures $O(n)$ time complexity. \input{tab/syntheticnails} \section{Quantifying Merging and Hedging beyond mAP} \begin{figure}[t!] \centering \includegraphics[width=0.93\linewidth]{images/confusion_figure.png.jpg} \caption{Calculating duplicate confusion on a sample set of predictions as described in Section \ref{sec:counting_errors}. Here, brighter colours represent larger confidences and darker colours represent smaller confidences. In this example, the duplicate confusion is 1.676.} \label{fig:confusion} \end{figure} \subsection{Hedging Bets} Current top-down approaches suffer from a recurring problem: similar instances often produce merged instance predictions. To maintain a reasonable mAP, networks are thus encouraged to produce multiple predictions for each instance, each dithered slightly from the others and potentially spanning multiple classes. These often low-confidence duplicate predictions are the network hedging its bets in case the higher-confidence prediction does not align with the ground truth. Low-confidence predictions, occupying the tail end of the precision-recall (P/R) curve, are not penalized by the mAP metric for being incorrect but are rewarded if one, by chance, matches a ground truth (see Fig. \ref{fig:illustrative}). 
This exposes a critical tradeoff in non-max suppression (NMS) procedures: do we suppress duplicates but lower the recall, or include them and confuse the output predictions? One might ask why low-confidence duplicate predictions are a problem, since an appropriate threshold would filter them out. In practice, confidence is often not well calibrated between classes \cite{guocal}, and mAP decreases monotonically with increasing thresholds, making it difficult to select a threshold that excludes duplicates without being overly exclusionary. As described above, this is a problem that should be solved by NMS. Fundamentally, when the network makes a prediction, even a low-confidence one, this represents a belief by the network that there exists a unique instance at that location. By this interpretation, the low-confidence predictions are not simply unnecessary but incorrect, an error that is not captured by mAP. The TIDE \cite{tide} framework and the LRP both attempt to address some of the deficiencies of the mAP metric. Because TIDE relies on the change in mAP to determine error, in cases such as Figs. \ref{fig:namemaskcount}, \ref{fig:illustrative} it still rewards a network for hedging its predictions. In contrast, the LRP explicitly penalizes false positive and false negative detections. The F1-score similarly penalizes false positives and false negatives, while the boundary IoU \cite{contouracc} can identify when instance merging is occurring. However, none of these metrics explicitly identify and penalize hedging. To quantify how much hedging is occurring in a set of predictions, we separate hedging into inter-class and intra-class hedging and approach both in a unified, graph-centric way. In doing so, we propose two new metrics, the \textit{Naming Error (NE)} and the \textit{Duplicate Confusion (DC)}, to complement the existing metrics described above. 
\subsection{Measuring Intra-Class Hedging}\label{sec:counting_errors} To measure how much a set of predictions is exploiting dithering and duplication to increase its mAP, we design a metric that captures the relative information between the predictions for a given image. For a given IoU threshold, a graph $G$ of the predictions within a class is constructed. Each node corresponds to a prediction, weighted by its predicted confidence, and two nodes share an edge if their mutual IoU is above this threshold. This graph represents, at the chosen threshold, which predictions belong to the same cluster. For two nodes $i$ and $j$ in $G$, we define the connectivity between the nodes as \begin{equation}\label{eq:connectivity} c_{ij} = \max_{t \in T_{ij}} \min_{k \in t} p_k \end{equation} where $T_{ij}$ is the set of all paths on $G$ connecting $i$ and $j$, and $p_k$ is the predicted confidence of prediction $k$ along the path $t$. This represents the minimum confidence along the most connected path between two predictions. We now define the duplicate confusion at some IoU threshold $u$ and confidence threshold $v$ as the mean weighted sum of the connectivity of a node with all other nodes: \begin{equation}\label{eq:confusion} \mathrm{DC}_{uv} = \frac{1}{n} \sum_i^n \sum_{j \ne i}^n p_j \frac{c_{ij}}{p_i} \end{equation} Here, $n$ is the number of predictions with confidence at least $v$. For each image, the duplicate confusion $\mathrm{DC}$ is calculated by taking the mean of $\mathrm{DC}_{uv}$ across the range of IoU and confidence thresholds. This process is summarized in Fig.~\ref{fig:confusion}. For clarity, the $\mathrm{DC}$ is multiplied by $1000$ in Tables~\ref{tab:ablation} and \ref{tab:cocoresult}.
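The connectivity in Eq.~(\ref{eq:connectivity}) is the classic widest-path (maximin) problem, so $\mathrm{DC}_{uv}$ can be evaluated with a Floyd--Warshall variant. The following Python sketch (an illustration under our own naming, not the released implementation) computes both equations for a single threshold pair, given the prediction confidences and the thresholded IoU edges:

```python
def duplicate_confusion(conf, edges):
    """DC at one (IoU, confidence)-threshold pair.
    conf:  confidences of the predictions kept at threshold v
    edges: index pairs whose mutual IoU exceeds threshold u"""
    n = len(conf)
    if n == 0:
        return 0.0  # no predictions -> no duplicate confusion
    # Maximin connectivity c_ij: widest path via a Floyd-Warshall variant.
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        c[i][i] = conf[i]
    for i, j in edges:
        c[i][j] = c[j][i] = min(conf[i], conf[j])
    for k in range(n):
        for i in range(n):
            for j in range(n):
                c[i][j] = max(c[i][j], min(c[i][k], c[k][j]))
    # Mean weighted bottleneck ratio over all ordered pairs.
    return sum(conf[j] * c[i][j] / conf[i]
               for i in range(n) for j in range(n) if j != i) / n

# Two overlapping duplicates: matches the two-prediction closed form
# (p_a + p_b^2 / p_a) / 2 = 0.5.
print(round(duplicate_confusion([0.8, 0.4], {(0, 1)}), 6))  # 0.5
```

Averaging this quantity over a grid of IoU and confidence thresholds, and then over images, gives the reported $\mathrm{DC}$.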
In the above equation, the term $\frac{c_{ij}}{p_i}$ can be interpreted as a bottlenecking coefficient: for predictions $i$ and $j$, how strongly the information carried by prediction $j$ is prevented from explaining $i$ by the confidences of the predictions connecting $i$ and $j$. Since duplicate low-confidence predictions are more tolerable, this bottlenecking coefficient is weighted by the confidence of prediction $j$. Consequently, the duplicate confusion $\mathrm{DC}$ is nondecreasing with respect to the confidence of any prediction (see Appendix). If the network is producing duplicates, the duplicate confusion can be reduced by either removing the duplicate or reducing the confidence of both predictions. This metric can be interpreted as the confidence of a network in its own counting. It is not, however, a measure of how effectively a network can count instances -- the ground truth is not considered when calculating the metric. Neither is duplicate confusion a measure of the quality or completeness of a network's predictions; for instance, producing no predictions yields zero duplicate confusion. Rather, the metric simply captures the amount of, and uncertainty between, duplicate predictions. By contrast, to increase mAP, the network is encouraged to ``hedge its bets'' when it is not completely certain, a behaviour heavily penalized by the duplicate confusion metric. \subsection{Measuring Inter-Class Hedging} Like intra-class hedging, inter-class hedging can be formulated as a connectivity problem, penalizing the edges that connect nodes of different classes. Given the set of ground truths and predictions, we formulate inter-class hedging as a \textit{naming error} by penalizing hedged predictions whose class differs from that of their corresponding ground truth. \paragraph{Naming error (NE)}: To formulate a naming error, we need to associate ground truths with predictions in a class-agnostic way.
We start by hiding the class predictions for each detection and ground truth, matching each detection with a ground truth in decreasing order of prediction confidence. This ensures that predictions are matched with ground truths based only on mask overlap. Note that this allows a single ground truth to be matched to potentially multiple predictions. Finally, we reveal all the labels. The naming error for a ground truth is simply the number of predictions that match this ground truth with incorrect labels. This is illustrated in Fig.~\ref{fig:namingerror}. The naming error over the dataset is the average of the naming error over all ground truths. Formally, let $\{ G_1 \ldots G_N \}$ be the set of ground truth masks and $\{ D_1 \ldots D_M \}$ be the set of predictions. For each detection $D_j$, we define $g(D_j)$ as \begin{equation} g(D_j) = \begin{cases} \arg \max_i\operatorname{IoU}(D_j, G_i) & \text{if } \max_i \operatorname{IoU}(D_j, G_i) \ge 0.5 \\ -1 & \text{otherwise} \end{cases} \end{equation} Then, the naming error is defined as \begin{equation} \mathrm{NE} = \frac{1}{N} \sum_{i=1}^N \sum_{j: g(D_j) = i} \mathbb{I}\left[ l(D_j) \ne l(G_i) \right] \end{equation} where $l(\cdot)$ is the function that returns the label. In the next subsection, we propose a semantic sorting module that attempts to resolve these errors. \section{A critical shortcoming of mAP} Mean Average Precision (mAP) is the de-facto standard metric used in object detection and instance segmentation. Compressing a multifaceted problem like instance segmentation into a single metric, however, can lead to \textit{blind spots} in the quantitative evaluation of IS algorithms. For example, \todo{cite achal here} show that the mAP metric is gameable for large vocabulary datasets. This occurs in long-tailed class distributions where the maximum number of detections is limited.
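Returning to the naming error defined above, the computation reduces to a few lines once class-agnostic mask IoUs are available. The sketch below is our hypothetical rendering (names and the precomputed IoU matrix are illustrative), following the order-independent formal definition of $g(D_j)$:

```python
def naming_error(gt_labels, det_labels, iou):
    """NE: average over ground truths of matched detections whose
    predicted label disagrees with the ground-truth label.
    gt_labels:  N ground-truth class labels
    det_labels: M predicted class labels
    iou:        M x N class-agnostic mask IoUs, indexed iou[j][i]"""
    n = len(gt_labels)
    mismatches = 0
    for j, d_label in enumerate(det_labels):
        # g(D_j): best-overlapping ground truth, if its IoU >= 0.5
        best = max(range(n), key=lambda i: iou[j][i])
        if iou[j][best] >= 0.5 and d_label != gt_labels[best]:
            mismatches += 1
    return mismatches / n

# Three detections all cover the first ground truth; two carry the
# wrong class, so NE = 2 mismatches / 2 ground truths = 1.0.
ious = [[0.9, 0.0], [0.8, 0.0], [0.7, 0.0]]
print(naming_error(["cat", "dog"], ["cat", "dog", "horse"], ious))  # 1.0
```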
However, we show a different problem that occurs in the mAP even for datasets like COCO with a moderate number of classes. Specifically, in a sequence of detections sorted by confidence, after a particular true positive is encountered, the false positives that follow it do not contribute to the AP score. This is because the precision drops while the recall stays constant; the AP is affected only when the next true positive is encountered. This is problematic because, over the dataset, all the low-confidence predictions can be `pushed' to the end of the PR curve. We show this shortcoming in the context of SOLOv2, which uses MatrixNMS to decay duplicate predictions and later remove them via a chosen threshold. It achieves better AP scores than its MaskNMS counterpart; however, the qualitative examples look worse. Upon closer inspection of the precision-recall curve, it turns out that many low-confidence false positives do not contribute to the AP. Results are shown in Figure \ref{fig:matrixvsmask}. We construct subsets of the COCO validation dataset of various sizes to show that mAP catastrophically fails to capture this behavior of MatrixNMS. One might argue that a simple way to tackle this situation is to increase the threshold after decaying duplicate predictions. Intuitively, increasing this threshold should remove more duplicate predictions, which have lower confidence scores. However, quantitatively tuning this hyperparameter with mAP leads to \todo{figurename}. The values of bounding box and segmentation mAP drop monotonically with the value of \texttt{update\_threshold}, which suggests that this value should be set to 0 to maximize validation mAP. This is in direct contrast to how the threshold should actually be modified, showing that mAP does not provide a useful way of tuning this NMS procedure. Moreover, the mAP metric does not directly capture the mask quality of true positives in the predictions.
It does so only indirectly, by averaging the AP at different IoU thresholds, and the metric conflates mask quality with other effects such as the relative ordering of the predictions. To alleviate these problems, we propose to use two metrics that address these shortcomings. These metrics are implemented in the widely used Detectron2 framework for use by the community\footnote{Code will be made public upon acceptance}.
\clearpage \section{Behaviour of Duplicate Confusion} For any two predictions $a$ and $b$ with associated confidences $p_a$ and $p_b$, where the connectivity between them is not restricted by the confidence of an intermediary prediction (i.e. $c_{ab} = \min(p_a, p_b)$), the duplicate confusion (DC) associated with these predictions is nondecreasing in $p_a$ and $p_b$. Suppose, without loss of generality, that $p_a > p_b$: \begin{align*} \mathrm{DC} &= \frac{1}{2} \sum_{i=1}^{2} \sum_{j \ne i} p_j \frac{c_{ij}}{p_i} \\ &= \frac{1}{2}\left( p_a \frac{\min(p_a, p_b)}{p_b} + p_b \frac{\min(p_a, p_b)}{p_a} \right) \\ &= \frac{1}{2}\left( p_a + \frac{p_b^2}{p_a} \right) \end{align*} The derivatives of the DC with respect to $p_a$ and $p_b$ are thus \begin{align*} \frac{\partial \mathrm{DC}}{\partial p_a} &= \frac{1}{2}\left( 1 - \frac{p_b^2}{p_a^2}\right) \\ \frac{\partial \mathrm{DC}}{\partial p_b} &= \frac{p_b}{p_a} \end{align*} which are nonnegative on the ranges $[p_b, 1]$ and $(0, p_a]$, respectively. Since we assumed that $p_a > p_b$, we conclude that the DC for any two predictions with unrestricted connectivity is nondecreasing in $p_a$ and $p_b$. If, instead, there is some other prediction $c$ that bottlenecks the connectivity between $a$ and $b$ such that $c_{ab} = p_c$ (and thus $p_c \le p_a$ and $p_c \le p_b$), we can decompose the DC into three components: the DC between $a$ and $b$, $a$ and $c$, and $b$ and $c$, respectively: \begin{equation*} \mathrm{DC} = \frac{p_c}{3}\left(\frac{p_a}{p_b} + \frac{p_b}{p_a}\right) + \frac{1}{3}\left(p_a + \frac{p_c^2}{p_a}\right) + \frac{1}{3}\left(p_b + \frac{p_c^2}{p_b}\right) \end{equation*} While the above is nondecreasing in $p_c$, this is no longer the case for $p_a$ and $p_b$. 
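Both closed-form expressions above can be checked numerically against the definition of the DC; the following is a minimal Python sketch (the function name and the example confidences are ours):

```python
def duplicate_confusion(p, c):
    """Duplicate confusion over n predictions with confidences p[i]
    and pairwise connectivities c[i][j], normalized by n."""
    n = len(p)
    return sum(p[j] * c[i][j] / p[i]
               for i in range(n) for j in range(n) if j != i) / n

pa, pb, pc = 0.9, 0.6, 0.3

# Two predictions with unrestricted connectivity: c_ab = min(p_a, p_b).
dc2 = duplicate_confusion([pa, pb], [[0, pb], [pb, 0]])
assert abs(dc2 - 0.5 * (pa + pb**2 / pa)) < 1e-12

# Three predictions bottlenecked by c: every connectivity equals p_c.
dc3 = duplicate_confusion([pa, pb, pc],
                          [[0, pc, pc], [pc, 0, pc], [pc, pc, 0]])
closed = pc/3 * (pa/pb + pb/pa) + (pa + pc**2/pa)/3 + (pb + pc**2/pb)/3
assert abs(dc3 - closed) < 1e-12
```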
Specifically, the first term (associated with the DC between $a$ and $b$) in the above equation is \textit{nonincreasing} in $p_a$ and $p_b$. Since $c_{ab}$ does not depend on $p_a$ or $p_b$, $a$ and $b$ become increasingly bottlenecked by $p_c$ as $p_a$ and $p_b$ increase. As $a$ and $b$ become increasingly unable to fully explain each other, their DC decreases. At the same time, $c$ becomes increasingly explainable by $a$ and $b$, and thus the DC increases. In combination, the total DC decreases with respect to $p_a$ and $p_b$ when they are approximately equal to $p_c$ and increases when they are larger. For more complicated connectivity graphs, the behaviour of the duplicate confusion follows a similar pattern to that described above. \section{Implementation of Contrastive Delaunay Flow and Semantic Sorting} In Section \ref{sec:cdfalgo}, we motivated and proposed the use of Contrastive Delaunay Flow, and showed its effectiveness in amplifying feature differences between similar instances. Pseudocode for the method is presented in Alg. \ref{alg:cdf}. 
\begin{algorithm} \caption{Pseudocode for constructing the CDF for class $c$, given the set of ground truth instances $S_k$ with centers $t_k$ and category $c$}\label{alg:cdf} \KwData{$\{ S_{k} \}_{k = 1\ldots N}, \{ t_k \}_{k = 1\ldots N}, N$} \KwResult{Flow $\mathbf{f}_c$ characterizing the CDF for class $c$} // Initialize with center flow \For{$k = 1 \ldots N$}{ $ \mathbf{f}_c[u] \leftarrow \frac{t_k - u}{\| t_k - u \|} \quad , \forall u \in S_k$ } // No graph in case of single instance \If{$N == 1$}{return $\mathbf{f}_c$} // Construct graph as set of directed edges \eIf{$N == 2$}{$\mathcal{G} \leftarrow \{(0,1), (1,0)\}$}{ $\mathcal{G} \leftarrow \text{Delaunay}(t_1, t_2 \ldots t_N)$} // Construct flow \For{$(i, j) \in \mathcal{G}$}{ $\hat{f}_{ij} \leftarrow \frac{t_i - t_j}{\|t_i - t_j \|}$ $\mathbf{f}_c[u] \leftarrow \mathbf{f}_c[u] + \mathbb{I}((t_i - u)^T \hat{f}_{ij} > 0) \hat{f}_{ij} , \forall u \in S_i $ } $\mathbf{f}_c[u] \leftarrow \frac{\mathbf{f}_c[u]}{\|\mathbf{f}_c[u]\| + \epsilon} \quad \forall u$ return $\mathbf{f}_c$ \end{algorithm} Semantic Sorting and NMS are likewise shown to be effective methods for resolving both intra-class hedging (counting) and inter-class hedging (naming) errors. Pseudocode for Semantic Sorting and NMS is presented in Alg. \ref{alg:semnms}. 
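For concreteness, Alg.~\ref{alg:cdf} can be sketched in Python as follows. This is a minimal sketch under our own assumptions (instances given as boolean masks, \texttt{scipy.spatial.Delaunay} for the triangulation, and illustrative function and variable names), not the actual training implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def contrastive_delaunay_flow(masks, centers, eps=1e-8):
    """Build the per-class CDF of Alg. 1.

    masks   : list of N boolean (H, W) instance masks
    centers : (N, 2) float array of instance centers t_k (row, col)
    returns : (H, W, 2) unit flow field
    """
    N, (H, W) = len(masks), masks[0].shape
    flow = np.zeros((H, W, 2))
    coords = np.stack(np.mgrid[0:H, 0:W], axis=-1).astype(float)

    # Initialize with center flow: each foreground pixel points at its center.
    for k in range(N):
        d = centers[k] - coords[masks[k]]
        flow[masks[k]] = d / (np.linalg.norm(d, axis=1, keepdims=True) + eps)

    if N == 1:  # no graph in the single-instance case
        return flow

    # Directed edge set: both orderings for N == 2, else Delaunay neighbors.
    if N == 2:
        edges = {(0, 1), (1, 0)}
    else:
        edges = {(i, j) for tri in Delaunay(centers).simplices
                 for i in tri for j in tri if i != j}

    # Unit repulsive force along each edge, applied only to pixels of S_i
    # that "face" the neighbor j (the indicator in Alg. 1).
    for i, j in edges:
        f = centers[i] - centers[j]
        f = f / (np.linalg.norm(f) + eps)
        pix_flow = flow[masks[i]]
        facing = (centers[i] - coords[masks[i]]) @ f > 0
        pix_flow[facing] += f
        flow[masks[i]] = pix_flow

    # Re-normalize to unit vectors.
    return flow / (np.linalg.norm(flow, axis=-1, keepdims=True) + eps)
```

During training, the field returned above serves as the regression target for the flow head; predicting it forces the mask features of identical-looking instances to differ according to their Delaunay neighborhoods.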
\begin{algorithm} \caption{Pseudocode for semantic sorting and NMS, given instances $S_k$ with category $c_k$ and confidence $p_k$, and semantic masks $M$}\label{alg:semnms} \KwData{$\{ S_{k}, c_{k}, p_{k}\}_{k = 1\ldots N}$, $\{ M_c \}_{c = 1\ldots C}$} \KwResult{Boolean array $keep$ indicating preservation of instances} \For{$k = 1 \ldots N$} { $pr \leftarrow \text{precision}(S_k, M_{c_k} ) $; \\ $iou \leftarrow \text{computeIoU}(S_k, M_{c_k} ) $; \\ $p_k \leftarrow p_k + pr + (1 - iou) $; \\ } $(S, c, p) \leftarrow \text{sort}(S, c, p); \quad$ // sort by decreasing $p$ \\ $keep \leftarrow [True]\times N$; \\ \For{$k = 1 \ldots N$}{ $overlap \leftarrow \text{precision}(S_k, M_{c_k})$; \\ \eIf{$overlap \ge thr$}{ $keep_k \leftarrow True$; \\ $M_{c_k} \leftarrow M_{c_k} \setminus S_k$ \\ }{ $keep_k \leftarrow False$} } \end{algorithm} \section{Separation of instance features using CDF} In this section, we show more qualitative examples of similarity in the \textit{mask features} leading to merged predictions in the synthetic dataset (Tab. \ref{tab:synthetic}). The CDF is an explicit, contextual output that leads to better resolution of instance merging, tackling the merging problem caused by redundant query features and a lack of distinguishing features. Examples are shown in Fig. \ref{fig:heatmaps-supp}. \begin{figure}[t!] \centering \includegraphics[width=0.85\linewidth]{images/heatmaps-supp.png.jpg} \caption{\textbf{More examples of the instance merging problem in the synthetic dataset} Each row shows examples of \textit{mask feature} similarity in SOLOv2 and our method. Fig.(a) shows input image patches with similar objects. Fig.(b) shows the pixel-wise cosine similarity of the mask features with the mask feature of the pixel marked {\color{red} $\times$} in Fig.(a) along with predictions (shown as bounding boxes). Note that instance merging is rampant among instances that have similar orientation, regardless of their spatial proximity or the instances around them. 
Fig.(c) shows our method with CDF, which amplifies feature differences (due to contrasting Delaunay neighbors) and resolves the merged predictions } \label{fig:heatmaps-supp} \end{figure} \section{Shortcoming of AP for measuring hedging errors} In this section, we explore some real examples from the COCO validation dataset in terms of hedging errors and their effect on mAP. This shortcoming occurs due to low-confidence false positives that are not explicitly pruned in a post-processing step like NMS (or by thresholding low-confidence predictions after a `soft' NMS). These low-confidence predictions accumulate in the tail end of the Precision-Recall curve and do not negatively contribute to the mAP metric. Some examples are shown in Fig. \ref{fig:ap-ind-supp}. This is also reflected in Tabs. \ref{tab:ablation} and \ref{tab:cocoresult}, where hedging improves mAP, but at the cost of worsening all other errors (F1-score, LRP, NE, DC). More qualitative results are shown in Figs. \ref{fig:coco-solo-v-ours-1}, \ref{fig:coco-solo-v-ours-2}, \ref{fig:coco-solo-v-ours-3}. \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{images/ap-ind-supp.png.jpg} \caption{\textbf{Shortcoming of AP in resolving the \textit{hedging problem}}: (a) shows the prediction of the SOLOv2 model with Matrix NMS, (b) shows the corresponding P/R curve. (c) shows the prediction with the same network but with Mask NMS, (d) shows the corresponding P/R curve. Note that despite severe hedging (overcounting) in the first case, the AP scores are the same for both cases. 
However, they exhibit drastically different qualitative behavior, showing that AP is not an adequate metric for evaluating the \textit{hedging problem} } \label{fig:ap-ind-supp} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.001.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting and Semantic NMS} \label{fig:coco-solo-v-ours-1} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.002.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting and Semantic NMS} \label{fig:coco-solo-v-ours-2} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{images/coco-solo-v-ours.003.pdf} \caption{\textbf{Qualitative comparison on COCO-val-2017 dataset}: Images on the left are predictions made by SOLOv2; images on the right are predictions by our model with CDF, Semantic Sorting and Semantic NMS} \label{fig:coco-solo-v-ours-3} \end{figure*} \section{Conclusion} We highlight two important problems in instance segmentation, namely \textit{merging} and \textit{hedging}. We show the ways in which intra- and inter-class hedging errors can increase mAP, and propose metrics that isolate these errors. To address \textit{merging}, we learn a contrastive flow that encourages each pixel to learn a flow dependent on the relative positions of the instances around it. To address \textit{hedging}, we propose a semantic sorting mechanism that re-ranks instances and prunes duplicates, leading to better resolution of both inter- and intra-class hedging. Empirically, we show that many top-down instance segmentation methods suffer from these errors even if they have high mAP. 
Experiments on the COCO dataset show better resolution of the merging and hedging errors by our method compared to other SOTA algorithms. \section{Experiments} \textbf{Implementation}. Given an input image $I \in \mathbb{R}^{3 \times H \times W}$, the FPN backbone generates a list of $[F \times \frac{H}{k} \times \frac{W}{k} ]$ feature maps (where $k$ is the pyramid level), which feed into the category, kernel and mask feature prediction branches to give $\mathbb{R}^{S_k \times S_k \times C}$, $\mathbb{R}^{S_k \times S_k \times E}$ and $\mathbb{R}^{H' \times W' \times E}$ dimensional outputs respectively, where $H', W' = \frac{H}{4}, \frac{W}{4}$; $C$: semantic classes; $S_k$: grid size; $E$: number of feature maps. Then, one 1$\times$1 convolution and two 1$\times$1 convolutions with GroupNorm\cite{groupnorm} and ReLU are performed on the \textit{mask features} to output the vector flow $\in \mathbb{R}^{H' \times W' \times 2C}$ and semantic segmentation $\in \mathbb{R}^{H' \times W' \times C}$ predictions respectively. \subsection{Ablation study on instance separation} \paragraph{Synthetic dataset} To isolate the \textit{merging problem}, we construct a synthetic dataset of 20 identical nails that are placed randomly in the image. Each image is of size $394\times394$ and the locations of the nails are sampled from a truncated random normal distribution around the image center. The training and validation sets consist of 2000 and 500 images respectively. The main challenges of this dataset are instance clutter and a lack of distinct appearance between two instances, which can lead to instance merging. \paragraph{Performance with CDF} The ablation is shown in Table \ref{tab:synthetic}. Even in a simple scenario, SOLOv2 suffers from severe overcounting and instance merging problems. Note that explicit coordinates (CoordConv) do not improve AP, F1 and LRP, indicating no resolution of hedging and masking. Adding CDF improves all results significantly. 
The CDF forces different instances to learn different mask features in order to predict a different flow for each instance, where the flow is a function of its Delaunay triangulation. An example is shown in Tab. \ref{tab:synthetic}. Note that nails that have a similar local appearance have a very high cosine similarity to each other in the mask feature space. Therefore, a kernel feature, when convolved with this mask feature, ends up masking both instances. Our method explicitly reduces mask feature similarity in order to be able to predict the CDF, which is a function of the relative positions of the neighbors. This results in a drastic reduction of merged instance predictions compared to the baseline. \input{tab/cocoresult} \subsection{Ablation on coco-minitrain} Next, we investigate the effectiveness of CDF and semantic sorting in improving hedging (intra- and inter-class) and merging. Therefore, we ablate on the CDF (\cmark/\xmark), the semantic segmentation module (\cmark/\xmark), and the NMS type ({M}atrix/ {m}ask/ {S}emantic). We perform ablations on the coco-minitrain \cite{houghnet} dataset. We use coco-minitrain instead of the COCO-train-2017 set owing to its data statistics being similar to those of the full training set, and to reduce the cost of running ablations. All hyperparameters used for SOLOv2 follow the experimental setup of \cite{solov2}. The results are in \textbf{Table \ref{tab:ablation}}. For a given NMS method (mask/matrix/semantic), adding CDF increases mAP and masking performance over its counterpart without CDF, showing the effectiveness of the CDF in providing reliable context. The boundary IoU metric shows that true positives now have better contour quality compared to the baseline, and LRP{\tiny{Loc}} shows that CDF helps in better localization, leading to better masks. Meanwhile, using Semantic NMS provides at least an 86.8 $\%$ decrease in duplicate confusion and a 15.4 $\%$ increase in the F1 score compared to Matrix and Mask NMS. 
Using Semantic NMS leads to a much better DC, F1, LRP{\tiny{FP}}, and NE, showing better resolution of both inter-class and intra-class hedge-predictions. \subsection{Results on COCO-val-2017} We train our full method on the COCO \cite{coco} training set. Results are shown in Table \ref{tab:cocoresult}. To contrast the effect of Semantic NMS, we also compare our method but with MatrixNMS. Methods like QueryInst \cite{instancequery} use a fixed number of queries (e.g. 100) and produce predictions for each query without performing any NMS. The tail end behavior of these queries is therefore undefined. This leads to it having the highest mAP values, but the poorest performance in terms of F1, bIoU (owing to memorization of templates), LRP and NE (due to FPs from other classes). Higher LRP{\tiny{FP}} indicates more intra-class hedging, while higher DC indicates strong connectivity among the hedged predictions. However, since the predictions produced by QueryInst communicate with each other and self-separate, QueryInst manages to achieve the second best performance in terms of duplicate confusion. In general, different algorithms are performant along different dimensions, with HTC \cite{htc} being better at localization and MaskRCNN \cite{maskrcnn} being better at F1 and LRP. Intra-class hedging error is high in other state-of-the-art models because the classification and segmentation branches operate independently and can output multiple classes for the same instance. MaskRCNN uses the same RoIAligned boxes for classification and segmentation, essentially entangling their representations. Furthermore, MaskRCNN chooses one category for each prediction, leading to less dithering among classes. Although MaskRCNN uses NMS, it has a high DC, which means the connectivity of its hedges is very high, even though the actual quantity of hedges is low (as denoted by F1 and LRP{\tiny{FP}}). Our method is based on SOLOv2, which has independent category and mask branches. 
However, our semantic sorting and NMS help close the gap between category and instance predictions, resolving the naming problem, and we perform closely to MaskRCNN in naming. \section{Introduction} \label{sec:intro} \input{fig/namemaskcount} Top-down instance segmentation methods suffer from two problems -- \textit{(instance) merging} and \textit{hedging}. \textit{Merging} refers to the problem of masking multiple similar objects as a single instance. This occurs in the query-key paradigm, where a query feature generates a mask by selecting mask features. Since mask features are similar for similar instances, query features have no way to distinguish these instances, which leads to the instance merging problem. \textit{Hedging} refers to the problem of producing multiple predictions of the same instance with slight variations in localization and/or class. Hedging can be intra-class (different masks for the same instance - \textit{counting}) or inter-class (predicting the same mask with multiple classes - \textit{naming}). Successful instance segmentation involves the integration of the category and localization branches of visual perception to solve these problems. { Popular approaches are dominated by top-down methods where the network regresses a bounding box, mask, and category. Mask-RCNN \cite{maskrcnn} approaches it as a two-stage problem: localize the object, then predict the associated instance segmentation mask. SOLO \cite{solo,solov2} builds on an anchor-free framework and directly regresses an object segmentation using a spatial grid as a probe. More recent work based on Transformers \cite{detr} explicitly learns a query in the network memory, then refines this prediction. Despite their differences, these architectures share similar types of errors: 1) instance merging of similar objects, and 2) excessive hedging within and across classes. The \textit{instance merging} problem occurs when the network segments two similar objects as one instance. 
In analyzing why networks with widely varied designs all make these systematic forms of errors, we make an unusual observation: one can improve mAP by substantially increasing overcounting. Specifically, we notice that mAP can be `gamed' by hedging bets on low-confidence predictions to match a ground truth. The hedging becomes more prominent as we move away from traditional NMS to softer or implicit variations \cite{solov2},\cite{instancequery}. Overcounting in instance segmentation can be traced to the behavior of the precision-recall (P/R) curve at its tail end (high recall range). We note that mAP discounts the tail end performance and encourages over-counting with duplicates (Fig. \ref{fig:illustrative}, more examples in the Supplementary Material). NMS methods that are soft \cite{solov2} or implicit \cite{detr},\cite{instancequery} tend to keep low-confidence predictions, which end up in the tail end of the P/R curve, hence increasing mAP but worsening the hedging problem. This provides a trivial spatial dithering scheme to increase mAP by overcounting, which we notice occurs in state-of-the-art top-down instance segmentation methods due to near-identical queries. Addressing this is important for many practical counting problems such as medical applications \cite{nuclei}, crowd detection \cite{crowddet}, or industrial applications where counting is critical. The current pre-NMS ranking scores are mainly predicted by an independent category branch that is often miscalibrated \cite{guocal,longtailcls,calobjdet} and does not reflect the instance mask quality. \cite{longtailcls} highlights that inaccurate object proposal classification can lead to a drastic performance drop in the mask AP of rare classes. Moreover, implementations of modern instance segmentation methods allow predicting multiple classes for the same instance, exacerbating the \textit{inter-class hedging} problem. 
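This loophole can be illustrated with a minimal all-point interpolated AP computation. The sketch below uses illustrative numbers and omits IoU matching and threshold averaging from the real COCO protocol, but it shows the mechanism: appending low-confidence false positives never lowers AP, while a single lucky low-confidence duplicate raises it.

```python
def average_precision(is_tp, num_gt):
    """All-point interpolated AP for a confidence-sorted list of
    predictions; is_tp[i] is True if prediction i matches a ground truth."""
    tp = fp = 0
    recalls, precisions = [0.0], [1.0]
    for t in is_tp:
        tp += bool(t)
        fp += not t
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))
    # Interpolate: precision at recall r is the max precision at recall >= r.
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    return sum((recalls[i] - recalls[i - 1]) * precisions[i]
               for i in range(1, len(recalls)))

base = [True, True, True]  # 3 confident true positives, 4 ground truths
ap0      = average_precision(base, 4)
ap_fps   = average_precision(base + [False] * 8, 4)           # hedged FPs
ap_lucky = average_precision(base + [False] * 7 + [True], 4)  # one lucky match

assert ap_fps == ap0   # tail-end false positives cost nothing
assert ap_lucky > ap0  # a lucky low-confidence duplicate strictly raises AP
```

Under this scheme, tail-end duplicates are free, which is exactly the dithering behavior illustrated in Fig.~\ref{fig:illustrative}.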
To remove the loophole in mAP-based evaluation, we develop a new metric to quantify the amount of hedging, based on graph analysis in the proposed detection/segmentation instance space, and apply it both within classes (counting) and across classes (naming). \begin{figure}[t!] \centering \includegraphics[width=0.96\linewidth]{images/illustrative.pdf} \caption{\textbf{Illustration of counting and naming errors that increase mAP}: \textbf{(a)} Given ground truths G1-G4, predictions D1-D4 produce an mAP of 0.75 (D4 does not match any ground truth because of low IoU). Dithering predictions D1-D4 to produce detections D5-D12 results in an accidental match of D8 with G2, leading to an mAP of 0.875. \textbf{(b)} In the bottom example with three ground truths, a sheep is misclassified as a cat. Copying the same predictions from the left and dithering the classification label to produce extra predictions leads to a new match, increasing mAP. } \label{fig:illustrative} \end{figure} The new metric allows us to explore algorithm designs that explicitly target the \textit{hedging} errors. Top-down instance segmentation methods tend to `pool' instances that look similar into a single mask. This is because similar instances have similar features, and a query feature cannot distinguish between these instances. We refer to this problem as \textit{instance merging}, and we conjecture it is a major contributor to the \textit{hedging} problem. We notice that instance merging is similar to a problem in human vision: visual crowding \cite{ruthcontextcrowd},\cite{ruthpooling}. A human can solve this problem by shifting their gaze and attention to the area of crowding. Inspired by this, we implement a feedback process that first uses semantic segmentation to group pixels of all similar objects into one category. 
To resolve this merging, we incorporate bottom-up flow-based feedback that actively pulls pixels within an instance closer and pushes those across different instances farther apart. We implement this by training a contrastive instance flow field, constructed as the sum of a flow field towards the center of each object and a flow repelling nearby instances, ensuring that nearby objects are separable. The pixel-wise contrastive instance flow is reminiscent of bottom-up grouping-based methods \cite{mcg},\cite{ssap},\cite{sgn},\cite{assocembed},\cite{seminstdiscrim}. However, there is a critical distinction: our flow's direction also depends on the positions of nearby crowding objects. This dependence helps to encode the relative positions of crowding objects, separating their features and thus eliminating instance merging in the top-down prediction. Semantic segmentation can alleviate the \textit{hedging} errors by using the overlap and consistency between instance and semantic segmentation to re-rank mask proposals, and by using the semantic label to remove incorrectly named objects. } \section{Related works} \label{sec:related} \paragraph{Instance segmentation} Instance segmentation is often viewed as a localization task, as in object detection, combined with pixel-wise classification to segment the object masks. Among such ``detect then segment'' strategies is FCIS \cite{fcis}, the first end-to-end fully convolutional work that considers position-sensitive score maps as mask proposals. The score maps are then assembled to produce classification-agnostic instance masks and category likelihoods. Along the same line of strategies is MaskRCNN \cite{maskrcnn}, a two-stage detector that predicts masks from proposed boxes after an RoIAlign operation on the feature maps. Moving away from box-based object detection, SOLO\cite{solov2} and CondInst\cite{condinst} take an anchor-free approach and use a position-sensitive \textit{query} to extract object masks directly from the feature map. 
The use of dynamic convolution in SOLOv2 is related to transformer-based approaches through works like \cite{knet}, where dynamic kernels are learnt from grouped features, similar to learning from queries in transformers. In SOLOv2, kernels are learnt from features on spatial grid centers. QueryInst \cite{instancequery} is another query-based object detection framework that links mask features and objects in a one-to-one correspondence across multiple stages. HTC \cite{htc} is a cascade-based approach that considers semantic segmentation to refine its instance predictions. \paragraph{Evaluation of Detection and Segmentation Methods} The mean average precision (mAP) is a commonly used evaluation metric for object detection, which is also adopted for image segmentation. Existing works have pointed out several shortcomings of the mAP metric for object detection. \cite{mapproblem} show that mAP can be increased by introducing a nonsensical ranking among classes, and propose to make it truly class independent. LRP \cite{lrp} highlights two major issues with mAP, i.e. 1) different detectors having different P/R curves (implying different problems) can have the same mAP, and 2) mAP is not sufficient to quantify localization. Inspired by Panoptic Segmentation, LRP explicitly penalizes false positive and false negative detections, in addition to localization error. The TIDE \cite{tide} framework instead identifies and decomposes the error (1 - mAP) into its constituent error components to analyse where a detector is failing. \paragraph{Instance Merging} Top-down prediction methods, where a few query points (often object centers) are responsible for predicting the whole object shape, are prone to the instance merging problem. In contrast, bottom-up approaches focus on grouping pixels into an instance. 
These approaches, including Hough-voting \cite{houghforest,implicitshape}, pixel affinity \cite{adaptiveaffinity,affinitycnn}, watershed methods \cite{watershed}, and pixel embedding \cite{assocembed,partspixels,recurrentinstgroup}, can be thought of as `flow' based: each pixel directly or indirectly learns to flow towards the object center. The flow is category agnostic, making it easier to learn and more generalizable. However, flow-based methods are more error-prone than top-down methods. \textit{Instance Separation in crowding via flow}: In human vision, there exists a perceptual difficulty called crowding \cite{ruthcontextcrowd,ruthvisualawareness,ruthpooling}, which is correlated with change blindness. Experimentally, we noticed that many instance segmentation systems often group two similar nearby objects as one object. Very few works explicitly address the problem of instance separation in the crowded pixel space. SOLOv2 \cite{solov2} adds position coordinates to convolutional mask features. However, position information is degraded in later layers. \cite{novotnysemi} point out that crowding is due to the inherent shift-invariant nature of convolution and propose an instance coloring approach, defining a semi-convolutional operator to mix data from a convolutional network with the global location of the pixel. The embeddings regress to unconstrained representative points in each instance. This can be thought of as a center regression flow, but with a semi-convolutional architecture. Other works, such as \cite{orienmask}, propose a discriminative orientation mask to distinguish between foreground and background pixels, while \cite{seminstdiscrim} creates a loss function to enforce cluster-and-contrast between embeddings of the same and different instances. \begin{figure}[t!] 
\centering \includegraphics[width=0.95\linewidth]{images/namingerror.pdf} \caption{\textbf{Illustration of naming error}: Given ground truths G1-G3, the labels of the ground truths and detections (inside dashed boxes) are hidden. Each detection is matched to a ground truth based on its localization in a class-agnostic manner. After matching, the labels are revealed, and the naming error is calculated as the average number of predictions whose labels do not match their corresponding ground truth. } \label{fig:namingerror} \end{figure} \section{Resolving merging and hedging} In this section, we propose methods to resolve the merging and hedging errors. Merging errors trace back to the network's inability to distinguish between instances. To resolve this, we propose a contrastive flow field that encodes the relative positioning among instances of a given class. To alleviate the hedging of the segmentation network, we propose a Semantic Sorting and NMS procedure that resolves intra-class and inter-class hedge-predictions by sequentially measuring the overlap with the semantic segmentation module. \input{fig/flow} \subsection{Contrastive Delaunay Flow (CDF)} \label{sec:cdfalgo} {A commonly used vector flow in instance segmentation is the center flow. The idea is simple -- each foreground pixel tries to regress to its instance center. {However, because the center flow does not capture the relative orientation between instances of the same class, it does not resolve instance merging: for multiple instances with similar appearance, the flow vectors are essentially identical and can therefore be predicted from appearance features alone.} An example is illustrated in Fig.~\ref{fig:cdf}. 
Moreover, the magnitude of the vector field for the center flow varies significantly, especially for large objects, making it difficult to regress. This motivates the need for a flow field that captures relationships between objects and is easy to learn. The CDF addresses these problems. } \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{images/modelarch.pdf} \caption{\textbf{Contrastive Flow and Semantic Sorting}: We illustrate our approach to solving the counting, naming and masking problems. First, the mask features in SOLOv2 are used to predict a per-pixel flow and semantic segmentation. This is used for contrastive flow feedback to produce better masks, while semantic sorting tackles the counting and naming problems. In the second case, Semantic NMS prunes hedged predictions of the same object. In the third case, Semantic NMS prunes the duplicate prediction with an incorrect class due to the lack of a corresponding semantic mask } \label{fig:modelarch} \end{figure} First, the CDF consists of a unit vector at each foreground pixel, which characterizes the interactions with neighboring instances. Second, the CDF is easier to regress because the model only has to learn a direction, not a magnitude. The direction is a function of the location relative to the instance center and of the sum of repelling forces from other instances. Therefore, learning this direction amounts to learning different \textit{mask features} that encode not only local appearance, but also relative orientation with respect to other instances. To incorporate both intra- and inter-instance context, the flow field is constructed for each class as follows: \begin{figure}[t!] \centering \includegraphics[width=0.97\linewidth]{images/cdf-diagram.pdf} \caption{\textbf{Examples of Contrastive Delaunay Flow}: Figure (a) shows the center flow (red) and the CDF (green). The center flow for each instance is the same, which does not provide any contextual information. 
The CDF for each instance is different, providing different contexts to instances with identical appearance. Figures (b, c, d) show the learnt CDF (overlaid with learnt semantic segmentation in yellow), which exhibits contrastive repulsion in scenarios with clutter and occlusion } \label{fig:cdf} \end{figure} \begin{itemize} \item First, for each pixel within an instance, we initialize the vector to be a unit vector pointing towards the center of the instance. For a given instance $S_k$ \lz{with center $c_k$} and any pixel $u$ such that $u \in S_k$, the vector at pixel $u$ is initialized as $\bf{f}(u) = \frac{c_k - u}{\| c_k - u \|}$. \item \lz{Second, we compute the \textit{Delaunay triangulation} graph \cite{delaunay} \({\bf{\mathcal{G}} = \{\mathcal{\bf{V, E}}\}}\) of all instance centers. Since calculating interactions between all pairs is both expensive and difficult to learn, the Delaunay triangulation allows us to efficiently encode the relationships and relative orientations between objects.} For each edge $(S_i, S_j)$ in the graph $\bf{\mathcal{G}}$, we compute a unit repulsive force $\mathbf{f_{ij} = \frac{c_i - c_j}{\|c_i - c_j\|}}$ and $\mathbf{f_{ji} = -f_{ij}}$. These forces are then added to pixels in instances $S_i$ and $S_j$ respectively. To break symmetry, we add these forces only to pixels that are ``facing'' the neighboring instance. Finally, \lz{the vector field is} normalized to be of unit norm. \end{itemize} Instead of using the flow field in a bottom-up manner during inference, we use the supervision from the CDF to amplify differences in \textit{mask features} during training, thereby correcting the instances in the top-down mask prediction process directly. This is similar to \cite{novotnysemi,orienmask,crossimagepixelcont}, where bottom-up grouping supervision is used to improve top-down predictions; however, our choice of flow provides both \textit{discriminative} and \textit{structural} guidance in doing so.
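As a concrete sketch of the two construction steps above, the following NumPy/SciPy snippet builds the CDF for one class from binary instance masks. This is an illustrative re-implementation, not the paper's code: the helper name, the array layout, and the exact form of the ``facing'' test are our own assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def contrastive_delaunay_flow(masks, centers):
    """Build a per-pixel unit flow field for one class.

    masks:   list of boolean arrays (H, W), one per instance
    centers: (K, 2) float array of instance centers in (row, col)
    """
    H, W = masks[0].shape
    flow = np.zeros((H, W, 2))
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([ys, xs], axis=-1).astype(float)

    # Step 1: unit vectors toward each instance's own center.
    for k, m in enumerate(masks):
        d = centers[k] - pix[m]
        flow[m] = d / np.maximum(np.linalg.norm(d, axis=1, keepdims=True), 1e-8)

    # Step 2: unit repulsive forces along Delaunay edges between centers.
    if len(centers) >= 3:  # scipy's Delaunay needs >= 3 non-collinear points
        tri = Delaunay(centers)
        edges = set()
        for simplex in tri.simplices:
            for a in range(3):
                i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
                edges.add((i, j))
        for i, j in edges:
            f = centers[i] - centers[j]
            f = f / np.linalg.norm(f)  # f_ij; f_ji = -f
            for k, sign in ((i, 1.0), (j, -1.0)):
                m = masks[k]
                # Symmetry breaking: only pixels "facing" the neighbour
                # (on the neighbour's side of the center) receive the force.
                facing = (pix[m] - centers[k]) @ (-sign * f) > 0
                upd = flow[m]
                upd[facing] += sign * f
                flow[m] = upd

    # Final normalization to unit norm on foreground pixels.
    fg = np.any(masks, axis=0)
    n = np.linalg.norm(flow[fg], axis=1, keepdims=True)
    flow[fg] = flow[fg] / np.maximum(n, 1e-8)
    return flow
```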
\input{tab/ablation} \subsection{Semantic Sorting} To detect and remove hedge-predictions, we need a `verification' mechanism for each predicted instance along both spatial and class dimensions. To this end, we add semantic segmentation as a lightweight module built on top of the instance mask features. This serves two purposes. First, it helps to re-rank instances based on their degree of `agreement' with the corresponding class of the semantic mask. This prevents instances with poor masks but high confidence from suppressing high-quality predictions. Second, it prevents \textit{hedging} by allowing only those instances which have a significant `overlap' with the semantic mask, subtracting the corresponding instance from the semantic map. Duplicate predictions have little overlap with the remaining semantic mask and will be removed. \paragraph{Calibrated pre-NMS re-ranking} Since the confidence predictions are unreliable, we use semantic segmentation as an additional signal to calibrate the quality of an instance prediction. Moreover, once we have a notion of `agreement' between an instance and a semantic mask, it is easy to rank instances in order of their mask quality. We re-rank instances based on the following factors: \textbf{(i) Precision}: An instance which has a high precision w.r.t. the semantic mask is a good instance. \textbf{(ii) IoU}: For instances with similar class scores and precision, we would prefer smaller instances, to avoid merged instances from appearing first in the sorted order. \textbf{(iii) Category score}: This score is predicted by the category branch. A detailed pseudocode can be found in the Supplementary Material. \paragraph{Semantic NMS} Once we have a pre-NMS scoring that is more indicative of the quality of the segmentation, we propose a Semantic NMS that uses the semantic mask for suppression. 
In order of confidence, if an instance meets a minimum precision threshold w.r.t. the semantic mask, the instance is preserved and is subtracted from the semantic mask. Otherwise, the instance is suppressed (discarded). This ensures that each instance has enough agreement with the semantic mask to be preserved. A duplicate instance with lower confidence would not satisfy the precision threshold, since the previous instance is subtracted from the semantic mask, leaving no overlap for the duplicate mask. A single pass over the instance masks ensures an $O(n)$ time complexity. \input{tab/syntheticnails} \section{Quantifying Merging and Hedging beyond mAP} \begin{figure}[t!] \centering \includegraphics[width=0.93\linewidth]{images/confusion_figure.png.jpg} \caption{Calculating duplicate confusion on a sample set of predictions as described in Section \ref{sec:counting_errors}. Here, brighter colours represent larger confidences and darker colours represent smaller confidences. In this example, the duplicate confusion is 1.676.} \label{fig:confusion} \end{figure} \subsection{Hedging Bets} Current top-down approaches suffer from a recurring problem: similar instances often produce merged instance predictions. In order to maintain a reasonable mAP, networks are thus encouraged to produce multiple predictions for each instance, each slightly dithered from the others and potentially spanning multiple classes. These often low-confidence duplicate predictions are the network hedging its bets in case the higher-confidence prediction does not align with the ground truth. Low-confidence predictions, occupying the tail end of the precision-recall (P/R) curve, are not penalized by the mAP metric for being incorrect but are rewarded if one, by chance, matches a ground truth (see Fig. \ref{fig:illustrative}).
This exposes a critical tradeoff in non-max suppression (NMS) procedures: do we suppress duplicates but lower the recall, or include them and confuse the output predictions? One might question why low-confidence duplicate predictions are a problem, since an appropriate threshold would filter them out. In practice, confidence is often not well calibrated between classes \cite{guocal}, and mAP decreases monotonically with increasing thresholds, making it difficult to select a threshold that is not overly exclusionary without also including duplicates. As described above, this is a problem that should be solved by NMS. Fundamentally, when the network makes a prediction, even a low-confidence prediction, this represents a belief by the network that there exists a unique instance at that location. By this interpretation, the low-confidence predictions are not simply unnecessary but incorrect, an error that is not captured by the mAP. The TIDE \cite{tide} framework and the LRP both attempt to address some of the deficiencies of the mAP metric. Because TIDE relies on the change in mAP to determine error, in cases such as Fig. \ref{fig:namemaskcount}, \ref{fig:illustrative} it still rewards a network for hedging its predictions. In contrast, the LRP explicitly penalizes false positive and false negative detections. Furthermore, the F1-score similarly penalizes false positives and false negatives, while the boundary IoU \cite{contouracc} can identify when instance merging is occurring. However, none of these metrics are able to explicitly identify and penalize hedging. In order to quantify how much hedging is occurring in a set of predictions, we separate hedging into inter-class and intra-class hedging and treat both in a unified, graph-centric manner. In doing so, we propose two new metrics, the \textit{Naming Error (NE)} and the \textit{Duplicate Confusion (DC)}, to complement the existing metrics described above.
\subsection{Measuring Intra-Class Hedging}\label{sec:counting_errors} To measure how much a set of predictions is exploiting dithering and duplication to increase its mAP, we design a metric which captures the relative information between the predictions for a given image. For a given IoU threshold, a graph $G$ of the predictions within a class is constructed. The nodes of the graph are the predicted confidences and two nodes share an edge if their relative IoU is above this threshold. This graph represents, at the chosen threshold, which predictions are in the same cluster of predictions. For two nodes $i$ and $j$ in $G$, we define the connectivity between the nodes as \begin{equation}\label{eq:connectivity} c_{ij} = \max_{t \in T_{ij}} \min_{k \in t} p_k \end{equation} where $T_{ij}$ is the set of all paths on $G$ that connect $i$ and $j$ and $p_k$ is the predicted confidence for prediction $k$ along the path $t$. This represents the minimum confidence along the most connected path between two predictions. We now define the duplicate confusion at some IoU threshold $u$ and confidence threshold $v$ as the mean weighted sum of the connectivity of a node with all other nodes: \begin{equation}\label{eq:confusion} \mathrm{DC}_{uv} = \frac{1}{n} \sum_i^n \sum_{j \ne i}^n p_j \frac{c_{ij}}{p_i} \end{equation} Here, $n$ is the number of predictions with confidence at least $v$. For each image, the duplicate confusion $\mathrm{DC}$ is calculated by taking the mean of $\mathrm{DC}_{uv}$ across the range of IoU and confidence thresholds. This process is summarized in Fig. \ref{fig:confusion}. For clarity, the $\mathrm{DC}$ is multiplied by $1000$ in Table \ref{tab:ablation}, \ref{tab:cocoresult}. 
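The computation of Eqs. \ref{eq:connectivity} and \ref{eq:confusion} can be sketched for a single IoU threshold (encoded in an adjacency matrix) and a single confidence threshold. This is our own illustrative implementation, not the reference code: the node-bottleneck connectivity is computed with a Dijkstra-style widest-path search, and both sums in Eq. \ref{eq:confusion} are restricted to predictions above the confidence threshold, which is one reading of the definition.

```python
import numpy as np

def connectivity(adj, conf):
    """c_ij: the max over paths between i and j of the minimum confidence
    among the nodes on the path (endpoints included).
    Unreachable pairs get connectivity 0."""
    n = len(conf)
    c = np.zeros((n, n))
    for i in range(n):
        best = np.zeros(n)          # widest-path "width" from i to each node
        best[i] = conf[i]
        visited = np.zeros(n, dtype=bool)
        for _ in range(n):
            u = int(np.argmax(np.where(visited, -1.0, best)))
            if best[u] <= 0.0 or visited[u]:
                break               # remaining nodes are unreachable
            visited[u] = True
            for v in range(n):
                if adj[u][v] and not visited[v]:
                    best[v] = max(best[v], min(best[u], conf[v]))
        c[i] = best
    return c

def duplicate_confusion(adj, conf, v=0.0):
    """DC_{uv} for one (IoU, confidence) threshold pair; adj encodes the
    IoU-threshold graph, v is the confidence threshold."""
    keep = [k for k, p in enumerate(conf) if p >= v]
    if not keep:
        return 0.0
    c = connectivity(adj, conf)
    dc = sum(conf[j] * c[i][j] / conf[i]
             for i in keep for j in keep if i != j)
    return dc / len(keep)
```

Averaging this quantity over a grid of IoU and confidence thresholds gives the per-image $\mathrm{DC}$.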
In the above equation, the term $\frac{c_{ij}}{p_i}$ can be interpreted as a bottlenecking coefficient: for predictions $i$ and $j$, how strongly the information contained in prediction $j$ is restricted from explaining $i$ by the confidences of the predictions connecting $i$ and $j$. Since duplicate low-confidence predictions are more tolerable, this bottlenecking coefficient is weighted by the confidence of prediction $j$. Consequently, the duplicate confusion $\mathrm{DC}$ is nondecreasing with respect to the confidence of any prediction (see Appendix). If the network is producing duplicates, the duplicate confusion can be reduced by either removing the duplicate or reducing the confidence of both predictions. This metric can be interpreted as the confidence of a network in its own counting. This is not, however, a measure of how effectively a network can count instances -- the ground truth is not considered when calculating the metric. Neither is duplicate confusion a measure of the quality or completeness of the predictions of a network; consider that producing no predictions results in zero duplicate confusion. Rather, the metric simply captures the amount of, and uncertainty between, duplicate predictions. By contrast, to increase mAP, the network is encouraged to ``hedge its bets'' when it is not completely certain, a behaviour heavily penalized by the duplicate confusion metric. \subsection{Measuring Inter-Class Hedging} Similar to intra-class hedging, inter-class hedging can be formulated as a connectivity problem, by penalizing the edges that connect nodes of different classes. Given the set of ground truths and predictions, we formulate inter-class hedging as a \textit{naming error} by penalizing hedged predictions whose class differs from that of the corresponding ground truth. \paragraph{Naming error (NE)}: To formulate a naming error, we need to associate ground truths with predictions in a class-agnostic way.
We start by hiding the class predictions for each detection and ground truth, matching each detection with its ground truth in decreasing order of prediction confidence. This ensures that predictions are matched with ground truths based only on mask overlap. Note that this allows a single ground truth to be matched to potentially multiple predictions. Finally, we reveal all the labels. The naming error for a ground truth is simply the number of predictions that match this ground truth with incorrect labels. This is illustrated in Fig.\ref{fig:namingerror}. The naming error over the dataset is the average of the naming error over all ground truths. Formally, let $\{ G_1 \ldots G_N \}$ be the set of ground truth masks and $\{ D_1 \ldots D_M \}$ be the set of predictions. For each detection $D_j$, we define $g(D_j)$ as \begin{equation} g(D_j) = \begin{cases} \arg \max_i\operatorname{IoU}(D_j, G_i) & , \max_i \operatorname{IoU}(D_j, G_i) \ge 0.5 \\ -1 & , \textbf{otherwise} \end{cases} \end{equation} Then, the naming error is defined as \begin{equation} \mathrm{NE} = \frac{1}{N} \sum_{i=1}^N \sum_{j: g(D_j) = i} \mathbb{I}\left[ l(D_j) \ne l(G_i) \right] \end{equation} where $l(.)$ is the function that returns the label. In the next subsection, we propose a semantic sorting module that attempts to resolve these errors. \section{A critical shortcoming of mAP} Mean Average Precision (mAP) is the de-facto standard metric used in object detection and instance segmentation. However, a multifaceted problem like instance segmentation is quantified using a single metric. This can lead to \textit{blind spots} in the quantitative evaluation of IS algorithms. For example, \todo{cite achal here} show that the mAP metric is gameable for large-vocabulary datasets. This occurs in long-tailed class distributions where the maximum number of detections is limited.
However, we show a different problem that occurs in the mAP even for datasets like COCO with a moderate number of classes. Specifically, in a sequence of detections sorted by confidence, after a particular true positive is encountered, the sequence of false positives that follow does not contribute to the AP score. This is because the precision drops, but the recall stays constant. The AP is affected only when the next true positive is encountered. This is problematic because, over the dataset, all the low-confidence predictions can be `pushed' to the end of the PR curve. We show this shortcoming in the context of SOLOv2, which uses MatrixNMS to decay duplicate predictions and later remove them via a chosen threshold. It achieves better AP scores than its MaskNMS counterpart; however, the qualitative examples look worse. Upon closer inspection of the precision-recall curve, it turns out that a lot of low-confidence false positives do not contribute to the AP. Results are shown in Figure \ref{fig:matrixvsmask}. We construct subsets of the COCO validation dataset of various sizes to show that mAP catastrophically fails to capture this behavior of MatrixNMS. One might argue that a simple way to tackle this situation is to increase the threshold after decaying duplicate predictions. Intuitively, increasing this threshold should get rid of more duplicate predictions, which have lower confidence scores. However, quantitatively tuning this hyperparameter with mAP leads to \todo{figurename}. The values of bounding box and segmentation mAP drop monotonically with the value of \texttt{update\_threshold}, which suggests that this value should be set to 0 to maximize validation mAP. This is in direct contrast to how the threshold should actually be modified, showing that mAP doesn't provide any useful way of tuning this NMS procedure. Moreover, the mAP metric doesn't directly capture the mask quality of true positives in the predictions.
It does so indirectly by averaging the AP at different IoU thresholds, but the metric is also affected by other factors, such as the relative ordering of the predictions. To alleviate these problems, we propose two metrics that attempt to tackle this issue. These metrics are implemented in the widely used Detectron2 framework for use by the community \footnote{Code will be made public upon acceptance}.
\section{Introduction} In some large e-commerce platforms, for example Taobao and AliExpress, the recommendation algorithm is divided into two stages, the ``match'' stage and the ``rank'' stage. In the match stage, we select an item set of size $O(10^2 \sim 10^3)$ from all the items. In the rank stage, we compute a score for each item in the matched set and rank the items by score. We can use a model of any form in the rank stage, but there is a restriction on the model in the match stage: it must be able to \textbf{quickly} pick up $O(10^2 \sim 10^3)$ items from $O(10^8)$ or more items, and hence it needs an index. Only models that can generate an index can be used in the match stage. The most familiar model for match is the static model: it computes the conditional probability $p_{i,j}=P(\mbox{view } i | \mbox{view } j)$ as the score and, offline, saves a table with the fields ``trigger id'', ``item id'' and ``score'', indexed by ``trigger id'' and ``score''. Online, we recall the items with the top $N$ scores using ``trigger id'' as the index, where the trigger ids come from the items on which the user has behaviour. Sequence prediction is the problem of using historical sequence information to predict the next value or values in the sequence. It has many applications, for example language models and recommender systems. Recurrent Neural Networks (RNNs) are widely used to solve sequence prediction problems. For a given sequence $x_1, x_2, \cdots, x_n$, we wish to predict $x_{n+1}$. In a recommender system, the $x_i$ are the items which the user clicked (or bought, added to a wish list, etc.), so the sequence $x_1, x_2, \cdots, x_n$ is a representation of the user. In a language model, the $x_i$ are the words in the sentence, so the sequence $x_1, x_2, \cdots, x_n$ is a representation of the front part of the sentence. The final layer is a fully connected layer with a softmax. In \cite{SESSION_BASED}, a session-based model for recommendation is proposed.
The model is shown in Figure \ref{FC}. This network structure is equivalent to the one in Figure \ref{vec_embed}. In Figure \ref{vec_embed}, we give two embeddings for every item: if the item is part of the input history, we call it a ``trigger'' and call its embedding the ``trigger embedding''; if the item is to be scored, we call it an ``item'' and call its embedding the ``item embedding''. The layer ``Extension by 1'' denotes the fixed map \[ \begin{array}{rcl} \mathbb{R}^{(n)} & \longrightarrow & \mathbb{R}^{(n+1)} \\ (a_1, \cdots, a_n)^T & \mapsto & (a_1, \cdots, a_n, 1)^T \end{array} \] Because the output $h^L_t$ of the final GRU layer collects all the trigger information of the session up to time $t$, we can view the output of the layer ``Extension by 1'' as the ``session embedding''. We set the dimension of the item embedding equal to the dimension of the session embedding, and define the output of the network as the inner product of the session embedding and the item embedding. It is easy to see that the network structures in Figure \ref{FC} and Figure \ref{vec_embed} are equivalent under the correspondence \[ \begin{array}{rcl} \mbox{FC layer} & \longrightarrow & \mbox{item embedding layer} \\ (x \mapsto \mathop{softmax}(Ax+b)) & \mapsto & (i \mapsto \mathop{concat}(A_i, b_i) ) \end{array} \] where $A_i$ denotes the $i$-th row of the matrix $A$ and $b_i$ denotes the $i$-th element of the column vector $b$. Hence we call this method the ``vector embedding method''. The session-based model with the vector embedding method can be used as a model for match. In fact, after the model is trained, we can save the vector embeddings of the items with some index, for example a KD-tree or ball tree. When a user visits our recommendation page, we compute the vector embedding $x$ of the user's session from the user's click sequence, and use the index to find the top $N$ items whose vector embeddings have maximal inner product with $x$.
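The online step described above can be sketched in a few lines of NumPy. We use a brute-force scan in place of a real index (maximal inner product is not a metric search, so large-scale systems typically use a dedicated maximum-inner-product-search index); the function name is our own.

```python
import numpy as np

def match_top_n(item_embeddings, session_embedding, n):
    """Return the indices of the n items whose embeddings have the largest
    inner product with the session embedding, best first.
    Brute force stands in for the offline-built index."""
    scores = item_embeddings @ session_embedding
    top = np.argpartition(-scores, n - 1)[:n]   # unordered top-n candidates
    return top[np.argsort(-scores[top])]        # sort candidates by score
```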
\begin{figure}[p] \begin{minipage}[t]{0.3\linewidth} \includegraphics[scale=0.3]{./image/FC.png} \caption{The network structure in \cite{SESSION_BASED}} \label{FC} \end{minipage} \hfill \begin{minipage}[t]{0.6\linewidth} \includegraphics[scale=0.45]{./image/vec_embed.png} \caption{An equivalent network structure to \cite{SESSION_BASED}} \label{vec_embed} \end{minipage} \end{figure} But the vector embedding method has an inherent defect: a user's interests may not be singular. Suppose the item embeddings of dresses and phones are as shown in Figure \ref{inherent_defect}. Generally the interest in dresses is independent of the interest in phones, so we can suppose the two embeddings are linearly independent. If a user clicked dresses 20 times and phones 10 times in one session, then the vector embedding of this session will mainly try to stay close to the dresses, but will be dragged away by the phones during training; as a result, the session's vector embedding will lie between the dresses and the phones, close to neither. Hence, when we predict using this embedding, we will recommend something like a combination of a dress and a phone to the user as the top-1 selection, instead of the dress the user is most interested in. In other words, the vector embedding scheme discards the diversity of the user's interests within one session. \begin{figure}[p] \begin{minipage}[t]{0.3\linewidth} \includegraphics[scale=0.4]{./image/vec_embed_contradiction.png} \caption{The inherent defect of the vector embedding method} \label{inherent_defect} \end{minipage} \hfill \begin{minipage}[t]{0.5\linewidth} \includegraphics[scale=0.36]{./image/mat_embed_narrow.png} \caption{The matrix embedding method} \label{mat_embed} \end{minipage} \end{figure} In order to model the diversity of the user's interests in one session, we use a ``matrix embedding'' of the session instead of a ``vector embedding''.
In our method, the items are still modeled as vectors of dimension $n$, but a session is modeled as a symmetric matrix in $M_{n}(\mathbb{R})$ instead of a vector in $\mathbb{R}^{(n)}$. The score representing the interest of the session in the item is modeled as \[ y^TAy, \] where $y$ is the vector embedding of the item and $A$ is the matrix embedding of the session. The symmetric matrix $A$ has the eigendecomposition (\cite{Eigendecomposition}) \[ A=Q \Lambda Q^T, \] where $Q$ is a real orthogonal square matrix and $\Lambda$ is a diagonal square matrix with the elements $\lambda_1 \geq \lambda _2 \cdots \geq \lambda _n$ on the diagonal. In fact, $\lambda_1 , \lambda _2 \cdots , \lambda _n$ are the eigenvalues of $A$, and the $i$-th column of $Q$ is the eigenvector corresponding to $\lambda _i$. In the example of Figure \ref{inherent_defect}, the matrix embedding of the session can have two eigenvalues $\lambda_1 > \lambda _2$ significantly greater than the others, whose eigenvectors are close to the lines along the embedding vector of the dresses and the embedding vector of the phones respectively. Hence, the function \[ \begin{array}{rcl} U_1 ^n(0) & \longrightarrow & \mathbb{R} \\ y & \mapsto & y^TAy \end{array} \] attains its maximum close to the direction of the dresses, where $U_1^n(0)$ denotes the unit ball in $\mathbb{R}^{(n)}$. When we use this model to predict, we will recommend a dress to the user as the top-1 selection. \section{Network structure} The network structure of our new method is shown in Figure \ref{mat_embed}. The main differences between Figure \ref{mat_embed} and Figure \ref{vec_embed} are that \begin{enumerate} \item We set the dimension of the hidden layers to be $\frac{n(n+1)}{2}$, where $n$ is the dimension of the embedding vectors of the items. \item We use the layer ``reshape to a symmetric matrix'' instead of the layer ``extension by 1''.
The layer ``reshape to a symmetric matrix'' is defined as \[ \begin{array}{rcl} \mathbb{R}^{\frac{n(n+1)}{2}} & \longrightarrow & M_n(\mathbb{R}) \\ (z_i)_{i=1} ^{\frac{n(n+1)}{2}} & \mapsto & (a_{i,j}) _{i,j =1} ^n \end{array}, \] where $a_{i,j}= \left\{ \begin{array}{ll} z_{\frac{i(i-1)}{2}+j} & \mbox{ if } i \geq j \\ a_{j,i} & \mbox{ otherwise}. \end{array} \right.$ \item We use the layer \[ \begin{array}{ccccl} M_n(\mathbb{R}) & \times & \mathbb{R}^{(n)} & \longrightarrow & \mathbb{R} \\ (A &,& y) & \mapsto & y^TAy \end{array} \] as the score layer instead of the inner product. \item We also modify the item embedding layer: we use an upper-half-hyperplane embedding, i.e., the embedding vectors of the items lie in the upper half hyperplane \[ \mathbb{H}^{(n)} := \{ (y_1, \cdots y_n)^T \in \mathbb{R}^{(n)} : y_n>0 \}. \] This modification improves the performance greatly. The reason is that the score $y^TAy$ is invariant under the transformation $y \mapsto -y$; if we train the model without this modification, the item embeddings lose their direction during training. The layer ``upper half hyperplane embedding'' can be realized by applying $\exp$ to the final coordinate of an ordinary embedding layer. \end{enumerate} \section{Index method} To use the matrix embedding method for match, we give two index methods. We formulate the problem as follows: given a large set of vectors $\{x_i\}_i \subset \mathbb{R}^{(n)}$ and a symmetric matrix $A \in M_n(\mathbb{R})$, how can we find the top $N$ vectors $x_i$ such that $x_i^TAx_i$ is maximal? \subsection{Flatten} Because $A$ is a symmetric matrix, we have \[ x^TAx=\sum _{i \leq j} a_{i,j} k_{i,j}x_ix_j =<\Gamma _1(A), \Gamma _2(x)> \] where $k_{i,j}= \left\{ \begin{array} {ll} 1 & \mbox{ if } i=j \\ 2 & \mbox{ otherwise} \end{array} \right. $, and $\Gamma _1 (A):=( a_{i,j} ) _{i \leq j}$, $\Gamma _2 (x):=(k_{i,j} x_ix_j) _{i \leq j} $.
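The reshape layer and the flatten identity above can be checked numerically. The NumPy sketch below uses our own helper names: `reshape_to_symmetric` realizes the layer (filling the lower triangle row by row and mirroring it), while `gamma1` and `gamma2` realize $\Gamma_1$ and $\Gamma_2$.

```python
import numpy as np

def reshape_to_symmetric(z, n):
    """The `reshape to a symmetric matrix' layer: fill the lower triangle
    of an n x n matrix row by row with z, then mirror it."""
    A = np.zeros((n, n))
    i, j = np.tril_indices(n)
    A[i, j] = z
    A[j, i] = z
    return A

def gamma1(A):
    """Gamma_1: the upper-triangular entries a_ij, i <= j."""
    i, j = np.triu_indices(A.shape[0])
    return A[i, j]

def gamma2(x):
    """Gamma_2: k_ij * x_i * x_j for i <= j, with k_ii = 1 and k_ij = 2."""
    i, j = np.triu_indices(len(x))
    k = np.where(i == j, 1.0, 2.0)
    return k * x[i] * x[j]
```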
Therefore, we can map the user session matrix embedding $A$ into the linear space $\mathbb{R}^{\frac{n(n+1)}{2}}$ using $\Gamma_1$, and map the item vector embedding $x$ into the same linear space using $\Gamma _2$; the score $x^TAx$ is then equal to the inner product of $\Gamma _1(A)$ and $\Gamma _2(x)$. Hence, we can construct an index of $\Gamma _2 (x)$ over the vector embeddings $x$ of all items offline, and online retrieve the top $N$ items of maximal inner product with $\Gamma _1 (A)$ in the usual way. \subsection{Decomposition} The flatten method needs to build an index for vectors of dimension $\frac{n(n+1)}{2}$. When the dimension $n$ is large, it is expensive to store the data, build the index and search for the items of maximal inner product. Hence we need a faster method to obtain an approximate top $N$. In fact, we have the eigendecomposition \[ A=\sum _{i=1} ^ n \lambda _i \alpha _i \alpha _i ^T \] where $\lambda _1 \geq \lambda _2 \geq \cdots \geq \lambda _n$, and $\alpha _i \in \mathbb{H}^{(n)}$. Hence we have \[ x^TAx=\sum _{i=1} ^n \lambda _i <\alpha _i, x>^2 \] As an approximation, we take a small positive integer $k$, retrieve the top $N$ items of maximal inner product $<\alpha _i, x>$ for each $i=1,2, \cdots, k$, obtaining $kN$ candidates, and then take the top $N$ among these $kN$ items by computing $x^T Ax$. \section{Experiments} We conduct experiments comparing the matrix embedding method and the vector embedding method on the RSC15 dataset (RecSys Challenge 2015 \footnote{ http://2015.recsyschallenge.com/}) and the last.fm dataset \cite{last_fm}. For the RSC15 dataset, after tuning the hyperparameters on the validation set, we retrained the three models described below on the full six months of data and used the last single day to evaluate them.
When it comes to the last.fm playlists dataset, since the playlists have no timestamps, we followed the preprocessing procedure of \cite{last_fm_paper}, that is, randomly assigned each playlist to one of 31 buckets (days), and used the last single day to evaluate. We compare three models: \textbf{GRU4REC} We re-implemented the code of GRU4REC, which Hidasi et al. released online \cite{SESSION_BASED}, in the TensorFlow framework, including the whole GRU4REC architecture, the training procedure and the evaluation procedure. \textbf{GRU4REC with symmetric matrix} To address the problem of GRU4REC demonstrated in section 1, we replace the output of the GRU, i.e. the embedding vector of the current session, with a symmetric matrix. More specifically... \textbf{GRU4REC with fully connected layer} In addition to the above models, we also create a controlled-experiment model as shown in Figure 3, which is mainly based on the GRU4REC model but adds a fully connected layer right after the output of the GRU to expand the embedding vector of the GRU output from $n$ dimensions to $n(n+1)/2$ dimensions. \subsection{ACM RecSys 2015 Challenge Dataset} In order to evaluate the performance of the three models described in section 2.1, we constrained the total number of their parameters to the same range. The details of the network architectures are shown in Table 1. Table 2 shows the results of testing these three models on the last day of the ACM RecSys 2015 Challenge dataset for 10 epochs. After tuning on the validation set, we set lr=0.002 and batch size = 256 for all experiments. Since the GRU4REC and GRU4REC-with-FC-layer models have fewer hidden units, dropout=0.8 shows better performance for them, while dropout=0.5 performs better for the symmetric matrix model. We use the BPR loss and the Adam optimizer in all cases.
\begin{table} \centering \caption{Results for the RSC15 dataset.} \begin{tabularx}{13cm}{XXX} \hline Method & recall@20 & mrr@20 \\ \hline GRU4REC & 0.389 & 0.135 \\ GRU4REC+FC & 0.515 & 0.515 \\ GRU4REC+Matrix & {\bf 0.749} & {\bf 0.748} \\ GRU4REC(1000) & 0.632 & 0.247 \\ \hline \end{tabularx} \end{table} We additionally include the results from \cite{SESSION_BASED}, which uses 1000 hidden units for the GRU4REC model. It is clear that by combining the symmetric matrix embedding method with GRU4REC, we can use fewer parameters to achieve better recall@20 and mrr@20 performance. \renewcommand{\arraystretch}{1.5} \begin{table}[tp] \centering \fontsize{6.5}{8}\selectfont \caption{Network Parameters For RecSys15 Dataset.} \label{tab:RecSys15_params} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model}& \multicolumn{3}{c|}{GRU4REC}&\multicolumn{3}{c|}{GRU4REC+FC}&\multicolumn{3}{c|}{GRU4REC+Matrix}\cr\cline{2-10} &shape&params&total&shape&params&total&shape&params&total\cr \hline \hline {input\_embedding}&{(37958, 32)}&1214656&1214656&{(37958, 32)}&1214656&1214656&{(37958, 32)}&1214656&1214656\cr\hline {softmax\_W}&{(37958, 64)}&2429312&3643968&{(37958, 55)}&2087690&3302346&{(37958, 32)}&1214656&2429312\cr\hline {softmax\_b}&{(37958,)}&37958&3681926&{(37958,)}&37958&3340304&-&-&-\cr\hline {gru\_cell/dense/kernel}&-&-&-&{(10, 55)}&550&3340854&-&-&-\cr\hline {gru\_cell/dense/bias}&-&-&-&{(55,)}&55&3340909&-&-&-\cr \hline {gru\_cell/gates/kernel}&{(96, 128)}&12288&3694214&{(42, 20)}&840&3341749&{(560, 1056)}&591360&3020672\cr\hline {gru\_cell/gates/bias}&(128,)&128&3694342&{(20,)}&20&3341769&{(1056,)}&1056&3021728\cr\hline {gru\_cell/candidate/kernel}&(96, 64)&6144&3700486&{(42, 10)}&420&3342189&{(560, 528)}&295680&3317408\cr\hline {gru\_cell/candidate/bias}&(64,)&64&{\bf 3700550}&{(10,)}&10&{\bf 3342199}&{(528,)}&528&{\bf 3317936}\cr \hline \end{tabular} \end{table} \subsection{Last.FM playlists Dataset} For the last.fm music playlists dataset, we applied the
same network structure for each model as mentioned above; the specific parameters are shown in Table 3. \renewcommand{\arraystretch}{1.5} \begin{table}[tp] \fontsize{6.5}{8}\selectfont \caption{Network Parameters For Last.fm Dataset.} \centering \label{tab:playlists_params} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model}& \multicolumn{3}{c|}{GRU4REC}&\multicolumn{3}{c|}{GRU4REC+FC}&\multicolumn{3}{c|}{GRU4REC+Matrix}\cr\cline{2-10} &shape&params&total&shape&params&total&shape&params&total\cr \hline \hline {input\_embedding}&{(200668, 32)}&6421376&6421376&{(200668, 32)}&6421376&6421376&{(200668, 32)}&6421376&6421376\cr\hline {softmax\_W}&{(200668, 64)}&12842752&19264128&{(200668, 55)}&11036740&17458116&{(200668, 32)}&6421376&12842752\cr\hline {softmax\_b}&{(200668,)}&200668&19464796&{(200668,)}&200668&17658784&-&-&-\cr\hline {gru\_cell/dense/kernel}&-&-&-&{(10, 55)}&550&17659334&-&-&-\cr\hline {gru\_cell/dense/bias}&-&-&-&{(55,)}&55&17659389&-&-&-\cr \hline {gru\_cell/gates/kernel}&{(96, 128)}&12288&19477084&{(42, 20)}&840&17660229&{(560, 1056)}&591360&13434112\cr\hline {gru\_cell/gates/bias}&(128,)&128&19477212&{(20,)}&20&17660249&{(1056,)}&1056&13435168\cr\hline {gru\_cell/candidate/kernel}&(96, 64)&6144&19483356&{(42, 10)}&420&17660669&{(560, 528)}&295680&13730848\cr\hline {gru\_cell/candidate/bias}&(64,)&64&{\bf 19483420}&{(10,)}&10&{\bf 17660679}&{(528,)}&528&{\bf 13731376}\cr \hline \end{tabular} \end{table} Since the music playlists dataset is quite different from the e-commerce click sequence dataset, after tuning on the validation set we finally set lr = 0.0012 for all cases, while the batch size and dropout configuration remain the same. Table 4 shows the results for the last.fm music playlists dataset. We observe the same trend as for the RecSys15 dataset.
\renewcommand\arraystretch{1.5} \begin{table} \centering \caption{Results for the last.fm dataset.} \begin{tabularx}{13cm}{XXX} \hline Method & recall@20 & mrr@20 \\ \hline GRU4REC & 0.027 & 0.022 \\ GRU4REC+FC & 0.054 & 0.054 \\ GRU4REC+Matrix & {\bf 0.164} & {\bf 0.164} \\ GRU4REC(1000) & 0.121 & 0.053 \\ \hline \end{tabularx} \end{table}
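As a reference for how the two evaluation metrics are computed, the sketch below evaluates recall@k and MRR@k for next-item prediction. The function name and toy data are illustrative only; we assume one ground-truth item per test event, with candidate items ranked by predicted score.

```python
def recall_and_mrr_at_k(ranked_lists, targets, k=20):
    """Compute recall@k and MRR@k for next-item prediction.

    ranked_lists: one list of item ids per test event, sorted by
                  predicted score (most likely item first).
    targets:      the true next item for each test event.
    """
    hits, rr_sum = 0, 0.0
    for ranked, target in zip(ranked_lists, targets):
        top_k = ranked[:k]
        if target in top_k:
            hits += 1
            rr_sum += 1.0 / (top_k.index(target) + 1)  # ranks are 1-based
    n = len(targets)
    return hits / n, rr_sum / n

# Toy example with 3 test events and k = 2:
ranked = [[5, 7, 9], [1, 2, 3], [8, 4, 6]]
truth = [7, 3, 8]
recall, mrr = recall_and_mrr_at_k(ranked, truth, k=2)
# target 7 is at rank 2, target 3 misses the top-2, target 8 is at rank 1,
# so recall@2 = 2/3 and MRR@2 = (1/2 + 1)/3 = 0.5
```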
\section{Introduction} The dynamics of SU($N$) gauge theories with adjoint fermions is expected to depend crucially on the number of flavors $N_f$. This is suggested by inspecting the $N_f$ dependence of the beta function $\beta$. The first two coefficients of $\beta$ expressed in terms of the 't Hooft coupling $\lambda=g^2 N$ are $b_0=(4N_f-11)/24\pi^2$ and $b_1=(16N_f-17)/192\pi^4$. Asymptotic freedom requires that $b_0$ be negative, so only $N_f$=1 and 2 are allowed (in this talk we do not consider half-integer $N_f$ corresponding to Majorana fermions). For $N_f$=1, $b_1$ is also negative, so we naturally expect that the theory is confining, like ordinary QCD. For $N_f$=2, on the contrary, $b_1$ is positive, indicating that there could be an infrared fixed point at a finite value of the 't Hooft coupling where the beta function vanishes. Since no dimensional scale exists at the infrared fixed point, this theory is conjectured to be conformal. In fact, for $N$=2 (minimal walking technicolor), there are now many lattice simulations indicating that the theory is conformal at vanishing fermion mass\cite{DD}. The purpose of the present talk is to study both the $N_f$=1 and 2 theories in the large $N$ limit. The direct application of the usual lattice simulations is impractical for large $N$. Our idea is to use the twisted space-time reduced model defined on a $1^4$ lattice, recently proposed by the present authors\cite{GAO}. We point out that, in recent years, many authors have studied space-time reduced models of large $N$ QCD with adjoint fermions using periodic boundary conditions\cite{KUY,AHUY,BS,HN}. It turns out, however, that these models have much larger finite $N$ corrections than those based on twisted boundary conditions\cite{GAO,AHUY}, casting doubt on whether the former models are of practical use.
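The sign pattern of the two beta-function coefficients quoted above is easy to verify directly; the following minimal sketch just evaluates $b_0$ and $b_1$ for the relevant flavor numbers.

```python
import math

def b0(nf):
    # one-loop coefficient of the beta function in the 't Hooft coupling
    return (4 * nf - 11) / (24 * math.pi ** 2)

def b1(nf):
    # two-loop coefficient
    return (16 * nf - 17) / (192 * math.pi ** 4)

# Nf = 1: both coefficients negative -> QCD-like, confining behavior expected.
# Nf = 2: b0 < 0 but b1 > 0 -> a possible infrared fixed point.
# Nf = 3: b0 > 0 -> asymptotic freedom is lost.
```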
The main finite $N$ corrections of the twisted reduced model for $N=L^2$ amount to the finite volume corrections of ordinary Lattice Gauge Theory on an $L^4$ lattice\cite{GAO,TEK}. Thus, by choosing $N=17^2=289$, we can study large $N$ QCD with adjoint fermions on an effectively $17^4$ lattice within the present computer resources. From a practical point of view, the most important property of the reduced model is its rather small memory size. In fact, the size of four SU(289) matrices is only 5 MB, which can be fitted into cache memory, resulting in rather high computational performance. By making use of these advantages of the reduced model, we will analyze the properties of large $N$ QCD with adjoint fermions and clarify the difference between the $N_f$=1 and 2 theories. \vspace{-0.2cm} \section{Formulation} We consider the SU($N$) group with $N=L^2$, $L$ being some positive integer. Then the action of the twisted space-time reduced model of QCD with $N_f$ adjoint fermions is given by\cite{GAO} \vspace{-0.5cm} \begin{eqnarray} \nonumber S&=&-bN \sum_{\mu \ne \nu =1}^4 {\rm Tr} \left[ z_{\mu\nu} U_\mu U_\nu U_\mu^\dagger U_\nu^\dagger \right] \\ &&- \sum_{j =1}^{N_f} {\rm Tr}\left[ \right. {\bar \Psi}^j \Psi^j -\kappa \sum_{\mu=1}^4 \left\{ {\bar \Psi}^j (1-\gamma_\mu) U_{\mu} \Psi^j U_{\mu}^\dagger +{\bar \Psi}^j (1+\gamma_\mu) U_{\mu}^\dagger \Psi^j U_{\mu} \right\} \left. \right] \label{STR2} \nonumber\\ &\equiv&-b N \sum_{\mu \ne \nu =1}^4 {\rm Tr} \left[ z_{\mu\nu} U_\mu U_\nu U_\mu^\dagger U_\nu^\dagger \right] - 2 \kappa \sum_{j =1}^{N_f} {\rm Tr}\left[ {\bar \Psi}^j D_W \Psi^j \right] . \label{S} \end{eqnarray} \noindent $U_{\mu}$ are four SU($N$) link variables and $\Psi^j$ are $N_f$ Grassmann-valued $N\times N$ matrices transforming in the ($N,{\bar N}$) color representation. Spinor indices of $\Psi^j$ are not explicitly shown. $b$ is the inverse (lattice) 't Hooft coupling $b=1/g^2N$ and $\kappa$ is the hopping parameter of Wilson fermions.
The symmetric twist tensor $z_{\mu\nu}$ is an element of Z($L$), whose explicit form is \vspace{-0.2cm} \begin{equation} z_{\mu\nu} = \exp \left( k {2\pi i \over L} \right), \ \ \ z_{\nu\mu}=z_{\mu\nu}^*, \ \ \ \mu>\nu \label{Z} \end{equation} \noindent The integer $k$ represents the flux through each plane. $k$ and $L$ should be co-prime, and a general prescription for choosing $k$ and $L$ to minimize the finite $N$ corrections is given in Ref. \cite{GAO}. The condition is essentially the same as the one imposed in the pure gauge model to prevent Z($L$) symmetry breaking\cite{TEK2}, which is necessary for reduction to work\cite{EK,EKB}. We recall that our prescription is to take both $k/L$ and $\bar{k}/L$ (defined by $k \bar{k} =1$ mod $L$) large enough. Throughout this paper we use $L$=17 ($N=L^2=289$), $k$=5, and thus $\bar{k}$=7. We have studied the model with $N_f=2$ by means of the Hybrid Monte Carlo method. For $N_f=1$, we have used the Rational Hybrid Monte Carlo method. Simulations have been done at two values of the inverse 't Hooft coupling $b$ = 0.35 and 0.36. For $N_f$=2, we have made simulations at eight values of $\kappa$ = 0.05, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15 and 0.16. For $N_f$=1, we attempted to make simulations at the same eight values of $\kappa$. However, we found that, for $\kappa > 0.155$, the CG iteration during the molecular dynamics evolution does not converge. Hence, for $N_f$=1 we took $\kappa$ = 0.05, 0.10, 0.11, 0.12, 0.13, 0.14, 0.15 and 0.155 instead. For every configuration we calculated the expectation value of ${\rm Tr}(U_\mu^\ell)$, for $1\le \ell \le (L-1)$, which are the order parameters of the Z$^4(L)$ symmetry. We confirm that, in all the simulations presented here, the quantities $<{\rm Tr} (U_\mu^\ell)>$ are compatible with zero within statistical errors.
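As an illustration of the co-primality condition, the short sketch below recovers $\bar{k}$ from $k\bar{k}=1$ mod $L$ for the values used here; it only checks this single condition, not the full prescription of Ref. \cite{GAO}.

```python
from math import gcd

def k_bar(k, L):
    """Return kbar with k * kbar = 1 (mod L); requires k, L co-prime."""
    if gcd(k, L) != 1:
        raise ValueError("k and L must be co-prime")
    return pow(k, -1, L)  # modular inverse (Python >= 3.8)

L, k = 17, 5
kb = k_bar(k, L)
# kb == 7, since 5 * 7 = 35 = 2 * 17 + 1
```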
For randomly chosen gauge configurations, we also calculated all traces of open loops within the effective $L^4$ box, checking that they all vanish within statistical errors. \vspace{-0.5cm} \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{histgram.eps} \end{center} \vspace{-1.cm} \caption{Histogram of ${1 \over N} {\rm ReTr}(U_\mu^L)$ for $N_f$=2 and $b=0.35$. (a) $\kappa$=0.15. (b) $\kappa$=0.16.} \label{fig_histogram} \end{figure} \vspace{-0.2cm} During the simulations we also calculated ${\rm Tr}(U_\mu^L)$. We point out that this quantity could have a non-zero value without breaking the Z$^4(L)$ symmetry of the reduced model. The association of our system with an ordinary lattice system of size $L^4$ indeed suggests that at sufficiently weak coupling a non-zero expectation value would be observed. However, we confirmed that $<{\rm Tr} (U_\mu^L)>$ is statistically compatible with zero for all our simulations except for two runs at $N_f=2$ and $\kappa=0.16$. This is illustrated in Fig.\ref{fig_histogram}, where we display the histogram of ${1 \over N} {\rm ReTr}(U_\mu^L)$ at $\kappa$=0.15 and 0.16 for $b=0.35$. While the histogram is centered at ${1 \over N} {\rm ReTr}(U_\mu^L)$=0 for $\kappa$=0.15, it is slightly shifted towards positive values for $\kappa$=0.16. We observe the same phenomenon at $b$=0.36. We expect the change of pattern to take place when a certain correlation length of the system becomes comparable to the effective size of the box $L$. Indeed, as shown in sect. 4, for $\kappa$=0.16 the dimensionless ratio $1/(L \sqrt{\sigma})$ reaches $\sim$0.6. \vspace{-0.5cm} \section{Quark mass $m_q$} \vspace{-0.2cm} One of the simplest quantities that one can study is the low-lying spectrum of the square of the hermitian Wilson Dirac matrix $Q^2=(D_W \gamma_5)^2$. The lowest eigenvalue $\lambda$ provides a possible definition of the quark mass as $m_q = \sqrt{\lambda}$, where we use lattice units $a$=1.
There is a small correction here since the boundary conditions prohibit zero-momentum states. Thus, the lowest eigenvalue contains a small $1/N$ correction to the mass, which we have neglected. On the other hand, the bare quark mass is given by $M_q^{(0)}=\frac{1}{2}\left(\frac{1}{\kappa}-8\right)$. Renormalization implies the necessity of both an additive and a multiplicative renormalization of the mass. Thus, we can parametrize the dependence as follows \vspace{-0.1cm} \begin{equation} m_q= A \left( \frac{1}{2 \kappa} - \frac{1}{2 \kappa_c}\right) ^\delta \left[ 1 + B\left( {1\over \kappa} - {1\over \kappa_c}\right) \right] \label{fitfunc_mq} \end{equation} where we have included a possible $O(m_q)$ correction since we are dealing with Wilson fermions. For a QCD-like theory one expects $\delta=1$~\cite{DDGLPT}. However, if the theory has an infrared fixed-point at $\kappa=\kappa_c$, the exponent could be different from 1. We have fitted our parameterization to our data in the range $\kappa=0.10-0.15$. The data point at $\kappa$=0.05 lies too far from the critical point to neglect higher-order corrections in $m_q$. On the other hand, we excluded $\kappa$=0.16, since in that case the system might suffer from finite size effects. The results, however, do not change significantly when including this value. Good fits are obtained at $N_f=2$ and the fitting parameters are $\delta=0.914(11)$, $\kappa_c=0.1744(3)$ at $b$=0.35, and $\delta=0.920(14)$, $\kappa_c=0.1722(5)$ at $b$=0.36. In both cases the $\delta=1$ value is statistically disfavored. We have repeated the same analysis for $N_f$=1. Here we use the data in the range $\kappa=0.10-0.155$. We get at $b$=0.35, $\delta=1.010(14)$, $\kappa_c=0.1834(5)$ and at $b$=0.36, $\delta=1.021(11)$, $\kappa_c=0.1804(3)$, compatible with the naively expected behavior $\delta$=1. Thus, we conclude that the quark mass $m_q$ gives evidence of a different critical behavior for $N_f$=2 and 1.
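For illustration, the fit ansatz of Eq.~(\ref{fitfunc_mq}) can be coded directly; the parameter values below are placeholders, not fit results.

```python
def m_q(kappa, A, delta, B, kappa_c):
    """Fit ansatz m_q = A * (1/(2k) - 1/(2k_c))^delta
                      * [1 + B * (1/k - 1/k_c)].

    Note that 1/kappa - 1/kappa_c = 2 * x with x = 1/(2 kappa) - 1/(2 kappa_c).
    """
    x = 1.0 / (2.0 * kappa) - 1.0 / (2.0 * kappa_c)
    return A * x ** delta * (1.0 + 2.0 * B * x)

# m_q vanishes at the critical hopping parameter kappa = kappa_c, and for
# delta = 1 and B = 0 it is simply linear in 1/(2 kappa) - 1/(2 kappa_c).
```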
The results for $m_q$ as a function of $\kappa$ for both cases are displayed in figs. \ref{fig_mq_2} and \ref{fig_mq_1}, together with the best fit lines. \vspace{-0.2cm} \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{mq_2.eps} \end{center} \vspace{-0.2cm} \caption{$\kappa$ dependence of $m_q$ for $N_f$=2.} \label{fig_mq_2} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{mq_1.eps} \end{center} \vspace{-0.2cm} \caption{$\kappa$ dependence of $m_q$ for $N_f$=1.} \label{fig_mq_1} \end{minipage} \end{figure} \vspace{-0.5cm} \section{String tension} The string tension $\sigma$ is extracted from the large $R$ behavior of the square Creutz ratios $\chi(R,R)$ as follows: \vspace{-0.7cm} \begin{eqnarray} \nonumber \chi(R,T) =&& -\log{ \frac{W(R+0.5,T+0.5) W(R-0.5,T-0.5)}{ W(R+0.5,T-0.5) W(R-0.5,T+0.5)} } \\ && \chi(R,R) \xrightarrow[R \to \infty]{} \sigma + {2\eta \over R^2} + {\xi \over R^4} + \cdots . \label{sigma} \end{eqnarray} \vspace{-0.1cm} This method has been used successfully for the pure gauge theory (twisted Eguchi-Kawai model) \cite{TEK3}, where the three-parameter ($\sigma$, $\eta$ and $\xi$) formula describes the data very well. For our adjoint fermion case, the smaller effective size $L$=17 and the lower statistics limit the range of $R$ values that can be fitted to Eq.~(\ref{sigma}). This introduces strong correlations among the parameters and a rather poor determination of the $\kappa$ dependence of each parameter. A better way to proceed is to fix one of the parameters and study the evolution of the other parameters with $\kappa$ and the inverse 't Hooft coupling $b$. From that point of view the best choice is $\eta$, since it is dimensionless and its value is connected to universal properties of an effective bosonic string theory, not expected to depend on $\kappa$ or $b$.
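As a sanity check of Eq.~(\ref{sigma}), note that for a toy Wilson loop obeying a pure area law, $W(R,T)=e^{-\sigma R T}$, the square Creutz ratio reproduces the string tension exactly, with no $1/R^2$ or $1/R^4$ corrections; the sketch below verifies this.

```python
import math

def creutz_ratio(W, R, T):
    """Square Creutz ratio chi(R, T) built from Wilson loop values W(R, T)."""
    return -math.log(
        W(R + 0.5, T + 0.5) * W(R - 0.5, T - 0.5)
        / (W(R + 0.5, T - 0.5) * W(R - 0.5, T + 0.5))
    )

# Toy Wilson loop obeying a pure area law with string tension sigma = 0.1
sigma = 0.1
W_area = lambda R, T: math.exp(-sigma * R * T)
chi = creutz_ratio(W_area, 4, 4)
# chi equals sigma exactly for a pure area law
```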
To determine the value of $\eta$ to use, we perform a simultaneous fit to all the data (with $\kappa>0.05$) fixing the value of $\eta$ and marginalizing over the remaining parameters. The resulting chi-square profiles ($\chi^2/{\rm n.o.d}$) are plotted in Fig.~\ref{fig_chi_square} for $N_f$=2 and 1. The figure shows that our hypothesis of a common value of $\eta$ is statistically satisfactory. The minimum of the chi-square curve determines the best choice for $\eta$, given by $\eta$=0.26 for $N_f$=2 and $\eta$=0.24 for $N_f$=1. The curve also provides a value for the error of order $\pm (0.04-0.05)$. \vspace{-0.2cm} \begin{figure}[htb] \begin{center} \includegraphics[width=0.4\textwidth]{chi_square.eps} \end{center} \vspace{-0.7cm} \caption{Chi-square per degree of freedom $\chi^2/n.o.d$ as functions of $\eta$ both for $N_f$=2 and 1.} \label{fig_chi_square} \end{figure} \vspace{-0.2cm} \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{sigma_2_35.eps} \end{center} \vspace{-0.7cm} \caption{$m_q$ dependence of $\sigma$ for $N_f$=2 at b=0.35.} \label{fig_sigma_2_35} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{sigma_2_36.eps} \end{center} \vspace{-0.7cm} \caption{$m_q$ dependence of $\sigma$ for $N_f$=2 at b=0.36.} \label{fig_sigma_2_36} \end{minipage} \end{figure} Fixing the value of $\eta$, we can obtain good fits to the Creutz ratios using the parameterization of Eq.~(\ref{sigma}). The resulting values of the string tension $\sigma$ as a function of $m_q$ for $N_f$=2 at b=0.35 are displayed in Fig. \ref{fig_sigma_2_35}. The central black symbols are obtained with $\eta$=0.26, while red and blue symbols are obtained by setting $\eta$ to 0.21 and 0.30, respectively. From these values it is clear that the string tension depends uniformly on $\eta$. The band spanned between the values for $\eta$=0.21 and 0.30 serves as a rough estimate of the systematic error.
If the theory is governed by an infrared fixed point deformed with a relevant mass term $m_q {\bar \Psi} \Psi$, all physical quantities having positive mass dimensions should vanish as $m_q \to 0$. In particular, the string tension, having dimensions of mass squared, should behave as \newpage \vspace{-0.8cm} \begin{equation} \sigma=A m_q^{\alpha}(1+B m_q) \label{scaling_sigma} \end{equation} where we have included possible $O(m_q)$ corrections. Our $N_f=2$ data are perfectly consistent with this formula, as shown in Fig.~\ref{fig_sigma_2_35} for $b=0.35$ and Fig.~\ref{fig_sigma_2_36} for $b=0.36$. Unfortunately, the exponent $\alpha$ has a large uncertainty. A fit in the range $\kappa\in[0.10,0.15]$ for the $b$=0.35, $\eta$=0.26 data gives $\alpha$=1.17. Varying $\eta$ within the allowed range produces a systematic error of $0.12$. For the $b$=0.36, $\eta$=0.26 data one gets $\alpha$=1.42 with a systematic error of $0.25$. \vspace{-0.2cm} \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{sigma_1_35.eps} \end{center} \vspace{-0.7cm} \caption{$m_q$ dependence of $\sigma$ for $N_f$=1 at b=0.35.} \label{fig_sigma_1_35} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=70mm]{sigma_1_36.eps} \end{center} \vspace{-0.7cm} \caption{$m_q$ dependence of $\sigma$ for $N_f$=1 at b=0.36.} \label{fig_sigma_1_36} \end{minipage} \end{figure} The previous results show that, for the $N_f=2$ case, the string tension seems to vanish at the critical point $m_q=0$, in accordance with the infrared fixed point hypothesis. This contrasts with the results for $N_f$=1, summarized in Figs.~\ref{fig_sigma_1_35} and \ref{fig_sigma_1_36}. The data seem to approach a non-zero value at the critical point, as expected for a QCD-like theory, with confinement and spontaneous symmetry breaking.
From the vanishing of the string tension at the critical point one can obtain a determination of the mass anomalous dimension $\gamma_*$ at the infrared fixed point. Equating the exponent $\alpha'$ of $(1/\kappa-1/\kappa_c)$ with $2/(1+\gamma_*)$, one gets values of $\gamma_*=0.87$ and $0.53$ for $b$=0.35 and 0.36 respectively. The determination is dominated by the systematic error of order $\pm(0.3-0.4)$. There is no fundamental difficulty in reducing these errors significantly. As mentioned previously, an important source comes from the small effective lattice volume $L^4=17^4$ of our data. However, increasing $N=L^2$ poses an important challenge within the present computer power. A more promising approach follows by employing partial volume reduction~\cite{NN,GGO}. In particular, one can reduce the system to a $2^4$ lattice, which at large $N$ should behave as living in a $(2\sqrt{N})^4$ box. A finer analysis of the $\kappa$ dependence close to $\kappa_c$ is also important. Alternatively, one can use other observables to determine $\gamma_*$. One possibility is to use the distribution of eigenvalues of $Q^2$. Some results using this method have already been presented~\cite{GGLO}. \vspace{0.1cm} \noindent {\bf Acknowledgments} A.G-A is supported by Spanish grants from MINECO: FPA2012-31686, FPA2012-31880, FPA2009-09017, SEV-2012-0249 and CPAN CSD2007-00042; Comunidad de Madrid: HEPHACOS S2009/ESP-1473, and European Union PITN-GA-2009-238353 (ITN STRONGnet). M. O. is supported by the Japanese MEXT grant No 23540310. The calculation has been done on the Hitachi SR16000-M1 computer at the High Energy Accelerator Research Organization (KEK), supported by the Large Scale Simulation Program No.12/13-01 (FY2012-13). The authors thank the Hitachi system engineers for their help in highly optimizing the present simulation code. \vspace{-0.3cm}
\section{Introduction} \label{sec:intro} The systematic computation of various classes of on-shell scattering amplitudes has become a very active field of research in the past few decades, and several very efficient methods have been put forward, involving the spinor helicity formalism, on-shell recursion relations, Ward identities, and KLT relations, just to name a few---see~\cite{Elvang:2015rqa} for a review. The main paradigm consists in avoiding the use of Lagrangian field theories, with all their plethoric structures, and relying instead on more basic features (symmetries, kinematics, ...) that allow one to compute the amplitudes more efficiently. One feature which is common to most of these very effective methods---particularly those where the spinor helicity tricks are used as the main tool---is the masslessness of the propagating particles. As a complementary method, in the present manuscript we consider a worldline approach, which instead is ideally suited to massive propagating particles. Historically, the first pioneering work on the worldline approach to Quantum Field Theory is due to Feynman, who proposed a particle path integral representation for the dressed propagator of a scalar field coupled to electromagnetism~\cite{Feynman:1951gn}. However, this formulation was not taken seriously as an alternative to the standard Feynman diagram method for the actual computation of effective actions and scattering amplitudes until the early nineties, when Bern and Kosower~\cite{Bern:1990cu, Bern:1991aq} derived novel rules for the construction of one-loop $N$-gluon amplitudes from first-quantized open string theory, and similar rules were shortly afterwards derived from the closed string for one-loop $N$-graviton amplitudes \cite{bedush}.
For the gluonic case, these rules were then rederived from point particle path integrals by Strassler~\cite{Strassler:1992zr}, which established this ``worldline formalism'' as a serious alternative to Feynman diagrams, and triggered a host of generalizations to other types of amplitudes and effective actions---see Ref.~\cite{Schubert:2001he} for an earlier account of the development of the method. So far, the majority of developments and applications of the worldline approach have been at the loop level: multiloop calculations~\cite{8,Schmidt:1994aq}, worldline methods with strong external fields~\cite{17,Reuter:1996zm, Gies:2005sb, Dunne:2005sx}, the worldline formalism in curved spacetime~\cite{Bastianelli:2002fv}, one-loop quantum gravity~\cite{Bastianelli:2013tsa, Bastianelli:2019xhi} and photon-graviton mixing in an electromagnetic field~\cite{Bastianelli:2004zp}, the worldline Monte-Carlo approach to the Casimir effect~\cite{Gies:2003cv}, higher-spin field theory~\cite{Bastianelli:2007pv, Bastianelli:2012bn}, and applications to QFT on manifolds with boundary~\cite{Bastianelli:2006hq, Corradini:2019nbb}, noncommutative QFT's~\cite{Bonezzi:2012vr}, form-factor decompositions of off-shell gluon amplitudes \cite{Ahmadiniaz:2012xp,Ahmadiniaz:2016qwn}, and many more. On the other hand, the worldline approach to dressed propagators and to the associated scattering amplitudes is a much less developed subject of research, though the Bern-Kosower rules for a scalar particle line coupled to electromagnetism in vacuum were found soon after their one-loop counterparts~\cite{Daikouji:1995dz, Ahmadiniaz:2015kfq}. More recently, master formulas for a scalar particle in a constant background field were derived~\cite{Ahmad:2016vvw}, and the coupling to non-abelian fields in vacuum was also studied~\cite{Ahmadiniaz:2015xoa}. Generalizations to propagators of fields with spin are even rarer.
The straightforward procedure would be to consider locally supersymmetric spinning particle models on the worldline~\cite{Gershun:1979fb}---recently Einstein gravity was studied through the BRST quantization of an ${\cal N}=4$ spinning particle model~\cite{Bonezzi:2018box}. However, there are technical difficulties to be overcome in the path integral quantization of such models on the open line, since the gravitino present in the locally supersymmetric model cannot be completely gauged away, and the coherent state boundary conditions for the fermionic coordinates, responsible for providing the spinorial degrees of freedom to the particle, do not appear to be very convenient. A suitable alternative approach is to employ the `Symb' map developed in~\cite{brezinmarinov-77,Fradkin:1991ci}, which reproduces the spin-factor potential in terms of fermionic coordinates with antiperiodic boundary conditions; the resulting particle models are then globally supersymmetric. This approach allowed the computation of some previously neglected one-particle-reducible contributions to the fermion propagator in a constant field~\cite{Ahmadiniaz:2017rrk}. Moreover, a derivation of a master formula for the tree level $N$-photon fermion propagator is near completion~\cite{fppaper1}. In the present manuscript we instead take a path towards the derivation of tree level amplitudes with gravitons, using as a main tool the worldline approach in curved space. In fact, at the level of one photon---i.e. for the gravitational photoproduction process---the amplitude displays a very interesting factorization property~\cite{geohal-81, chshso-95, Holstein:2006ry, Bastianelli:2012bz, Bjerrum-Bohr:2014lea, Ahmadiniaz:2016vai}, which is briefly reviewed below in a dedicated subsection. However, this nice factorization property appears not to work beyond the $N=1$ case, since there are too few conservation laws---see~\cite{geohal-81,chshso-95} for a detailed discussion.
Here, we consider a scalar particle line perturbatively coupled to electromagnetism and to gravity, and provide a master formula which involves the inclusion of $N$ photons and one graviton into the scalar line, i.e. we add a graviton to Daikouji et al's formula~\cite{Daikouji:1995dz}, which was also rederived in \cite{Ahmadiniaz:2015kfq}. The inclusion of even a single graviton is by no means trivial, for various reasons. Firstly, it boils down to the application of the worldline formalism in curved space which, although well understood by now, is certainly trickier than its flat space counterpart. In fact, the coupling to gravity in the perturbative approach requires the use of regularization schemes and a careful treatment of the non-trivial path integral measure~\cite{Bastianelli:2006rx}. Moreover, the graviton can couple directly to the scalar line, but it may also be emitted from a photon line, since gravity couples to the photon stress tensor. This second contribution involves diagrams that are one-particle reducible in the photon lines, akin to what occurs in the presence of a non-abelian gauge field where, say, a gluon emitted from the scalar line can split into two or three gluons. Similar issues were indeed already discussed, for instance, in the worldgraph approach to Yang-Mills amplitudes~\cite{Dai:2008bh} and in worldline calculations~\cite{FMB-thesis}. A particularly elegant feature of the original Bern-Kosower and Bern-Dunbar-Shimada rules for gluon and graviton amplitudes is that they provide a simple rule for constructing the reducible contributions from the irreducible ones at the integrand level, instead of the usual ``sewing trees onto loops'' procedure. Here, we provide a similar novel replacement rule, which allows us to obtain the reducible part of the amplitudes with the graviton in terms of the scalar lines with only photons attached, thus in terms of amplitudes for which a convenient generating master formula exists.
For this purpose, it will be essential that in the worldline approach to scattering amplitudes there is a priori no need to impose on-shell conditions on the external lines. In the following, we first rederive the $N$-photon scalar propagator and the associated master formula, since it is one of our main tools. Then we consider the insertion of a graviton and single out the irreducible part of the amplitudes, by using a helpful parametrization of the graviton polarization, and the reducible part, through the aforementioned replacement rule. This allows us to give a compact formula for the full tree level amplitude with $N$ photons, one graviton and two scalars. We then test our master formula by checking the on-shell transversality in the photon and graviton lines. In the graviton case, this requires a conspiracy between the reducible and irreducible contributions that becomes rather transparent in our approach. Some computational details, concerning amplitudes with $N\leq 2$, are relegated to the appendix. \section{$N$-photon scalar propagator from the worldline formalism} \label{sec:photon-ampl} The photon-dressed propagator in scalar QED can be efficiently obtained using the line path integral of a scalar particle in the presence of an external electromagnetic field, \begin{align} \Big\langle \phi(x') \bar \phi(x)\Big\rangle_A = \int_0^\infty dT e^{-m^2T}\int_{x(0)=x}^{x(T)=x'}Dx~e^{-\int_0^T d\tau\big(\frac1{4}\dot x^2+ie\dot x\cdot A(x)\big)}~. \label{eq:propx} \end{align} The $N$-photon scalar propagator, i.e. the scalar propagator with the insertion of $N$ photons, can be obtained with the straightforward recipe that we briefly review here. Firstly, write the external field as a sum of $N$ photons \begin{align} A_\mu(x(\tau))= \sum_{l=1}^N\varepsilon_{l,\mu} e^{ik_l\cdot x}\,, \end{align} then extract from~\eqref{eq:propx} the multi-linear part in the various polarizations $\varepsilon_l$, and Fourier transform in the two external scalar lines.
This leads to \begin{align} D^{(N)}(p,p';\varepsilon_1,k_1,\dots, \varepsilon_N,k_N)&=(-ie)^N \int_0^\infty dT e^{-m^2T}\int d^4x\int d^4x' e^{i(p\cdot x+p'\cdot x')}\nonumber\\& \times \int_{x(0)=x}^{x(T)=x'}Dx~e^{-\int_0^Td\tau\frac1{4}\dot x^2}\prod_{l=1}^N\int_0^T d\tau_l \, \varepsilon_l\cdot \dot x(\tau_l) e^{ik_l\cdot x(\tau_l)}~. \end{align} It is thus convenient to split the particle path in terms of a background $\bar x^\mu(\tau)=x^\mu +(x'^\mu-x^\mu)\tfrac{\tau}{T}$ and fluctuations $q^\mu(\tau)$ with vanishing boundary conditions. One thus gets \begin{align} &D^{(N)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N)=(-ie)^N \int_0^\infty dT e^{-m^2T}\int d^4x\int d^4x' e^{i(p\cdot x+p'\cdot x')-\frac1{4T}(x-x')^2} \nonumber\\& \times e^{\sum_l \big(ik_l\cdot x+\tfrac{\varepsilon_l}{T}\cdot(x'-x)\big)}\int_{q(0)=0}^{q(T)=0}Dq~e^{-\int_0^T d\tau\frac1{4}\dot q^2} \prod_{l=1}^N\int_0^Td\tau_l \, e^{ik_l\cdot \big((x'-x)\tfrac{\tau_l}{T}+q(\tau_l)\big)+\varepsilon_l\cdot \dot q(\tau_l)}\Biggr|_{\rm m.l.}~, \end{align} where 'm.l.' indicates that we are only meant to pick out the multilinear part in all the polarizations. The latter path integral thus provides the correlation function of the product of $N$ photon vertex operators \begin{align} V_A[\varepsilon,k]=e^{ik\cdot x+\tfrac{\varepsilon}{T}\cdot(x'-x)}\int_0^Td\tau \,e^{ik\cdot \big((x-x')\tfrac{\tau}{T}+q(\tau)\big)+\varepsilon\cdot \dot q(\tau)}\Biggr|_{\rm lin}\,, \end{align} with respect to the Gaussian measure $\int Dq \,e^{-\frac1{4}\int \dot q^2}$, which has normalization $\frac1{(4\pi T)^{D/2}}$ and yields the Green's functions \begin{align} &\big\langle q^\alpha(\tau) q^{\alpha'}(\tau')\big\rangle = -2\delta^{\alpha\alpha'}\Delta(\tau,\tau')\,,\\ &\label{eq:delta} \Delta(\tau,\tau')=\frac{\tau \tau'}{T}+\frac12 |\tau-\tau'|-\frac12(\tau+\tau')~. 
\end{align} Thus, after some straightforward algebra one finds the Bern-Kosower-like master formula originally obtained by Daikouji et al~\cite{Daikouji:1995dz} and later in the worldline formalism in \cite{Ahmadiniaz:2015kfq}, i.e. \begin{align} &\widetilde D^{(N)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N)=(-ie)^N\int_0^\infty dT e^{-T(m^2+p'^2)}\prod_{l=1}^N\int_0^T d\tau_l\nonumber\\ &\times\exp\Big\{(p'-p)\cdot \sum_{l=1}^N(-k_l\tau_l +i\varepsilon_l ) +\sum_{l,l'=1}^N\big( k_l\cdot k_{l'}\Delta_{l-l'}-2i\varepsilon_l\cdot k_{l'}\dot \Delta_{l-l'} +\varepsilon_l\cdot \varepsilon_{l'} \ddot \Delta_{l-l'}\big) \Big\}\Biggr|_{\rm m.l.}~, \label{eq:tildeA} \end{align} where \begin{align} \Delta_{l-l'} := \frac12 |\tau_l-\tau_{l'}|\,, \end{align} is the translation-invariant part of~\eqref{eq:delta}. Above we have also stripped off the overall momentum-conservation delta function. The Feynman amplitude for the tree-level scattering of two scalars and $N$ photons can thus be obtained from~\eqref{eq:tildeA} by truncating the external scalar lines, i.e. multiplying by $(p^2+m^2)(p'^2+m^2)$, \begin{align} {\cal D}^{(N)} (p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N)= (p^2+m^2)(p'^2+m^2)\widetilde D^{(N)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N)~. \label{eq:MA} \end{align} Note that, as already mentioned in the Introduction, this expression holds off the mass-shell of the external particles. However, going on-shell leads to transversality in all the photon lines, upon the replacement $\varepsilon_l(k_l)\ \to\ k_l$, as will be reviewed below. \section{Insertion of a graviton} \label{sec:gravi-ampl} The computation of scattering amplitudes of the scalar particle with photons and gravitons, can be performed by considering the worldline representation in curved space~\cite{Bastianelli:2000nm}. 
For the propagator of a scalar particle minimally coupled to gravity, we have~\footnote{Here `minimally coupled' refers to the minimal coupling in the worldline action, i.e. $\bar\xi=\xi-\tfrac14=0$, which renders the graviton vertex operator linear in $\epsilon_{\mu\nu}$, as opposed to the minimal coupling in the spacetime action, which corresponds to $\xi=0$. } \begin{align} \Big\langle \phi(x') \bar \phi(x)\Big\rangle_{A,g}&= \int_0^\infty dT e^{-m^2T}\int_{x(0)=x}^{x(T)=x'}DxDaDbDc \nonumber \\ &\times e^{-\int_0^Td\tau\big(\frac1{4}g_{\mu\nu}(x)(\dot x^\mu \dot x^\nu +a^\mu a^\nu+b^\mu c^\nu)+ie\dot x\cdot A(x)\big)}\,, \label{eq:dressed-prop} \end{align} where the fields $a^\mu$, and $b^\mu$ and $c^\mu$ are commuting, respectively anti-commuting, auxiliary fields (Lee-Yang ghosts) which were found to suitably represent the Einstein-invariant path integral measure, and have vanishing boundary conditions. By expanding the metric about the flat background \begin{align} g_{\mu\nu}(x) =\delta_{\mu\nu} +\kappa \epsilon_{\mu\nu}e^{ik_0\cdot x}\,, \end{align} and using the same split described above for the particle paths---we can read off the graviton vertex operator \begin{align} V_g[\epsilon,k_0]=e^{ik_0\cdot x+\frac1{T^2}(x'-x)\cdot\epsilon\cdot(x'-x)}\int_0^Td\tau \,e^{ik_0\cdot \big((x'-x)\frac{\tau}{T}+q\big)+\epsilon_{\mu\nu}\big( \frac2T (x'-x)^\mu \dot q^\nu+\dot q^\mu \dot q^\nu +a^\mu a^\nu +b^\mu c^\nu\big)} \Biggr|_{\rm lin}\,, \label{eq:grav-vert-op} \end{align} along with auxiliary fields propagators \begin{align} &\big\langle a^\mu(\tau)a^\nu(\tau')\big\rangle =2\delta^{\mu\nu}\delta(\tau,\tau')\,,\\ &\big\langle b^\mu(\tau)c^\nu(\tau')\big\rangle =-4\delta^{\mu\nu}\delta(\tau,\tau')~. 
\end{align} Hence, the irreducible part of the tree-level scalar propagator with the insertion of $N$ photons and one graviton reads \begin{align} & D^{(N,1)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0) =(-ie)^N \left(-\frac{\kappa}{4}\right)\int_0^\infty dT e^{-m^2T}\nonumber\\&\times\int d^4x\int d^4x' e^{i(p\cdot x+p'\cdot x')-\frac1{4T}(x-x')^2} \frac1{(4\pi T)^\frac D2} \, \Bigg\langle\prod_{l=1}^N V_A[\varepsilon_l,k_l]\, V_g[\epsilon,k_0]\Bigg\rangle\,, \end{align} where only the part linear in all the polarizations ($\varepsilon$'s and $\epsilon$) has to be retained. In the next sections we provide a specific recipe to handle this task and obtain a useful master formula for the full Feynman amplitude. \subsection{Irreducible part of the amplitude} \label{sec:irred-part} In order to explicitly compute the irreducible part of the $N$-photon one-graviton amplitude (see Fig. \ref{1grNph-irr}) we find it convenient to parametrize the graviton polarization as \begin{align} &\epsilon_{\mu\nu} := \lambda_{\mu} \rho_{\nu}\,, \label{eq:e}\\ & \varepsilon_{0\mu} := \lambda_{\mu} + \rho_{\mu}~, \label{eq:lr} \end{align} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\textwidth]{1grnph.png} \caption{The Feynman diagram representation (in configuration space) for irreducible contributions to $N$-photon one-graviton amplitude. The diagrams in the second and third lines involve quartic vertices that in the worldline approach come from delta functions, e.g. the first one is given by $\delta(\tau_0-\tau_1)$ etc.} \label{1grNph-irr} \end{center} \end{figure} where, in Eq.~\eqref{eq:e}, symmetrization between indices is implied. Such parametrization has to be understood as a simple book-keeping device to combine photon and graviton insertions together; at the end the graviton polarization is reconstructed from the term simultaneously linear in $\lambda$ and $\rho$. 
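As a simple illustration of this book-keeping, the part of $\varepsilon_0^\mu\varepsilon_0^\nu$ that is simultaneously linear in $\lambda$ and $\rho$ reproduces (twice) the symmetrized graviton polarization,
\begin{align}
\varepsilon_0^\mu\,\varepsilon_0^\nu\Big|_{{\rm lin.}\,\lambda,\,\rho}=\lambda^\mu\rho^\nu+\rho^\mu\lambda^\nu=2\,\epsilon^{\mu\nu}~,
\end{align}
so that terms quadratic in $\varepsilon_0$ directly reconstruct the graviton insertion.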
In fact, with a single graviton insertion, the ghost contribution cancels against the singular part of the $\langle \dot q^\mu(\tau_0) \dot q^\nu(\tau_0) \rangle$ propagator that appears in the graviton vertex operator. We can thus neglect the ghost contributions, provided we take $\langle \dot q^\mu(\tau_0) \dot q^\nu(\tau_0) \rangle \cong -\frac{2}{T} \delta^{\mu\nu}$ in the graviton sector. The graviton vertex operator can thus be written as \begin{align} V_g[\epsilon,k_0]=e^{ik_0\cdot x+\frac{\varepsilon_0}{T}\cdot(x'-x)}\int_0^Td\tau_0 \,e^{ik_0\cdot \big((x'-x)\frac{\tau_0}{T}+q(\tau_0)\big)+\varepsilon_0\cdot \dot q(\tau_0) }\Big|_{\rm lin.\, \lambda,\, \rho}\,, \end{align} which has the same form as the photon counterpart, with the only subtlety that the linear part in $\lambda$ and $\rho$ comes from the quadratic part in $\varepsilon_0$. We thus get the ``$N$-photon one-graviton scalar propagator'' \begin{align} &\widetilde D^{(N,1)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0)=(-ie)^N\left(-\frac{\kappa}{4}\right)\int_0^\infty dT e^{-T(m^2+p'^2)}\prod_{l=0}^N\int_0^T d\tau_l\nonumber\\ &\times\exp\Big\{(p'-p)\cdot \sum_{l=0}^N(-k_l\tau_l +i\varepsilon_l ) +\sum_{l<l'=0}^N\Big( k_l\cdot k_{l'}|\tau_l-\tau_{l'}|+i(\varepsilon_{l'}\cdot k_{l}-\varepsilon_{l}\cdot k_{l'}){\rm sgn}(\tau_l-\tau_{l'})\nonumber\\&\hskip8cm +2\varepsilon_l\cdot \varepsilon_{l'}\, \delta (\tau_l-\tau_{l'}) \Big) \Big\}\Bigr|_{\rm m.l.}~, \label{eq:tildeAg} \end{align} where `m.l.' stands for `multilinear' i.e. linear in all $\varepsilon_l$, $l=1,...,N$ and linear in $\lambda$ and $\rho$, and with $\ddot \Delta_{0-0'} =0$. 
On the mass shell of the scalar particle, upon truncation of the external scalar lines, the latter provides a contribution to the tree-level amplitude with $N$ photons, one graviton and two scalars that we will refer to as `irreducible' \begin{align} &{\cal D}^{(N,1)}_{irred} (p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0) \nonumber \\&= (p^2+m^2)(p'^2+m^2) \widetilde D^{(N,1)}(p,p'; \varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0)~, \end{align} meaning that it cannot be divided into two subdiagrams by cutting a photon line or the graviton line. Let us single out some special cases of the previous formula which will be helpful later, beginning with the case $N=0$, i.e. the `graviton-scalar' vertex, \begin{align} &\widetilde D^{(0,1)}(p,p';\epsilon,k_0)=\left(-\frac{\kappa}{4}\right)\int_0^\infty dT~e^{-T(m^2+p'^2)}\int_0^T d\tau_0 ~e^{(p'-p)\cdot(-k_0{\tau_0} +i\varepsilon_0 )}\Big|_{\rm m.l.}~, \end{align} which, using momentum conservation (so that $(p'-p)\cdot k_0=p^2-p'^2$), can be reduced to \begin{align} \widetilde D^{(0,1)}(p,p';\epsilon,k_0)=\frac{\kappa}{4} (p'-p)^\mu \epsilon_{\mu\nu} (p'-p)^\nu \frac{1}{(p'^2+m^2)(p^2+m^2)}\,, \end{align} and, upon truncation, leads to the amplitude (vertex) \begin{align} {\cal D}^{(0,1)} (p,p';\epsilon,k_0)= \frac{\kappa}{4} (p'-p)^\mu \epsilon_{\mu\nu} (p'-p)^\nu~.
\label{eq:g} \end{align} For $N=1$, the irreducible part of the gravitational photoproduction amplitude can be easily obtained from \begin{align} &\widetilde D^{(1,1)}(p,p'; \varepsilon_1,k_1;\epsilon,k_0)=(-ie)\left(-\frac{\kappa}{4}\right)\int_0^\infty dT e^{-T(m^2+p'^2)}\int_0^T d\tau_0 \int_0^T d\tau_1\nonumber\\ &\times e^{(p'-p)\cdot (-k_0 \tau_0 -k_1\tau_1+i\varepsilon_0+i\varepsilon_1)}\, e^{k_0\cdot k_1 |\tau_0-\tau_1| +i(\varepsilon_1\cdot k_0-\varepsilon_0\cdot k_1){\rm sgn}(\tau_0-\tau_1) +2\varepsilon_0\cdot\varepsilon_1\delta(\tau_0-\tau_1)}\Big|_{\rm m.l.}~, \end{align} where the $\delta(\tau_0-\tau_1)$ part yields the seagull diagram, whereas the time-ordered parts ($\tau_0>\tau_1$ and $\tau_0<\tau_1$) yield the diagrams where the photon and the graviton are singly emitted by the scalar line. We thus get the following irreducible contribution to the Feynman amplitude \begin{align} &{\cal D}_{irred}^{(1,1)}(p,p';\varepsilon_1,k_1;\epsilon,k_0)=(p'^2+m^2)(p^2+m^2)\widetilde D^{(1,1)}(p,p'; \varepsilon_1,k_1;\epsilon,k_0)\nonumber\\&= e\kappa \Bigl[ (p-p')\cdot \epsilon \cdot \varepsilon_1 +\frac{\varepsilon_1\cdot p' p\cdot\epsilon\cdot p}{p\cdot k_0} -\frac{\varepsilon_1\cdot p\, p'\cdot\epsilon\cdot p'}{p\cdot k_1} \Bigr]~. \label{eq:one-one-irred} \end{align} Finally, let us consider the irreducible contribution to the two-photon one-graviton amplitude, which is obviously trickier than the previous ones, though the worldline approach allows one to obtain a quite compact representation.
We report here the final result (details of the computation are given in Appendix~\ref{sec:appendix}), which reads \begin{align} &{\cal D}_{irred}^{(2,1)}(p,p';\varepsilon_1,k_1,\varepsilon_2,k_2;\epsilon,k_0) = \kappa e^2\Bigg\{ 2(\varepsilon_1 \epsilon \varepsilon_2) -2 \frac{\varepsilon_1\cdot \varepsilon_2\, (p'\epsilon p')}{m^2+(p'+k_0)^2}-2 \frac{\varepsilon_1\cdot \varepsilon_2\, (p\epsilon p)}{m^2+(p+k_0)^2}\nonumber\\& +2\frac{\varepsilon_1\cdot p\, (\varepsilon_2\epsilon (p'-p-k_1))}{m^2+(p+k_1)^2}+ 2\frac{\varepsilon_1\cdot p'\, (\varepsilon_2 \epsilon (p-p'-k_1))}{m^2+(p'+k_1)^2} \nonumber\\& +2\frac{\varepsilon_2\cdot p\, (\varepsilon_1 \epsilon (p'-p-k_2))}{m^2+(p+k_2)^2}+ 2\frac{\varepsilon_2\cdot p'\, (\varepsilon_1 \epsilon (p-p'-k_2))}{m^2+(p'+k_2)^2} \nonumber\\& +4\frac{(p'\epsilon p')\, \varepsilon_1\cdot (p+k_2)\, \varepsilon_2\cdot p}{((p+k_2)^2+m^2)((p'+k_0)^2+m^2)} +4\frac{(p'\epsilon p')\, \varepsilon_2\cdot (p+k_1)\, \varepsilon_1\cdot p}{((p+k_1)^2+m^2)((p'+k_0)^2+m^2)} \nonumber\\ &+ 4\frac{(p\epsilon p)\, \varepsilon_1\cdot (p'+k_2)\, \varepsilon_2\cdot p'}{((p+k_0)^2+m^2)((p'+k_2)^2+m^2)}+4\frac{(p\epsilon p)\, \varepsilon_2\cdot (p'+k_1)\, \varepsilon_1\cdot p'}{((p+k_0)^2+m^2)((p'+k_1)^2+m^2)} \nonumber\\ &+4\frac{((p+k_1)\epsilon (p'+k_2))\, \varepsilon_1\cdot p\, \varepsilon_2\cdot p'}{((p+k_1)^2+m^2)((p'+k_2)^2+m^2)}+4\frac{((p+k_2)\epsilon (p'+k_1))\, \varepsilon_2\cdot p\, \varepsilon_1\cdot p'}{((p+k_2)^2+m^2)((p'+k_1)^2+m^2)}\Bigg\}~. \end{align} In the next subsection we tackle the reducible part of the amplitude. \subsection{Reducible part of the amplitude} \label{sec:red-part} The external graviton can couple directly to the scalar line, as reproduced by the formulas described in the previous subsection, but it can also couple to the photon lines---see Fig.~\ref{1grNph-red} for the diagrammatic representation of these contributions.
From a field-theory viewpoint, this is encoded in the vertex \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\textwidth]{1grnph-red.png} \caption{Feynman diagram representation of the reducible contribution to the $N$-photon one-graviton amplitude.} \label{1grNph-red} \end{center} \end{figure} \begin{align} {\cal V}[A,h] =\frac{\kappa}{2}\int d^4x h_{\mu\nu} T^{\mu\nu} = \frac{\kappa}{2}\int d^4x\, h_{\mu\nu} \Big( F^{\mu\alpha} F^\nu{}_\alpha -\frac14 \delta^{\mu\nu} F^{\alpha\beta}F_{\alpha\beta}\Big)\,, \end{align} which, using the tracelessness of the on-shell graviton, leads to the following tree-level amplitude between two photons and one graviton \begin{align} \Gamma_{g\gamma\gamma}[\varepsilon,k,\varepsilon',k';\epsilon,k_0]=\kappa\Big[ (k\epsilon k) \varepsilon\cdot \varepsilon' + (\varepsilon \epsilon \varepsilon') k\cdot k_0- (\varepsilon\epsilon k) k\cdot \varepsilon'-(k \epsilon \varepsilon') \varepsilon \cdot k_0\Big]\,, \label{eq:ggg} \end{align} where $(a\epsilon b):=a_\mu \epsilon^{\mu\nu}b_\nu$, and we have used the transversality conditions $k_{0\mu} \epsilon^{\mu\nu}=k_{\mu} \varepsilon^\mu =0$ and the conservation law $k'=-(k+k_0)$. This vertex can be used to construct the reducible part of the amplitude with the following recipe. Let us start from the one-photon two-scalar amplitude \begin{align} {\cal D}^{(1)}(p,p';\varepsilon',k')=e \varepsilon'\cdot (p'-p)~, \label{eq:one-red} \end{align} which can be easily read off from~\eqref{eq:MA}. The reducible part of the one-photon one-graviton two-scalar amplitude is then obtained by simply multiplying expressions~\eqref{eq:ggg} and~\eqref{eq:one-red}, and using the replacement rule \begin{align} \varepsilon'^\alpha \varepsilon'^\beta\ \longrightarrow\ \frac{\delta^{\alpha\beta}}{k'^2}\,, \end{align} which is the photon propagator in the Feynman gauge.
By renaming the photon polarization and momentum as $\varepsilon_1$ and $k_1$, we thus get \begin{align} {\cal D}^{(1,1)}_{red}(p,p';\varepsilon_1,k_1;\epsilon,k_0)&= e\kappa (p'-p)_\mu \frac{\varepsilon_1^\mu (k_1 \epsilon k_1) +(\varepsilon_1 \epsilon)^\mu k_1\cdot k_0 -k_1^\mu (\varepsilon_1\epsilon k_1) -(k_1 \epsilon)^\mu \varepsilon_1\cdot k_0}{2k_1\cdot k_0}~. \label{eq:one-one-red} \end{align} In other words, we can obtain the latter as \begin{align} {\cal D}^{(1,1)}_{red}(p,p';\varepsilon_1,k_1;\epsilon,k_0)= {\cal D}^{(1)}(p,p';\upsilon_1,k_1+k_0)\,, \end{align} i.e., by starting from~\eqref{eq:one-red} and performing the replacement \begin{align} & \varepsilon^\mu_1\ \to \ \upsilon_1^\mu := \kappa \frac{\varepsilon_1^\mu (k_1 \epsilon k_1)+(\varepsilon_1 \epsilon)^\mu k_1\cdot k_0 -k_1^\mu (\varepsilon_1\epsilon k_1) -(k_1 \epsilon)^\mu \varepsilon_1\cdot k_0}{2k_1\cdot k_0} \label{eq:upsilon}\,, \\ & k_1^\mu \ \to \ k_1^\mu+k_0^\mu~. \end{align} Note that~\eqref{eq:upsilon} is transversal upon the replacement $\varepsilon_1\ \to \ k_1$. The rule above can obviously be extended to the $N$-photon two-scalar amplitude constructed above in~\eqref{eq:MA}, which thus yields the following reducible contribution \begin{align} {\cal D}^{(N,1)}_{red}(p,p';\varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0) = \sum_{i=1}^N {\cal D}^{(N)}(p,p';\varepsilon_1,k_1,\dots,\upsilon_i,k_i+k_0,\dots \varepsilon_N,k_N) ~.
\label{eq:red-part} \end{align} Thus, the full tree-level amplitude with $N$ photons, one graviton and two scalars reads \begin{align} {\cal D}^{(N,1)}(p,p';\varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0)&={\cal D}^{(N,1)}_{irred}(p,p';\varepsilon_1,k_1,\dots, \varepsilon_N,k_N;\epsilon,k_0) \nonumber\\&\hspace{-1cm}+\sum_{l=1}^N {\cal D}^{(N)}(p,p';\varepsilon_1,k_1,\dots,\upsilon_l,k_l+k_0,\dots \varepsilon_N,k_N)\,, \label{eq:master} \end{align} where ${\cal D}^{(N,1)}_{irred}$ is given by eq.~\eqref{eq:tildeAg} `truncated' on the external scalar lines. For completeness, let us give the explicit expression for the reducible part of the amplitude with two photons. Let us start from the scalar Compton scattering amplitude, which can be easily obtained from~\eqref{eq:MA} and reads \begin{align} {\cal D}^{(2)}(p,p';\varepsilon_1,k_1,\varepsilon_2,k_2)&= (-ie)^2\Bigg\{ 2\varepsilon_1\cdot \varepsilon_2 -\frac{\varepsilon_1\cdot (p'-p-k_2) \varepsilon_2\cdot (p'-p+k_1)}{(p'+k_1)^2+m^2}\nonumber\\& -\frac{\varepsilon_1\cdot (p'-p+k_2) \varepsilon_2\cdot (p'-p-k_1)}{(p'+k_2)^2+m^2}\Bigg\}~.
\label{comptonqed} \end{align} By applying the replacement rule given above we get \begin{align} &{\cal D}^{(2,1)}_{red}(p,p';\varepsilon_1,k_1,\varepsilon_2,k_2;\epsilon,k_0) = {\cal D}^{(2)}(p,p';\upsilon_1,k_1+k_0,\varepsilon_2,k_2) +{\cal D}^{(2)}(p,p';\varepsilon_1,k_1,\upsilon_2,k_2+k_0)\nonumber\\ &= \kappa (-ie)^2 \Bigg\{\frac{2}{(k_1+k_0)^2}\big(\varepsilon_{1}^{\mu}\left(k_{1} \epsilon k_{1}\right)+\left(\varepsilon_{1} \epsilon\right)^{\mu} k_{1} \cdot k_{0}-k_{1}^{\mu}\left(\varepsilon_{1} \epsilon k_{1}\right)-\left(k_{1} \epsilon\right)^{\mu} \varepsilon_{1} \cdot k_{0} \big)\varepsilon_{2\mu} \nonumber\\ & -\frac{\varepsilon_2\cdot (p'-p+k_1+k_0)}{(p'+k_1+k_0)^2+m^2}\, \frac{\varepsilon_{1}^{\mu}\left(k_{1} \epsilon k_{1}\right)+\left(\varepsilon_{1} \epsilon\right)^{\mu} k_{1} \cdot k_{0}-k_{1}^{\mu}\left(\varepsilon_{1} \epsilon k_{1}\right)-\left(k_{1} \epsilon\right)^{\mu} \varepsilon_{1} \cdot k_{0}}{(k_1+k_0)^2} \, (p'-p-k_2)_\mu\nonumber\\ & -\frac{\varepsilon_2\cdot (p'-p-k_1-k_0)}{(p'+k_2)^2+m^2}\, \frac{\varepsilon_{1}^{\mu}\left(k_{1} \epsilon k_{1}\right)+\left(\varepsilon_{1} \epsilon\right)^{\mu} k_{1} \cdot k_{0}-k_{1}^{\mu}\left(\varepsilon_{1} \epsilon k_{1}\right)-\left(k_{1} \epsilon\right)^{\mu} \varepsilon_{1} \cdot k_{0}}{(k_1+k_0)^2} \, (p'-p+k_2)_\mu \nonumber\\ & +(1\leftrightarrow 2)\Bigg\}~. \label{eq:red-2-1} \end{align} Below, in Section~\ref{sec:WI-trans}, we test the master formula~\eqref{eq:master} by checking the on-shell transversality conditions in the photon lines and graviton line. However, to conclude the present section, let us briefly review a factorization property that links graviton-photon amplitudes to photon amplitudes. \subsection{On-shell factorization property for the graviton photoproduction amplitude} For a mixed scattering with one graviton and one photon, i.e. 
for the graviton photoproduction process, the full amplitude, involving both the irreducible contribution~\eqref{eq:one-one-irred} and the reducible contribution~\eqref{eq:one-one-red}, factorizes on-shell in terms of the corresponding QED Compton amplitude. This can easily be seen by adopting the decomposition \begin{align} \epsilon^{\mu\nu}\rightarrow \epsilon^\mu\epsilon^\nu\,, \end{align} which yields \begin{align} \mathcal{M}^{(1,1)}(p,p';\varepsilon_1,k_1;\epsilon,k_0) &=\frac{\kappa e}{k_0\cdot k_1}\Big[\epsilon\cdot p'\, k_0\cdot p-\epsilon\cdot p\, k_0\cdot p'\Big]\,\Big[\frac{\varepsilon_1\cdot p'\,\epsilon\cdot p}{p'\cdot k_1}+\frac{\varepsilon_1\cdot p\,\epsilon\cdot p'}{p'\cdot k_0}+\epsilon\cdot \varepsilon_1\Big]\nonumber\\ &=H{\cal M}^{(2)}(p,p';\epsilon,k_0,\varepsilon_1,k_1)\,, \end{align} where \begin{align} H=-\frac{\kappa}{2e}\frac{\epsilon\cdot p'\, k_0\cdot p-\epsilon\cdot p\, k_0\cdot p'}{k_0\cdot k_1}\,, \end{align} and $\mathcal{M}^{(1,1)}(p,p';\varepsilon_1,k_1;\epsilon,k_0)$ and ${\cal M}^{(2)}(p,p';\epsilon,k_0,\varepsilon_1,k_1)$ are, respectively, the on-shell versions of the graviton photoproduction amplitude and of the scalar QED Compton scattering amplitude given in Eq.~(\ref{comptonqed}). This factorization property was already studied in \cite{geohal-81,chshso-95,Holstein:2006ry,Bjerrum-Bohr:2014lea,Ahmadiniaz:2016vai}, and seems to be universal for four-body amplitudes with massless gauge bosons. However, beyond the four-particle level, such a factorization property is not expected to hold, due to the lack of enough conservation laws~\cite{geohal-81}. \section{Ward identities and on-shell transversality} \label{sec:WI-trans} The dressed propagator described above in~\eqref{eq:dressed-prop} is covariant under $U(1)$ gauge transformations and invariant under diffeomorphisms.
The former is described by \begin{align} \Big\langle \phi(x')\bar\phi(x)\Big\rangle_{A,g}\ \to \ \Big\langle \tilde\phi(x')\tilde{\bar \phi}(x)\Big\rangle_{\tilde A, \tilde g}= e^{ie(\alpha(x)-\alpha(x'))} \Big\langle \phi(x')\bar\phi(x)\Big\rangle_{A,g}~. \label{eq:gauge-transf} \end{align} Using that $\delta A_\mu=\partial_\mu \alpha$, the infinitesimal part of~\eqref{eq:gauge-transf} becomes the electromagnetic Ward identity generator \begin{align} \Big[\partial^y_\mu \frac{\delta}{\delta A_\mu(y)}+ie(\delta(y-x)-\delta(y-x'))\Big] \Big\langle \phi(x')\bar\phi(x)\Big\rangle_{A,g}=0~, \label{eq:em-WI} \end{align} which holds off-shell. In momentum space, it yields an infinite set of Ward identities \begin{align} \tilde D^{(N,1)}(p,p';-ik,k,\varepsilon_1,k_1,\dots; \epsilon,k_0) &= -ie\Big[ \tilde D^{(N-1,1)}(p+k,p';\varepsilon_1,k_1,\dots; \epsilon,k_0)\nonumber\\& -\tilde D^{(N-1,1)}(p,p'+k;\varepsilon_1,k_1,\dots; \epsilon,k_0)\Big]\,, \end{align} which can be easily tested with the special cases singled out in Section~\ref{sec:irred-part}. On the other hand, on the scalar mass-shell the contact terms present in~\eqref{eq:em-WI} do not have the correct pole structure and drop out upon truncation, whereas the first term leads to the on-shell transversality condition \begin{align} {\cal M}^{(N,1)}_{irred}(p,p';\varepsilon_1,k_1,\dots, -ik_l,k_l,\dots;\epsilon,k_0)=0~, \end{align} which holds for any photon line. As before, ${\cal M}$ is the on-shell limit of ${\cal D}$. Moreover, the gauge invariance of scalar QED (in curved space) ensures that the full amplitude is transversal, i.e. the reducible part of the amplitude must be separately transversal. Indeed, given that~\eqref{eq:upsilon} vanishes upon the replacement $\varepsilon_1\ \to\ k_1$, this is enough to prove the transversality of the reducible part of the amplitude of Section~\ref{sec:red-part}, as can easily be checked for the expression~\eqref{eq:red-2-1}.
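The vanishing of~\eqref{eq:upsilon} under $\varepsilon_1\to k_1$ is a purely algebraic identity, valid for arbitrary (even off-shell) momenta. As a sanity check, it can be verified symbolically, e.g. with the following short \texttt{sympy} script (a sketch added for convenience; all variable names are ours):

```python
# Symbolic check that upsilon_1 of eq. (eq:upsilon) vanishes under
# eps_1 -> k_1, for completely arbitrary vectors (no on-shell conditions).
import sympy as sp

dim = 4
k0 = sp.symbols('k0_0:4')
k1 = sp.symbols('k1_0:4')
lam = sp.symbols('lam_0:4')
rho = sp.symbols('rho_0:4')
kappa = sp.Symbol('kappa')

# symmetric graviton polarization  eps_{mu nu} = lam_(mu rho_nu)
eps = [[sp.Rational(1, 2) * (lam[m] * rho[n] + lam[n] * rho[m])
        for n in range(dim)] for m in range(dim)]

def dot(a, b):
    return sum(a[m] * b[m] for m in range(dim))

def eps_dot(v):
    # (eps . v)^mu ; eps is symmetric, so (v . eps)^mu is the same
    return [sum(eps[m][n] * v[n] for n in range(dim)) for m in range(dim)]

def upsilon(e1):
    # the "dressed" photon polarization of eq. (eq:upsilon)
    return [kappa * (e1[m] * dot(k1, eps_dot(k1))
                     + eps_dot(e1)[m] * dot(k1, k0)
                     - k1[m] * dot(e1, eps_dot(k1))
                     - eps_dot(k1)[m] * dot(e1, k0)) / (2 * dot(k1, k0))
            for m in range(dim)]

# transversality: the replacement eps_1 -> k_1 annihilates upsilon_1
assert all(sp.simplify(c) == 0 for c in upsilon(list(k1)))
```

The four terms in the numerator cancel pairwise once $\varepsilon_1$ is replaced by $k_1$, which is what the script confirms component by component.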
Under infinitesimal diffeomorphisms, $x^\mu \to\ x^\mu -\xi^\mu(x)$, the dressed propagator transforms as \begin{align} \Big\langle \tilde\phi(x')\tilde{\bar \phi}(x)\Big\rangle_{\tilde A, \tilde g}=& \Big\langle \phi(x'){\bar \phi}(x)\Big\rangle_{ A, g} \nonumber\\&+\int d^4y\, \xi^\mu(y)\big( \delta^{(4)}(y-x)\partial_\mu+ \delta^{(4)}(y-x')\partial'_\mu\big)\Big\langle \phi(x'){\bar \phi}(x)\Big\rangle_{ A, g}\,. \end{align} However, using the worldline representation~\eqref{eq:dressed-prop}, one can as well get \begin{align} &\Big\langle \tilde\phi(x')\tilde{\bar \phi}(x)\Big\rangle_{\tilde A, \tilde g}= \Big\langle \phi(x'){\bar \phi}(x)\Big\rangle_{ A, g} \nonumber\\ &+\int d^4y\Big[ 2\nabla_\mu \xi_\nu(y) \frac{\delta}{\delta g_{\mu\nu}(y)} +\big( \xi^\alpha\partial_\alpha A_\mu(y) +\partial_\mu \xi^\alpha A_\alpha(y)\big) \frac{\delta}{\delta A_{\mu}(y)}\Big] \Big\langle \phi(x'){\bar \phi}(x)\Big\rangle_{ A, g} ~, \end{align} which, after some straightforward algebra and using expression~\eqref{eq:em-WI}, can be reduced to \begin{align} &\Biggl[-\nabla^y_\mu \frac{2g_{\nu\alpha}}{\sqrt{g}}\frac{\delta}{\delta g_{\mu\nu}(y)}\nonumber\\& +\frac{1}{\sqrt{g}}\Big(F_{\alpha\mu}\frac{\delta}{\delta A_\mu(y)} -\delta^{(4)}(y-x) \bar D_\alpha -\delta^{(4)}(y-x') D_\alpha'\Big)\Biggr] \Big\langle \phi(x')\bar\phi(x)\Big\rangle_{A,g}=0~, \label{eq:diff-WI} \end{align} which is the diffeomorphism Ward identity generator. Once again there are contact terms which drop out on the scalar particle mass-shell. The two left-over terms both contribute on-shell and thus the irreducible part of the $N$-photon one-graviton amplitude is not, by itself, transversal on the graviton line; rather it fulfills, even on-shell, an inhomogeneous Ward identity. 
Introducing the field strength tensor $f_i^{\mu\nu} := k_i^{\mu}\varepsilon^{\nu}_i - \varepsilon^{\mu}_ik_i^{\nu}$ for each photon leg, and an ``effective'' photon polarization vector \begin{eqnarray} \tilde\varepsilon_i := \kappa f_i\cdot \xi \,, \label{defepsilontilde} \end{eqnarray} this identity can be written concisely as follows (the same identity holds for the closed-loop case \cite{Bastianelli:2012bz}) \begin{eqnarray} \tilde D^{(N,1)}(p,p';\varepsilon_1,k_1,\dots; k_0\xi,k_0) = \sum_{i=1}^N \tilde D^{(N,0)}(p,p';\varepsilon_1,k_1,\dots,\tilde\varepsilon_i,k_i + k_0, \ldots, \varepsilon_N,k_N)\,. \end{eqnarray} Here we have written the transformation of the (transverse traceless) polarization tensor as \begin{align} \epsilon_{\mu\nu} \ \to\ \epsilon_{\mu\nu} +k_{0\mu} \xi_\nu + k_{0\nu} \xi_\mu~,\quad k_0\cdot \xi =k_0^2=0~, \label{eq:trans-g} \end{align} and used $k_0\xi$ just as a shortcut notation for the symmetrized product of the two vectors. However, the full amplitude is expected to be transversal on-shell, i.e., \begin{align} {\cal M}^{(N,1)}(p,p';\varepsilon_1,k_1,\dots, \varepsilon_N,k_N;k_0\xi,k_0)&= 0~ \, . \end{align} Using the ``tree replacement'' rule \eqref{eq:upsilon}, it can be seen quite easily how this comes about: applying the transformation \eqref{eq:trans-g} to $\upsilon_i^{\mu}$, the result can be written as \begin{equation} \upsilon_i^{\mu} \to -\tilde \varepsilon_i^{\mu} + \kappa\frac{k_0\cdot f_i \cdot \xi}{2k_i\cdot k_0} (k_0+k_i)^{\mu} \, . \end{equation} The second term, being proportional to the shifted photon momentum $(k_0+k_i)^{\mu}$, drops out when inserted into the photon amplitude, because of the transversality in the photon lines. The first term cancels the contribution of the $i$th term on the right-hand side of \eqref{eq:red-part} to the Ward identity. In Appendix~\ref{sec:examples} we single out a few detailed examples.
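The transformation rule above for $\upsilon_i^\mu$ is again a purely algebraic consequence of~\eqref{eq:upsilon}, holding for arbitrary vectors; a short \texttt{sympy} verification (our own sketch, with illustrative variable names) reads:

```python
# Symbolic check: substituting the pure-gauge polarization
# eps_{mu nu} = k0_mu xi_nu + k0_nu xi_mu into upsilon_i of eq. (eq:upsilon)
# reproduces  -tilde_eps_i + kappa (k0.f_i.xi)/(2 k_i.k0) (k0+k_i)^mu,
# with f_i^{mu nu} = k_i^mu eps_i^nu - eps_i^mu k_i^nu.
import sympy as sp

dim = 4
k0 = sp.symbols('k0_0:4')
ki = sp.symbols('ki_0:4')
ei = sp.symbols('ei_0:4')   # photon polarization eps_i
xi = sp.symbols('xi_0:4')
kappa = sp.Symbol('kappa')

def dot(a, b):
    return sum(a[m] * b[m] for m in range(dim))

# pure-gauge graviton polarization
eps = [[k0[m] * xi[n] + k0[n] * xi[m] for n in range(dim)] for m in range(dim)]

def eps_dot(v):
    return [sum(eps[m][n] * v[n] for n in range(dim)) for m in range(dim)]

# upsilon_i with the pure-gauge eps substituted
ups = [kappa * (ei[m] * dot(ki, eps_dot(ki))
                + eps_dot(ei)[m] * dot(ki, k0)
                - ki[m] * dot(ei, eps_dot(ki))
                - eps_dot(ki)[m] * dot(ei, k0)) / (2 * dot(ki, k0))
       for m in range(dim)]

# tilde_eps_i = kappa f_i . xi  and the longitudinal remainder
f_xi = [ki[m] * dot(ei, xi) - ei[m] * dot(ki, xi) for m in range(dim)]
k0_f_xi = dot(k0, ki) * dot(ei, xi) - dot(k0, ei) * dot(ki, xi)
rhs = [-kappa * f_xi[m] + kappa * k0_f_xi * (k0[m] + ki[m]) / (2 * dot(ki, k0))
       for m in range(dim)]

assert all(sp.simplify(u - r) == 0 for u, r in zip(ups, rhs))
```

No on-shell or transversality conditions are needed for this step; they enter only when the longitudinal remainder is inserted into the photon amplitude.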
\section{Conclusions and Outlook} \label{sec:concl} We have described a novel worldline approach to the computation of tree-level scattering amplitudes associated with a scalar line coupled to electromagnetism and gravity, with all external legs off-shell. In particular, we provided a convenient parametrization for the graviton polarization and a replacement rule, which allowed us to easily compute amplitudes with an arbitrary number of photons and one graviton. The on-shell transversality of the amplitudes was explicitly checked. A priori, our technique can be implemented as well to compute amplitudes with an arbitrary number of gravitons. However, in that case more care is needed in the treatment of chains of contractions between the Lee-Yang ghost fields that represent the nontrivial measure~\cite{Bastianelli:1991be, Bastianelli:1992ct}. On the other hand, amplitudes with gravitons have been the subject of extensive study. In particular, theorems which involve gravitons with low momentum have long been analyzed~\cite{Weinberg:1964ew} and, in the recent past, various soft-graviton theorems---see e.g. Ref.~\cite{Cachazo:2014fwa}---have been studied, due to their connections to the infrared structure of gauge theory and gravity~\cite{Strominger:2017zoo}. The present manuscript provides a novel approach towards the computation of amplitudes with gravitons, which may shed new light on the structure of such quantities. In fact, our approach does not, a priori, require gravitons to have low momentum. However, it would be helpful to reconstruct soft-graviton theorems from the worldline viewpoint, by suitably implementing from the beginning the low-momentum condition in the graviton vertex operators~\eqref{eq:grav-vert-op}. In any case, the parametrization described in Section~\ref{sec:irred-part}, which allows one to simplify the computation of the worldline correlators, continues to hold for each graviton vertex operator.
\subsection*{Acknowledgments} The authors would like to thank Fiorenzo Bastianelli, James Edwards and Diego Trancanelli for helpful discussions. \begin{appendices} \section{Two-photon one-graviton scalar propagator} \label{sec:appendix} We use the master formula~\eqref{eq:tildeAg} to compute the two-photon one-graviton scalar propagator, and the related (irreducible) part of the two-photon one-graviton two-scalar amplitude, whose Feynman diagrams are depicted in Fig.~\ref{2ph1gr}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\textwidth]{1gr2ph-irr.png} \caption{Irreducible contributions to the two-photon one-graviton amplitude, shown here in momentum space. ``Perms'' refers to permutations between the photon lines and among the emission points. The last type of diagram, where photons and graviton are all emitted at the same point, is obviously unique.} \label{2ph1gr} \end{center} \end{figure} \noindent It reduces to \begin{align} &\widetilde D^{(2,1)}(p,p'; \varepsilon_1,k_1, \varepsilon_2, k_2;\epsilon,k_0)=(-ie)^2\left(-\frac{\kappa}{4}\right)\int_0^\infty dT ~e^{-T(m^2+p'^2)}\int_0^T d\tau_0 \int_0^T d\tau_1\int_0^T d\tau_2\nonumber\\ &\times e^{(p'-p)\cdot (-k_0 \tau_0 -k_1\tau_1-k_2\tau_2+i\varepsilon_0+i\varepsilon_1+i\varepsilon_2)}\, e^{k_0\cdot k_1 |\tau_0-\tau_1| +k_0\cdot k_2 |\tau_0-\tau_2| +k_1\cdot k_2 |\tau_1-\tau_2|} \nonumber\\&\times e^{i(\varepsilon_1\cdot k_0-\varepsilon_0\cdot k_1){\rm sgn}(\tau_0-\tau_1)+i(\varepsilon_2\cdot k_0-\varepsilon_0\cdot k_2){\rm sgn}(\tau_0-\tau_2)+i(\varepsilon_2\cdot k_1-\varepsilon_1\cdot k_2){\rm sgn}(\tau_1-\tau_2) }\nonumber\\& \times e^{2\big[\varepsilon_0\cdot\varepsilon_1\delta(\tau_0-\tau_1)+\varepsilon_0\cdot\varepsilon_2\delta(\tau_0-\tau_2) +\varepsilon_1\cdot\varepsilon_2\delta(\tau_1-\tau_2)\big]}\Big|_{\rm m.l.}~. \end{align} First, let us consider the contributions involving delta functions, which are linked to seagull diagrams.
We find it convenient to `grade' the different contributions in terms of how many delta functions occur. There is only one double-delta term (see the last diagram in Fig.~\ref{2ph1gr}), namely \begin{align} e^2\kappa \int_0^\infty dT\, e^{-T(m^2+p'^2)} \int_0^T d\tau_0~ e^{\tau_0 (p'^2-p^2)}\, \varepsilon_0\cdot \varepsilon_1 \varepsilon_0\cdot\varepsilon_2|_{\rm m.l.} ~, \end{align} which, using~\eqref{eq:e} and~\eqref{eq:lr}, reduces to \begin{align} \frac{1}{(p^2+m^2)(p'^2+m^2)} \, e^2\kappa\, 2(\varepsilon_1 \epsilon \varepsilon_2)~, \end{align} whose numerator is the Feynman amplitude of the diagram where two photons and one graviton are emitted at the same point of the scalar line. Note that, also for an arbitrary number $N$ of photons---and a single graviton---this is the largest number of particles that can be emitted at the same point of the scalar line. There are three terms with a single delta function (see the second and third diagrams and their permutations in Fig.~\ref{2ph1gr}), corresponding to the six Feynman diagrams where a pair of particles (either two photons, or one photon and the graviton) is emitted from one point of the scalar line, with the remaining particle emitted from another point on the line. Let us, for example, consider the term that involves $\delta(\tau_1-\tau_2)$ (the third diagram in Fig.~\ref{2ph1gr}), which yields the diagrams where the two photons are emitted at the same point. The integrand reads \begin{align} &(-ie)^2\left(-\frac{\kappa}{4}\right)\varepsilon_1 \cdot\varepsilon_2 \, \left[ i\varepsilon_0 \cdot(p'-p-(k_1+k_2){\rm sgn}(\tau_0-\tau_1))\right]^2\nonumber\\ &\times e^{(p-p')\cdot(k_0\tau_0+(k_1+k_2)\tau_1)+k_0\cdot(k_1+k_2)|\tau_0-\tau_1|}~, \end{align} which provides two diagrams, according to whether $\tau_1<\tau_0$ or $\tau_0<\tau_1$.
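For instance, for the ordering $\tau_1<\tau_0$ one may switch to Schwinger parameters $s_1=\tau_1$, $s_2=\tau_0-\tau_1$, $s_3=T-\tau_0$, upon which the worldline integrals factorize into the expected product of scalar propagator denominators,
\begin{align}
&\int_0^\infty dT\, e^{-T(m^2+p'^2)}\int_0^T d\tau_0\int_0^{\tau_0} d\tau_1\, e^{(p-p')\cdot\big(k_0\tau_0+(k_1+k_2)\tau_1\big)+k_0\cdot(k_1+k_2)(\tau_0-\tau_1)}\nonumber\\
&\hskip4cm=\frac{1}{(m^2+p^2)\big(m^2+(p'+k_0)^2\big)(m^2+p'^2)}~,
\end{align}
where momentum conservation, $p+p'+k_0+k_1+k_2=0$, has been used to simplify the exponents; the opposite ordering $\tau_0<\tau_1$ produces the pole in $m^2+(p+k_0)^2$ instead.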
After some straightforward algebra, corresponding to the Schwinger integral parametrization of the diagrams, we obtain \begin{align} \frac1{(m^2+p^2)(m^2+p'^2)}(-2e^2 \kappa\varepsilon_1\cdot \varepsilon_2)\Big[\frac{(p'\epsilon p')}{m^2+(p'+k_0)^2}+ \frac{(p\epsilon p)}{m^2+(p+k_0)^2}\Big]~. \end{align} Similarly, the other terms with single delta functions, $\delta(\tau_0-\tau_1)$ and $\delta(\tau_0-\tau_2)$, give \begin{align} \frac1{(m^2+p^2)(m^2+p'^2)}\, 2e^2 \kappa\Big[&\frac{\varepsilon_1\cdot p\, (\varepsilon_2 \epsilon (p'-p-k_1))}{m^2+(p+k_1)^2}+ \frac{\varepsilon_1\cdot p'\, (\varepsilon_2 \epsilon (p-p'-k_1))}{m^2+(p'+k_1)^2}\nonumber\\ & +\frac{\varepsilon_2\cdot p\, (\varepsilon_1 \epsilon (p'-p-k_2))}{m^2+(p+k_2)^2}+ \frac{\varepsilon_2\cdot p'\, (\varepsilon_1 \epsilon (p-p'-k_2))}{m^2+(p'+k_2)^2} \Big]~. \end{align} The term without delta functions corresponds to the remaining six Feynman diagrams, where the two photons and the graviton are emitted singly by the scalar line (the first diagram in Fig.~\ref{2ph1gr} and its permutations), six being the number of permutations of the three particles, which in the present worldline representation correspond to the different orderings of the three times $\tau_i$.
The integrand in this case reads \begin{align} &(-ie)^2\left(-\frac{\kappa}{4}\right)\int_0^\infty dT ~e^{-T(m^2+p'^2)}\int_0^T d\tau_0 \int_0^T d\tau_1\int_0^T d\tau_2\,\nonumber\\ &\times e^{(p'-p)\cdot (-k_0 \tau_0 -k_1\tau_1-k_2\tau_2+i\varepsilon_0+i\varepsilon_1+i\varepsilon_2)}\, e^{k_0\cdot k_1 |\tau_0-\tau_1| +k_0\cdot k_2 |\tau_0-\tau_2| +k_1\cdot k_2 |\tau_1-\tau_2|} \nonumber\\&\times \varepsilon_1\cdot (p'-p+k_0{\rm sgn}(\tau_0-\tau_1) -k_2{\rm sgn}(\tau_1-\tau_2))\, \nonumber\\&\times \varepsilon_2\cdot (p'-p+k_0{\rm sgn}(\tau_0-\tau_2) +k_1{\rm sgn}(\tau_1-\tau_2))\nonumber\\ &\times \frac12 \big[\varepsilon_0\cdot (p'-p-k_1{\rm sgn}(\tau_0-\tau_1) -k_2{\rm sgn}(\tau_0-\tau_2))\big]^2\,, \end{align} and yields \begin{align} \frac1{(m^2+p^2)(m^2+p'^2)}\, 4 e^2 \kappa\Big[ &\frac{(p'\epsilon p')\, \varepsilon_1\cdot (p+k_2)\, \varepsilon_2\cdot p}{((p+k_2)^2+m^2)((p'+k_0)^2+m^2)} +(1\leftrightarrow 2)\nonumber\\ +& \frac{(p\epsilon p)\, \varepsilon_1\cdot (p'+k_2)\, \varepsilon_2\cdot p'}{((p+k_0)^2+m^2)((p'+k_2)^2+m^2)} +(1\leftrightarrow 2)\nonumber\\ +&\frac{((p+k_1)\epsilon (p'+k_2))\, \varepsilon_1\cdot p\, \varepsilon_2\cdot p'}{((p+k_1)^2+m^2)((p'+k_2)^2+m^2)} +(1\leftrightarrow 2)\Big]~. 
\end{align} Thus, \begin{align} &{\cal D}_{irred}^{(2,1)}(p,p';\varepsilon_1,k_1,\varepsilon_2,k_2;\epsilon,k_0) = \kappa e^2\Bigg\{ 2(\varepsilon_1 \epsilon \varepsilon_2) -2 \frac{\varepsilon_1\cdot \varepsilon_2\, (p'\epsilon p')}{m^2+(p'+k_0)^2}-2 \frac{\varepsilon_1\cdot \varepsilon_2\, (p\epsilon p)}{m^2+(p+k_0)^2}\nonumber\\& +2\frac{\varepsilon_1\cdot p\, (\varepsilon_2\epsilon (p'-p-k_1))}{m^2+(p+k_1)^2}+ 2\frac{\varepsilon_1\cdot p'\, (\varepsilon_2 \epsilon (p-p'-k_1))}{m^2+(p'+k_1)^2} \nonumber\\& +2\frac{\varepsilon_2\cdot p\, (\varepsilon_1 \epsilon (p'-p-k_2))}{m^2+(p+k_2)^2}+ 2\frac{\varepsilon_2\cdot p'\, (\varepsilon_1 \epsilon (p-p'-k_2))}{m^2+(p'+k_2)^2} \nonumber\\& +4\frac{(p'\epsilon p')\, \varepsilon_1\cdot (p+k_2)\, \varepsilon_2\cdot p}{((p+k_2)^2+m^2)((p'+k_0)^2+m^2)} +4\frac{(p'\epsilon p')\, \varepsilon_2\cdot (p+k_1)\, \varepsilon_1\cdot p}{((p+k_1)^2+m^2)((p'+k_0)^2+m^2)} \nonumber\\ &+ 4\frac{(p\epsilon p)\, \varepsilon_1\cdot (p'+k_2)\, \varepsilon_2\cdot p'}{((p+k_0)^2+m^2)((p'+k_2)^2+m^2)}+4\frac{(p\epsilon p)\, \varepsilon_2\cdot (p'+k_1)\, \varepsilon_1\cdot p'}{((p+k_0)^2+m^2)((p'+k_1)^2+m^2)} \nonumber\\ &+4\frac{((p+k_1)\epsilon (p'+k_2))\, \varepsilon_1\cdot p\, \varepsilon_2\cdot p'}{((p+k_1)^2+m^2)((p'+k_2)^2+m^2)}+4\frac{((p+k_2)\epsilon (p'+k_1))\, \varepsilon_2\cdot p\, \varepsilon_1\cdot p'}{((p+k_2)^2+m^2)((p'+k_1)^2+m^2)}\Bigg\}\,, \end{align} is the irreducible part of the two-scalar two-photon one-graviton amplitude. \section{Transversality of the amplitudes with one graviton and $N\leq 2$ photons}\label{sec:examples} Let us here check how the transversality of the graviton line explicitly works for $N\leq 2$. For the $N=0$ amplitude of eq.~\eqref{eq:g} we have \begin{align} {\cal M}^{(0,1)}(p,p';k_0\xi,k_0) =\frac{\kappa}{2} (p'-p)\cdot k_0\, (p'-p)\cdot \xi \,, \end{align} which vanishes on-shell, since $k_0=-(p+p')$ implies $(p'-p)\cdot k_0=p^2-p'^2=0$.
For $N=1$, using on-shellness, momentum conservation and the transversality conditions $k_{0\mu} \epsilon^{\mu\nu}=k_{1\mu} \varepsilon_1^\mu =0$, we have \begin{align} {\cal M}^{(1,1)}_{red}(p,p';\varepsilon_1,k_1;k_0\xi,k_0) =-{\cal M}^{(1,1)}_{irred}(p,p';\varepsilon_1,k_1;k_0\xi,k_0) =e\kappa (p'-p)_\mu \Big( \varepsilon_1^\mu k_1\cdot \xi +k_0^\mu \varepsilon_1\cdot \xi\Big)\,, \end{align} so that \begin{align} {\cal M}^{(1,1)}(p,p';\varepsilon_1,k_1;k_0\xi,k_0) =0\,, \end{align} as expected. The computation for the $N=2$ case is of course more complicated. However, let us sketch some details. A useful way to proceed is to identify different \emph{kinds} of terms in both the reducible and irreducible parts of the amplitude, which must sum up to zero separately. Let us first consider the part of the amplitude proportional to the product $\varepsilon_1\cdot\varepsilon_2$. After performing the substitution described in Eq.~\eqref{eq:trans-g}, and denoting the corresponding reducible and irreducible contributions as $\mathcal{M}_{red}^{\varepsilon_1\varepsilon_2}$ and $\mathcal{M}_{irred}^{\varepsilon_1\varepsilon_2}$, we obtain \begin{align} \mathcal{M}_{irred}^{\varepsilon_1\varepsilon_2}=&-\frac{2\varepsilon_1\cdot\varepsilon_2}{p\cdot k_0}\left(p\cdot k_0\, p\cdot\xi\right)-\frac{2\varepsilon_1\cdot\varepsilon_2}{p'\cdot k_0}\left(p'\cdot k_0\, p'\cdot\xi\right)=-2\varepsilon_1\cdot\varepsilon_2\,\xi \cdot(p+p')\,,\\ \mathcal{M}_{red}^{\varepsilon_1\varepsilon_2}=&-\frac{2\varepsilon_1\cdot\varepsilon_2}{k_1\cdot k_0}\left(k_1\cdot k_0\, k_1\cdot\xi\right)-\frac{2\varepsilon_1\cdot\varepsilon_2}{k_2\cdot k_0}\left(k_2\cdot k_0\, k_2\cdot\xi\right)=\nonumber\\ =&-2\varepsilon_1\cdot\varepsilon_2\,\xi \cdot(k_1+k_2)=2\varepsilon_1\cdot\varepsilon_2\,\xi \cdot(p+p')=-\mathcal{M}_{irred}^{\varepsilon_1\varepsilon_2}, \end{align} where in the last line we have used the conservation of total energy-momentum together with the transversality condition given in
Eq.~\eqref{eq:trans-g}. Thus, we get \begin{equation} \mathcal{M}_{irred}^{\varepsilon_1\varepsilon_2}+\mathcal{M}_{red}^{\varepsilon_1\varepsilon_2}=0, \end{equation} as expected.\\ Similarly, we consider the part of the total amplitude proportional to $\varepsilon_1\cdot\xi$, denoting by $\mathcal{M}_{red}^{\varepsilon_1\xi}$ and $\mathcal{M}_{irred}^{\varepsilon_1\xi}$ the reducible and irreducible contributions, respectively. After some manipulations, we obtain \begin{align} \mathcal{M}_{irred}^{\varepsilon_1\xi}=&\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}p\cdot k_0 \varepsilon_1\cdot\xi +2\varepsilon_1\cdot\xi\varepsilon_2\cdot k_0+\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}(p+k_1)\cdot k_0 \varepsilon_1\cdot\xi\nonumber\\ &+\frac{\varepsilon_2\cdot p}{k_2\cdot p}p'\cdot k_0 \varepsilon_1\cdot\xi +\frac{\varepsilon_2\cdot p}{k_2\cdot p}(p'+k_1)\cdot k_0 \varepsilon_1\cdot\xi\nonumber\\ =&\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}p\cdot k_0 \varepsilon_1\cdot\xi +2\varepsilon_1\cdot\xi\varepsilon_2\cdot k_0-\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}p\cdot k_1 \varepsilon_1\cdot\xi+\varepsilon_1\cdot\xi\varepsilon_2\cdot p'\nonumber\\ &+\frac{\varepsilon_2\cdot p}{k_2\cdot p}p'\cdot k_0 \varepsilon_1\cdot\xi-\frac{\varepsilon_2\cdot p}{k_2\cdot p}p'\cdot k_1 \varepsilon_1\cdot\xi+\varepsilon_1\cdot\xi\varepsilon_2\cdot p\nonumber\\ =&\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}\varepsilon_1\cdot\xi p\cdot(k_0-k_1)+\frac{\varepsilon_2\cdot p}{k_2\cdot p} \varepsilon_1\cdot\xi p'\cdot(k_0-k_1) +\varepsilon_1\cdot\xi\varepsilon_2\cdot (k_0-k_1). \end{align} Notice that in the last equality we have exploited the conservation of total energy-momentum, while in the second equality we have used the relations \begin{align}\label{RAN} &k_0\cdot(p+k_1)=-p\cdot k_1+p'\cdot k_2,\nonumber\\ &k_0\cdot(p'+k_1)=-p'\cdot k_1+p\cdot k_2.
\end{align} The contribution coming from the reducible part of the amplitude is obtained as \begin{align} \mathcal{M}_{red}^{\varepsilon_1\xi}=&\frac{\varepsilon_2\cdot p'}{p'\cdot k_2\,k_0\cdot k_1}\varepsilon_1\cdot\xi (p\cdot k_1\,k_0\cdot k_1-p\cdot k_0\,k_0\cdot k_1) +2\Bigg(\frac{\varepsilon_1\cdot \xi}{2k_0\cdot k_1}(k_0\cdot k_1\,\varepsilon_2\cdot k_1\nonumber\\ &-k_0\cdot k_1\,\varepsilon_2\cdot k_0)\Bigg) +\frac{\varepsilon_2\cdot p}{p\cdot k_2\,k_0\cdot k_1}\varepsilon_1\cdot\xi (p'\cdot k_1\,k_0\cdot k_1-p'\cdot k_0\,k_0\cdot k_1)\nonumber\\ &=-\frac{\varepsilon_2\cdot p'}{k_2\cdot p'}\varepsilon_1\cdot\xi p\cdot(k_0-k_1)-\frac{\varepsilon_2\cdot p}{k_2\cdot p} \varepsilon_1\cdot\xi p'\cdot(k_0-k_1) -\varepsilon_1\cdot\xi\varepsilon_2\cdot (k_0-k_1), \end{align} and the sum of the reducible and irreducible contributions vanishes, that is \begin{equation} \mathcal{M}_{irred}^{\varepsilon_1\xi}+\mathcal{M}_{red}^{\varepsilon_1\xi}=0. \end{equation} By Bose symmetry the contributions proportional to $\varepsilon_2\cdot\xi$ can be obtained from the latter with the replacements $\varepsilon_1\leftrightarrow\varepsilon_2$ and $k_1\leftrightarrow k_2$. Now we are ready to write down all the remaining terms that enter the transversality expression for the total amplitude. We find it convenient to organize them in terms of their different denominators, which are scalar products of momenta. We thus use the notation $\mathcal{M}_{rem}^{pk}$ to indicate those terms that have the common denominator $p\cdot k$, and similarly for the others.
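For completeness, let us record how the relations in Eq.~\eqref{RAN} follow from the kinematics: with all momenta incoming, so that $p+p'+k_0+k_1+k_2=0$, and using the mass-shell conditions $p^2=p'^2=-m^2$ and $k_0^2=k_1^2=k_2^2=0$, one finds \begin{align} k_0\cdot(p+k_1) &= -(p+p'+k_1+k_2)\cdot(p+k_1)\nonumber\\ &= -p^2-k_1^2-2\,p\cdot k_1-p\cdot p'-p'\cdot k_1-p\cdot k_2-k_1\cdot k_2 = -p\cdot k_1+p'\cdot k_2\,, \end{align} where the last step uses $k_0^2=0$ in the form $p\cdot p'+p\cdot k_1+p\cdot k_2+p'\cdot k_1+p'\cdot k_2+k_1\cdot k_2=m^2$; the second relation in Eq.~\eqref{RAN} follows upon exchanging $p\leftrightarrow p'$.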
We have, \begin{align} \mathcal{M}_{rem}^{p'k_2}=&-\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}2p\cdot\xi\varepsilon_1\cdot(p+k_0)+\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}\varepsilon_1\cdot k_0 p\cdot\xi+\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}\varepsilon_1\cdot k_0\xi\cdot(p+k_1)\nonumber\\ &+\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}2\varepsilon_1\cdot p\xi\cdot(p+k_1)-\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}2p\cdot\varepsilon_1\xi\cdot k_1-\frac{\varepsilon_2\cdot p'}{p'\cdot k_2}\varepsilon_1\cdot k_0\xi\cdot k_1=0\,, \end{align} \begin{align} \mathcal{M}_{rem}^{pk_1}=&-\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot (p+k_1)-\frac{\varepsilon_1\cdot p}{p\cdot k_1}2p'\cdot \xi\varepsilon_2\cdot(p'+k_0)+\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot p'\nonumber\\ &-\frac{\varepsilon_1\cdot p}{p\cdot k_1}2\varepsilon_2\cdot p'\xi\cdot(p+k_1)+\frac{\varepsilon_1\cdot p}{p\cdot k_1}2(p+k_1)\cdot\varepsilon_2\xi\cdot k_2+\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot k_2\nonumber\\ =&\,\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot (p'+k_2)-\frac{\varepsilon_1\cdot p}{p\cdot k_1}2p'\cdot \xi\varepsilon_2\cdot(p'+k_0)+\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot p'\nonumber\\ &+\frac{\varepsilon_1\cdot p}{p\cdot k_1}2\varepsilon_2\cdot p'\xi\cdot(p'+k_2)-\frac{\varepsilon_1\cdot p}{p\cdot k_1}2(p'+k_0)\cdot\varepsilon_2\xi\cdot k_2+\frac{\varepsilon_1\cdot p}{p\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot k_2=0\,, \end{align} \begin{align} \mathcal{M}_{rem}^{p'k_1}=&-\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2p\cdot\xi\varepsilon_2\cdot(p+k_0)+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}p\cdot\xi\varepsilon_2\cdot k_0-\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot(p'+k_1)\nonumber\\ &-\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2\varepsilon_2\cdot p\xi\cdot(p'+k_1)+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2(p'+k_1)\cdot\varepsilon_2\xi\cdot 
k_2+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot k_2\nonumber\\ =&-\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2p\cdot\xi\varepsilon_2\cdot(p+k_0)+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}p\cdot\xi\varepsilon_2\cdot k_0+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot(p+k_2)\nonumber\\ &+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2\varepsilon_2\cdot p\xi\cdot(p+k_2)-\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}2(p+k_0)\cdot\varepsilon_2\xi\cdot k_2+\frac{\varepsilon_1\cdot p'}{p'\cdot k_1}\varepsilon_2\cdot k_0\xi\cdot k_2=0\,, \end{align} \begin{align} \mathcal{M}_{rem}^{pk_2}=&-\frac{\varepsilon_2\cdot p}{p\cdot k_2}2p'\cdot \xi\varepsilon_1\cdot(p'+k_0)+\frac{\varepsilon_2\cdot p}{p\cdot k_2}\varepsilon_1\cdot k_0\xi\cdot p'+\frac{\varepsilon_2\cdot p}{p\cdot k_2}\varepsilon_1\cdot k_0\xi\cdot(p'+k_1)\nonumber\\ &+\frac{\varepsilon_2\cdot p}{p\cdot k_2}2\varepsilon_1\cdot p'\xi\cdot(p'+k_1)-\frac{\varepsilon_2\cdot p}{p\cdot k_2}2p'\cdot\varepsilon_1\xi\cdot k_1-\frac{\varepsilon_2\cdot p}{p\cdot k_2}\varepsilon_1\cdot k_0\xi\cdot k_1=0\,, \end{align} \begin{align} \mathcal{M}_{rem}^{k_0k_1}=&\frac{\varepsilon_1\cdot k_0\xi\cdot k_1}{k_0\cdot k_1}\varepsilon_2\cdot(k_0+k_1)+\frac{\varepsilon_1\cdot k_0}{k_0\cdot k_1}\varepsilon_2\cdot p'\xi\cdot k_1+\frac{\varepsilon_1\cdot k_0}{k_0\cdot k_1}\varepsilon_2\cdot p\xi\cdot k_1\nonumber\\ =&\,\frac{\varepsilon_1\cdot k_0}{k_0\cdot k_1}\xi\cdot k_1\varepsilon_2\cdot (p+p'+k_0+k_1)\propto \varepsilon_2\cdot k_2=0\,, \end{align} \begin{align} \mathcal{M}_{rem}^{k_0k_2}=&\frac{\varepsilon_2\cdot k_0\xi\cdot k_2}{k_0\cdot k_2}\varepsilon_1\cdot(k_0+k_2)+\frac{\varepsilon_2\cdot k_0}{k_0\cdot k_2}\varepsilon_1\cdot p\xi\cdot k_2+\frac{\varepsilon_2\cdot k_0}{k_0\cdot k_2}\varepsilon_1\cdot p'\xi\cdot k_2\nonumber\\ =&\,\frac{\varepsilon_2\cdot k_0}{k_0\cdot k_2}\xi\cdot k_2\varepsilon_1\cdot (p+p'+k_0+k_2)\propto \varepsilon_1\cdot k_1=0~. 
\end{align} Thus, all the different contributions sum up to zero and the \emph{transversality} of the total amplitude is proven, i.e., \begin{equation} \mathcal{M}^{(2,1)}\left(p,p';\varepsilon_1,k_1;\varepsilon_2,k_2;k_0\xi,k_0\right)=0~. \end{equation} What we described above is similar to what happens in flat space scalar QCD, for which a worldline approach to the computation of the $N$-gluon scalar propagator was studied in~\cite{Ahmadiniaz:2015xoa}: it yields the irreducible part of the $N$-gluon two-scalar amplitude. However, the non-Abelian nature of the theory implies that in order to compute the full amplitude---which is guaranteed to be transversal on the gluon lines---the latter must be completed with reducible parts~\cite{FMB-thesis}. \end{appendices}
\section{Introduction} \label{intro} A deep understanding of the proton structure is one of the most important topics in modern particle physics. A precise knowledge of the Parton Distribution Functions (PDFs) of the proton is essential in order to make predictions for Standard Model and beyond-the-Standard-Model processes at hadron colliders. The cross sections of processes in proton-(anti)proton collisions factorize into a convolution of the matrix element of the parton-parton interaction and the PDFs, which describe the proton structure. A PDF, $f_i(x,Q^2)$, represents the probability of finding in the proton a parton $i$ (quark or gluon) carrying a fraction $x$ of the proton momentum, with $Q$ being the energy scale of the hard interaction. In the case of proton-(anti)proton interactions, the PDFs of both protons enter multiplicatively into the calculation of the process cross section. Therefore, the precision of the PDFs is of particular importance for accurate cross-section predictions. In the last decades, measurements of lepton-nucleon and proton-antiproton scattering have been used to determine the proton PDFs. At low to medium $x$ the PDFs are constrained by HERA data. The measurements at fixed-target experiments and at the Tevatron contribute mainly at high $x$. The recent precise data from the Tevatron and the LHC experiments have the potential to improve the precision of the PDFs further. \section{Proton Structure and DIS at HERA} The knowledge of the proton PDFs is obtained to a large extent from measurements of the structure functions in deep inelastic scattering (DIS) experiments. In Fig.~\ref{dis_diagram} the DIS diagram is shown. The lepton is scattered off the nucleon via the exchange of a $\gamma$ or $Z^0$ boson (neutral current, NC, process) or via the exchange of a $W^{\pm}$ boson (charged current, CC). Here the scattering of an electron (or positron) off the proton is discussed.
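Before quoting the cross sections it is convenient to recall the standard DIS kinematic variables used below (with $k$, $p$ and $q$ denoting the four-momenta of the incoming lepton, the proton and the exchanged boson, respectively): \begin{eqnarray} \nonumber Q^2=-q^2\,, \qquad x=\frac{Q^2}{2\,p\cdot q}\,, \qquad y=\frac{p\cdot q}{p\cdot k}\,, \qquad Q^2 \simeq s\,x\,y\,, \end{eqnarray} where $s=(k+p)^2$ and the last relation holds when the lepton and proton masses are neglected.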
The NC (and similarly CC) cross section can be expressed in terms of the generalized structure functions: \begin{eqnarray} \nonumber \frac{d^2\sigma_{NC}^{e^{\pm} p}}{dxdQ^2}=\frac{2\pi\alpha^2}{xQ^4} \big [ Y_{+} \tilde F_2^{\pm} \mp Y_{-}x \tilde F_3^{\pm} - y^2 \tilde F_L^{\pm} \big ], \end{eqnarray} where $Y_{\pm} = 1 \pm (1-y)^2$, with $y$ being the fraction of the lepton energy transferred to the proton. The (generalized) structure function $F_2$ ($\tilde F_2$) is the dominant contribution to the cross section, $x \tilde F_3$ is important at high $Q^2$ and $\tilde F_L$ is sizable only at high $y$. In the framework of perturbative QCD the structure functions are directly related to the parton distribution functions, i.e. in leading order (LO) $F_2$ is the momentum sum of quark and anti-quark distributions, $F_2 \approx x \sum e^2_q (q+ \overline q)$, and $xF_3$ is related to their difference, $xF_3 \approx x \sum 2e_q a_q (q- \overline q)$. At higher orders, terms related to the gluon density distribution ($\alpha_s g$) appear. \begin{figure}[h] \center \resizebox{0.5\columnwidth}{!}{\includegraphics{dis.pdf} } \caption{\it Diagrams of neutral current (NC) and charged current (CC) deep inelastic scattering processes. The symbols denote the particles, the label ``$X$'' denotes the hadronic final state.} \label{dis_diagram} \end{figure} In analogy to neutral currents, the inclusive CC $ep$ cross section can be expressed in terms of structure functions, and in LO the $e^+p$ and $e^-p$ cross sections are sensitive to different quark densities: \begin{eqnarray} \nonumber \begin{array}{rll} e^{+}: & & \tilde \sigma_{CC}^{e^{+} p} = x[\overline u +\overline c] + (1-y)^2 x[ d+s ] \\ e^{-}: & & \tilde \sigma_{CC}^{e^{-} p} = x[ u +c] + (1-y)^2 x[\overline d +\overline s ]. \end{array} \end{eqnarray} \vspace{0.07cm} At HERA at DESY in Hamburg, electrons (or positrons) were collided with protons at centre-of-mass energies $\sqrt{s} = 225 - 318$~GeV.
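As a purely illustrative sketch of how the NC formula above is assembled (toy structure-function values and hypothetical function names, not fitted quantities; the inelasticity is reconstructed from $Q^2 = sxy$):

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def y_bjorken(x, Q2, s):
    # inelasticity from the DIS relation Q^2 = s*x*y (massless limit)
    return Q2 / (x * s)

def d2sigma_nc(x, Q2, s, F2, xF3, FL, charge=+1):
    # d^2 sigma / (dx dQ^2) for NC e^{+-} p from the structure functions;
    # charge=+1 selects e^+ p, charge=-1 selects e^- p scattering
    y = y_bjorken(x, Q2, s)
    Yp = 1.0 + (1.0 - y) ** 2
    Ym = 1.0 - (1.0 - y) ** 2
    # the xF3 term enters with opposite sign for e^+ and e^- beams
    return 2.0 * math.pi * ALPHA**2 / (x * Q2**2) * (Yp * F2 - charge * Ym * xF3 - y**2 * FL)

s = 318.0**2  # GeV^2, highest HERA centre-of-mass energy squared
sig_plus  = d2sigma_nc(0.01, 100.0, s, F2=1.2, xF3=0.05, FL=0.1, charge=+1)
sig_minus = d2sigma_nc(0.01, 100.0, s, F2=1.2, xF3=0.05, FL=0.1, charge=-1)
print(sig_minus > sig_plus)  # the xF3 term enhances the e^- p cross section
```

The sign flip of the $x\tilde F_3$ contribution between lepton charges is what makes the $e^+p$/$e^-p$ comparison sensitive to the valence-quark combination.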
The measurements of the NC and CC cross sections from HERA extend the kinematic regime in $Q^2$ by more than two orders of magnitude with respect to the fixed-target experiments and cover the wide $x$ range from $10^{-7}$ to 0.7. At the HERA collider experiments, H1 and ZEUS, the cross sections of NC and CC DIS are measured with high precision. The measurements of the two experiments are combined and are further used to determine the parton distribution functions HERAPDF~\cite{herapdf1.0}. \section{HERAPDF} The PDFs are determined from the structure function measurements using the corresponding coefficient functions calculated to a certain order in perturbative QCD (pQCD). The structure functions, and in turn the PDFs, depend on $x$ and $Q$. The $x$-dependence of the parton distributions is not yet calculable in pQCD and has to be parametrized at a certain starting scale $Q_0$. The dependence on $Q$ is described by the DGLAP evolution equations~\cite{dglap}. Starting from a parameterisation of the PDFs at a starting scale, either by making ad-hoc assumptions on their analytical form or by using neural-network techniques, fits to various sets of experimental data, with HERA DIS data being the backbone, are performed within the DGLAP evolution scheme. The resulting PDFs depend on the order of the perturbative QCD calculation, the assumptions about the PDF parametrization, the treatment of heavy quarks, the choice of the value of $\alpha_s (M_Z)$ and the treatment of the uncertainties. The data sets included in the PDF fit and the consistency of these data sets determine the experimental uncertainty of the PDFs. \begin{figure}[!h] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{nc_herapdf15.pdf}} \caption{\it Inclusive DIS cross sections for NC in $e^{\pm}$ collisions at HERA. The measurements of the H1 and ZEUS experiments are combined. Open (closed) symbols represent $e^{-}p$ ($e^{+}p$) scattering.
The shaded curves represent the QCD prediction based on HERAPDF1.5NLO.} \label{nc_hera15} \end{figure} The parton distributions HERAPDF~\cite{herapdf1.0} are determined using only combined HERA DIS data, where the correlations of the systematic uncertainties are properly taken into account. This allows the use of the conventional $\chi^2$ tolerance of $\Delta \chi^2=1$. Since this QCD analysis is solely based on $ep$ data, the PDFs do not depend on the approach for nuclear corrections needed for fixed-target data. Several phenomenological schemes of heavy quark treatment can be used in the HERAPDF approach. Therefore, direct tests of the models are possible. The full statistics of the HERA inclusive CC and NC data are used for NLO and NNLO QCD fits resulting in HERAPDF1.5~\cite{herapdf15}. As an example, the combined NC cross sections are shown in Fig.~\ref{nc_hera15} together with the QCD prediction based on HERAPDF1.5NLO. The QCD analysis HERAPDF1.5 follows the formalism, model and parametrisation assumptions as reported in~\cite{herapdf1.0}. The QCD predictions for the structure functions are obtained by solving the DGLAP evolution equations at NLO (or NNLO) in the $\overline{\rm MS}$ scheme with the renormalisation and factorisation scales chosen to be $Q^2$. The structure functions are then obtained by convolving the PDFs with the NLO coefficient functions calculated using the general-mass variable-flavour-number RT scheme~\cite{RTref}. For the parametrisation of the PDFs at the input scale the generic form $xf(x)=Ax^B(1-x)^C(1+Ex^2)$ is used. The parametrised PDFs are the gluon distribution, the valence quark distributions and the $u$-type and $d$-type anti-quark distributions. The normalisation parameters $A$ are constrained by the quark number and momentum sum-rules. \begin{figure}[!h] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{hera15nnlo_10000.pdf}} \caption{\it The parton distribution functions from HERAPDF1.5 NNLO.
The gluon and sea distributions are scaled down by a factor of 20. The experimental, model and parametrisation uncertainties are shown.} \label{herapdf15nnlofig} \end{figure} In Fig.~\ref{herapdf15nnlofig} the parton distributions HERAPDF1.5NNLO at $Q^2$ = 10000 GeV$^2$ are shown. In addition to the experimental uncertainties, variations of the model inputs and of the parametrisation in the determination of HERAPDF are performed and provided as additional eigenvectors. The model uncertainties are evaluated by varying the input assumptions on the minimum $Q^2$ of the data used in the fit, the stran\-ge\-ness fraction and the masses of the heavy quarks. The para\-me\-tri\-sa\-tion uncertainty is formed by an envelope of the maximal deviations from the central fit obtained by varying the parametrisation assumptions. HERA\-PDF1.5NLO and NNLO sets are the recommended HERA PDFs to be used for the predictions of processes at the LHC. The corresponding eigenvectors are available~\cite{lhapdf}. \section{Benchmarking HERAPDF} The PDFs are intrinsic properties of the proton and are therefore process-independent. Cross-section predictions for processes in proton-(anti)proton collisions can be obtained using HERAPDF, evolved in $Q^2$ using the DGLAP equations. The measurements of jet production at hadron colliders are an important instrument to probe the PDFs at high $x$ and also provide additional constraints on the value of $\alpha_S(M_Z)$. In Fig.~\ref{d0jets} the jet production cross sections as measured by the D0 experiment~\cite{d0jetpaper} are presented. The measurement is confronted with the QCD prediction at NLO~\cite{nlojet++,fastnlo} based on HERA\-PDF1.5NLO. The data are very well described by this prediction. \begin{figure}[!ht] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{d0_herapdf15.pdf}} \caption{\it Jet production cross section as a function of the jet transverse momentum for different ranges of pseudorapidity, as measured by the D0 collaboration. The data are represented by closed symbols.
The measurement is compared to the QCD calculation at NLO based on HERAPDF1.5NLO. The total PDF uncertainty and the hadronisation corrections on the prediction are shown as shaded bands. } \label{d0jets} \end{figure} In Fig.~\ref{jets_atlas} the jet measurement from the ATLAS experiment~\cite{atlas_jets} in a central rapidity bin is shown in comparison with NLO predictions using HERAPDF1.5NLO together with several other PDFs. The QCD prediction using HERAPDF1.5NLO describes the data very well. \begin{figure}[!h] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{atlas_jets_hera15.pdf}} \caption{\it Inclusive jet production cross section as a function of the jet transverse momentum, as measured by the ATLAS collaboration in the rapidity range $0<y<0.3$. The jets are identified using the anti-k$_t$ algorithm with $R = 0.4$. The data are represented in a ratio (stars) to the QCD prediction, using CTEQ6.6~\cite{cteq6.6} as the reference PDF. The central value for the QCD calculation at NLO based on HERAPDF1.5NLO is represented by closed circles surrounded by the error band shown as the hashed area.} \label{jets_atlas} \end{figure} Production of electroweak bosons provides important constraints on the light quark distributions. For example, the $W$ lepton charge asymmetry $A_l(W) = \frac{(\sigma_{W^+} - \sigma_{W^-})}{ (\sigma_{W^+} + \sigma_{W^-})} \approx \frac{(u_v - d_v)}{u_v + d_v + 2u_{sea}}$ is sensitive to the ratio of the valence $u$ and $d$ quark distributions. The $W$-boson muon asymmetry as measured by the CMS experiment~\cite{cms_w} is shown in Fig.~\ref{cms_w}. The measurement is compared to NLO predictions~\cite{mcfm} obtained using HERA\-PDF1.5NLO, MSTW08~\cite{mstw08} and CT10W~\cite{ct10w} PDFs. The prediction based on HERAPDF1.5NLO describes the data well. \begin{figure}[!h] \center \vspace*{-0.5cm} \resizebox{0.75\columnwidth}{!}{\includegraphics{W_CMS.pdf}} \caption{\it The W muon charge asymmetry as measured by the CMS experiment.
The measurement (closed symbols) is compared to the NLO prediction~\cite{mcfm} using HERAPDF1.5NLO (shaded band), MSTW08NLO (dotted line) and CT10W (dashed line).} \label{cms_w} \end{figure} Top quark pair production at the LHC probes the gluon density at high $x$. In Fig.~\ref{top_cms} the cross-section measurement of top pair production is shown as a function of the top-quark pole mass in comparison to approximate NNLO calculations~\cite{hathor,ahrens} based on HERAPDF1.5NNLO. The theory uncertainty accounts for the variation of the QCD scales, the PDF error and the variation of $\alpha_S(M_Z)$ in the PDF. For the PDF uncertainty of HERAPDF1.5NNLO, only the eigenvectors for experimental errors are used. The predictions describe the data very well. \begin{figure}[!h] \center \vspace*{-0.5cm} \resizebox{0.75\columnwidth}{!}{\includegraphics{mt_herapdf_1.pdf}} \caption{\it The top-pair production cross section measured by the CMS experiment (closed square), shown at the top-mass value assumed in the analysis. The mass dependence of the $t\bar{t}$ cross section according to the approximate NNLO QCD predictions~\cite{hathor} and~\cite{ahrens} is represented by the shaded and hashed bands, respectively. The dependence of the experimental measurement on the assumption on $m_t$ in the simulation used for efficiency and detector corrections is shown by the light shaded band. The closed circle represents the cross section measurement, corrected for the top pole mass, extracted using the calculation~\cite{hathor}.} \label{top_cms} \end{figure} \section{Global benchmarking exercise} Presently, the determination of PDFs is carried out by several groups, namely MSTW~\cite{MSTW}, CTEQ~\cite{CTEQ}, NNPDF~\cite{NNPDF}, HERA\-PDF~\cite{herapdf1.0}, AB(K)M~\cite{ABKM} and GJR~\cite{GJR}. The large number of PDF parameters and their treatment in the fitting procedure within the different groups results in differences among the provided PDFs.
In order to study these differences, a benchmarking exercise is being carried out by the PDF4LHC working group~\cite{pdf4lhc}, formed by the members of the PDF fitting groups mentioned above. As an example, the NLO prediction for the Higgs cross section ($M_H = 120$~GeV) at the LHC is shown in Fig.~\ref{higgsfig} for different PDF sets as a function of $\alpha_S(M_Z)$. For the different PDF groups, not only the value of $\alpha_S (M_Z)$ but also the running of the strong coupling is different, resulting in different cross-section predictions. \begin{figure}[!ht] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{ggH120GeVLHC7TeVnlo68cl.pdf}} \caption{\it NLO Higgs cross section predictions ($M_H = 120$ GeV) using different PDFs at the LHC with $\sqrt s = 7$ TeV.} \label{higgsfig} \end{figure} The HERAPDF group is an active participant in the benchmarking exercise. In contrast to other PDF groups, HERAPDF is not restricted to one particular heavy flavour treatment scheme; several schemes are implemented and can be tested. Also, by providing the PDF eigenvectors for model parameter and parametrization variations, HERAPDF allows for tests of specific parameterisation and model assumptions during the QCD ana\-ly\-sis of different data sets. In the following, the inclusion of semi-inclusive DIS data in the QCD analysis HERAPDF and the impact of these data on the assumptions on $\alpha_S(M_Z)$ and the charm quark mass value in the PDF fit are discussed. \section{Semi-inclusive data in HERAPDFs} Semi-inclusive measurements in DIS, like jet and heavy flavour production, provide additional constraints on the PDFs when included in the QCD analysis together with the inclusive DIS data. Jet production is directly sensitive to both the gluon distribution in the proton and the strong coupling $\alpha_S$. Therefore, including the jet data in the QCD analysis can help disentangle the effects from the gluon and $\alpha_S$ in the PDF fit.
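The role of the running of the coupling can be illustrated with the textbook one-loop solution of the renormalisation-group equation (an illustrative sketch only, not the actual evolution code of any PDF group):

```python
import math

def alpha_s_1loop(Q2, alpha_s_mz=0.118, mz=91.1876, nf=5):
    # one-loop running: d alpha_s / d ln Q^2 = -b0 * alpha_s^2,
    # with b0 = (33 - 2*nf) / (12*pi); boundary condition at M_Z
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_s_mz / (1.0 + b0 * alpha_s_mz * math.log(Q2 / mz**2))

# asymptotic freedom: the coupling decreases with increasing scale
print(alpha_s_1loop(10.0**2), alpha_s_1loop(1000.0**2))
```

Differences between the groups come from the value at $M_Z$, the perturbative order of the running and the flavour-threshold treatment, none of which is captured by this simple one-loop formula.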
Similarly, charm and beauty production in $ep$ collisions provide direct access to the gluon distribution in the proton, which also depends on the assumptions on the charm and beauty quark mass values used in the PDF fit. \subsection{Including jet data in the PDF fit: HERAPDF1.6} In addition to the combined HERA inclusive DIS data as used in the QCD analysis HERAPDF1.5, H1 and ZEUS measurements of jet production cross sections~\cite{jet_h1_zeus} are included in the PDF fit. The resulting parton distributions HERAPDF1.6~\cite{herapdf1.6} are determined using a fixed value of $\alpha_S(M_Z)$ and also using $\alpha_S(M_Z)$ as a free parameter in the fit. The impact of the inclusion of jet data in the PDF fit on the gluon distribution and the value of $\alpha_S$ is demonstrated in Fig.~\ref{pdf_free_als}. Here, the PDFs obtained using the inclusive data only (HERAPDF1.5) and the PDFs resulting from including the jet data (HERAPDF1.6) are determined using $\alpha_S(M_Z)$ as a free parameter in the QCD analysis. In the case of the simultaneous fit of the PDFs and $\alpha_s$ in HERAPDF1.5, the uncertainties on the gluon PDF become large at low $x$, but as soon as the jet data are included, the correlation between the gluon PDF and $\alpha_s (M_Z)$ is reduced, resulting in significantly reduced uncertainties on the gluon PDF. \begin{figure}[!ht] \hspace*{-0.4cm} \resizebox{0.51\columnwidth}{!}{\includegraphics{hera15f_free_as.pdf}} \resizebox{0.51\columnwidth}{!}{\includegraphics{hera16+jets_free_as.pdf}} \caption{\it Left panel: The parton distribution functions from HERAPDF1.5. Right panel: the parton distribution functions from HERAPDF1.6 (with HERA jet data included in the fit). In both cases, the QCD analysis is performed treating $\alpha_s (M_Z)$ as a free parameter in the fit.
The PDFs are presented for $Q^2$=10 GeV$^2$.} \label{pdf_free_als} \end{figure} In Fig.~\ref{alfas_scan} the quality of the PDF fit in terms of $\chi^2$ is represented as a function of the assumption on the value of $\alpha_S (M_Z)$. In the case of HERAPDF1.5, where only inclusive data are used, a very shallow minimum in the $\chi^2$ distribution is observed. The inclusion of the jet measurements in the fit results in a clear minimum, which allows the simultaneous determination of the PDFs and $\alpha_S(M_Z)$. \begin{figure}[!h] \center \resizebox{0.7\columnwidth}{!}{\includegraphics{alfa_s_scan.pdf}} \caption{\it Distribution of $\chi^2$ for the PDF fit as a function of the assumption on the $\alpha_S(M_Z)$ value. The dashed line corresponds to HERAPDF1.5, where only inclusive DIS data are used. The solid line represents HERAPDF1.6, where the jet data are included.} \label{alfas_scan} \end{figure} A value of $\alpha_s (M_Z)~=~0.1202~\pm~0.0013$(exp)~$\pm~0.0007$ (mod/param) $\pm 0.0012$(hadronisation)$^{+0.0045}_{-0.0036}$(scale) is determined~\cite{herapdf1.6}. This result is in very good agreement with other $\alpha_S$ determinations at HERA and with the world average, as shown in Fig.~\ref{alfas_all}. It is important to note that the dominant uncertainty arises from the variation of the renormalisation and factorisation scales in the NLO calculation for the jet cross sections. This variation is used to mimic the effect of the missing contributions from higher orders. \begin{figure}[!h] \center \resizebox{0.75\columnwidth}{!}{\includegraphics{alfas.pdf}} \caption{\it The summary of $\alpha_S(M_Z)$ determination results using the jet production at HERA as compared to the world average. The upper point corresponds to the simultaneous determination of $\alpha_S(M_Z)$ and the PDF, as described in the text.
The experimental uncertainties are represented by solid lines, the theory uncertainties are shown by dashed lines.} \label{alfas_all} \end{figure} \subsection{Charm quark measurements in the PDF fit} The factorisation scheme used for the PDF determination depends on the number of active flavours in the proton, which changes when the scale crosses the thresholds at which the charm and beauty quarks can be treated as partons. These thresholds are determined by the charm and beauty quark masses. Therefore, the treatment of heavy quarks and the assumptions on their masses are of particular importance in the QCD analysis of the proton structure. Different approaches to treat heavy quarks (heavy quark schemes) are used by the different PDF fitting groups, corresponding to different treatments of the mass terms in the perturbative calculations, but also implying differences in the interpretation of, and the assumptions on, the values of the heavy quark masses. Measurements of charm and beauty production can help constrain some of these assumptions. The charm contribution, $F_2^c$, to the proton structure function $F_2$ is measured at H1 and ZEUS using different charm tagging techniques. These measurements are combined~\cite{hera_f2c} taking into account the correlations of the systematic uncertainties. The combined $F_2^c$ data are included in the QCD analysis of the inclusive DIS cross sections, and the effect on the PDFs using different assumptions on the charm quark mass, $m_c$, is studied~\cite{qcd_charm}. The sensitivity of the PDF fit to the $m_c$ value when using the combined $F_2^c$ data is used to constrain the assumptions on $m_c$ in different heavy quark schemes~\cite{charm_mass_scan}. The $\chi^2$ values of the PDF fit including the charm data are determined as a function of the input value of the charm quark mass, $m_c^{mod}$, using different heavy quark schemes, as shown in Fig.~\ref{charm_mass_scan}.
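The read-off of an optimal value and its $\Delta\chi^2=1$ uncertainty from such a scan can be sketched with a parabolic approximation (synthetic scan points and hypothetical function names, not the actual values of the figure):

```python
def parabola_scan_min(points):
    # exact parabola chi2(m) = a*m^2 + b*m + c through three scan points;
    # returns the minimum position and the Delta(chi2)=1 half-width
    (m1, c1), (m2, c2), (m3, c3) = points
    a = ((c3 - c1) / (m3 - m1) - (c2 - c1) / (m2 - m1)) / (m3 - m2)
    b = (c2 - c1) / (m2 - m1) - a * (m1 + m2)
    m0 = -b / (2.0 * a)            # position of the chi2 minimum
    delta_m = (1.0 / a) ** 0.5     # where chi2 rises by one unit
    return m0, delta_m

# synthetic scan: chi2 = 600 + ((m - 1.4)/0.05)^2 sampled at three masses
scan = [(1.3, 604.0), (1.4, 600.0), (1.5, 604.0)]
m_opt, dm = parabola_scan_min(scan)
print(round(m_opt, 6), round(dm, 6))  # -> 1.4 0.05
```

In practice the scans use many points, and asymmetric uncertainties can be read off directly from the $\Delta\chi^2=1$ crossings; the parabolic read-off above is only the standard first approximation.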
Different assumptions on $m_c^{mod}$ in VFN schemes impact the charm contribution to the sea quark distribution and thus affect the composition of $x \overline U(x)$ from the $x\overline u(x)$ and the $x\overline c(x)$ contributions. These in turn influence the value of the $W^{\pm}$ and $Z$ cross-section predictions at the LHC. In Fig.~\ref{charm_for_lhc} the NLO prediction~\cite{mcfm} for the $W^+$ production cross section is shown, using parton distributions evaluated with different assumptions on $m_c^{mod}$ in various heavy quark schemes. \begin{figure}[!h] \center \vspace*{-0.5cm} \resizebox{0.75\columnwidth}{!}{\includegraphics{charm_mass_scan.pdf}} \caption{\it Comparison of the $\chi^2$ distributions of fits to the inclusive HERA I + $F_2^{c\bar c}$ data using different heavy flavour schemes, represented as lines of different styles.} \label{charm_mass_scan} \end{figure} \begin{figure}[!h] \center \vspace*{-0.8cm} \resizebox{0.75\columnwidth}{!}{\includegraphics{charm_w_lhc.pdf}} \caption{\it NLO prediction of $\sigma_{W^{+}}$ at the LHC for $\sqrt s = $ 7 TeV as a function of $m_c^{mod}$ in the input PDF. The lines show predictions for different VFN schemes. The stars show the predictions obtained with the optimal value of $m_c^{mod}$ used in a given scheme. The dashed horizontal lines indicate the range of $\sigma_{W^{+}}$ determined for $m_c^{mod}$ = $m_c^{mod}$ (opt).} \label{charm_for_lhc} \end{figure} Taking into account the whole spread of the cross-section predictions using the studied schemes, an uncertainty of 7\% on the $W^+$ production cross section arises due to the assumption on $m_c^{mod}$ in the PDF. However, when using the optimal values, $m_c^{mod}$ (opt), corresponding to the minima in Fig.~\ref{charm_mass_scan} as constrained by the HERA charm data, this uncertainty is reduced to 1\%. \section{Summary} The precision of the parton distribution functions is essential for accurate predictions of cross sections of processes at hadron colliders.
The proton PDFs are determined using the experimental data of DIS and proton-proton collisions. Combined data of the HERA collider experiments provide the most precise constraints on the PDFs at small and medium $x$. HERAPDF is one of the modern QCD analyses in which PDFs are determined. The advantages of these PDFs are that no nuclear corrections are needed (in contrast to PDFs using fixed-target data), that the systematic uncertainties of the experimental data are treated consistently, and that several phenomenological approaches to heavy flavour treatment are implemented. Currently, HERAPDF1.5 at NLO and NNLO are among the recommended parton densities for predictions of LHC cross sections. Recent developments in the HERAPDF fits include QCD analyses of the HERA inclusive DIS data together with jet and charm measurements. The inclusion of the jet measurements in the HERAPDF analysis reduces the correlation between the gluon distribution and the strong coupling constant. In such a fit, the PDF is determined together with the $\alpha_S(M_Z)$ value. The resulting $\alpha_S(M_Z)$ value is in very good agreement with the world average, and its precision is limited by the missing NNLO calculation for jet production. The inclusion of the charm data reduces the correlation between the gluon density and the value of the charm mass used in the different schemes of heavy flavour treatment in the PDF fit. In particular, a proper choice of the charm quark mass value is important for accurate QCD predictions of $W$ and $Z$ boson rates at the LHC. The QCD predictions based on HERAPDF1.5 describe the measurements at the Tevatron and the LHC very well. With increasing precision of the LHC data, particular processes like $W$-boson, jet or top-pair production will provide additional constraints on the PDFs. The open source code for QCD analysis of different data sets, HE\-RA\-Fit\-ter~\cite{herafitter}, has been released by the H1 and ZEUS collaborations.
The program aims to implement all available schemes for heavy flavour treatment. HERA\-Fit\-ter is used in the ATLAS and CMS experiments to study the impact of electroweak boson production, jet production and top quark production on the proton PDFs.
\section{Introduction} \label{sec:introduction} The widespread availability and ease of use of Online Social Networks (OSN) have made them the ideal setting for the proliferation of fictitious and malicious accounts~\cite{liu2014}. Indeed, recent work uncovered the existence of large numbers of OSN accounts that are purposely created to distribute unsolicited spam, advertise events and products of doubtful legality, sponsor public characters and, ultimately, bias public opinion~\cite{ferrara2016,Jiang2016b}. Moreover, the plague of such spammers and bots has given rise to an ingenious and lucrative ``underground economy'', where account vendors, their customers, and oblivious victims have played their parts since the very introduction of social networks~\cite{Stringhini:2012, Stringhini:2013,Thomas2013}. One of the most fascinating peculiarities of spambots is that they ``evolve'' over time, adopting sophisticated techniques to evade early-established detection approaches, such as those based on the textual content of shared messages~\cite{Lee:2010}, posting patterns~\cite{Stringhini:2010} and social relationships~\cite{Ghosh:2012}. As evolving spammers became clever at escaping detection, for instance by changing discussion topics and posting activities, researchers kept pace and proposed more complex models, such as those based on the interaction graphs of the accounts under investigation~\cite{yang2013, hu2014}. Noticeably, spambot evolution still goes on. Recent investigations anecdotally highlight how new waves of \textit{social spambots} are rising~\cite{ferrara2016, zhang2016}. In this paper, we target these new waves, finding evidence of the difficulties for OSN users to distinguish between genuine and malicious accounts. We also highlight the difficulties for OSN administrators to take appropriate countermeasures against the takeover of evolving spambots.
Remarkably, a large number of tools and techniques have been proposed by Academia to detect OSN spambots~\cite{ferrara2016,Jiang2016b}. Until recently, such tools have proved to be valid allies for the timely detection of spambots. Unfortunately, the characteristics of the new wave of social spambots are such that standard classification approaches, where a single account is evaluated according to a set of established features tested over known datasets, are no longer successful. In this work, we demonstrate this claim by investigating the performances of several state-of-the-art tools and techniques when struggling against the latest wave of social spambots. The unsatisfactory results of the surveyed techniques call for new approaches capable of turning the tide of this long-lasting fight. Interestingly, we are witnessing a paradigm shift in modeling and analyzing online accounts. Independently of one another, new research efforts have emerged that leverage characteristics of groups of accounts -- rather than those of a single account -- as a red flag for anomalous behaviors. We provide a review of these prominent research directions, highlighting the new dimensions to explore in order to successfully fight this novel generation of spambots.
\makeatletter{} \begin{table*}[ht] \scriptsize \centering \begin{tabular}{lp{0.45\textwidth}rrcc} \toprule && \multicolumn{3}{c}{{\textbf{statistics}}} &\\ \cmidrule{3-5} &&&&& \textbf{used in} \\ \textbf{dataset} & \textbf{description} & accounts & tweets & year & \textbf{section} \\ \midrule \texttt{genuine accounts} & verified accounts that are human-operated & 3,474 & 8,377,522 & 2011 & \ref{subsec:Twitter},~\ref{subsec:crowdsourcing} \\ \texttt{social spambots \#1} & retweeters of an Italian political candidate & 991 & 1,610,176 & 2012 & \ref{subsec:Twitter},~\ref{subsec:crowdsourcing} \\ \texttt{social spambots \#2 } & spammers of paid apps for mobile devices & 3,457 & 428,542 & 2014 & \ref{subsec:Twitter},~\ref{subsec:crowdsourcing} \\ \texttt{social spambots \#3 } & spammers of products on sale at \textit{Amazon.com} & 464 & 1,418,626 & 2011 & \ref{subsec:Twitter},~\ref{subsec:crowdsourcing} \\ \texttt{traditional spambots \#1} & training set of spammers used by Yang \textit{et al.} in~\cite{yang2013} & 1,000 & 145,094 & 2009 & \ref{subsec:Twitter} \\ \texttt{traditional spambots \#2} & spammers of scam URLs & 100 & 74,957 & 2014 & \ref{subsec:Twitter} \\ \texttt{traditional spambots \#3} & automated accounts spamming job offers & 433 & 5,794,931 & 2013 & \ref{subsec:crowdsourcing} \\ \texttt{traditional spambots \#4} & another group of automated accounts spamming job offers & 1,128 & 133,311 & 2009 & \ref{subsec:crowdsourcing} \\ \texttt{fake followers} & simple accounts that inflate the number of followers of another account & 3,351 & 196,027 & 2012 & \ref{subsec:Twitter} \\ \midrule \texttt{test set \#1} & mixed set of 50\% \texttt{genuine accounts} + 50\% \texttt{social spambots \#1} & 1,982 & 4,061,598 & -- & \ref{sec:theothers},~\ref{sec:newtrends} \\ \texttt{test set \#2} & mixed set of 50\% \texttt{genuine accounts} + 50\% \texttt{social spambots \#3} & 928 & 2,628,181 & -- & \ref{sec:theothers},~\ref{sec:newtrends} \\ \bottomrule \end{tabular} 
\caption{\small Statistics about the datasets used for this study. \label{tab:datasets}} \end{table*} \vskip 1em \noindent \textbf{Contributions.} Our main contributions are: \begin{itemize} \item We provide empirical evidence of the existence of a novel wave of Twitter spambots, which, up to now, had only been theorized~\cite{ferrara2016}. \item We evaluate if, and to what extent, state-of-the-art detection techniques succeed in spotting such new spambots. \item We critically review an emerging stream of research that adopts features tied to groups of accounts rather than features of individual accounts. \item We leverage the results of a crowdsourcing spambot detection campaign to draw new guidelines for the annotation of datasets comprising social spambots. \item Finally, we publicly release to the scientific community an annotated dataset\footnote{\scriptsize{\url{http://mib.projects.iit.cnr.it/dataset.html}}}, consisting of genuine accounts, traditional spambots, and ---for the first time--- the novel social spambots. \end{itemize} \makeatletter{} \section{Datasets} \label{sec:datasets} We describe the different Twitter datasets that constitute the real-world data used in our experiments. Table~\ref{tab:datasets} reports the name of each dataset, a brief description, and the number of accounts and tweets it features. The year represents the average of the creation years of the accounts belonging to the dataset. The \texttt{genuine accounts} dataset is a random sample of genuine (human-operated) accounts. We randomly contacted Twitter users by asking a simple question in natural language. All the replies to our questions were manually verified and all the 3,474 accounts that answered were certified as humans. The accounts that did not answer our question were discarded and are not used in this study.
The \texttt{social spambots \#1} dataset was created after observing the activities of a novel group of social bots that we discovered on Twitter during the last Mayoral election in Rome, in 2014. One of the runners-up employed a social media marketing firm for his electoral campaign, which made use of almost 1,000 automated accounts on Twitter to publicize his policies. Surprisingly, we found such automated accounts to be similar to genuine ones in every way. Every profile was accurately filled in with detailed -- yet fake -- personal information such as a (stolen) photo, a (fake) short bio, a (fake) location, etc. Those accounts also represented credible sources of information since they all had thousands of followers and friends, the majority of which were genuine users\footnote{\scriptsize{This was made possible also by the adoption of social engineering techniques, such as the photo of a young attractive woman as the profile picture and the occasional posting of provocative tweets.}}. Furthermore, the accounts showed a tweeting behavior apparently similar to that of genuine accounts, with a few tweets posted every day, mainly quotes from popular people, songs, and \textit{YouTube} videos. However, every time the political candidate posted a new tweet from his official account, all the automated accounts retweeted it within a time span of just a few minutes. Thus, the political candidate was able to reach many more accounts in addition to his direct followers and managed to alter Twitter engagement metrics during the electoral campaign. Amazingly, we also found dozens of human accounts that tried to engage in conversation with some of the spambots. The most common form of such human-to-spambot interaction was a human reply to one of the quotes tweeted by a spambot. We also discovered a second group of social bots, which we labeled \texttt{social spambots \#2}, who spent several months promoting the \verb|#TALNTS| hashtag.
Specifically, \textit{Talnts} is a mobile phone application for getting in touch with and hiring artists working in the fields of writing, digital photography, music, and more. The vast majority of tweets were harmless messages, occasionally interspersed with tweets mentioning a specific genuine (human) account and inviting him to buy the VIP version of the app from a Web store. Further, we uncovered a third group of social bots, \texttt{social spambots \#3}, which advertise products on sale on \textit{Amazon.com}. The deceitful activity was carried out by spamming URLs pointing to the advertised products. Like the retweeters of the Italian political candidate, this family of spambots also interleaved spam tweets with harmless and genuine ones. \makeatletter{} \begin{table*}[t] \scriptsize \centering \begin{tabular}{lrrrr} \toprule & \multicolumn{4}{c}{{\textbf{accounts}}}\\ \cmidrule{2-5} \textbf{dataset} & total & alive & deleted & suspended \\ \midrule \texttt{genuine accounts} & 3,474 & 3,353 (96.5\%) & 115 (3.3\%) & 6 (0.1\%) \\ \texttt{social spambots \#1} & 994 & 946 (95.2\%) & 2 (0.2\%) & 46 (4.6\%) \\ \texttt{social spambots \#2 } & 3,457 & 3,322 (96.1\%) & 1 (0.1\%) & 134 (3.8\%) \\ \texttt{social spambots \#3 } & 467 & 465 (99.6\%) & 2 (0.4\%) & 0 (0.0\%) \\ \texttt{traditional spambots \#1} & 1,000 & 889 (88.9\%) & 25 (2.5\%) & 86 (8.6\%) \\ \texttt{traditional spambots \#2} & 100 & 1 (1.0\%) & 0 (0.0\%) & 99 (99.0\%) \\ \texttt{fake followers} & 3,351 & 851 (25.4\%) & 38 (1.1\%) & 2,462 (73.5\%) \\ \bottomrule \end{tabular} \caption{\small Statistics about alive, deleted, and suspended accounts, for different groups of genuine and malicious accounts. \label{tab:survivability}} \end{table*} We exploited a Twitter crawler to collect data about all the accounts we suspected to belong to the three groups of social spambots.
All the accounts collected in this process have then undergone an internal manual verification phase to certify their automated nature. Among all the distinct retweeters of the Italian political candidate, 50.05\% (991 accounts) were certified as spambots. Similarly, 94.50\% (3,457 accounts) of the accounts that tweeted the \verb|#TALNTS| hashtag turned out to be spambots. Finally, 89.29\% (464 accounts) of the accounts that tweeted suspicious \textit{Amazon.com} URLs were also certified as spambots. The three sets of accounts represent our ground truth of novel social spambots. Our internal manual annotation has been carried out by comparing every account to all the others, in order to highlight possible similarities and common behaviors. This is in contrast with the typical annotation process, where accounts are labeled one by one, solely exploiting the characteristics of the account under investigation. In addition to genuine users and social spambots, we also collected several datasets of traditional spambots. Such datasets are used throughout the paper as a strong baseline. The \texttt{traditional spambots \#1} dataset is the training set used in~\cite{yang2013}, kindly provided to us by the authors of that work. In~\cite{yang2013}, the dataset has been used to train a machine learning classifier for the detection of evolving Twitter spambots. Accounts belonging to the \texttt{traditional spambots \#2} dataset are rather simplistic bots that repeatedly mention other users in tweets containing scam URLs. To lure users into clicking the malicious links, the content of their tweets invites the mentioned users to claim a monetary prize. The \texttt{traditional spambots \#3} and \texttt{traditional spambots \#4} datasets are related to two different groups of bots that repeatedly tweet about open job positions and job offers.
Fake followers are another kind of malicious accounts that recently gained interest both from platform administrators and from the scientific world~\cite{cresci2015}. Given that fake followers are rather simplistic in their design and functioning, they can serve as a weak baseline against which to compare social spambots. In April 2013, we bought 3,351 fake accounts from three different Twitter online markets, namely \textit{fastfollowerz.com}, \textit{intertwitter.com}, and \textit{twittertechnology.com}. All the accounts acquired in this way have been merged in order to obtain the \texttt{fake followers} dataset used in this study. By considering a diverse set of spammer accounts, we have captured many of the different dimensions currently exploited by spambots and tamperers to perpetrate their illicit activities. In detail, we have considered (i) fake follower frauds, (ii) retweet frauds, (iii) hashtag promotion, (iv) URL spamming, (v) scamming, and (vi) spam of generic messages. \makeatletter{} \section{Real-world experimentation} \label{sec:realworld} \subsection{Twitter monitoring} \label{subsec:Twitter} A first assessment of the extent and severity of the Twitter social spambot problem can be obtained by measuring Twitter's capacity to detect and remove them from the platform. This section thus answers the research question: \textbf{RQ1 --} \textit{To what extent is Twitter currently capable of detecting and removing social spambots?} Interesting insights can be gained by comparing the rate at which Twitter accounts are removed, for different types of malicious accounts.
The intuition is that accounts that are easily identified as malicious can be rapidly removed by platform administrators. Thus, in this experiment, we let different types of accounts behave for a rather long period of time (i.e., years). Then, we check whether Twitter managed to identify such accounts as malicious and to remove them from the platform. We perform this experiment on our set of genuine accounts, on our three groups of social spambots, on two groups of traditional spambots, and on the group of fake followers. In order to perform this experiment, we exploited Twitter's responses to API calls and, particularly, the Twitter error codes. Given a query to a specific account, Twitter's API replies with information regarding the status of the queried account. Specifically, accounts that are suspected to perform malicious activities get suspended by Twitter. API queries to a suspended account result in Twitter responding with the error code 63. API queries to accounts that have been deleted by their original owner result in Twitter responding with the error code 50. Instead, for accounts that are neither suspended nor deleted, Twitter replies with the full metadata information of the account, without issuing error codes. By exploiting this response mechanism, we were able to measure the \textit{survivability} of the different groups of accounts. Results of this experiment are reported in Table~\ref{tab:survivability} and are pictorially depicted in Figure~\ref{fig:survivability}. As shown in Table~\ref{tab:survivability}, \texttt{genuine accounts} feature a very high survival rate (96.5\%). In addition, among the no longer available accounts, the vast majority have been deleted by the original owner, rather than suspended by Twitter. These results are quite intuitive, considering that legitimate accounts rarely perform any kind of malicious activity.
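The status-checking logic just described can be sketched as follows. The response dictionaries are simplified, hypothetical stand-ins for parsed Twitter API replies; only the error codes themselves (63 for suspended, 50 for deleted accounts) come from the mechanism described above.

```python
# Classify an account as alive, suspended, or deleted from a (parsed)
# API response. The response format is a simplified stand-in for a real
# Twitter API reply; only error codes 63 and 50 are taken from the text.

SUSPENDED_CODE = 63   # Twitter error code for suspended accounts
DELETED_CODE = 50     # Twitter error code for deleted accounts

def account_status(response):
    """Map a parsed API response to 'alive', 'suspended', or 'deleted'."""
    for error in response.get("errors", []):
        if error.get("code") == SUSPENDED_CODE:
            return "suspended"
        if error.get("code") == DELETED_CODE:
            return "deleted"
    # No relevant error code: full account metadata was returned.
    return "alive"

statuses = [
    account_status({"errors": [{"code": 63}]}),
    account_status({"errors": [{"code": 50}]}),
    account_status({"id": 12345, "screen_name": "example"}),
]
print(statuses)   # ['suspended', 'deleted', 'alive']
```

Aggregating these per-account statuses over each dataset yields the alive/deleted/suspended counts of Table~\ref{tab:survivability}.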
Conversely, the simplest kind of malicious accounts, \texttt{fake followers}, have mostly been detected and suspended by Twitter. The same also applies to one of the two groups of traditional spambots, identified as \texttt{traditional spambots \#2} in Table~\ref{tab:survivability}, which features a suspension rate as high as 99\%. The most interesting results are however related to those kinds of malicious accounts that better mimic human behaviors. So far, \texttt{traditional spambots \#1} have largely managed to evade suspension, despite dating back to 2009. Indeed, only 8.6\% of the bots have been suspended, while 88.9\% of them are still alive. This seems to suggest that Twitter's spambot detection mechanisms are still unable to accurately identify such accounts, while recent solutions proposed by Academia have succeeded in this task~\cite{yang2013}. Twitter's performance in suspending malicious accounts is even worse if we consider social spambots. All three groups of social spambots feature very high survival rates, respectively 95.2\%, 96.1\%, and 99.6\%. Even if the difference between the survival rate of social spambots and that of \texttt{traditional spambots \#1} is marginal, these results nonetheless suggest an increased difficulty for the detection of social spambots. Table~\ref{tab:surv-stat-sign} also reports the results of a comparison of the ratios of alive, deleted, and suspended accounts between spambots and \texttt{genuine accounts}. As shown, social spambots feature very small differences with respect to genuine accounts ($\sim\pm3\%$). Some of these differences are not even statistically significant, according to a chi-square test. \texttt{Traditional spambots \#1} have differences of $\sim\pm8\%$ that are highly significant ($p < 0.01$) for alive and suspended accounts. Instead, \texttt{traditional spambots \#2} and \texttt{fake followers} show massive differences: $\sim\pm96\%$ and $\sim\pm72\%$, respectively.
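The chi-square comparison mentioned above can be sketched on a $2 \times 2$ contingency table (alive vs. suspended, one spambot group vs. \texttt{genuine accounts}), using counts from Table~\ref{tab:survivability}. The implementation below is a generic Pearson chi-square statistic, not the exact test configuration used in the study.

```python
# Pearson chi-square statistic for a contingency table (list of rows).
# Counts below are the alive/suspended columns of Table 2 for genuine
# accounts vs. traditional spambots #1; the test setup is a sketch only.

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# alive vs. suspended: genuine (3353, 6), traditional spambots #1 (889, 86)
stat = chi_square([[3353, 6], [889, 86]])
print(stat)   # large statistic, i.e. a highly significant difference
```

For one degree of freedom, a statistic this large corresponds to $p < 0.01$, matching the significance reported for \texttt{traditional spambots \#1} in Table~\ref{tab:surv-stat-sign}.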
Figure~\ref{fig:survivability} shows results of the survivability experiment, with respect to the account age\footnote{\scriptsize{Account age is computed as the number of days between the account's creation date and the day we performed the experiment.}}. This allows us to understand whether temporal patterns exist in the way malicious accounts are created, and whether Twitter's mechanisms for suspending malicious accounts are related to an account's age. For instance, Twitter might be better at detecting and suspending older accounts than newer ones. However, this hypothesis can be ruled out by considering that 99\% of \texttt{traditional spambots \#2} accounts have been suspended despite being younger than most of the social spambots. Overall, an analysis of Figure~\ref{fig:survivability} shows that account suspensions seem to depend on the type of the account, its design and behavior, rather than on its age. \begin{figure} \centering {\includegraphics[width=0.5\textwidth]{img/survivability_new.png}} \caption{\small Survival rates for different types of accounts.\label{fig:survivability}} \end{figure} \makeatletter{} \begin{table}[t] \scriptsize \centering \begin{tabular}{lr@{}l r@{}l r@{}l} \toprule & \multicolumn{6}{c}{{\textbf{accounts}}}\\ \cmidrule{2-7} \textbf{dataset} & \multicolumn{2}{c}{alive} & \multicolumn{2}{c}{deleted} & \multicolumn{2}{c}{suspended} \\ \midrule \texttt{social spambots \#1} & $-$1.&3\%\textsuperscript{*} & $-$3.&1\%\textsuperscript{***} & $+$4.&5\%\textsuperscript{***} \\ \texttt{social spambots \#2 } & $-$0.&4\% & $-$3.&2\%\textsuperscript{***} & $+$3.&7\%\textsuperscript{***} \\ \texttt{social spambots \#3 } & $+$3.&1\%\textsuperscript{***} & $-$2.&9\%\textsuperscript{***} & $-$0.&1\% \\ \texttt{traditional spambots \#1} & $-$7.&6\%\textsuperscript{***} & $-$0.&8\% & $+$8.&7\%\textsuperscript{***} \\ \texttt{traditional spambots \#2} & $-$95.&5\%\textsuperscript{***} & $-$3.&3\% & $+$98.&9\%\textsuperscript{***} \\ \texttt{fake followers} &
$-$71.&1\%\textsuperscript{***} & $-$2.&2\%\textsuperscript{***} & $+$73.&4\%\textsuperscript{***} \\ \bottomrule \multicolumn{4}{l}{\rule{0pt}{1.2\normalbaselineskip} \textsuperscript{***}$p < 0.01$, \textsuperscript{**}$p < 0.05$, \textsuperscript{*}$p < 0.1$} \end{tabular} \caption{\small Effect size and statistical significance of the difference between the survivability results of malicious accounts with respect to those of \texttt{genuine accounts}. \label{tab:surv-stat-sign}} \end{table} Results reported in this first experiment already reveal interesting differences between social spambots, traditional spambots, and fake followers. Notably, social spambots appear to be more similar to genuine accounts than to traditional spambots, with regard to Twitter suspensions.\vfill \subsection{Crowdsourcing: tasks and results} \label{subsec:crowdsourcing} This section addresses the following research questions: \textbf{RQ2 --} \textit{Do humans succeed in detecting social spambots in the wild?} \textbf{RQ3 --} \textit{Do they succeed in discriminating between traditional spambots, social spambots, and genuine accounts?} Even if Twitter users were generally capable of distinguishing between traditional spambots and genuine accounts, they might still find it difficult to spot social spambots in the wild. If confirmed, this would provide additional evidence of the evolutionary step characterizing the new social spambots with respect to traditional ones.
\begin{figure}[h] \centering {\includegraphics[width=0.49\textwidth]{img/crowdflower.png}} \caption{\small Dataset for the crowdsourcing experiment.\label{fig:crowdflower}} \end{figure} \makeatletter{} \begin{table*}[t] \scriptsize \centering \begin{tabular}{lrrrrrrrrrr} \toprule &&& \multicolumn{5}{c}{{\textbf{detection results}}} &&& \\ \cmidrule{4-8} \textbf{type} & \textbf{accounts$^{\sharp}$} && TP & TN & FP & FN & Accuracy && \textbf{Fleiss' kappa ($\kappa$)} \\ \midrule traditional spambots & 1,516 && 1,385 & 0 & 0 & 131 & 0.9136 && 0.007 \\ social spambots & 1,393 && 328 & 0 & 0 & 1,065 & 0.2355 && 0.186 \\ \texttt{genuine accounts} & 1,377 && 0 & 1,267 & 110 & 0 & 0.9201 && 0.410 \\ \bottomrule \multicolumn{11}{l}{ \begin{minipage}[t]{1.5\columnwidth} $^{\sharp}$: The total number of accounts considered is 4,286 instead of 4,428 because 142 accounts (3.2\%) got deleted, suspended, or protected during our campaign. \end{minipage}} \end{tabular} \caption{\small Results of the crowdsourcing campaign on spambots detection. \label{tab:crowdresults}} \end{table*} To answer these research questions, we asked a large set of real-world users to classify the accounts in our datasets. To obtain a large and diverse set of users, we recruited contributors from the CrowdFlower\footnote{\scriptsize{\url{https://www.crowdflower.com/}}} crowdsourcing platform. Figure~\ref{fig:crowdflower} shows the distribution of the 4,428 accounts that we have employed for this crowdsourcing experiment, picked up from the datasets in Section~\ref{sec:datasets}. Contributors were asked to assign to each account one of the following classes: (i) spambot, (ii) genuine, and (iii) unable to classify. The latter class (iii) has been inserted to deal with Twitter accounts possibly getting deleted, suspended, or protected\footnote{\scriptsize{\textit{Protected} accounts are those accounts whose tweets and timeline are not publicly visible.}} while our crowdsourcing task was ongoing. 
Notably, our experiment differs from those typically carried out with crowdsourcing. In fact, crowdsourcing tasks are typically aimed at creating a ground truth (i.e., labeled) dataset for later use. For instance, crowdsourcing is often used to create large training sets for machine learning algorithms. Here, instead, the datasets are labeled in advance. Thus, by asking contributors to (re-)classify our datasets, we are actually evaluating their ability to spot the different types of accounts. \vskip 0.5em \noindent \textbf{Enforcing results reliability.} We only recruited contributors who were tech-savvy and Twitter users themselves, in order to be reasonably sure about their knowledge of Twitter and its dynamics. Furthermore, we required each account to be classified by at least 3 different contributors, with the final class decided by majority voting. We also capped at 100 the number of accounts that a single contributor could classify. In this way, we have obtained redundant results from a broad set of contributors. Then, in order to further guarantee the reliability of our crowdsourcing results, we designed a set of ``test'' (or ``gold'') questions aimed at evaluating the quality of contributors' answers. A test question is one for which the correct answer is already known by the system. Within the crowdsourcing platform, such questions are indistinguishable from standard ones and are randomly mixed among all the questions, so that contributors cannot know whether they are answering a test or a standard question. Contributors' answers to test questions were checked against the known correct answers. Only the trusted contributors who correctly answered more than 70\% of the test questions have been considered in our study.
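The aggregation and quality-control rules described above (majority voting over at least three annotations per account, and the 70\% accuracy threshold on test questions) can be sketched as follows; the function names and toy inputs are illustrative, not part of the actual campaign pipeline.

```python
# Sketch of the crowdsourcing rules described in the text: final label by
# majority vote over >= 3 annotators, and contributors trusted only if
# they answer more than 70% of the gold/test questions correctly.
from collections import Counter

def majority_vote(labels):
    """Final class for an account, decided by majority voting."""
    assert len(labels) >= 3, "each account needs at least 3 annotations"
    return Counter(labels).most_common(1)[0][0]

def is_trusted(test_answers, correct_answers, threshold=0.70):
    """A contributor is trusted if strictly more than `threshold` of the
    gold-question answers match the known correct answers."""
    hits = sum(a == c for a, c in zip(test_answers, correct_answers))
    return hits / len(correct_answers) > threshold

label = majority_vote(["spambot", "genuine", "spambot"])
trusted = is_trusted(["spambot"] * 8 + ["genuine"] * 2, ["spambot"] * 10)
print(label, trusted)   # spambot True
```

Only annotations produced by trusted contributors enter the majority vote that fixes each account's final label.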
Our test questions consist of accounts whose nature is ``easily'' recognizable, and specifically: (i) a set of traditional spambots sampled from the dataset of Yang {\it et al.}~\cite{yang2013}, (ii) a subset of genuine accounts, and (iii) a set of suspended, deleted, and protected accounts. Notably, by designing test questions with traditional spambots and genuine accounts, and by enforcing the policy of at least 70\% correct answers, we can guarantee that all our trusted contributors are typically able to detect traditional spambots and to distinguish them from genuine accounts. This further strengthens the results of their classification of the novel social spambots. Table~\ref{tab:crowdsetting} shows a recap of the settings used in our crowdsourcing campaign. The thorough description of our campaign, with the complete set of instructions, a list of example accounts, and the task preview, is available online\footnote{\scriptsize{\url{http://wafi.iit.cnr.it/fake/fake/crowdflower/instructions/}}}. The campaign was completed when each of the 4,428 accounts had been classified by 3 different trusted contributors. \makeatletter{} \begin{table}[h] \scriptsize \centering \begin{tabular}{lr} \toprule Num. accounts to classify & 4,428 \\ Min. contributors per account & 3 \\ Max. answers per contributor & 100 \\ Num. test questions & 25 \\ Min. accuracy threshold & 70\% \\ Reward & 0.1 US\$ per 5 accounts classified \\ \bottomrule \end{tabular} \caption{\small Crowdsourcing campaign settings. \label{tab:crowdsetting}} \end{table} \noindent \textbf{Results of the crowdsourcing campaign.} Overall, we collected 13,284 answers given by 247 trusted contributors from 42 different countries. Figure~\ref{fig:crowdcountries} shows the distribution of answers per country. Figure~\ref{fig:crowdcontrib} depicts the distribution of answers per contributor.
CrowdFlower also allows contributors to evaluate crowdsourcing campaigns for: (i) clarity of instructions, (ii) fairness of the test questions, (iii) ease of the task, and (iv) appropriateness of the payment. Out of the 247 participating contributors, 60 of them ($\sim24\%$) evaluated our campaign, leading to a convincing aggregated score of 3.7/5, as shown in detail in Table~\ref{tab:crowdscore}. Our campaign cost us 410 US\$ in total. \makeatletter{} \begin{table}[t] \scriptsize \centering \begin{tabular}{lr} \toprule Instructions clear & 4.0 / 5 \\ Test questions fair & 3.5 / 5 \\ Ease of job & 3.5 / 5 \\ Pay & 3.8 / 5 \\ \midrule Overall & 3.7 / 5 \\ \bottomrule \end{tabular} \caption{\small Contributors' evaluation of our campaign. \label{tab:crowdscore}} \end{table} \makeatletter{} \begin{table*}[t] \scriptsize \centering \begin{tabular}{llrrrrrr} \toprule && \multicolumn{6}{c}{\textbf{detection results}} \\ \cmidrule{3-8} \textbf{technique} & \textbf{type} & Precision & Recall & Specificity & Accuracy & F-Measure & MCC \\ \midrule \multicolumn{8}{l}{\texttt{test set \#1}} \\ Twitter countermeasures & mixed & \textbf{1.000} & 0.094 & \textbf{1.000} & 0.691 & 0.171 & 0.252 \\ Human annotators & manual & 0.267 & 0.080 & 0.921 & 0.698 & 0.123 & 0.001 \\ BotOrNot?~\cite{davis2016} & supervised & 0.471 & 0.208 & 0.918 & 0.734 & 0.288 & 0.174 \\ C.
Yang \textit{et al.}~\cite{yang2013} & supervised & 0.563 & 0.170 & 0.860 & 0.506 & 0.261 & 0.043 \\ Miller \textit{et al.}~\cite{miller2014} & unsupervised & 0.555 & 0.358 & 0.698 & 0.526 & 0.435 & 0.059 \\ Ahmed \textit{et al.}~\cite{ahmed2013}$^{\sharp}$ & unsupervised & 0.945 & 0.944 & 0.945 & 0.943 & 0.944 & 0.886 \\ Cresci \textit{et al.}~\cite{IntSys2015} & unsupervised & 0.982 & \textbf{0.972} & 0.981 & \textbf{0.976} & \textbf{0.977} & \textbf{0.952} \\ \midrule \multicolumn{8}{l}{\texttt{test set \#2}} \\ Twitter countermeasures & mixed & \textbf{1.000} & 0.004 & \textbf{1.000} & 0.502 & 0.008 & 0.046 \\ Human annotators & manual & 0.647 & 0.509 & 0.921 & 0.829 & 0.570 & 0.470 \\ BotOrNot?~\cite{davis2016} & supervised & 0.635 & \textbf{0.950} & 0.981 & 0.922 & 0.761 & 0.738 \\ C. Yang \textit{et al.}~\cite{yang2013} & supervised & 0.727 & 0.409 & 0.848 & 0.629 & 0.524 & 0.287 \\ Miller \textit{et al.}~\cite{miller2014} & unsupervised & 0.467 & 0.306 & 0.654 & 0.481 & 0.370 & -0.043 \\ Ahmed \textit{et al.}~\cite{ahmed2013}$^{\sharp}$ & unsupervised & 0.913 & 0.935 & 0.912 & 0.923 & \textbf{0.923} & 0.847 \\ Cresci \textit{et al.}~\cite{IntSys2015} & unsupervised & \textbf{1.000} & 0.858 & \textbf{1.000} & \textbf{0.929} & \textbf{0.923} & \textbf{0.867} \\ \bottomrule \multicolumn{8}{l}{ \begin{minipage}[t]{1.5\columnwidth} $^{\sharp}$: Modified by employing \textit{fastgreedy} instead of \textit{MCL} for the graph clustering step. \end{minipage}} \end{tabular} \caption{\small Comparison among the spambot detection techniques, tools, and algorithms surveyed in this study. 
For each test set, the highest values in each evaluation metric are shown in bold.} \label{tab:results} \end{table*} \begin{figure}[t] \centering \subfigure[t][Top 20 countries.\label{fig:crowdcountries}]{\includegraphics[width=0.45\columnwidth]{img/answers_per_country_top20.png}} \subfigure[t][Answers per contributor.\label{fig:crowdcontrib}] {\includegraphics[width=0.50\columnwidth]{img/answers_per_contributor.png}} \caption{\label{fig:crowd}Crowdsourcing campaign details.} \end{figure} The most interesting results of our crowdsourcing campaign are undoubtedly related to the detection performance of our human contributors. As reported in Table~\ref{tab:crowdresults}, overall, the human annotators obtained an accuracy of less than 0.24 on the social spambots, with more than 1,000 False Negatives (FN), meaning that contributors classified more than 1,000 accounts as genuine, when they actually belonged to the dataset of the last generation of spambots. Human detection performance for the two other groups of accounts, namely traditional spambots and genuine accounts, is instead quite satisfactory, with accuracies of 0.91 and 0.92, respectively. These important results further highlight the existence of a striking difference between traditional and social spambots. More worryingly, they also suggest that humans might not be able to detect social spambots in the wild, and to distinguish them from genuine accounts.
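For reference, all the evaluation metrics reported in Table~\ref{tab:results} (Precision, Recall, Specificity, Accuracy, F-Measure, MCC) can be derived from the four entries of a binary confusion matrix. The following minimal sketch computes them; the counts are purely hypothetical:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard detection metrics from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                   # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    # Matthews Correlation Coefficient: robust even on skewed classes
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return precision, recall, specificity, accuracy, f_measure, mcc

# hypothetical counts for illustration only
p, r, s, a, f, m = binary_metrics(tp=95, fp=5, tn=90, fn=10)
```

Note that a detector that never raises an alarm on genuine accounts obtains a Precision of 1.0 regardless of how many spambots it misses, which is why the low Recall values in Table~\ref{tab:results} are the telling figure.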
Given that each account under investigation has been classified by at least 3 different contributors, we have also computed the Fleiss' kappa ($\kappa$) inter-rater agreement metric~\cite{gwet2014}. All inter-rater agreement metrics measure the level of agreement of different annotators on a task. The level of agreement can also be interpreted as a proxy for the difficulty of a task. In our experiment, human contributors showed moderate agreement for the classification of genuine accounts, with $\kappa = 0.410$. Instead, they showed very little agreement when classifying traditional spambots, as represented by $\kappa = 0.007$. This interesting result shows that, overall, the human contributors were able to correctly detect traditional spambots, as shown by the 0.91 accuracy, but also that contributors rarely agreed on the class. Surprisingly, we measured a slightly higher agreement for the classification of social spambots than for traditional ones, with $\kappa = 0.186$. These results imply that humans generally failed in classifying social spambots (accuracy $= 0.2355$) and, furthermore, that they agreed on this mistake more often than they did when (correctly) classifying traditional spambots. \vskip 0.5em \noindent \textbf{Annotation guidelines for spambots detection.} Despite the recent advances in machine learning-based detection systems, manual verification of accounts to assess their degree of automation is still carried out by platform administrators~\cite{ferrara2016}. In~\cite{wang2013}, it is reported that human experts ``consistently produced near-optimal results'' on a dataset of traditional spambots. However, the results of our crowdsourcing experiment confirmed that the traditional ``account-by-account'' annotation process used by human workers to evaluate social media accounts is no longer viable when applied to the detection of the novel wave of social spambots.
Given the importance of manual annotation for the creation of ground-truth datasets and for double-checking suspicious accounts on social networking platforms, we call for the adoption of new annotation methodologies that take into account the similarities and synchronized behaviors of the accounts. We have adopted a practical implementation of this methodology to annotate our datasets of social spambots. In particular, we have compared the timelines of large groups of accounts, in order to highlight tweeting similarities among them. By comparing the behaviors of different accounts, rather than by analyzing them one by one, we were able to spot the social spambots among all the collected accounts, as thoroughly described in Section~\ref{sec:datasets}. Therefore, we envisage adopting this methodology, and similar ones, to safeguard the manual annotation process from elusive social spambots. \section{Established techniques} \label{sec:theothers} So far, we have demonstrated that neither Twitter nor human operators are currently capable of identifying novel social spambots. Here, we investigate whether established tools and techniques are able to succeed in this task. Thus, our research question is: \textbf{RQ4 --} \textit{Are state-of-the-art scientific applications and techniques able to detect social spambots?} \vskip 0.5em \noindent \textbf{The \textit{BotOrNot?} service.} BotOrNot? is a publicly-available service\footnote{\scriptsize{\url{http://truthy.indiana.edu/botornot/}}} to evaluate the similarity of a Twitter account with the known characteristics of social spambots~\cite{davis2016}. It has been developed by the Indiana University at Bloomington and was released in May 2014. Claimed to be capable of detecting social spambots~\cite{ferrara2016}, it was, at the time of writing, the only publicly-available social spambot detection system. BotOrNot?
leverages a supervised machine-learning classifier that exploits more than 1,000 features of the Twitter account under investigation. Specifically, it employs off-the-shelf supervised learning algorithms trained with examples of both human and bot behaviors, based on the Texas A\&M dataset~\cite{leeKyumin2011} with 15,000 examples of each class and millions of tweets. Like most established techniques, BotOrNot? performs its analyses on an account-by-account basis. Despite being specifically designed for the detection of social spambots, the authors state that the detection performance of BotOrNot? against evolved spambots might be worse than that reported in~\cite{davis2016}. Here, we evaluate this point by querying the BotOrNot? service with our sets of genuine and social spambot accounts. As shown in Table~\ref{tab:results}, BotOrNot? achieves rather unsatisfactory results for the accounts of both \texttt{test set \#1} and \texttt{test set \#2} (such datasets are described in Table~\ref{tab:datasets}). Its detection performance is particularly bad for the accounts of \texttt{test set \#1} -- where the spambots are from the \texttt{social spambots \#1} group. The low values of F-Measure and Matthews Correlation Coefficient (MCC), respectively 0.288 and 0.174, are mainly due to the low Recall. In turn, this represents a tendency to label \texttt{social spambots \#1} as genuine accounts. \vskip 0.5em \noindent \textbf{Supervised spambot classification.} Among the many supervised classification approaches to spambot detection proposed in recent years by Academia, we decided to experiment with the one presented by C. Yang \textit{et al.} in~\cite{yang2013}, since it focuses on the detection of \textit{evolving} Twitter spambots. Thus, it is interesting to evaluate whether the system recently presented in~\cite{yang2013} is actually able to detect the sophisticated social spambots.
This supervised system provides a machine learning classifier that infers whether a Twitter account is genuine or a spambot by relying on an account's relationships, tweeting timing, and level of automation. We reproduced this classifier by implementing and computing all the features proposed in~\cite{yang2013}, and by training the classifier with its original dataset. Results in Table~\ref{tab:results} show that the system fails to correctly classify the novel social spambots. Similarly to the results of the BotOrNot? service, the worst results of this system in both \texttt{test set \#1} and \texttt{test set \#2} are related to the Recall metric. This means that this classifier, too, labeled social spambots as genuine accounts. \vskip 0.5em \noindent \textbf{Unsupervised spambot detection via Twitter stream clustering.} Our initial claim, supported by preliminary work~\cite{ferrara2016, zhang2016}, is that social spambots might be designed so sophisticatedly that it is very difficult to distinguish them from genuine accounts when observed one by one. If demonstrated, this claim would imply that supervised classification approaches are intrinsically worse than unsupervised ones for the detection of social spambots. For this reason, we have also experimented with unsupervised approaches for spambot detection. The approach in~\cite{miller2014} considers vectors made of 126 features extracted from both accounts and tweets as input to modified versions of the DenStream~\cite{cao2006} and StreamKM++~\cite{ackermann2012} clustering algorithms, to cluster feature vectors of a set of unlabeled accounts. We implemented the system proposed in~\cite{miller2014} to cluster the accounts of our 2 test sets. As shown in Table~\ref{tab:results}, it achieved the worst performance among all the techniques we benchmarked in this study. Low values of both Precision and Recall mean incomplete and unreliable spambot detection.
Among the 126 features, 95 are based on the textual content of tweets. However, novel social spambots tweet content similar to that of genuine accounts (e.g., retweets of genuine tweets and famous quotes). For this reason, an approach almost solely based on tweet content will not be able to achieve satisfactory results. \vskip 0.5em \noindent \textbf{Unsupervised spambot detection via graph clustering.} The approach in~\cite{ahmed2013} exploits statistical features related to URLs, hashtags, mentions and retweets. Feature vectors generated in this way are then compared with one another via a Euclidean distance measure. Distances between accounts are organized in an adjacency matrix, which is later used to construct an undirected weighted graph of the accounts. Then, graph clustering and community detection algorithms are applied in order to identify groups of similar accounts. Graph clustering is done by employing the \textit{Markov cluster algorithm} (\textit{MCL})~\cite{van2008}. We fully implemented this solution and experimented with our datasets. However, the approach failed to identify 2 distinct clusters, since accounts of both our test sets were assigned to a single cluster. We also performed a grid search to find the best parameter configuration for \textit{MCL}\footnote{\scriptsize{\textit{MCL} admits 2 fundamental parameters: \textit{inflation} and \textit{expansion}.}}, but to no avail. To achieve effective detection results, instead of the \textit{MCL}, we adopted the \textit{fastgreedy} community detection algorithm~\cite{clauset2004}. As reported in Table~\ref{tab:results}, our modified implementation proved effective in detecting social spambots, with an MCC = 0.886 for \texttt{test set \#1} and MCC = 0.847 for \texttt{test set \#2}.
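The pipeline just described -- per-account feature vectors, pairwise Euclidean distances, and a clustering step over the resulting graph -- can be sketched as follows. The feature vectors are hypothetical, and plain connected components over a thresholded graph stand in for the \textit{fastgreedy} community detection used in our actual implementation:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cluster_accounts(features, threshold):
    """Link accounts whose feature distance is below `threshold`,
    then return the connected components of the resulting graph
    (a deliberate simplification of community detection)."""
    n = len(features)
    dist = [[euclidean(features[i], features[j]) for j in range(n)]
            for i in range(n)]
    neighbors = {i: [j for j in range(n) if j != i and dist[i][j] < threshold]
                 for i in range(n)}
    seen, components = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()   # depth-first search over the graph
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(neighbors[u])
        seen |= comp
        components.append(sorted(comp))
    return components

# hypothetical 2-d feature vectors: two tight behavioral groups
feats = [(0.1, 0.2), (0.15, 0.22), (0.12, 0.18), (0.9, 0.8), (0.88, 0.85)]
groups = cluster_accounts(feats, threshold=0.3)
```

In practice the distance threshold (or the community detection algorithm replacing the components step) is what determines whether spambots and genuine accounts end up in distinct clusters.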
\section{Emerging trends} \label{sec:newtrends} \begin{table}[t] \scriptsize \centering \hspace{-.5cm} \begin{tabular}{p{0.35\columnwidth}cp{0.30\columnwidth}c} \toprule \multicolumn{2}{c}{\textbf{established work}} & \multicolumn{2}{c}{\textbf{emerging trends}}\\ \midrule \mbox{Yardi, Romero \textit{et al.}~\cite{yardi2009}} & 2009 & \mbox{Beutel, Faloutsos,} \mbox{\textit{et al.}~\cite{Beutel:2013,jiang2015,Giatsoglou2015,jiang2016}} & 2013-16 \\ \mbox{Benevenuto \textit{et al.}~\cite{benevenuto2009,benevenuto2010,Ghosh:2012} 2009-12}&& Cao \textit{et al.}~\cite{Cao:2014} & 2014 \\ K. Lee, Caverlee \mbox{\textit{et al.}~\cite{Lee:2010,leeKyumin2011}} & 2010-11 & Yu \textit{et al.}~\cite{yu2015} & 2015 \\ Stringhini \mbox{\textit{et al.}~\cite{Stringhini:2010,Stringhini:2012,Stringhini:2013}} & 2010-13 & Viswanath, Mislove, Gummadi \textit{et al.}~\cite{viswanath2015} & 2015 \\ Viswanath, Mislove, Gummadi \textit{et al.}~\cite{viswanath2011} & 2011 & Cresci \textit{et al.}~\cite{IntSys2015} & 2016 \\ Stein \textit{et al.}~\cite{stein2011} & 2011 & & \\ Thomas \textit{et al.}~\cite{ThomasGMPS11} & 2011 & & \\ Gao \textit{et al.}~\cite{GaoCLPC12} & 2012 & & \\ Cao \textit{et al.}~\cite{cao2012} & 2012 & & \\ Xie \textit{et al.}~\cite{xie2012} & 2012 & & \\ C. Yang \textit{et al.}~\cite{yang2013} & 2013 & & \\ Wang \textit{et al.}~\cite{wang2013} & 2013 & & \\ \mbox{S. Lee \textit{et al.}~\cite{Lee:2013,ComCom14}} & 2013-14 & & \\ Z. Yang \textit{et al.}~\cite{yang2014uncovering} & 2014 & & \\ Liu \textit{et al.}~\cite{weibo14} & 2014 & & \\ Paradise \textit{et al.}~\cite{paradise2014} & 2014 & & \\ Cresci \textit{et al.}~\cite{DASec:2014,cresci2015} & 2014-15 & & \\ Ferrara \textit{et al.}~\cite{botornot,davis2016} & 2014-16& & \\ \bottomrule \multicolumn{4}{l}{ \begin{minipage}[t]{1\columnwidth} This table does not aim to be complete, but rather to testify to the emergence of a new research trend.
\end{minipage}} \end{tabular} \caption{\small Recent work in spambot detection.\label{tab:survey}} \end{table} As shown in Table~\ref{tab:results}, the established works benchmarked in Section~\ref{sec:theothers} largely failed to detect the new wave of social spambots. In turn, these results call for novel analytic tools able to keep pace with the latest evolutionary step of spambots. Thus, in this section we review the most recent literature on spambot detection, with the aim of answering the research question: \textbf{RQ5 --} \textit{Is it possible to find a new dimension over which to fight and overcome the novel social spambots?} Traditional spambot detection systems typically rely on the application of well-known machine learning algorithms on the accounts under investigation. However, since 2013, a number of research teams independently started to formalize new approaches for detecting the coordinated and synchronized behavior that characterizes groups of automated malicious accounts~\cite{Beutel:2013}. Table~\ref{tab:survey} groups such techniques as {\it emerging trends}. Despite being based on different key concepts, these studies investigate groups of accounts as a whole, marking a significant difference from the previous literature. Table~\ref{tab:emerging} reports on the new concepts introduced by these emerging works. Focusing on groups has the advantage that, no matter how sophisticated a single spambot can be, a large enough group of spambots will still leave traces of automation, since they share a common goal (e.g., increasing someone's reputation score). By performing analyses at the group level, this emerging trend might be able to significantly raise the bar for social spambots to evade detection. To support this claim, we experiment with the works in~\cite{viswanath2015, IntSys2015}.
\begin{table}[h] \scriptsize \centering\bgroup \renewcommand{\arraystretch}{1.3} \begin{tabular}{p{0.45\columnwidth}p{0.45\columnwidth}} \toprule \textbf{work} & \textbf{key concept} \\ \midrule Beutel, Faloutsos \textit{et al.}~\cite{Beutel:2013,jiang2015} & detection of lockstep behaviors \\ Beutel, Faloutsos \textit{et al.}~\cite{Giatsoglou2015,jiang2016} & anomalies in synchronicity and normality \\ Cao \textit{et al.}~\cite{Cao:2014} & detection of loosely synchronized actions \\ Yu \textit{et al.}~\cite{yu2015} & detection of latent group anomalies in graphs \\ Viswanath, Mislove, Gummadi \textit{et al.}~\cite{viswanath2015} & distance between distributions of reputation scores \\ Cresci \textit{et al.}~\cite{IntSys2015} & similarity between digital DNA sequences \\ \bottomrule \end{tabular} \egroup \caption{\small Key concepts of emerging trends.\label{tab:emerging}} \end{table} \vskip 0.5em \noindent \textbf{Tamper Detection in Crowd Computations.} The contribution by Viswanath \textit{et al.} in~\cite{viswanath2015} checks whether a given group of accounts (e.g., retweeters of another account, reviewers of a venue on {\it Yelp}) contains a subset of malicious accounts. The intuition behind the methodology is that the statistical distribution of reputation scores (e.g., number of friends and followers) of the accounts participating in a tampered computation significantly diverges from that of untampered ones. The detection of a tampered computation is performed by computing the Kullback-Leibler distance between the statistical distribution of a given reputation score for the computation under investigation and that of a reference -- untampered -- computation. If such a distance exceeds a given threshold, the computation under investigation is labeled as tampered.
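The threshold test just described can be sketched as follows. The score histograms are hypothetical, and a small constant is added inside the logarithm to avoid division by zero:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as (unnormalized) histograms over the same bins."""
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def is_tampered(computation_hist, reference_hist, threshold):
    """Flag a crowd computation as tampered when its reputation-score
    distribution diverges too much from the untampered reference."""
    return kl_divergence(computation_hist, reference_hist) > threshold

# hypothetical follower-count histograms (same bins for both)
reference = [10, 20, 30, 25, 15]   # untampered computation
suspect = [80, 10, 5, 3, 2]        # mass concentrated in one bin
flagged = is_tampered(suspect, reference, threshold=0.5)
```

The threshold value is the tunable part of the method: it trades off false alarms on naturally skewed computations against missed tampering.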
To test this technique against social spambots, we have computed the statistical distribution of the two reputation scores used in~\cite{viswanath2015} (join date and number of followers) for the genuine and the social spambot accounts of our datasets. The results are shown in Figures~\ref{fig:joindate} and~\ref{fig:followers}. Whereas the genuine accounts feature distributions that almost uniformly span the possible range of values, social spambots have anomalous distributions. Thus, the technique proposed in~\cite{viswanath2015} is capable of spotting the differences between groups of genuine accounts and the new wave of social spambots. However, the technique cannot \textit{directly} spot the tampering accounts, and, thus, detection and removal of the single accounts must be performed using a separate methodology. \begin{figure} \centering \subfigure[\label{fig:joindate}]{\includegraphics[width=0.32\columnwidth]{img/join_ecdf.png}} \subfigure[\label{fig:followers}]{\includegraphics[width=0.32\columnwidth]{img/foll_ecdf.png}} \subfigure[\label{fig:lcs-human-vs-bot}]{\includegraphics[width=0.32\columnwidth]{img/lcs.png}} \caption{\small Distribution of join date, number of followers and LCS for genuine and social spambot accounts. \label{fig:distribution-join-foll}} \end{figure} \vskip 0.5em \noindent \textbf{Digital DNA for social spambots detection.} Similarly to~\cite{viswanath2015}, the technique in~\cite{IntSys2015} analyzes a group of accounts to detect possible spambots among them. The authors introduced a bio-inspired technique to model online users' behaviors via so-called ``digital DNA'' sequences. Extracting digital DNA for an account means associating that account with a string that encodes its behavioral information. Digital DNA sequences are then compared with one another to find anomalous similarities among sequences of a subgroup of accounts.
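A natural group-level similarity measure over such sequences is the longest substring shared by all sequences of the group. The following minimal (brute-force, illustration-only) sketch uses hypothetical behavioral encodings (e.g., \texttt{A} = tweet, \texttt{C} = reply, \texttt{T} = retweet):

```python
def group_lcs(sequences):
    """Longest substring shared by *all* sequences in the group
    (quadratic scan of the shortest sequence; fine for a sketch)."""
    if not sequences:
        return ""
    shortest = min(sequences, key=len)
    for length in range(len(shortest), 0, -1):          # longest first
        for start in range(len(shortest) - length + 1):
            candidate = shortest[start:start + length]
            if all(candidate in seq for seq in sequences):
                return candidate
    return ""

# hypothetical digital DNA strings (A = tweet, C = reply, T = retweet)
spambot_like = ["ATTTTACTT", "CATTTTACT", "TTTTACTTA"]
genuine_like = ["ACTACTTCA", "TTCAATACC", "CATCATTAC"]
lcs_bots = group_lcs(spambot_like)   # long shared behavioral substring
lcs_gen = group_lcs(genuine_like)    # only a short substring in common
```

Automated accounts that replay the same behavioral template produce a markedly longer shared substring than independently acting users, which is the signal exploited for detection.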
The similarity among digital DNA sequences is computed in~\cite{IntSys2015} by measuring the Longest Common Substring (LCS)---that is, the longest DNA substring shared by all the accounts of the group. Accounts that share a suspiciously long DNA substring are then labeled as spambots. Notably, although working at the group level, \cite{IntSys2015} is capable of spotting single spambot accounts. For this reason, we have been able to compare this technique with the ones previously benchmarked in Table~\ref{tab:results}. Applying the technique to our datasets, we find that the similarity curve of genuine accounts is significantly different from that of social spambots, as shown in Figure~\ref{fig:lcs-human-vs-bot}. More specifically, as measured by the LCS metric, \texttt{social spambots \#1} and \texttt{\#3} feature a level of similarity much higher than that of genuine accounts. Results reported in Table~\ref{tab:results} demonstrate that the digital DNA-based technique~\cite{IntSys2015} achieves excellent detection performance. The compelling features of the emerging techniques listed in this section represent fertile ground for fighting the novel social spambots. We can observe a paradigm-shift for research and development of spambot detection systems, which may exploit such new concepts to achieve better resilience and robustness, so as to withstand the next evolution of social media spambots. \section{Concluding remarks} \label{sec:conclusions} Our long-lasting experiment on the survival rate of malicious accounts in Twitter demonstrated that spambot detection is still an open issue. Moreover, the already difficult problem of detecting spambots in social media is bound to worsen, given the emergence of a new wave of so-called social spambots. By accurately mimicking the characteristics of genuine users, these spambots are intrinsically harder to detect than those studied by Academia in the past years.
In our experiments, neither humans nor state-of-the-art spambot detection applications managed to accurately detect the accounts belonging to this new wave of spambots. Indeed, our experiments highlighted that the majority of existing automated systems, as well as crowdsourcing, erroneously label social spambots as genuine (human-operated) accounts. We demonstrated the need for novel analytic tools capable of turning the tide in the arms race against such sophisticated spambots. One promising research direction stems from the analysis of collective behaviors. We highlighted a few emerging approaches that analyze groups as a whole, rather than individuals. The promising outcome of these novel approaches clearly indicates that this is a favorable research avenue. \balance
\section{Introduction} Wealthy individuals tend to become even wealthier \citep{piketty2015capital}, popular websites become even more popular \citep{barabasi1999emergence}, and highly cited papers overshadow less cited ones, earning more future citations \citep{price1976general,redner1998popular}. Social and technological systems that preserve and amplify existing inequalities are said to be characterized by rich-get-richer dynamics \citep{merton1968matthew,price1976general,yule1925ii}. In these systems, initial conditions and randomness early in time drastically affect the course of future events---advantages obtained by agents early on are conserved and reinforced \citep{arthur1989competing,denrell2014perspective}. The above can result in socially objectionable outcomes, such as pervasive inequality in the distribution of wealth, and unfair outcomes where talented people or promising technologies cannot compete with already established ones \citep{page2006path}. In many systems, and increasingly so in the online world, the rich-get-richer dynamics depend on the ranks of the various objects (people, options, institutions etc.) in terms of some quantity of interest. For example, companies or academic institutions might receive job applications based on some status ranking, which in turn can help these institutions retain their status by employing qualified individuals \citep{podolny1993status}. Similarly, scientists might submit their work to journals taking into account the journal's relative rank in terms of impact factor or some other metric, thus highly ranked journals are more likely to publish work of good quality and retain their position in the ranking \citep{hudson2013ranking,laband2013use}. Last but not least, users of online interfaces are more likely to click on entries that appear at the top of the screen, hence making these entries appear more relevant to other users \citep{joachims2005accurately,salganik2006experimental}. 
In all of these cases, it is the ranking of the different entities that confers an advantage to the more successful ones and thus drives the rich-get-richer dynamics. Although examples of systems characterized by ranking-based rich-get-richer dynamics abound, we still do not understand their dynamics and long-term behavior. There are only two previous relevant works, which have been developed in the context of P\'olya urns and can model ranking-based rich-get-richer systems as extreme cases. In the first such work Hill et al. \cite{hill1980strong} study the case of a P\'olya urn with balls of $d=2$ colors, one ball added at a time, and allow the probability of adding a red ball to be a function of the proportion of red balls. In other words, there is some function $f:[0,1]\to [0,1]$, such that the probability of the next ball being red is $f\left(X_n\slash n\right)$, where $X_n$ denotes the number of red balls at time $n$. If $f$ is taken to be constant in $\left[0,\frac{1}{2}\right) $ and in $\left(\frac{1}{2},1\right] $, then we get a ranking-based urn. In \cite{hill1980strong} it is shown that $X_n\slash n$ converges a.s., and then some results are given regarding the support of the limit (see also \cref{sectionPolyaUrn}). Importantly, a subset of the results in \citep{hill1980strong} allows a nowhere dense set of discontinuities for $f$, so they apply to the ranking-based case. It is not obvious though how to generalize these results to P\'olya urns with more colors or other types of processes. The usual generalization to $d\in \N $ is to have the probability of adding a ball of color $i$ be proportional to a function of the count (or proportion) of balls of that color only, thus not allowing comparison of the counts of balls of different colors (for recent examples see \citep{chung2003generalizations,collevecchio2013preferential,laruelle2019nonlinear} - see also \citep{pemantle2007survey,zhu2009nonlinear} for surveys of results). 
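The two-color ranking-based urn just described is easy to simulate. The sketch below uses a hypothetical urn function that is constant on $\left[0,\frac{1}{2}\right)$ and on $\left(\frac{1}{2},1\right]$, so that the probability of adding a red ball depends only on whether red is currently the majority color:

```python
import random

def ranking_based_urn(n_steps, p_leader=0.7, x0=1, total0=2, seed=0):
    """Simulate a 2-color urn where the probability of adding a red ball
    is f(x) = p_leader if the proportion x of red balls exceeds 1/2,
    1 - p_leader if x < 1/2, and 1/2 at the tie x = 1/2."""
    rng = random.Random(seed)
    red, total = x0, total0
    for _ in range(n_steps):
        x = red / total
        if x > 0.5:
            p = p_leader
        elif x < 0.5:
            p = 1 - p_leader
        else:
            p = 0.5
        red += rng.random() < p      # add one ball per step
        total += 1
    return red / total

share = ranking_based_urn(10_000)
```

Consistently with the results of \cite{hill1980strong}, once the proportion of red balls leaves the tie, the increments become i.i.d. Bernoulli, so in long runs the proportion settles near one of the two values $p_{\text{leader}}$ or $1-p_{\text{leader}}$ (here 0.7 or 0.3, depending on the early random fluctuations).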
A notable exception is the work of Arthur et al. \citep{arthur1986strong}, where the probabilities are allowed to depend on the whole vector of proportions of balls of each color. More precisely, there is an urn function \begin{equation} f:\Delta ^{d-1}\to \Delta ^{d-1},\ \ \text{ where }\ \ \Delta ^{d-1}:= \left\{ x\in [0,1]^d,\ \sumdu{i}{}\, x_i=1\right\}, \end{equation} which takes as argument the vector of proportions of balls of each color, and its $i$-th component gives the probability of adding a ball of color $i$. The authors generalize some of the results in \citep{hill1980strong} to any $d\in \N$. In particular they show that under mild conditions on $f$ the process $X_n\slash n$ (where $X_n$ is now a vector) has positive probability of converging to any point $\theta \in \Delta ^{d-1}$ that is a \textit{stable} fixed point of $f$. According to the definition of stability used, in the ranking-based case all fixed points whose coordinates are all distinct are necessarily stable (see \cref{sectionPolyaUrn} for details). However, it is not claimed that the stable fixed points of $f$ are the only possible limits for $X_n\slash n$. Also, convergence of $X_n\slash n$ is shown only for certain special cases that do not cover ranking-based urns. Even in the cases where the above results are applicable to ranking-based systems, their main limitation is that they are restricted to simple P\'olya-type processes, that is processes whose components increase one at a time and the increments are binary. But in many systems with ranking-dependent dynamics (e.g. journal impact factors, university ratings) the quantity of interest can take continuous values and the various components may change simultaneously. Given the paucity of mathematical work that can apply to systems with ranking-based rich-get-richer dynamics, especially for more general increments, our understanding of ranking-based processes remains limited. 
In this work, we treat the problem in the context of (discrete-time) Markov processes, with an explicit dependence of the dynamics on the ranking. Specifically, we consider a non-homogeneous random walk in $\R^d$, for which the distribution of the steps depends only on the ranking of its components (descending order of their values). The fact that there are only finitely many possible rankings for a vector of $d$ components, and that the distribution of the jumps of the process does not change as long as the ranking does not change, allows us to consider separately the transitions between rankings and the dynamics when the ranking remains constant, the latter being nothing more than the dynamics of a sum of i.i.d. random vectors. Indeed, if the ranking converges to some limit value (i.e. eventually becomes constant), then a suitable application of the Strong Law of Large Numbers and the Central Limit Theorem gives us the behavior of $X^i_n\slash n$ in the limit (\cref{marketShareTheorem}). Therefore, the study of the long-term behavior of such processes is in large part a study of the long-term behavior of the ranking. This simplifies the study considerably and allows us to derive results under few assumptions. An essential assumption we make in order to show convergence is a type of rich-get-richer condition, more precisely a ranking-based reinforcement condition (\cref{qualityOrderingAssumption}), and it is a weaker version of the following statement: conditioned on $X^i_n>X^j_n$, the difference $X^i_{n+1}-X^i_n$ has a larger mean than $X^j_{n+1}-X^j_n$. Our results can be summarized as follows: under the above-mentioned ranking-based reinforcement assumption and a finite second moments assumption, we show that in the limit $n\to \infty $ the ranking of the components of the process stops changing almost surely (\cref{settlingTheorem}). Moreover, we characterize the possible rankings in the limit (\cref{theoremTerminalRankings}).
The latter result is independent of \cref{qualityOrderingAssumption}, but if this assumption holds, then we can characterize the possible limits for $X_n\slash n$ as well (\cref{marketShareTheorem}). By ``possible limit'' we mean that the probability of converging to this value is positive, for \textit{some} initial condition (distribution of $X_0$). \Cref{propositionLimitRankingsSufficient} gives a condition under which the probability of converging to any of the possible limits is positive for any initial condition. Next we specialize our results to the case of ranking-based P\'olya urns and relate them to the fixed points of the urn function (\cref{propositionPolya}), which allows a comparison with previous results. Even in this special case we get novel results regarding the limiting behavior of $X_n\slash n$. Finally, we describe an application to online rank-ordered interfaces. \section{Main results} \label{sectionResultsGeneral} We begin by defining what we mean by ranking (\cref{sectionRanking}) and ranking-based processes (\cref{sectionFormulation}). \Cref{sectionSettling,sectionLimitRankings} contain our two main results: convergence of ranking and characterization of terminal rankings. In \cref{sectionLimitTheorems} we look at the limit behavior of the process $X_n$ itself and in \cref{sectionInitialDistributions} we consider the role of the initial condition. \subsection{Rankings} \label{sectionRanking} For a finite set $S$, we denote by $|S|$ its cardinality and by $[S]=\{1,\ldots ,|S|\}$ the set of the first $|S|$ positive integers. \begin{definition} \label{definitionRanking} Let $S$ be a finite set. A ranking of $S$ is a function $r:S\to [S]$ with the property that for each $a\in S$, \begin{equation} \label{dummyEq67} card\{b\in S:r(b)<r(a)\}=r(a)-1. \end{equation} \end{definition} We will say that $a$ is ranked higher than $b$ if $r(a)<r(b)$. \Cref{dummyEq67} requires that, for each $a\in S$, exactly $r(a)-1$ elements are ranked higher than $a$. 
Thus, we will call $r(a)$ the \textit{position} or \textit{rank} of $a$ in the ranking $r$. Note that two elements $a,b\in S$, $a\neq b$, can have the same position in $r$, that is, we may have $r(a)=r(b)$. In this case we will say that these elements are equally ranked by $r$. In \cref{appendixWeakOrderings} we show that rankings of a set $S$ are equivalent to weak orderings on $S$. Any bijection $r:S\to [S]$ satisfies \cref{dummyEq67}, hence it is a ranking. Such rankings will be called \textit{strict}. That is, strict rankings are such that no two elements of $S$ are equally ranked. Given a vector $x=(x_1,\ldots ,x_d)\in \R ^d$, we denote by $rk(x)$ the unique ranking $r$ on the set $[d]=\{1,\ldots, d\}$ that satisfies $r(i)<r(j)$ if and only if $x_i>x_j$, for any $i,j\in [d]$. It is easy to check that there is indeed a unique such ranking, given by $r(i)=card\{j\in[d]:x_j>x_i\}+1$. This map is commonly known as the \textit{Standard Competition Ranking}. We will denote by ${\cal R}={\cal R}(d)$ the set of all rankings of the set $[d]$. \subsection{Ranking-based processes} \label{sectionFormulation} Let $(\Omega ,{\cal F},\mathbb{P})$ be a probability space and $\{{\cal F}_n\}_{n\in\N}$ a filtration on it. Let $\nu$ be a probability distribution on $\R^d$ with finite second moments, and for each $r\in {\cal R}(d)$ let $\mu^r$ be a probability distribution on $\R ^d$, also with finite second moments. We consider a time-homogeneous Markov process $X_n\in \R ^d$, adapted to $\{{\cal F}_n\}_n$, with initial distribution $\nu$ and with the law of its increments being $\mu ^r$, where $r$ is the current ranking. More precisely, the transition kernel $\mu$ is given by \begin{equation} \label{dummyEq19} \mu(x,B)=\mu^{rk(x)}(B-x), \ \ \ x\in \R^d,\ B\in {\cal B}(\R^d), \end{equation} where ${\cal B}(\R^d)$ denotes the Borel $\sigma $-algebra of $\R^d$ and $B-x=\{ y\in \R^d:y+x\in B\} $ denotes the translation of $B$ by the vector $-x$.
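For concreteness, the map $rk$ can be implemented directly from its defining formula $r(i)=card\{j\in[d]:x_j>x_i\}+1$; a minimal sketch:

```python
def rk(x):
    """Standard Competition Ranking of a real vector:
    rk(x)[i] = #{j : x[j] > x[i]} + 1, so tied components share a
    position and the next position is skipped (e.g. 1, 1, 3)."""
    return [sum(1 for xj in x if xj > xi) + 1 for xi in x]

ranking = rk([10.0, 5.0, 10.0, 3.0])   # -> [1, 3, 1, 4]
```

Note how the tie at the top produces positions 1, 1 and then 3 is skipped, exactly as required by \cref{dummyEq67}.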
We will call such a process a ($d$-dimensional) \textit{ranking-based process}. \Cref{dummyEq19} implies that for any $B\in {\cal B}(\R^d)$, \begin{equation} \label{eqCondInd1} \Pr{\Delta X_{n+1}\in B\midvert {\cal F}_n}=\mu^{rk(X_n)}(B)\ \ a.s., \end{equation} where $\Delta X_{n+1}=X_{n+1}-X_n$. In particular, the process is space-homogeneous within subsets of $\R^d$ that correspond to a fixed ranking, that is subsets of the form $\{x\in \R^d:rk(x)=r\}$, for $r\in {\cal R}(d)$ (but it is not space-homogeneous in general). \Cref{eqCondInd1} also implies that, conditioned on the ranking at time $n$, $\Delta X_{n+1}$ is independent of ${\cal F}_n$, that is \begin{equation} \label{eqCondInd2} \Delta X_{n+1}\underset{rk(X_n)}{\independent} {\cal F}_n. \end{equation} We will use the shorthand notation $\mu^r(x_i>x_j)$ to mean $\mu^r(\{x\in\R^d:x_i>x_j\})$ and similarly for other events that involve comparisons of components of $x$. For each $r\in{\cal R}$, we denote by $Z^r=(Z^r_1,\ldots ,Z^r_d)$ a random variable with distribution $\mu ^r$. This will be especially useful when considering differences of the form $\Delta X^i_{n+1}-\Delta X^j_{n+1}$, whose distribution cannot be directly expressed via $\mu^r$. Note that conditioned on $rk(X_n)$, $\Delta X^i_{n+1}-\Delta X^j_{n+1}$ has the distribution of $Z_i^{rk(X_n)}-Z_j^{rk(X_n)}$, that is, for each $B\in{\cal B}(\R)$ \begin{equation} \label{eqZ} \Pr{\Delta X^i_{n+1}-\Delta X^j_{n+1}\in B\midvert rk(X_n)}=\Pr{Z_i^{rk(X_n)}-Z_j^{rk(X_n)}\in B}. \end{equation} We denote by $q^r_i$ the mean and by $\sigma ^r_i$ the standard deviation of the $i$-th component of the distribution $\mu ^r$, that is $q^r_i=\E\left[Z^r_i\right]$ and $\sigma ^r_i=\sqrt{Var(Z^r_i)}$. We will also use the vector notation $q^r=(q^r_1,\ldots ,q^r_d)$ for the mean.
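The definitions above are easy to simulate. The following Python sketch implements a two-dimensional ranking-based process with illustrative increment laws $\mu^r$ supported on $\{(1,0),(0,1)\}$ (the parameter values are our own choice, not from the text): the component currently ranked first is incremented with probability $0.8$.

```python
import random

def rk(x):
    # Standard Competition Ranking: r(i) = card{ j : x[j] > x[i] } + 1
    return tuple(sum(xj > xi for xj in x) + 1 for xi in x)

def sample_increment(r):
    """Draw an increment from the (illustrative) law mu^r for d = 2."""
    if r == (1, 2):      # component 1 strictly ahead
        p_first = 0.8
    elif r == (2, 1):    # component 2 strictly ahead
        p_first = 0.2
    else:                # tie: r == (1, 1)
        p_first = 0.5
    return (1, 0) if random.random() < p_first else (0, 1)

def step(x):
    # The increment law depends on the state only through its ranking,
    # mirroring the transition rule mu(x, B) = mu^{rk(x)}(B - x).
    dx = sample_increment(rk(x))
    return (x[0] + dx[0], x[1] + dx[1])

x = (0, 0)
path = [x]
for _ in range(200):
    x = step(x)
    path.append(x)
```

Within each region $\{x:rk(x)=r\}$ the dynamics are those of a random walk, in line with the space-homogeneity remark above.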
For the rest of the paper, we fix $d\in\N$ and a $d$-dimensional ranking-based process $X_n$ adapted to $\{{\cal F}_n\}_n$, with the associated $\mu ^r$'s, $\mu$, $Z^r$'s, $q^r_i$'s and $\sigma ^r_i$'s. Strictly speaking, the Markov process $X$ is described by the pair $(\nu, \mu )$. However, we will often abuse terminology and talk about a single process $X$ while allowing the initial distribution $\nu$ to vary. We will use the notation $\mathbb{P}_{\nu}$ for probabilities of events that depend on the initial distribution. The subscript $\nu$ will often be omitted for expressions that do not depend on the initial distribution (as in \cref{eqZ}). Both the distributions $\mu ^r$ and the initial distribution $\nu$ will always be assumed to have finite second moments. We will also suppress the integer $d$ in the notation for the set of all rankings of $[d]$ and write ${\cal R}={\cal R}(d)$. \subsection{Convergence of ranking} \label{sectionSettling} As $n\to \infty$, the ranking of $X_n$ may keep changing or it might converge to some particular ranking $r\in {\cal R}$ (where ${\cal R}$ is endowed with the discrete topology). We have the following definition. \begin{definition} \label{defSettling2} Let $X$ be a ranking-based process. We say that $rk(X_n)$ converges to $r \in {\cal R}$ and write $rk(X_n)\to r$ (or $\limn rk(X_n)=r$), if $rk(X_n)=r$ for all sufficiently large $n$, that is \begin{equation} \{rk(X_n)\to r\}=\cupdu{n_0\in\N}{}\,\capdu{n=n_0}{\infty}\{rk(X_n)=r\}=\underset{n}{\liminf}\,\{rk(X_n)=r\}. \end{equation} We say that a ranking $r \in {\cal R}$ is terminal (for the transition kernel $\mu$), if there exists some initial distribution $\nu$, such that \begin{equation} \label{eqDefinitionTerminal} \Prd{\nu}{rk(X_n)\to r}>0. \end{equation} Otherwise, we say that $r$ is transient. \end{definition} Knowing that the ranking converges is useful, because then we can predict the long-term behavior of the process (see \cref{sectionLimitTheorems}).
We will therefore seek conditions under which the ranking is guaranteed to converge. As a first step, we ask the following question: if we know that $X^i_{n_0}>X^j_{n_0}$ occurs for some $n_0\in\N$, is it likely that $X^i_n>X^j_n$ for all $n>n_0$? The following definition and proposition give a sufficient condition for the probability of this event to be positive and bounded away from zero. \begin{definition} \label{defDominance2} Let $\{X_n\}_{n\in\N}$ be a $d$-dimensional ranking-based process with the associated distributions $\mu^r$ and means $q^r_1,\ldots,q^r_d$, and let $i,j\in [d]$. \begin{itemize} \item We say that $i$ quasi-dominates $j$, if for any ranking $r$ such that $r(i)<r(j)$ we have either $q^r_i>q^r_j$ or $\mu^r(x_i\neq x_j)=0$. \item We say that $i$ dominates $j$ if we further have that for any ranking $r$ such that $r(i)=r(j)$, either $\mu^r(x_i>x_j)>0$ or $\mu^r(x_i\neq x_j)=0$. \end{itemize} \end{definition} Note the relation between quasi-dominance and the (loosely defined) concept of rich-get-richer dynamics: if $i$ quasi-dominates $j$, then $X^i_n$ increases on average faster than $X^j_n$ whenever it is already larger (or they vary in exactly the same way). The extra condition for dominance says that $X^i_n$ has a non-zero probability of passing ahead after a tie (or, again, the two components vary in exactly the same way). \begin{proposition} \label{lemmaProbaNeverChange} Let $i,j\in [d]$. \begin{enumerate} \item If $i$ quasi-dominates $j$, then there exists some $\epsilon>0$ such that for any initial distribution $\nu$ and any ${\cal F}_n$-stopping time $s$, we have \begin{equation} \label{dummyEq75} \Prd{\nu}{\capdu{n=s}{\infty }\left\{ X^{i}_{n}>X^{j}_{n}\right\}\midvert {\cal F}_{s}} \geq \epsilon\ \ \text{ a.s. on }\{s<\infty \}\cap \{X^i_{s}>X^j_{s}\}. 
\end{equation} \item If $i$ dominates $j$, we further have \begin{equation} \Prd{\nu}{\capdu{n=s}{\infty }\left\{ X^{i}_{n}\geq X^{j}_{n}\right\}\midvert {\cal F}_{s}} \geq \epsilon\ \ \text{ a.s. on }\{s<\infty \}\cap \{X^i_{s}\geq X^j_{s}\}. \end{equation} \end{enumerate} \end{proposition} For a concrete case, if we take the a.s. constant stopping time $s=n_0$, then \cref{dummyEq75} implies in particular that \begin{equation} \Pr[\nu]{\capdu{n=n_0}{\infty}\{X^i_{n}>X^j_{n}\}\midvert X^i_{n_0}>X^j_{n_0}}\geq \epsilon , \end{equation} whenever the expression on the left hand side makes sense (i.e. whenever $\Pr[\nu]{X^i_{n_0}>X^j_{n_0}}>0$). We postpone the proof in order to get to our main result for this section. For ease of reference we state the condition for that theorem separately: \begin{assumption}[Ranking-based reinforcement] \label{qualityOrderingAssumption} For any pair of indices $i,j\in [d]$, either one of them dominates the other, or they quasi-dominate each other. \end{assumption} Note that it is possible for both $i$ and $j$ to dominate each other; the above assumption would still be satisfied. This means that the ``dominance'' relation does not have to be trichotomous. It does not have to be transitive either. However, a transitive trichotomous relation (i.e. a strict total order) on $[d]$ would satisfy \cref{qualityOrderingAssumption}. \begin{example} \label{example:additivePolyaUrn} Let $X_n=(X^1_n,\ldots X^d_n)$ give the number of balls of each of $d$ colors in an urn. At each time step, a single ball is added, with probabilities for each color depending on the ranking. Note that in this case $q^r_i$ is equal to the probability of adding a ball of color $i$ when the ranking is $r$ (see also first paragraph of \cref{sectionPolyaUrn}). These probabilities will be determined as follows: Each color has a propensity $a_i\geq 0$ to be chosen. 
Moreover, there are real numbers $\lambda _1>\cdots >\lambda _d\geq 0$, with $\lambda _i$ denoting an additive bonus to the propensity of the color(s) currently ranked $i$-th. More specifically, the probability of adding a ball of color $i$, given that the current ranking is $r$, is \begin{equation} \label{eqDummy51} q^r_i=\frac{a_i+\lambda _{r(i)}}{\sumdu{j=1}{d}\left(a_j+\lambda _{r(j)}\right)}. \end{equation} We claim that this process satisfies \cref{qualityOrderingAssumption}. To see this, let $i,j\in [d]$ and suppose without loss of generality that $a_i\geq a_j$. We have the following cases: \begin{itemize} \item $a_i=a_j$: By \cref{eqDummy51} we have that $q^r_i>q^r_j$ whenever $i$ is ranked higher than $j$ and vice versa. That is, $i$ and $j$ quasi-dominate each other. \item $a_i>a_j$: We similarly get that color $i$ quasi-dominates color $j$. Moreover, when $i$ and $j$ are ranked equally (i.e. $r(i)=r(j)$), \cref{eqDummy51} gives $q^r_i>q^r_j$; that is, color $i$ is more likely to be chosen, and in particular $\mu^r(x_i>x_j)=q^r_i>0$. This shows that $i$ dominates $j$. \end{itemize} Thus our claim is proved. \end{example} We now state and prove our main theorem for this section. \begin{theorem}[Convergence of ranking] \label{settlingTheorem} Let $X_n$ be a ranking-based process satisfying \cref{qualityOrderingAssumption}. Then, $rk(X_n)$ converges a.s., for any initial distribution $\nu$. \end{theorem} \begin{proof} It is enough to show that for each pair of indices $i,j$, the relative ranking of $X^i_n$ and $X^j_n$ eventually stops changing with probability $1$. So let $i\neq j$ and, without loss of generality, assume that $i$ quasi-dominates $j$ (see \cref{qualityOrderingAssumption}). Define $s_0=0$ and inductively $t_m=\inf \{ n>s_{m-1}:X^i_n>X^j_n\} $ and $s_m=\inf \{ n>t_m:X^i_n\leq X^j_n\} $. Notice that $\{s_m=\infty\}=\capdu{n=t_m}{\infty }\{X^i_n>X^j_n\}$.
Therefore, \cref{lemmaProbaNeverChange} applied to $s=t_m$ implies that there exists some $\epsilon$, not depending on $m$, such that \begin{equation} \label{dummyEq46} \begin{aligned} \Prd{\nu}{s_m=\infty \midvert {\cal F}_{t_m}} & \geq \epsilon>0,\ \ a.s. \end{aligned} \end{equation} on $\{t_m<\infty,X^i_{t_m}>X^j_{t_m}\}=\{t_m<\infty \}$. In particular, if $\Pr[\nu]{t_m<\infty }>0$, then \begin{equation} \Pr[\nu]{s_m=\infty\midvert t_m<\infty } \geq \epsilon \end{equation} and \begin{equation} \label{eqDummy49} \begin{aligned} \Prd{\nu}{s_m<\infty } & =\Prd{\nu}{\{ t_m<\infty \} \cap \{ s_m<\infty \}} \\ & =\Prd{\nu}{t_m<\infty}\cdot \Prd{\nu}{s_m<\infty \midvert t_m<\infty }\\ & \leq (1-\epsilon )\cdot \Prd{\nu}{t_m<\infty} \\ & \leq (1-\epsilon )\cdot \Prd{\nu}{ s_{m-1}<\infty}. \end{aligned} \end{equation} Although we have assumed $\Pr[\nu]{t_m<\infty }>0$, \cref{eqDummy49} continues to hold even if $\Pr[\nu]{t_m<\infty }=0$, because then $\Pr[\nu]{s_m<\infty }=0$ as well. By \cref{eqDummy49} and induction we have $\Prd{\nu}{s_m<\infty }\leq \left(1-\epsilon \right) ^m$, therefore \begin{equation} \Pr[\nu]{\capdu{m\in\N}{}\left\{ s_m<\infty \right\}}=0. \end{equation} Hence, with probability $1$, either $X^i_n\leq X^j_n$ finitely often (henceforth abbreviated f.o.) or $X^i_n>X^j_n$ f.o. If $X^i_n\leq X^j_n$ f.o., then $X^i_n>X^j_n$ for all sufficiently large $n$, so we are done. Now assume that $X^i_n>X^j_n$ f.o. and separate two cases, according to \cref{qualityOrderingAssumption}: \begin{itemize} \item $j$ also quasi-dominates $i$: We get similarly that either $X^i_n\geq X^j_n$ f.o. or $X^i_n<X^j_n$ f.o. As before, in the first case we are done. In the second case, we have both $X^i_n<X^j_n$ and $X^i_n>X^j_n$ f.o., so that $X^i_n=X^j_n$ for all sufficiently large $n$. \item $i$ dominates $j$: Using the second part of \cref{lemmaProbaNeverChange} we get that either $X^i_n<X^j_n$ f.o. or $X^i_n\geq X^j_n$ f.o.
The situation is then the same as in the first case. \end{itemize} \end{proof} We now turn to the proof of \cref{lemmaProbaNeverChange}. We will need the following lemma, which generalizes a property of biased random walks to the case where the increment distribution is not fixed but varies over a finite set. Its proof is given in the Appendix. A related result is obtained in \citep[Th. 2.5.12]{menshikov2016non} by different methods. \begin{lemma} \label{lemmaBiasedRandomWalk} Let $(\Omega , {\cal G},\mathbb{P})$ be a probability space. Let $S$ be a finite set and for each $r\in S$, $\nu^r$ a distribution on $\R$ such that it either has positive mean or $\nu^r(\{0\})=1$. Let $\{ R_n\} _{n\in\N}$ be a sequence of random elements in $S$ and $\{ Y_n\} _{n\in \N }$ a sequence of random variables with $Y_0=0$. Suppose that $\Delta Y_{n+1}$ is conditionally independent of $\{(Y_k,R_k)\}_{k\leq n}$ conditioned on $R_n$, with distribution $\nu ^{R_n}$. In other words, for any $A\in {\cal B}(\R)$, $n\in\N$, \begin{equation} \label{eqDummy93} \Pr{\Delta Y_{n+1}\in A\midvert \{(Y_k,R_k)\}_{k\leq n}}=\nu^{R_n}(A)\ \ a.s. \end{equation} Then, \begin{equation} \Pr{\capdu{n\in\N}{}\{Y_n\geq 0\}}\geq \epsilon >0, \end{equation} where $\epsilon$ depends only on the distributions $\nu^r$, $r\in S$. \end{lemma} We note that if $|S|=1$, then \cref{lemmaBiasedRandomWalk} reduces to the well-known result that a one-dimensional random walk with positive mean drift has positive probability of never taking negative values (see \cite[Corollary 9.17]{kallenberg2006foundations}). \begin{proof}[Proof of \cref{lemmaProbaNeverChange}] \begin{enumerate} \item Let $s$ and $\nu $ be given and define $\tau = \min\{n\geq s:X^i_n\leq X^j_n\}$ and \begin{equation} Y_n=X^i_{\tau \wedge n}-X^j_{\tau \wedge n}. \end{equation} Note that $Y_n>0$ for all $n\geq s$ implies $X^i_n>X^j_n$ for all $n\geq s$.
Therefore, it is enough to show that, for some $\epsilon >0$ that does not depend on $s$ or $\nu$, \begin{equation} \label{eqDummy35} \Prd{\nu}{\capdu{n=s}{\infty }\left\{ Y_{n}>0\right\}\midvert {\cal F}_{s}} \geq \epsilon \ \ \text{ a.s. on }\{Y_{s}>0\}. \end{equation} We have \begin{equation} \label{eqDummy37} \Delta Y_{n+1}=\mathbf{1}_{\tau >n}\cdot (\Delta X^i_{n+1}-\Delta X^j_{n+1}), \end{equation} where $\mathbf{1}_A$ denotes the indicator function of the set $A$. It follows that conditioned on $rk(X_n)$ and $\mathbf{1}_{\tau >n}$, $\Delta Y_{n+1}$ is independent of ${\cal F}_n$ (see \cref{eqCondInd1}). Moreover, its conditional distribution is equal to that of $Z^{rk(X_n)}_i-Z^{rk(X_n)}_j$ in the case $\tau >n$ (by \cref{eqZ}), while $\Delta Y_{n+1}=0$ identically otherwise. Let $F\in {\cal F}_s$ be any event with $\Pr{F}>0$ and consider the probability measure $\Pr[\nu,F]{\cdot }=\Pr[\nu]{\cdot\midvert F}$. We apply \cref{lemmaBiasedRandomWalk} under this measure to the sequence $\{Y_{n+s}-Y_{s}\}_{n\in\N}$, with $S={\cal R}\sqcup \{\alpha \}$ (where $\alpha$ is an arbitrary new element) and \begin{equation} R_n= \begin{cases} rk(X_{n+s}), & \text{ if } \tau>n+s,\\ \alpha, & \text{ otherwise.} \end{cases} \end{equation} The distributions $\nu ^r$ in \cref{lemmaBiasedRandomWalk} are equal to the distributions of $Z^r_i-Z^r_j$ for $r\in {\cal R}$, while $\nu ^{\alpha }$ is the singular probability measure satisfying $\nu^{\alpha}(\{0\})=1$. \Cref{lemmaBiasedRandomWalk} thus gives \begin{equation} \Prd{\nu}{\capdu{n\in\N}{}\left\{ Y_{n+s}-Y_{s}\geq 0\right\}\midvert F} \geq \epsilon>0, \end{equation} where $\epsilon $ depends only on the $\mu ^r$'s (distributions of $Z^r$'s). Since $F$ was arbitrary, we get \begin{equation} \Prd{\nu}{\capdu{n\in\N}{}\left\{ Y_{n+s}-Y_{s}\geq 0\right\}\midvert {\cal F}_{s}} \geq \epsilon\ \ a.s., \end{equation} from which \cref{eqDummy35} follows. \item Let $\tau '=\inf\, \left\{n\geq s: X^i_n\neq X^j_n\right\}$.
We may assume that $X^i_s=X^j_s$ and $\tau'<\infty $, since on $\{X^i_s>X^j_s\}$ part 1 applies, while on $\{\tau'=\infty \}$ the result holds trivially. On $\{X^i_s=X^j_s,\tau'<\infty \}$ we have $\capdu{n=s}{\infty }\left\{ X^{i}_{n}\geq X^{j}_{n}\right\}=\capdu{n=\tau '}{\infty }\left\{ X^{i}_{n}\geq X^{j}_{n}\right\}$ and $\tau'\geq s+1$; it is therefore enough to show that \begin{equation} \label{eqDummy71} \Pr[\nu]{\capdu{n=\tau'}{\infty }\left\{ X^{i}_{n}> X^{j}_{n}\right\}\midvert {\cal F}_{\tau'-1}}\geq \epsilon\ \ a.s. \end{equation} or, by part 1, \begin{equation} \label{eqDummy73} \Pr[\nu]{X^{i}_{\tau'}> X^{j}_{\tau'}\midvert {\cal F}_{\tau'-1}}\geq \epsilon'\ \ a.s., \end{equation} for some $\epsilon'>0$ that does not depend on $\nu $ or $s$. Let $R'=\{r\in {\cal R}:r(i)=r(j),\mu^r(x_i>x_j)>0\}$ be the set of rankings that rank $i$ and $j$ equally but give positive probability to $i$ passing ahead at the next step. Since by assumption all other rankings with $r(i)=r(j)$ satisfy $\mu^r(x_i\neq x_j)=0$, $rk(X_n)$ must take a value in $R'$ before we can have $X^i_n\neq X^j_n$. That is, $rk(X_{\tau '-1})\in R'$ a.s., hence also \begin{equation} \begin{aligned} \Pr{X^i_{\tau'}>X^j_{\tau'}\midvert {\cal F}_{\tau'-1}} & =\Pr{\Delta X^i_{\tau'}>\Delta X^j_{\tau'}\midvert {\cal F}_{\tau'-1}}\\ & =\mu^{rk(X_{\tau'-1})}(x_i>x_j)\\ & \geq \underset{r\in R'}{\min }\, \mu^r(x_i>x_j)>0\ \ a.s., \end{aligned} \end{equation} where the second equality follows from \cref{eqCondInd1}. \end{enumerate} \end{proof} \subsection{Terminal rankings} \label{sectionLimitRankings} \Cref{settlingTheorem} says that \cref{qualityOrderingAssumption} guarantees convergence of $rk(X_n)$, but it doesn't say anything about the possible limits. In this section we deal with the question of what the possible limit rankings are. Recall that a ranking is terminal if $\Prd{\nu}{rk(X_n)\to r}>0$ for some initial distribution $\nu$ (\cref{defSettling2}).
Our main result in this section is the following: \begin{theorem}[Terminal rankings] \label{theoremTerminalRankings} Let $X_n$ be a $d$-dimensional ranking-based process with the associated distributions $\mu^r$ and means $q^r_1,\ldots ,q^r_d$. A ranking $r$ is terminal if and only if, for any $i,j\in [d]$: \begin{itemize} \item If $r(i)=r(j)$ then $\mu^r(x_i\neq x_j)=0$. \item If $r(i)<r(j)$ then either $q^r_i>q^r_j$ or $\mu^r(x_i\neq x_j)=0$. \end{itemize} \end{theorem} Let us give some intuition behind \cref{theoremTerminalRankings}. If $rk(X_n)\to r$, then there exists some $n_0\in \N $ such that $rk(X_n)=r$ for all $n\geq n_0$, so $\Delta X_{n+1}$ is distributed according to $\mu ^r$ for all $n\geq n_0$. In particular, for any $i\in [d]$, the $\Delta X^i_{n+1}$'s behave like i.i.d. random variables with mean $q^r_i$ and finite variance, hence $X^i_{n}\slash n\to q^r_i$ (see also \cref{marketShareTheorem}). Therefore, if $r$ ranks $i$ higher than $j$, then for the ranking to remain equal to $r$ we must have $q^r_i>q^r_j$. Note that in particular $q^r_i=q^r_j$ is not enough. An exception to the latter is if $\Delta X^i_{n+1}=\Delta X^j_{n+1}$ a.s. (equivalently $\mu^r(x_i\neq x_j)=0$), so that the two components change in exactly the same way. On the other hand, if $i$ and $j$ are ranked equally, then we must necessarily have $\Delta X^i_{n+1}=\Delta X^j_{n+1}$ a.s. for the ranking not to change. The above theorem says that these conditions are not only necessary, but also sufficient for the ranking to have a positive probability of remaining the same for all $n\geq n_0$. \Cref{theoremTerminalRankings} characterizes all terminal rankings by an easy-to-check criterion. Note that it does not require \cref{qualityOrderingAssumption}. However, without that assumption $rk(X_n)$ is not guaranteed to converge (see \cref{settlingTheorem}).
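For the additive P\'olya urn of \cref{example:additivePolyaUrn} with all propensities positive, every increment adds a single ball, so $\mu^r(x_i\neq x_j)>0$ for $i\neq j$ and the criterion of \cref{theoremTerminalRankings} reduces to checking that the means $q^r_i$ decrease along the (necessarily strict) ranking $r$. The following Python sketch enumerates the strict rankings for illustrative parameter values of our own choosing:

```python
from itertools import permutations

# Illustrative parameters (our choice): d = 3 colors, propensities a_i > 0
# and rank bonuses lambda_1 > lambda_2 > lambda_3 >= 0.
a = [0.5, 0.3, 0.1]
lam = [0.2, 0.1, 0.0]
d = 3
S = sum(a) + sum(lam)  # for a strict ranking each bonus is used exactly once

def q(inv):
    """Means q^r_i = (a_i + lambda_{r(i)}) / S for the strict ranking r
    with r^{-1} = inv, i.e. inv[k] is the color in position k + 1."""
    r = {i: k + 1 for k, i in enumerate(inv)}
    return [(a[i] + lam[r[i] - 1]) / S for i in range(d)]

# A strict ranking is terminal iff q^r_{r^{-1}(1)} > ... > q^r_{r^{-1}(d)}.
terminal = []
for inv in permutations(range(d)):
    qs = q(inv)
    if all(qs[inv[k]] > qs[inv[k + 1]] for k in range(d - 1)):
        terminal.append(inv)

print(terminal)
```

With these small bonuses only the ordering by propensity is terminal; if instead the bonus gaps $\lambda_k-\lambda_{k+1}$ exceed all the propensity gaps, every strict ranking becomes terminal, so an early leader of any quality can lock in.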
Also note that even if we know that $r$ is terminal, we don't know whether $\Prd{\nu}{rk(X_n)\to r}>0$ for a \textit{specific} initial distribution $\nu$. This is the topic of \cref{sectionInitialDistributions} (see in particular \cref{propositionLimitRankingsSufficient}). If we can exclude the case $\mu^r(x_i\neq x_j)=0$, then we get the following simplification of \cref{theoremTerminalRankings}. \begin{corollary} \label{theoremTerminalRankingsSpecial} Suppose that $\mu^r(x_i\neq x_j)>0$ for all $i,j\in [d]$ with $i\neq j$ and all $r\in {\cal R}$. Then, a ranking $r$ is terminal if and only if it is a strict ranking and \begin{equation} \label{eqDescendingMeans} q^{r}_{r^{-1}(1)}>q^{r}_{r^{-1}(2)}>\ldots >q^{r}_{r^{-1}(d)}, \end{equation} where $r^{-1}$ denotes the inverse of $r$. \end{corollary} \begin{proof} The case $\mu^r(x_i\neq x_j)=0$ is excluded by assumption for $i\neq j$, so the first condition of \cref{theoremTerminalRankings} forces any terminal ranking to be strict, and a strict ranking is terminal if and only if for any $i,j$ with $r(i)<r(j)$ we have $q^r_i>q^r_j$, or equivalently, if for any $i<j$, $q^r_{r^{-1}(i)}>q^r_{r^{-1}(j)}$. \end{proof} For the proof of \cref{theoremTerminalRankings} we are going to need a construction that will also be used later on. Specifically, given a ranking-based process $X_n$ and a ranking $r\in {\cal R}$, we construct another process $Y_n$ that is identical to $X_n$ up to some point $n_0\in\N$, and it has i.i.d. increments afterwards with distribution $\mu^r$. It has the additional property that it remains equal to $X_n$ as long as their common ranking remains equal to $r$. The benefit of this is that we can work with the simpler process $Y_n$ and then transfer results to $X_n$. \begin{lemma} \label{lemmaMirroring} For any $r\in {\cal R}$ and any $n_0\in\N$, there exists a process $Y_n\in\R^d$ and a filtration ${\cal G}_n\supset {\cal F}_n$ such that: \begin{enumerate}[label=\roman*.] \item $Y_n=X_n$ for all $n\leq n_0$. \item $\{\Delta Y_{n}\}_{n\geq n_0+1}$ is a sequence of i.i.d.
random vectors with distribution $\mu ^r$. Moreover, $Y_n\in {\cal G}_n$ for each $n\in\N$, and $\Delta Y_{n+1}\independent {\cal G}_{n}$ for each $n\geq n_0$. \item For any $n>n_0$, on both $\capdu{k=n_0}{n-1}\{rk(X_k)=r\}$ and $\capdu{k=n_0}{n-1}\{rk(Y_k)=r\}$ we have $Y_k=X_k$ a.s. for $k=0,1,\ldots ,n$. In particular, on both $\capdu{k=n_0}{\infty }\{rk(X_k)=r\}$ and $\capdu{k=n_0}{\infty }\{rk(Y_k)=r\}$ we have $Y_n=X_n$ a.s. for all $n\in\N$. \end{enumerate} \end{lemma} A process $Y_n$ that satisfies the above properties (for some filtration ${\cal G}_n$) will be said to \textit{$(r,n_0)$-mimic $X_n$}. \begin{proof} Let $\{U_n\}_{n\in\N}$ be a sequence of i.i.d. random vectors in $\R ^d$ with distribution $\mu ^r$, independent of ${\cal F}_{\infty}$, and let ${\cal G}_n=\sigma (U_1,\ldots ,U_n,{\cal F}_n)$ and $\tau =\min \{ n\geq n_0:rk(X_n)\neq r\} $. Define \begin{equation} \label{eqRmirror} Y_n=X_{\tau \wedge n}+\sumdu{m=\tau +1}{n}U_m, \end{equation} with the convention that the sum is $0$ if $\tau+1>n$. Property (i) follows from the fact that $\tau \geq n_0$. For property (ii), note that since $\{\tau \leq n\}$ is ${\cal F}_n$-measurable, we get $Y_n\in {\cal G}_n$. Moreover, \begin{equation} \Delta Y_{n+1}=\mathbf{1}_{\tau >n}\cdot \Delta X_{n+1}+\mathbf{1}_{\tau \leq n}\cdot U_{n+1}. \end{equation} In particular, for any $n\geq n_0$ and any $S\in{\cal B}(\R^d)$, on $\{\tau >n\}$ we have \begin{equation} \Pr[\nu]{\Delta Y_{n+1}\in S\midvert {\cal G}_n}=\Pr[\nu]{\Delta X_{n+1}\in S\midvert {\cal G}_n}=\mu ^{rk(X_n)}(S)=\mu ^r(S)\ \ a.s., \end{equation} where the second equality follows from \cref{eqCondInd1}. Also, on $\{\tau \leq n\}$ we have \begin{equation} \Pr[\nu]{\Delta Y_{n+1}\in S\midvert {\cal G}_n}=\Pr[\nu]{U_{n+1}\in S\midvert {\cal G}_n}=\Pr[\nu]{U_{n+1}\in S}=\mu ^r(S)\ \ a.s.
\end{equation} Combining the last two equations we get \begin{equation} \Pr[\nu]{\Delta Y_{n+1}\in S\midvert {\cal G}_n}=\mu ^r(S)\ \ a.s.,\ \ n\geq n_0,\ S\in{\cal B}(\R^d). \end{equation} Therefore, the sequence $\{\Delta Y_{n+1}\}_{n\geq n_0}$ is i.i.d. and, for each $n\geq n_0$, $\Delta Y_{n+1}$ has distribution $\mu ^r$ and is independent of ${\cal G}_n$, which completes the proof of (ii). For property (iii), let $m\leq n$ and note that on the set $\{Y_m\neq X_m\}$ we have $\tau <m<\infty$ and by definition $\tau\geq n_0$, $Y_{\tau}=X_{\tau}$, and $rk(Y_{\tau })=rk(X_{\tau })\neq r$. Therefore, the intersection of $\{Y_m\neq X_m\}$ with both $\capdu{k=n_0}{n-1}\{rk(X_k)=r\}$ and $\capdu{k=n_0}{n-1}\{rk(Y_k)=r\}$ is empty a.s. \end{proof} \begin{proof}[Proof of necessity for \cref{theoremTerminalRankings}] Let $r$ be a terminal ranking. Then, there exists some initial distribution $\nu$ and some $n_0\in \N$ such that $\Prd{\nu}{A}>0$, where \begin{equation} A=\capdu{n=n_0}{\infty }\{rk(X_n)=r\}. \end{equation} Let $Y_n$ $(r,n_0)$-mimic $X_n$. By \cref{lemmaMirroring}iii we have \begin{equation} \capdu{n=n_0}{\infty }\{rk(Y_n)=r\}=\capdu{n=n_0}{\infty }\{rk(X_n)=r\}=A. \end{equation} Fix some $i,j\in [d]$ and note that the sequence $d_n=Y^i_n-Y^j_n$, $n\geq n_0$, performs a random walk, starting at $d_{n_0}=Y^i_{n_0}-Y^j_{n_0}=X^i_{n_0}-X^j_{n_0}$, and with the step $\Delta d_{n+1}=d_{n+1}-d_n$ having the same distribution as $Z^r_i-Z^r_j$ (see \cref{eqZ}). In particular, for any $n\geq n_0$, \begin{equation} \label{dummyEq56} \Pr[\nu]{\Delta d_{n+1}\neq 0}=\Pr{Z^r_i\neq Z^r_j}=\mu^r(x_i\neq x_j) \end{equation} and \begin{equation} \label{dummyEq58} \E_{\nu}\left[\Delta d_{n+1}\right]=\E\left[Z^r_i-Z^r_j\right]=q^r_{i}-q^r_{j}. \end{equation} If $\mu^r(x_i\neq x_j)\neq 0$, then the random walk is non-trivial, and in particular $\capdu{n=n_0}{\infty }\{Y^i_n=Y^j_n\}=\capdu{n=n_0}{\infty }\{d_n=0\}$ has probability $0$.
If $r(i)=r(j)$, this means that $\capdu{n=n_0}{\infty }\{rk(Y_n)=r\}$ has probability $0$, contradicting the fact that $\Prd{\nu}{A}>0$. We conclude that if $r(i)=r(j)$, then $\mu^r(x_i\neq x_j)=0$. For the second assertion, assume that in addition to $\mu^r(x_i\neq x_j)\neq 0$, we also have $q^r_{i}\leq q^r_j$. This means that either $d_n\to -\infty $ a.s. or the random walk oscillates, with $\underset{n}{\liminf}\, d_n=-\infty $ a.s. In either case, $\Prd{\nu}{\capdu{n=n_0}{\infty }\{Y^i_n>Y^j_n\}}=\Prd{\nu}{\capdu{n=n_0}{\infty }\{d_n>0\}}=0$. Therefore, if $r(i)<r(j)$, then $\Prd{\nu}{\capdu{n=n_0}{\infty }\{rk(Y_n)=r\}}=0$, again contradicting the fact that $\Prd{\nu}{A}>0$. We conclude that if $r(i)<r(j)$, then either $\mu^r(x_i\neq x_j)=0$ or $q^r_i>q^r_j$. \end{proof} For the sufficiency part of \cref{theoremTerminalRankings}, we are going to prove the following more general result. \begin{lemma}[Terminal rankings sufficient condition] \label{theoremLimitRankingsSufficient} Let $r\in {\cal R}$ and define $A=\{(i,j)\in [d]\times [d]:r(i)<r(j)\}$ and $A'=\{(i,j)\in [d]\times [d]:r(i)=r(j)\}$. Assume that for any $(i,j)\in A$, either $q^r_i>q^r_j$ or $\mu^r(x_i\neq x_j)=0$, and that for any $(i,j)\in A'$, $\mu^r(x_i\neq x_j)=0$. Then, there exists some $M>0$, such that for any initial distribution $\nu$ and any $n_0\in \N $ that satisfy \begin{equation} \label{conditionLargeDifferences} \Prd{\nu }{\capdu{(i,j)\in A}{} \left\{ X^{i}_{n_0}>X^{j}_{n_0}+M\right\},\capdu{(i,j)\in A'}{}\left\{ X^{i}_{n_0}=X^{j}_{n_0}\right\}}>0, \end{equation} we have \begin{equation} \Prd{\nu }{\capdu{n=n_0}{\infty }\left\{ rk(X_n)=r\right\}}>0. \end{equation} \end{lemma} \begin{proof} Consider the collection of random variables $\{U^i_n\}_{n\in\N}^{i\in [d]}$, independent of ${\cal F}_{\infty}$, such that for each $i$, the $\{U^i_n\}_n$ are i.i.d. with the same distribution as $Z^r_i$.
For any pair $(i,j)\in A$, $U^i_n-U^j_n$ is either identically zero (if $\mu^r(x_i\neq x_j)=0$) or it has positive mean and finite variance (if $q^r_i-q^r_j>0$). In the latter case, by the Strong Law of Large Numbers, $\sumdu{m=1}{n}(U^i_m-U^j_m)\to \infty $ a.s. as $n\to \infty $. Therefore, in both cases, $\sumdu{m=1}{n}(U^i_m-U^j_m)$ is bounded below a.s. Hence, there exists some $M>0$, such that for any pair $(i,j)\in A$, \begin{equation} \label{eqDummy27} \Pr{\underset{n\in\N}{\min }\, \sumdu{m=1}{n}(U^i_m-U^j_m)\leq -M}<\frac{1}{d^2}. \end{equation} Now let the initial distribution $\nu$ and $n_0\in\N$ satisfy \cref{conditionLargeDifferences} for the value of $M$ specified in \cref{eqDummy27}, i.e. $\Prd{\nu }{D} >0$, where \begin{equation} \begin{aligned} \label{dummyEq29} D & =\capdu{(i,j)\in A}{} \left\{ X^{i}_{n_0}>X^{j}_{n_0}+M\right\}\cap \capdu{(i,j)\in A'}{} \left\{ X^{i}_{n_0}=X^{j}_{n_0}\right\}. \end{aligned} \end{equation} We want to show that $\Prd{\nu}{\capdu{n=n_0}{\infty }\{rk(X_n)=r\}}>0$. Let $Y_n$ be a process that $(r,n_0)$-mimics $X_n$ (see \cref{lemmaMirroring}) and note that \cref{dummyEq29} implies \begin{equation} \begin{aligned} D=\capdu{(i,j)\in A}{} \left\{ Y^{i}_{n_0}>Y^{j}_{n_0}+M\right\}\cap \capdu{(i,j)\in A'}{} \left\{ Y^{i}_{n_0}=Y^{j}_{n_0}\right\}. \end{aligned} \end{equation} For any $(i,j)\in A'$, $n\geq n_0$, we have $\Prd{\nu}{\Delta Y^i_{n+1}\neq \Delta Y^j_{n+1}}=\mu^r(x_i\neq x_j)=0$ by \cref{lemmaMirroring}ii and by assumption, hence on the set $D$ we have \begin{equation} \label{eqDummy41} Y^i_n=Y^j_n,\text{ for all }(i,j)\in A',n\geq n_0. \end{equation} We further define \begin{equation} \begin{aligned} B_{(i,j)} & =\capdu{n\geq n_0}{}\left\{ (Y^i_{n}-Y^j_{n})-(Y^i_{n_0}-Y^j_{n_0})>-M\right\},\ \ (i,j)\in A,\\ B & =\capdu{(i,j)\in A}{}\,B_{(i,j)}. \end{aligned} \end{equation} Note that for each $(i,j)\in A$, on the set $D\cap B_{(i,j)}$ we have $Y^i_n>Y^j_n$ for all $n\geq n_0$.
Combining this with \cref{eqDummy41}, we get that on the set $D\cap B$, it holds that $rk(Y_n)=r$ for all $n\geq n_0$, which implies $rk(X_n)=r$ for all $n\geq n_0$ (\cref{lemmaMirroring}iii). It is therefore enough to show that $\Prd{\nu}{D\cap B}>0$. Note that for each $(i,j)\in A$, $\{ (Y^i_{n+n_0}-Y^j_{n+n_0})-(Y^i_{n_0}-Y^j_{n_0})\} _{n\in\N}$ has the same distribution as $\left\{\sumdu{m=1}{n}(U^i_m-U^j_m)\right\}_{n\in\N}$, therefore \cref{eqDummy27} implies that $\Prd{\nu }{B_{(i,j)}}>1-1\slash d^2$, and since $card(A)<d^2$, we get $\Prd{\nu }{B}>0$. By assumption we also have $\Prd{\nu}{D}>0$. Finally observe that by \cref{lemmaMirroring}ii, $D\in {\cal G}_{n_0}$ and $B\independent {\cal G}_{n_0}$, hence $\Prd{\nu}{D\cap B}=\Prd{\nu}{D}\cdot \Prd{\nu}{B}>0$, which completes the proof. \end{proof} \begin{proof}[Proof of sufficiency for \cref{theoremTerminalRankings}] By assumption $r$ satisfies the conditions of \cref{theoremLimitRankingsSufficient}. Let $M$ be as in that lemma and define the initial distribution $\nu$ as follows: $X^i_0=(d-r(i))\cdot (M+1)$ a.s. Then, $r(i)=r(j)$ implies $X^i_0=X^j_0$ a.s., while $r(i)<r(j)$ implies $X^i_0>X^j_0+M$ a.s. That is, $\nu$ satisfies \cref{conditionLargeDifferences} with $n_0=0$, hence $\Prd{\nu}{\capdu{n=0}{\infty}\{ rk(X_n)=r\}}>0$, in particular $r$ is terminal. \end{proof} \subsection{Limit theorems for $X_n$} \label{sectionLimitTheorems} In this section we will prove the following theorem about the long term behavior of $X_n$. \begin{proposition}[Strong Law of Large Numbers and Central Limit Theorem] \label{marketShareTheorem} For any $r\in {\cal R}$, \begin{equation} \label{eqSLLN} \underset{n\to \infty }{\lim }\frac{X_n}{n}=q^{r}\ \text{ a.s. on the set }\left\{ \underset{n\to \infty }{\lim} rk(X_n)=r\right\}. 
\end{equation} Furthermore, for any $r\in {\cal R}$ and any initial distribution $\nu$, if $\Prd{\nu}{\underset{k\to\infty }{\lim}rk(X_k)=r}>0$, then for each $i\in [d]$ and $x\in\R$, \begin{equation} \label{eqCLT} \limn \Prd{\nu}{\frac{X^i_n-n\cdot q^r_i}{\sqrt{n}\cdot \sigma ^r_i}\leq x\midvert \underset{k\to \infty }{\lim} rk(X_k)=r}=\Phi(x), \end{equation} where $\Phi$ denotes the cumulative distribution function of a standard normal distribution. \end{proposition} For the proof we are going to need a couple of lemmas whose proofs are given in the Appendix. \begin{lemma} \label{lemmaConditioningContinuity} Let $A_n$, $n\in\N$, and $A$ be measurable sets in a probability space, each with positive probability, and suppose that $A_n\to A$ in the sense that $\Pr{(A_n\backslash A)\cup (A\backslash A_n)}\to 0$. Then, $\Pr{S\midvert A_n}\to \Pr{S\midvert A}$ uniformly over measurable sets $S$. \end{lemma} \begin{lemma} \label{lemmaDoubleConvergence} Let $a_{m,n}\in\R$, $m,n\in\N$, and suppose that $\underset{m\to\infty}{\lim }a_{m,n}=a_n\in \R$ uniformly in $n$, and $\limn a_{m,n}=a\in\R$ for all $m\in\N$. Then, $\limn a_n=a$. \end{lemma} \begin{proof}[Proof of \cref{marketShareTheorem}] Since by definition $\{rk(X_k)\to r\}=\cupdu{n_0\in\N}{}\,\capdu{k=n_0}{\infty }\{rk(X_k)=r\}$, it is enough to show that \begin{equation} \label{eqSLLNdummy} \underset{n\to \infty }{\lim }\frac{X_n}{n}=q^{r}\ \text{ a.s. on the set }\capdu{k=n_0}{\infty }\{rk(X_k)=r\} \end{equation} and \begin{equation} \label{eqCLTdummy} \limn \Prd{\nu}{\frac{X^i_n-n\cdot q^r_i}{\sqrt{n}\cdot \sigma ^r_i}\leq x\midvert \capdu{k=n_0}{\infty }\{rk(X_k)=r\}}=\Phi(x),\ \ i\in[d], \end{equation} for any $n_0\in\N$ that satisfies $\Prd{\nu}{\capdu{k=n_0}{\infty }\{rk(X_k)=r\}}>0$. Fix such an $n_0$ and let $Y_n$ be a process that $(r,n_0)$-mimics $X_n$ (see \cref{lemmaMirroring}). Since $\{\Delta Y_n\}_{n\geq n_0+1}$ is an i.i.d.
sequence whose $i$-th component has mean $q^r_i$ and standard deviation $\sigma ^r_i$, we have by the Strong Law of Large Numbers, \begin{equation} \label{dummyEq20} \frac{Y_n}{n}\to q^r\text{ a.s.} \end{equation} and by the Central Limit Theorem, \begin{equation} \label{dummyEq47} \frac{Y^i_n-n\cdot q^r_i}{\sqrt{n}\cdot \sigma ^r_i}\overset{d}{\to }{\cal N}(0,1). \end{equation} \Cref{eqSLLNdummy} follows from \cref{dummyEq20} and \cref{lemmaMirroring}iii. To show \cref{eqCLTdummy}, first note that \cref{dummyEq47} can be strengthened: since $\Delta Y_{n+1}\independent {\cal F}_n$ for each $n\geq n_0$, we have that for any $m\geq n_0$ and any $x\in\R$, \begin{equation} \label{dummyEq23} \Prd{\nu}{\frac{Y^i_n-n\cdot q^r_i}{\sqrt{n}\cdot \sigma ^r_i}\leq x\midvert \capdu{k=n_0}{m}\{rk(X_k)=r\}}\overset{n\to\infty}{\to} \Phi (x). \end{equation} Furthermore, since $\capdu{k=n_0}{m}\{rk(X_k)=r\}\overset{m\to\infty }{\longrightarrow}\capdu{k=n_0}{\infty }\{rk(X_k)=r\}$, \cref{lemmaConditioningContinuity} implies that \begin{equation} \begin{aligned} \Prd{\nu}{\frac{Y^i_n-n\cdot q^r_i}{\sqrt n\cdot \sigma ^r_i}\leq x\midvert \capdu{k=n_0}{m}\{rk(X_k)=r\}}\overset{m\to\infty}{\to} \\ \Prd{\nu}{\frac{Y^i_n-n\cdot q^r_i}{\sqrt n\cdot \sigma ^r_i}\leq x\midvert \capdu{k=n_0}{\infty }\{rk(X_k)=r\} } \end{aligned} \end{equation} uniformly in $n$. Combining this with \cref{dummyEq23} and \cref{lemmaDoubleConvergence} we get \begin{equation} \Prd{\nu}{\frac{Y^i_n-n\cdot q^r_i}{\sqrt n\cdot \sigma ^r_i}\leq x\midvert \capdu{k=n_0}{\infty }\{rk(X_k)=r\}}\to \Phi(x) , \end{equation} as $n\to \infty$. By \cref{lemmaMirroring}iii, this is equivalent to \cref{eqCLTdummy}. \end{proof} We also have the following partial converse of \cref{marketShareTheorem}. \begin{proposition} \label{marketShareConverse} If $X_n\slash n\to x\in \R ^d$ and the components of $x$ are all distinct, then $rk(X_n)\to rk(x)$ and $x=q^{rk(x)}$. 
\end{proposition} \begin{proof} Let $i,j\in [d]$ and assume without loss of generality that $x_i>x_j$. Then, for large enough $n$, $X^i_n>X^j_n$, so $rk(X_n)$ ranks $i$ higher than $j$. Since this is true for all pairs $i,j$, we get that for large enough $n$, $rk(X_n)=rk(x)$, hence $rk(X_n)\to rk(x)$. By \cref{marketShareTheorem}, $X_n\slash n\to q^{rk(x)}$, and since $X_n\slash n\to x$ by assumption, uniqueness of limits gives $x=q^{rk(x)}$. \end{proof} \subsection{Terminal rankings and initial distributions} \label{sectionInitialDistributions} Although \cref{theoremTerminalRankings} gives the possible limits of the ranking for a ranking-based process in principle, it does not say for which pairs of initial distributions $\nu$ and terminal rankings $r$ we have $\Prd{\nu}{rk(X_n)\to r}>0$. To see that for the same terminal ranking $r$ it is possible to have $\Prd{\nu}{rk(X_n)\to r}>0$ for some initial distributions $\nu$ and not for others, consider a deterministic system with $d=2$ such that \begin{equation} \begin{aligned} \Pr{\Delta X_{n+1}=(1,0)\midvert rk(X_n)=\text{id}_2} & =1 \ \ \text{ and }\\ \Pr{\Delta X_{n+1} =(0,1)\midvert rk(X_n)\neq \text{id}_2} & =1, \end{aligned} \end{equation} where $\text{id}_2$ is the identity function on the set $\{1,2\}$. In words, if $X^1_n>X^2_n$, then $X^1_n$ increases by $1$ and $X^2_n$ remains constant. If $X^1_n\leq X^2_n$, then $X^2_n$ increases by $1$ and $X^1_n$ remains constant. Clearly, if we start at $X_0=(0,0)$, $rk(X_n)\to r$ a.s., where $r(1)=2$, $r(2)=1$, while if we start at $X_0=(1,0)$, $rk(X_n)\to \text{id}_2$ a.s. From the above example it might seem that the only reason that a strict ranking $r$ satisfying $q^{r}_{r^{-1}(1)}>q^{r}_{r^{-1}(2)}>\ldots >q^{r}_{r^{-1}(d)}$ might fail to satisfy $\Prd{\nu}{rk(X_n)\to r}>0$ is that it is not reachable from the given initial distribution, in the sense that $\Pr[\nu]{\bigcup_{n=1}^{\infty }\{rk(X_n)=r\}}=0$. However, this is not the only case.
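The role of the initial point in the two-component system above is easy to verify numerically. The following Python sketch (illustrative only; the function names are ours) iterates the deterministic dynamics from the two starting points considered in the text:

```python
def step(x):
    """One step of the two-component deterministic system:
    if X^1_n > X^2_n, the first component grows by 1; otherwise
    (including ties) the second component grows by 1."""
    return (x[0] + 1, x[1]) if x[0] > x[1] else (x[0], x[1] + 1)

def eventual_leader(x0, n=1000):
    """Iterate n steps and report which component ends up ahead."""
    x = x0
    for _ in range(n):
        x = step(x)
    return 1 if x[0] > x[1] else 2
```

Starting from the tie $(0,0)$, the second component takes the lead and keeps it forever, while a head start of one unit for the first component, $(1,0)$, reverses the limiting ranking.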
For example, let $d=3$, and suppose that \begin{equation} \begin{aligned} \Pr{\Delta X_{n+1}=(5,-2,0)\midvert rk(X_n)=\text{id}_3} & =1\slash 2,\\ \Pr{\Delta X_{n+1} =(-3,3,0)\midvert rk(X_n)=\text{id}_3} & =1\slash 2,\ \ \text{ and}\\ \Pr{\Delta X_{n+1}=(0,0,1)\midvert rk(X_n)\neq \text{id}_3} & =1. \end{aligned} \end{equation} In words, whenever $X^1_n>X^2_n>X^3_n$, with probability $1\slash 2$ the first component will increase by $5$ and the second will decrease by $2$, and also with probability $1\slash 2$ the first component will decrease by $3$ and the second will increase by $3$, while the last component remains constant a.s. For any other ranking, the third component increases by $1$ and the rest remain constant a.s. Now suppose we begin at $X_0=(2,1,0)$ a.s., so that $rk(X_0)=\text{id}_3$ a.s. Clearly, after the first step the ranking will necessarily change, and after that $\Delta X_{n}=(0,0,1)$ deterministically, so that for large $n$ we will have either $X^3_n>X^1_n>X^2_n$ or $X^3_n>X^2_n>X^1_n$. We see that despite the fact that $rk(X_0)=\text{id}_3$ and $q^{\text{id}_3}_{1}>q^{\text{id}_3}_2>q^{\text{id}_3}_3$, for the specific initial distribution $\nu$ we get $\Prd{\nu }{rk(X_n)\to {\text{id}_3}}=0$. The above examples might seem discouraging. However, we have the following positive result, which states that such situations do not arise if a certain condition is satisfied. The condition roughly says that, no matter the ranking, there is some positive probability for any component to increase faster than the rest, and for the increments of the rest to follow any given non-strict order. \begin{proposition} \label{propositionLimitRankingsSufficient} Suppose that for any permutation $\sigma $ of $[d]$ and any $r'\in {\cal R}$, \begin{equation} \label{dummyEq21} \mu^{r'}(x_{\sigma _1}>x_{\sigma _2}\geq x_{\sigma _3}\geq\ldots\geq x_{\sigma _d})>0.
\end{equation} Then, for any initial distribution $\nu $ and any terminal ranking $r$, \begin{equation} \Prd{\nu}{rk(X_n)\to r}>0. \end{equation} \end{proposition} \begin{remark} \label{remarkOnlyStrictRankings} The condition of \cref{propositionLimitRankingsSufficient} implies that $\mu^r(x_i\neq x_j)>0$ for all $i,j\in [d]$ and $r\in {\cal R}$, which in particular implies the condition of \cref{theoremTerminalRankingsSpecial}. Consequently, under the condition of \cref{propositionLimitRankingsSufficient}, only strict rankings may be terminal. \end{remark} \begin{example} In a ranking-based P\'olya urn, with probability one, exactly one of the components of $\Delta X_{n+1}$ is $1$ and the rest are $0$ (see also \cref{sectionPolyaUrn}). Therefore, \cref{dummyEq21} is satisfied if and only if for any ranking there is positive probability of adding a ball of any given color. In \cref{example:additivePolyaUrn}, this is equivalent to either $\lambda _d>0$ or $a_i>0$ for all $i\in [d]$. More generally, for processes that change one component at a time, \cref{dummyEq21} is satisfied if and only if, for any ranking, every component has non-zero probability of increasing. \end{example} \begin{proof}[Proof of \cref{propositionLimitRankingsSufficient}] By \cref{remarkOnlyStrictRankings} we may assume that $r$ is a strict ranking. Also, by renaming the indices, we may assume that $r$ is the identity map on $[d]$, i.e. $r(i)=i$ for all $i\in [d]$. Let $M>0$ be as in \cref{theoremLimitRankingsSufficient} and define \begin{equation} \begin{aligned} C^j_{n} & =\left\{X^j_{n}>X^{j+1}_{n}+M\right\},\ \ j\in[d-1],\ \ n\in\N,\\ B^i_n & =\capdu{j=i}{d-1}C^j_{n},\ \ i\in[d-1],\ \ n\in\N, \end{aligned} \end{equation} and $B^d_n=\Omega $, $n\in\N$. By \cref{theoremLimitRankingsSufficient}, it is enough to show that $\Prd{\nu}{\cupdu{n\in\N}{}B^1_n}>0$. 
We will use (backwards) induction on $i$ to show that $\Prd{\nu}{\cupdu{n\in\N}{}B^i_n}>0$ for all $i\leq d$, with the base case $i=d$ being trivially true. Suppose then that $\Prd{\nu}{\cupdu{n\in\N}{}B^{i+1}_n}>0$ or, equivalently, that there exists some $n\in\N$ such that $\Prd{\nu}{B^{i+1}_n}>0$. Fix such an $n$. From \cref{dummyEq21} and continuity, there exists some $\epsilon>0$ such that $\mu^{r'}(A_i)>0$ for all $r'\in{\cal R}$, where \begin{equation} A_i=\{x\in\R^{d}:x_i-\epsilon >x_{i+1}\geq x_{i+2}\geq \ldots \geq x_{d}\}. \end{equation} For any $j\in [d-1]$ and $m,k\in\N$, define \begin{equation} D^j_{m,k}=\left\{X^j_{m+k}-X^j_{m}\geq X^{j+1}_{m+k}-X^{j+1}_{m}\right\} \end{equation} and \begin{equation} D^j_{m,k}(\epsilon)=\left\{X^j_{m+k}-X^j_{m}-\epsilon \geq X^{j+1}_{m+k}-X^{j+1}_{m}\right\}. \end{equation} In particular, $D^j_{m,1}=\left\{\Delta X^j_{m+1}\geq \Delta X^{j+1}_{m+1}\right\}$, and similarly for $D^j_{m,1}(\epsilon)$. Therefore, from \cref{eqCondInd1} we get that for any $m\in\N$, \begin{equation} \label{dummyEq81} \begin{aligned} \Prd{\nu}{D^i_{m,1}(\epsilon),\capdu{j=i+1}{d-1}D^j_{m,1}\midvert {\cal F}_m} & =\mu^{rk(X_m)}(A_i)\\ & \geq \underset{r'\in {\cal R}}{\min }\, \mu^{r'}(A_i)>0\text{ a.s.} \end{aligned} \end{equation} Let $K\in\N$ be such that \begin{equation} \label{dummyEq45} \Pr[\nu]{F_{K}\cap B^{i+1}_n}>0, \end{equation} where \begin{equation} F_{K}=\{X^i_n-X^{i+1}_n>M-K\epsilon \}. \end{equation} This is always possible, since $\cupdu{K\in\N}{}F_{K}=\Omega $ and $\Pr[\nu]{B^{i+1}_n}>0$ by assumption. Applying \cref{dummyEq81} for $m=n,n+1,\ldots ,n+(K-1)$ and using $D^j_{m,1}\in{\cal F}_{m+1}$, it easily follows that \begin{equation} \label{dummyEq79} \begin{aligned} \Prd{\nu}{D^i_{n,K}(K\epsilon),\capdu{j=i+1}{d-1}D^j_{n,K}\midvert {\cal F}_n}>0\ \ \text{a.s.}
\end{aligned} \end{equation} Observe that \begin{equation} \begin{aligned} C^j_n\cap D^j_{n,K}\subset C^j_{n+K}, & \ \ j=i+1,\ldots ,d-1\\ F_{K}\cap D^i_{n,K}(K\epsilon)\subset C^i_{n+K}. & \end{aligned} \end{equation} Combining these two relations and the definition of $B^i_n$ we get \begin{equation} \begin{aligned} \Pr[\nu]{B^i_{n+K}} & =\Pr[\nu]{\capdu{j=i}{d-1}C^j_{n+K}}\\ & \geq \Pr[\nu]{\capdu{j=i+1}{d-1}\left(C^j_n\cap D^j_{n,K}\right), F_{K},D^i_{n,K}(K\epsilon)}\\ & =\Pr[\nu]{F_{K},B^{i+1}_n,D^i_{n,K}(K\epsilon),\capdu{j=i+1}{d-1}D^j_{n,K}}\\ & =\Pr[\nu]{F_{K},B^{i+1}_n}\cdot \Pr[\nu]{D^i_{n,K}(K\epsilon),\capdu{j=i+1}{d-1}D^j_{n,K}\midvert F_{K},B^{i+1}_n}\\ & >0, \end{aligned} \end{equation} with the last line following from \cref{dummyEq45,dummyEq79}. This concludes the inductive proof. \end{proof} \section{Applications} \label{sectionApplications} \subsection{Ranking-based P\'olya urns and urn functions} \label{sectionPolyaUrn} In this section we look at how our results apply to the case of ranking-based P\'olya urns in terms of urn functions. We call \textit{ranking-based P\'olya urn} a ranking-based process $X_n\in\R^d$ where $\Delta X_n\in \{ 0,1\}^d$ and $\sumdu{i=1}{d}\Delta X^i_n=1$ a.s. Note that in this case \begin{equation} q^{rk(X_n)}_i=\E \left[\Delta X^i_{n+1}\midvert rk(X_n)\right]=\Pr{\Delta X^i_{n+1}=1\midvert rk(X_n)}, \end{equation} that is, $q^r_i$ is the probability of adding a ball of color $i$, when the ranking is $r$. We want to compare our results to \citep{arthur1986strong,hill1980strong}, where the results are stated in terms of the fixed points of the urn function. The urn function $f:\Delta ^{d-1}\to \Delta ^{d-1}$, where \begin{equation} \Delta ^{d-1}:= \left\{ x\in [0,1]^d:\ \sumdu{i}{}\, x_i=1\right\} \end{equation} is the standard $(d-1)$-dimensional simplex, takes as argument the vector of proportions of balls of each color, and its $i$-th component $f_i$ gives the probability of the next ball being of color $i$.
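To make these definitions concrete, the following Python sketch simulates a two-color urn driven by a hypothetical ranking-based urn function; the reinforcement probabilities $0.7$ and $0.3$ are illustrative and not taken from the text. Since the urn function depends on its argument only through the ranking, it can be evaluated on the raw counts rather than the proportions:

```python
import random

def f(x):
    """A hypothetical ranking-based urn function for d = 2: the
    currently leading color is reinforced with probability 0.7."""
    if x[0] > x[1]:
        return (0.7, 0.3)
    if x[1] > x[0]:
        return (0.3, 0.7)
    return (0.5, 0.5)  # tie

def simulate_urn(f, x0, n, seed=0):
    """Draw n balls (two colors); the color of each new ball is
    sampled from the urn function at the current composition."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(n):
        p = f(x)
        i = 0 if rng.random() < p[0] else 1
        x[i] += 1
    return x

# The fixed points of f with distinct coordinates are (0.7, 0.3)
# and (0.3, 0.7); the proportion of color 0 settles near 0.7 or 0.3.
x = simulate_urn(f, (1, 0), 20000)
share = x[0] / sum(x)
```

With reinforcement this strong the ranking locks in quickly, and `share` ends up close to either $0.7$ or $0.3$, i.e. near one of the two fixed points with distinct coordinates.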
For a ranking-based urn, $f(x)$ must be constant in regions of constant ranking, that is, its value may only depend on $rk(x)$. With our notation we have \begin{equation} \label{eqPolyaProba} f_i(x)=\Pr{\Delta X^i_n=1\midvert rk(X_n)=rk(x)}=q^{rk(x)}_i. \end{equation} The next proposition uses our results from \cref{sectionResultsGeneral} to relate the fixed points of $f$ with the limiting behavior of $X_n\slash n$. \begin{proposition} \label{propositionPolya} Consider a ranking-based P\'olya urn with urn function $f$ and let $A$ be the set of fixed points of $f$ whose coordinates are all distinct, i.e. \begin{equation} A=\{ x\in \Delta ^{d-1}: f(x)=x,\ x_i\neq x_j\ \text{ for all } i\neq j\}. \end{equation} Then: \begin{enumerate} \item For any $x\in A$, there is some $\nu$ such that $\Prd{\nu}{X_n\slash n\to x}>0$. If $q^r_i>0$ for all $i\in [d],r\in{\cal R}$, then $\Prd{\nu}{X_n\slash n\to x}>0$ holds for all $\nu$. \item If $q^r_i>0$ for all $i\in [d],r\in{\cal R}$ and furthermore \cref{qualityOrderingAssumption} is satisfied, then for any initial distribution $\nu$, $\limn X_n\slash n\in A$ a.s. (in particular $X_n\slash n$ converges a.s.). \item Conditioned on $\limn X_n\slash n=x\in A$, $\frac{X^i_n-nx_i}{\sqrt{nx_i(1-x_i)}}$ converges to a standard normal distribution. More precisely, for any initial distribution $\nu$, $i\in[d]$, $x\in A$, and $y\in\R$, \begin{equation} \Prd{\nu}{\frac{X^i_n-n\cdot x_i}{\sqrt{nx_i(1-x_i)}}\leq y\midvert \underset{k\to\infty }{\lim }\frac{X_k}{k}=x}\to \Phi(y), \end{equation} whenever $\Pr[\nu]{\underset{k\to\infty }{\lim }\frac{X_k}{k}=x}>0$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Let $x=(x_1,\ldots ,x_d)\in A$ and denote $r=rk(x)$, so that $x_{r^{-1}(1)}>\ldots >x_{r^{-1}(d)}$. Then, $f(x)=q^r$ (\cref{eqPolyaProba}). Since $x$ is a fixed point of $f$, we get $q^{r}=x$, hence also $q^{r}_{r^{-1}(1)}>\ldots >q^{r}_{r^{-1}(d)}$. 
By \cref{theoremTerminalRankings} $r$ is terminal, so $\Prd{\nu}{rk(X_n)\to r}>0$ for some initial distribution $\nu$. By \cref{marketShareTheorem}, $\Prd{\nu}{X_n\slash n\to x}=\Prd{\nu}{X_n\slash n\to q^r}\geq \Prd{\nu}{rk(X_n)\to r}>0$. If $q^r_i>0$ for all $i\in [d],r\in{\cal R}$, then the condition of \cref{propositionLimitRankingsSufficient} is satisfied, therefore $r$ being terminal implies $\Prd{\nu}{rk(X_n)\to r}>0$ for any initial distribution $\nu$. \item By \cref{settlingTheorem} $rk(X_n)$ converges a.s. and by \cref{theoremTerminalRankingsSpecial} the limit $R$ has to be a strict ranking, in particular $q^{R}_i\neq q^{R}_j$ for all $i\neq j$ a.s. By \cref{marketShareTheorem} $X_n\slash n\to q^{R}$ a.s. and by \cref{marketShareConverse} $q^{R}=q^{rk(q^{R})}$, which is a fixed point of $f$ by \cref{eqPolyaProba}, thus $q^{R}\in A$. \item Denote $r=rk(x)$. By \cref{marketShareConverse,marketShareTheorem}, $x=q^r$ and \begin{equation} \left\{\underset{k\to\infty }{\lim }X_k\slash k=x\right\}=\left\{\underset{k\to\infty }{\lim }rk(X_k)=r\right\}\ \ \text{a.s.} \end{equation} Hence, by the second part of \cref{marketShareTheorem}, \begin{equation} \Prd{\nu}{\frac{X^i_n-n\cdot x_i}{\sqrt{n}\cdot \sigma ^r_i}\leq y\midvert \underset{k\to\infty }{\lim }\frac{X_k}{k}=x}\to \Phi(y). \end{equation} The result follows once we recall that $\sigma ^r_i=\sqrt{Var(Z^r_i)}$ and that $Z^r_i$ is a Bernoulli random variable with parameter $q^r_i=x_i$, hence $\sigma ^r_i=\sqrt{x_i(1-x_i)}$. \end{enumerate} \end{proof} We now compare our results to the ones that appear in \citep{arthur1986strong,hill1980strong}. We are going to restrict ourselves to ranking-based P\'olya urns with the urn function being constant in $n\in\N$ (in \citep{arthur1986strong} the urn function is allowed to be a function of $n$). Part 1 of \cref{propositionPolya}, in particular the case $q^r_i>0$ for all $i\in [d],r\in{\cal R}$, agrees with Theorem 5.1 in \citep{arthur1986strong}.
In that theorem, the authors show that $X_n\slash n$ has positive probability of converging to any point $\theta \in \Delta ^{d-1}$ that is a stable fixed point of $f$, in the sense that $f(\theta )=\theta $ and there is a neighborhood $U$ of $\theta $ and a positive-definite matrix $C$ such that \begin{equation} \label{eqStableFixedPoint} \left\langle C(x-f(x)),x-\theta \right\rangle >0,\ \ \text{ for all }\ \ x\in \Delta ^{d-1}\cap U,\ \ x\neq \theta . \end{equation} Note that in the ranking-based case, where $f$ is piecewise constant, any fixed point $\theta $ with all coordinates being distinct (i.e. $\theta \in A$) is always stable, since then $f(x)=\theta $ identically in a neighborhood of $\theta $, so the above condition is satisfied if we take $C$ to be the identity matrix. The result in \citep{arthur1986strong} is more general than part 1 of \cref{propositionPolya}, because it also applies to fixed points whose coordinates are not distinct. On the other hand, there are no analogues of parts 2 and 3 of our \cref{propositionPolya} in \citep{arthur1986strong} that apply to the ranking-based case (but Theorem 3.1 in that reference is an analogue of part 2 for continuous urn functions $f$). As mentioned in the introduction, in \citep{hill1980strong} the case $d=2$ is studied and it is shown that $X_n\slash n$ converges a.s. Note that we have shown this only if $q^r_i>0$ for all $i\in [d],r\in{\cal R}$, and \cref{qualityOrderingAssumption} is satisfied. In \citep{hill1980strong} no such assumption is made. However, the proof there relies on properties of the real line (when $d=2$, the process is described by $X^1_n$ alone, because $X^2_n=n-X^1_n$), thus it is not obvious how to generalize it to arbitrary $d$. Regarding the support of the limit, Theorem 4.1 in \citep{hill1980strong} is similar to part 2 of our \cref{propositionPolya}: assuming that $d=2$ and $q^r_i>0$ for all $i\in [d],r\in {\cal R}$, if $A$ contains a single point, then the two results coincide.
Part 2 of \cref{propositionPolya} also applies when $A$ contains more than one point (i.e. two points), while Theorem 4.1 in \citep{hill1980strong} does not. On the other hand, if $A$ is empty, which (in the case $d=2$ with $q^r_i>0$ for all $i\in [d],r\in {\cal R}$) is equivalent to \cref{qualityOrderingAssumption} not being satisfied, part 2 of \cref{propositionPolya} does not apply, while Theorem 4.1 in \citep{hill1980strong} implies that $X_n\slash n\to (1\slash 2,1\slash 2)$. We emphasize that the above is a comparison of results in the special case of ranking-based P\'olya urns (and in the case of \citep{hill1980strong}, when $d=2$). However, both our results and those in \citep{arthur1986strong,hill1980strong} apply to more general settings: our results apply to more general (ranking-based) processes than P\'olya urns, while those in \citep{arthur1986strong,hill1980strong} apply to non-ranking-based P\'olya urns. \subsection{Ranking items in online interfaces} \label{sectionPopularityRanking} A crucial setting where ranking-based reinforcement is common is online rank-ordered interfaces such as search engines, online marketplaces, newspapers and discussion forums. In this section we describe an application of our results to such systems. The model we describe is based on assumptions about the ranking algorithms implemented and user behavior, so we begin by motivating our assumptions. At the end of the section we describe how these assumptions may be relaxed. Online interfaces often facilitate access to information for their users by ranking their content \citep{liu2009learning}. People, in return, pay more attention to and interact more with results that appear higher on ranked lists \citep{germano2019few,joachims2005accurately}. One of the most fundamental and commonly employed ranking algorithms places the options on the screen according to their popularity, that is, the number of clicks, sales, citations or upvotes that different options have obtained so far.
The rank-by-popularity algorithm is very simple to implement, and many popular websites have relied on it in the past or use some version of it at present.\footnote{For example, Reddit used to order comments by the number of upvotes, Google scholar used to order articles by the number of citations (and still offers that possibility when looking at a profile), Amazon offers the possibility to order options by the number of reviews, Goodreads orders user comments by the number of likes, etc.} A wide array of behavioral models about how people choose among different items in an ordered list have been postulated over the past years in economics, management, marketing and computer science (for a review of models in computer science see \cite{chuklin2015click}). We will consider a staple computer science model for the probability of clicking on a link, called the position-based model \cite[p. 10]{chuklin2015click}. We note that although we will refer to clicks, the model can also be used to describe downloads and citations of papers, purchases of products, likes of comments, etc. In the position-based model, a link is first examined by the user and then clicked if its content is considered to be relevant. This can be stated as \begin{equation} \label{dummyEq64} C^i_n=E^i_n\cap D^i_n, \end{equation} where \begin{equation} \begin{aligned} E^i_n& =\{ n\text{-th user examines link }i\},\\ D^i_n& =\{\text{link } i\text{ is relevant to the } n \text{-th user}\},\\ C^i_n & =\{ n\text{-th user clicks on link }i\}. \end{aligned} \end{equation} We are interested in the vector $X_n=\left(X^1_n,\ldots ,X^d_n\right)$, where $X^i_n$ is the number of users that have clicked on link $i$, up to the $n$-th user. Clearly, we have $\Delta X^i_{n+1}=1$ if $C^i_{n+1}$ occurs, and $\Delta X^i_{n+1}=0$ otherwise. Note that $X_n$ is not a ranking-based P\'olya urn as defined in \cref{sectionPolyaUrn}, because more than one of its components may change simultaneously.
The probability that a link is examined depends only on the position that it appears in, and typically decreases for later positions. Assuming that results appear according to the rank-by-popularity algorithm, that is, by descending number of clicks so far (and randomly breaking ties), this factor depends only on (a) the current rank of result $i$ with respect to the number of clicks and (b) the number of links that are ranked equally with it. For our purposes, we may allow the probability that a link is examined to depend on the full ranking (i.e. how all of the links are ranked), so we will denote \begin{equation} \label{dummyEq52} a^r_i=\Pr{E^i_{n+1}\midvert rk(X_n)=r}. \end{equation} The expression on the right hand side makes sense whenever $\{rk(X_n)=r\}$ has positive probability. We will be making this assumption below whenever similar expressions appear, without further mention. We also assume that links that appear higher are more likely to be examined, that is, if $r(i)<r(j)$, then $a^r_i>a^r_j$. Finally, we assume that $a^r_i>0$ for all $i\in [d],r\in{\cal R}$, so that there is always positive probability of clicking on any of the links. The probability of link $i$ being relevant to the user depends only on the link itself, that is $D^i_{n+1}$ is independent of $\left\{E^j_{n+1}\right\}_{j\in[d]},\left\{D^j_{n+1}\right\}_{j\neq i}$ and $rk(X_n)$. We denote \begin{equation} \begin{aligned} \label{dummyEq66} u_i=\Pr{D^i_{n+1}} \end{aligned} \end{equation} and assume that $u_i\in (0,1)$. The number $u_i$ can be considered a measure of objective quality of the link (not necessarily known to the ranking algorithm). 
Combining \cref{dummyEq64,dummyEq52,dummyEq66} we get \begin{equation} \begin{aligned} \label{dummyEq50} q^r_i & =\Pr{\Delta X^i_{n+1}=1\midvert rk(X_n)=r}\\ &=\Pr{C^i_{n+1}\midvert rk(X_n)=r}\\ &=\Pr{E^i_{n+1}\cap D^i_{n+1}\midvert rk(X_n)=r}\\ &=\Pr {E^i_{n+1}\midvert rk(X_n)=r}\cdot \Pr{D^i_{n+1}\midvert E^i_{n+1},rk(X_n)=r}\\ &=\Pr{E^i_{n+1}\midvert rk(X_n)=r}\cdot \Pr{D^i_{n+1}}\\ &=a^r_i\cdot u_i. \end{aligned} \end{equation} Since we are assuming that $a^r_i>0$ for all $i\in [d],r\in{\cal R}$, and $u_i\in (0,1)$ for all $i\in [d]$, we also have $q^r_i\in (0,1)$ for all $i\in [d],r\in{\cal R}$. Moreover, using the fact that the $D^j_{n+1}$'s are independent of everything else and $\Pr{D^j_{n+1}}<1$ for all $j\in [d]$, we get \begin{equation} \label{eqDummy101} \begin{aligned} &\ \ \ \ \Pr{\Delta X^i_{n+1}=1,\Delta X^j_{n+1}=0\ \text{ for all }j\neq i\midvert rk(X_n)=r}\\ & \geq q^r_i\cdot \proddu{j\neq i}{}\Pr{\left(D^j_{n+1}\right)^c}>0, \end{aligned} \end{equation} for any $i\in [d],r\in{\cal R}$. Now let $i\neq j$ and suppose (without loss of generality) that $u_i\geq u_j$. Recall that, by assumption, for any ranking $r$ that ranks $i$ higher than $j$, we have $a^r_i>a^r_j$, hence \cref{dummyEq50} gives $q^r_i>q^r_j$. That is, $i$ quasi-dominates $j$. By \cref{eqDummy101}, $\mu^r(x_i\neq x_j)>0$ for all $r\in {\cal R}$, therefore $i$ actually dominates $j$, hence \cref{qualityOrderingAssumption} is satisfied. \Cref{settlingTheorem} now says that $rk(X_n)$ converges a.s. \Cref{eqDummy101} also means that the conditions of \cref{theoremTerminalRankingsSpecial,propositionLimitRankingsSufficient} are satisfied, therefore the possible limits for $rk(X_n)$ are those strict rankings $r$ for which $q^r_{r^{-1}(1)}>\ldots >q^r_{r^{-1}(d)}$. Note that in general there will be more than one ranking $r$ satisfying this condition, especially if the effect of the position is strong ($a^r_i$ decreases quickly with the position of $i$ in the ranking $r$).
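This multiplicity of terminal rankings is easy to observe in simulation. The following Python sketch implements the position-based model with the rank-by-popularity algorithm; the examination probabilities and qualities are hypothetical numbers chosen for illustration, not taken from the text:

```python
import random

def final_ranking(u, attention, n_users, seed):
    """Position-based model with rank-by-popularity: the link shown
    in position p is examined with probability attention[p], and an
    examined link i is clicked with probability u[i].  Ties in the
    click counts are broken uniformly at random.  Returns the links
    ordered by final click counts (best first)."""
    rng = random.Random(seed)
    clicks = [0] * len(u)
    for _ in range(n_users):
        order = sorted(range(len(u)),
                       key=lambda i: (-clicks[i], rng.random()))
        for pos, i in enumerate(order):
            if rng.random() < attention[pos] and rng.random() < u[i]:
                clicks[i] += 1
    return sorted(range(len(u)), key=lambda i: -clicks[i])

# Hypothetical numbers: link 0 has the higher quality (u_0 = 0.6 >
# u_1 = 0.5), but attention drops sharply with position.
u, attention = (0.6, 0.5), (0.9, 0.1)
worse_on_top = sum(final_ranking(u, attention, 2000, s)[0] == 1
                   for s in range(50))
```

With these numbers both strict rankings $r$ satisfy $q^r_{r^{-1}(1)}>q^r_{r^{-1}(2)}$ (the leader's click probability is $0.9\cdot u_i$, the trailer's only $0.1\cdot u_j$), and across independent runs each of them occurs as the long-run ranking with positive frequency, so the lower-quality link sometimes locks into the top position.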
Thus, it is likely that links of smaller objective quality $u_i$ will end up being ranked higher in the long term (thus getting more clicks) than links of higher quality. This is an important consequence, since it implies that in general people will be directed towards links that are less likely to be relevant to them, and it reveals an inherent drawback of algorithms that rank results by popularity. Our framework can be generalized to other models of user behavior. For example, we could allow the probability $a^r_i$ of examining a link to depend on the ranking in an arbitrary way (subject to \cref{qualityOrderingAssumption} being satisfied). In particular, the model applies to cases where the position of other links also affects the probability of examining a link at a certain position, such as in the cascade model in computer science \citep{craswell2008experimental} or satisficing models in economics \citep{caplin2011search}. More generally, the assumption that links are first examined and then independently judged to be relevant or not can be discarded altogether; it is enough to require that the links possess some objective quality $u_i$, and whenever link $i$ is ranked higher than $j$ and $u_i>u_j$, it is more likely for $i$ to be clicked (i.e. $q^r_i>q^r_j$). For example, the $q^r_i$'s can be described by a multi-attribute utility model \citep{keeney1993decisions}, where the link position is one of the attributes and $u_i$ is a summary of the rest of the attributes. In a similar vein, we can relax assumptions related to the ranking algorithm. For instance, more sophisticated ranking algorithms may not rank the links based on their number of clicks only, but according to some calculated score that takes into account several other features \citep{page1999pagerank,liu2009learning}. The conceptual framework we developed in this section still applies, as long as the popularity is taken into account in calculating the score.
Further, recent algorithmic approaches estimate the objective utility or relevance $u_i$ of different items by debiasing the number of clicks from attention imbalances \citep{joachims2017unbiased,agarwal2019general}. Even for these algorithms, however, ranking-based rich-get-richer dynamics can be at play, if a link's actual or perceived utility for the users depends on the object's popularity \citep{arthur1989competing,muchnik2013social}. For example, when ranking social networking applications, the rank may convey information about their utility, therefore some form of advantage may persist even when correcting for attention disparities. \section{Discussion} We have developed a mathematical framework for describing systems characterized by ranking-based rich-get-richer dynamics. Specifically, we defined a ranking-based process as a discrete-time Markov process in $\R^d$ whose increment distributions depend only on the current ranking of the components of the process. Under a ranking-based reinforcement assumption (\cref{qualityOrderingAssumption}), we showed that the ranking converges (\cref{settlingTheorem}) and proved a Strong Law of Large Numbers and a Central Limit Theorem-type result for the process itself (\cref{marketShareTheorem}). We also found conditions in terms of the Markov transition kernel to check whether a particular ranking is a possible limit ranking. In some cases we were able to characterize the support of the limit of the ranking independently of the initial distribution (\cref{propositionLimitRankingsSufficient}). We also translated our results in terms of urn functions for the special case of ranking-based P\'olya urns, in order to compare them with previous results with which they partially overlap (\cref{sectionPolyaUrn}). Finally, we described an application to rank-ordered web interfaces (\cref{sectionPopularityRanking}). 
Models of systems with rich-get-richer dynamics have been commonplace in the social, behavioral and computer sciences, and they have been used to describe the observed dynamics in a wide variety of settings. So far, there have been two main families of such models. The first family goes back to Gibrat's law \cite{gibrat1931inegalits}, which states that firms grow proportionally to their current size, and independently of the performance of their competitors. Variations of the notion of proportional growth have been applied across disciplines, for example to model citation growth \citep{allison1982cumulative} and city growth \citep{gabaix1999zipf,gabaix2008power}. Models based on Gibrat's law are inherently unsuitable for capturing ranking-based dynamics, because of their assumption that growth is independent of any competitors. The second family of rich-get-richer models builds on the notion of preferential attachment \citep{simon1955class}, which assumes that entities grow when new units ``attach'' to them, but these new units are more likely to attach to entities that are already larger. Such models are usually described mathematically as P\'olya urns or one of their many generalizations \citep{mahmoud2008polya,pemantle2007survey}. What is common in almost all of these generalized P\'olya urns, and relevant to us, is the fact that the number of balls of a given color added is chosen from a finite set, with probabilities that are each a \textit{continuous} function of the proportion of balls of a \textit{single} color, except that they are normalized to sum to one. Although this allows for some form of competition among colors, it precludes direct comparison of the proportions of balls of different colors, so it does not allow modeling systems where the growth rates depend on the differences between the sizes of different entities, let alone their ranking. Two exceptions are the works of Arthur et al. \cite{arthur1986strong} and of Hill et al. 
\cite{hill1980strong}, which allow arbitrary comparisons of proportions of balls of different colors, but they only treat the simplest type of P\'olya urn processes. These works do not specifically focus on ranking-based competition, but they partially cover it as an extreme case, with a subset of their results applying to it. See the Introduction and \cref{sectionPolyaUrn} for details. Compared to these existing approaches, our work differs in two main ways. First, our approach is at a more abstract level; the literature related to preferential attachment and Gibrat's law usually starts with a specific model, with the goal of reproducing some empirically observed phenomena, such as outcome unpredictability and skewed popularity distributions. Our approach, in contrast, is model-independent; we have identified conditions that are sufficient to lead to certain rich-get-richer phenomena, i.e. conditions that, when satisfied by \textit{any} model, regardless of the exact assumptions made, lead to the stated results. This is illustrated in \cref{sectionPopularityRanking}, where we point out that ranking-based rich-get-richer dynamics could be set in motion under a wide array of behavioral or algorithmic assumptions, as long as \cref{qualityOrderingAssumption} is satisfied. In this respect, our work is similar in spirit to the work of Arthur et al. \citep{arthur1986strong}. The second and perhaps more distinctive difference of our work is the fact that it covers the opposite end of the spectrum of rich-get-richer dynamics. The distributions of the increments of the various components, instead of depending (continuously) on the current level of each of the components separately, are piecewise constant with respect to the current levels, with discontinuities occurring when the ranking of the components changes. In other words, we focus explicitly on the role of ranking-based competition.
However, our framework does not consider other types of competition, nor does it allow for any explicit dependence of the increments on the current level of the process, other than through the ranking. The above delineates a promising future research direction: one could envisage a general mathematical theory of Markov rich-get-richer processes that encompasses all of the above cases, by allowing for an arbitrary dependence of the increments' distribution on the current level of the whole vector of the process, subject to the minimal conditions for rich-get-richer dynamics. The work of Arthur et al. \cite{arthur1986strong} is in this direction for the case of simple P\'olya urn processes, but no such framework currently exists for more general processes. \begin{appendix} \section{Rankings are equivalent to weak orderings} \label{appendixWeakOrderings} The following proposition says that rankings are equivalent to weak orderings. A weak ordering on a set is like a total ordering, except that it allows for ``ties''. More precisely, a weak ordering ``$\succeq $'' on $S$ is a binary relation that is transitive and strongly complete, i.e. for any two elements $a,b\in S$, at least one of the relations $a\succeq b$ or $b\succeq a$ holds \cite{roberts1985measurement}. Recall that we would get a \textit{total} order if we further required that $a\succeq b$ and $b\succeq a$ imply $a=b$. \begin{proposition} \label{propositionWeakOrderings} There is a bijection between rankings of a finite set $S$ and weak orderings on $S$, given by $r\mapsto \succeq _r$, where \begin{equation} \label{dummyEq61} a\succeq _r b \text{ whenever } r(a)\leq r(b). \end{equation} The above map satisfies \begin{equation} \label{dummyEq63} r(a)=card\{b\in S:a\nsucceq _r b\}+1. \end{equation} The ranking $r$ is strict if and only if $\succeq _r$ is a total order on $S$.
\end{proposition} \begin{proof} It is easy to check that $\succeq _r$, as defined by \cref{dummyEq61} is a weak ordering on $S$. Using \cref{dummyEq61}, \cref{dummyEq63} can be rewritten as \begin{equation} \label{dummyEq65} r(a)=card\{b\in S:r(b)<r(a)\}+1, \end{equation} which is equivalent to \cref{dummyEq67}, so it holds by definition. By \cref{dummyEq63}, $r$ is uniquely determined by $\succeq _r$, so the map $r\mapsto \succeq _r$ is one-to-one. To show that it is onto, let ``$\succeq$'' be a weak ordering on $S$ and define $r:S\to [S]$ by \begin{equation} \label{dummyEq69} r(a)=card\{b\in S:a\nsucceq b\}+1. \end{equation} We claim that $r(b)\leq r(a)$ is equivalent to $b\succeq a$. First note that if $b\succeq a$, then by transitivity $\{c\in S:b\nsucceq c\}$ is a subset of $\{c\in S:a\nsucceq c\}$, hence $r(b)\leq r(a)$. For the converse, assume that $b\nsucceq a$. Then we must have $a\succeq b$, and we get as above that $\{c\in S:a\nsucceq c\}$ is a subset of $\{c\in S:b\nsucceq c\}$, but this time it is a proper subset, because $a$ belongs to the latter. Therefore $r(a)<r(b)$, which completes the proof of our claim. Hence, by \cref{dummyEq61}, $\succeq$ is the same relation as $\succeq_r$, which shows that the mapping $r\mapsto \succeq_r$ is onto. The last assertion follows from the fact that $a\succeq _r b$ and $b\succeq _r a$ hold simultaneously if and only if $r(a)=r(b)$. \end{proof} \section{Supporting proofs} \label{appendix} Here we give the proofs of \cref{lemmaBiasedRandomWalk,lemmaConditioningContinuity,lemmaDoubleConvergence}. For ease of reference, we repeat each statement before the proof. \begin{replemma}{lemmaBiasedRandomWalk} \label{lemmaBiasedRandomWalk2} Let $(\Omega , {\cal G},\mathbb{P})$ be a probability space. Let $S$ be a finite set and for each $r\in S$, $\nu^r$ a distribution on $\R$ such that it either has positive mean or $\nu^r(\{0\})=1$. 
Let $\{ R_n\} _{n\in\N}$ be a sequence of random elements in $S$ and $\{ Y_n\} _{n\in \N }$ a sequence of random variables with $Y_0=0$. Suppose that $\Delta Y_{n+1}$ is conditionally independent of $\{(Y_k,R_k)\}_{k\leq n}$ conditioned on $R_n$, with distribution $\nu ^{R_n}$. In other words, for any $A\in {\cal B}(\R)$, $n\in\N$, \begin{equation} \label{eqDummy94} \Pr{\Delta Y_{n+1}\in A\midvert \{(Y_k,R_k)\}_{k\leq n}}=\nu^{R_n}(A)\ \ a.s. \end{equation} Then, \begin{equation} \label{eqDummy97} \Pr{\capdu{n\in\N}{}\{Y_n\geq 0\}}\geq \epsilon >0, \end{equation} where $\epsilon$ depends only on the distributions $\nu^r$, $r\in S$. \end{replemma} \begin{proof} Let $\{U^r_n\}^{r\in S}_{n\in\N}$ be a collection of independent random variables, independent of $\{(Y_n,R_n)\}_{n\in\N}$, and such that $U^r_n\sim \nu^r$ for all $r\in S$, $n\in\N$, where the relation $\sim$ means equality in distribution. Define $Y'_0=Y_0=0$ and for each $n\in\N$, \begin{equation} \label{eqDummy85} Y'_{n+1}=Y'_n+U^{R_n}_{n}. \end{equation} Clearly, for any $A\in{\cal B}(\R)$, \begin{equation} \label{eqDummy33} \begin{aligned} \Pr{\Delta Y_{n+1}'\in A\midvert \{(Y'_k,R_k)\} _{k\leq n}} & = \Pr{U^{R_n}_{n}\in A\midvert R_n}=\nu^{R_n}(A), \end{aligned} \end{equation} therefore $\{Y'_n\}_{n\in\N}\sim \{Y_n\}_{n\in\N}$. It is hence enough to show that \cref{eqDummy97} holds for the sequence $Y'_n$ instead of $Y_n$. For each $r\in S$, define $\tau ^r_0=-1$ and inductively $\tau ^r_n=\inf\, \{k>\tau ^r_{n-1}:R_k=r\}$. Note that each $Y'_n$ is a sum of terms of the form $U^r_{\tau ^r_k}$, for $k=1,\ldots ,m_r$, where $m_r\in \N$, $r\in S$. Therefore, \begin{equation} \label{eqDummy99} \capdu{n\in\N}{}\{Y'_{n}\geq 0\}\supset\capdu{r\in S}{}\capdu{n\in\N}{}\left\{\sumdu{k=1}{n}U^{r}_{\tau ^r_k}\geq 0\right\}, \end{equation} with the convention $U^{r}_{\tau ^r_k}=0$ when $\tau^r_k=\infty $.
Since $\tau^r_k\independent\{U^r_n\}_{n\in\N}$, if the $\tau ^r_k$'s were all finite a.s., it would easily follow that $U^r_{\tau ^r_k}$ has the same distribution as $U^r_1$, $k\in\N$, and since $\tau^r_k$ is strictly increasing in $k$ we would even get that $\{U^r_{\tau^r_k}\}_{k\in\N}$ is i.i.d. To deal with the case $\tau^r_n=\infty$, we define the random times $\sigma ^r_n$ as follows: Let $v^r=\sup\,\{n\in\N:\tau ^r_n<\infty \}$ and \begin{equation} \sigma ^r_n=\tau ^r_n\cdot \mathbf{1}_{n\leq v^r}+(\tau ^r_{v^r}+n-v^r)\cdot \mathbf{1}_{n>v^r}. \end{equation} The $\sigma ^r_n$'s are almost surely finite and distinct for fixed $r\in S$, and $\{\sigma ^r_n\}^{r\in S}_{n\in\N}\independent\{U^r_n\}^{r\in S}_{n\in\N}$. Therefore, by \cite[Theorem 2.1]{melfi2000estimation} we get that $\left\{U^r_{\sigma^r_n}\right\}^{r\in S}_{n\in\N}\sim \left\{U^r_{n}\right\}^{r\in S}_{n\in\N}$. (In \cite{melfi2000estimation} it is assumed that the $\sigma ^r_n$'s are all distinct a.s., even for different $r$'s, but this assumption can be substituted by the fact that the sequences $\{U^r_n\}_{n\in\N}$ are independent for different $r$'s, and the proof goes through.) Now observe that $\sigma ^r_k=\tau ^r_k$ on $\{\tau ^r_k<\infty \}$; therefore, by \cref{eqDummy99}, \begin{equation} \capdu{n\in\N}{}\{Y'_n\geq 0\}\supset \capdu{r\in S}{}\capdu{n\in\N}{}\left\{\sumdu{k=1}{n}U^r_{\sigma^r_k}\geq 0\right\}. \end{equation} Consequently, \begin{equation} \begin{aligned} \Pr{\capdu{n\in\N}{}\{Y'_n\geq 0\}} & \geq \Pr{\capdu{r\in S}{}\capdu{n\in\N}{}\left\{\sumdu{k=1}{n}U^r_{\sigma^r_k}\geq 0\right\}}\\ & =\proddu{r\in S}{}\Pr{\capdu{n\in\N}{}\left\{\sumdu{k=1}{n}U^r_{k}\geq 0\right\}}>0, \end{aligned} \end{equation} because $\{U^r_k\}_{k\in\N}$ is an i.i.d. sequence of random variables that are either identically $0$ or have a positive mean.
\end{proof} \begin{replemma}{lemmaConditioningContinuity} \label{lemmaConditioningContinuityAppendix} Let $A_n$, $n\in\N$, and $A$ be measurable sets in a probability space, each with positive probability, and suppose that $A_n\to A$ a.s. (i.e. $\Pr{(A_n\backslash A)\cup (A\backslash A_n)}\to 0$). Then, $\Pr{S\midvert A_n}\to \Pr{S\midvert A}$ uniformly in $S\in {\cal F}$. \end{replemma} \begin{proof} We have \begin{equation} \begin{aligned} &\hspace{0.47cm} \left|\Pr{S\midvert A}-\Pr{S\midvert A_n}\right|\\ & =\left|\frac{\Pr{S\cap A}}{\Pr{A}}-\frac{\Pr{S\cap A_n}}{\Pr{A_n}}\right|\\ &=\left|\frac{\Pr{S\cap A_n}+\Pr{S\cap A\backslash A_n}-\Pr{S\cap A_n\backslash A}}{\Pr{A}}-\frac{\Pr{S\cap A_n}}{\Pr{A_n}}\right|\\ & \leq \Pr{S\cap A_n}\cdot \left|\frac{1}{\Pr{A}}-\frac{1}{\Pr{A_n}}\right|+\frac{\left|\Pr{S\cap A\backslash A_n}-\Pr{S\cap A_n\backslash A}\right|}{\Pr{A}}\\ &\leq \left|\frac{1}{\Pr{A}}-\frac{1}{\Pr{A_n}}\right|+\frac{\Pr{A\backslash A_n}+\Pr{A_n\backslash A}}{\Pr{A}} \end{aligned} \end{equation} The quantity in the last line does not depend on $S$ and, by assumption, it converges to $0$ as $n\to\infty $. \end{proof} \begin{replemma}{lemmaDoubleConvergence} \label{lemmaDoubleConvergenceAppendix} Let $a_{m,n}\in\R$, $m,n\in\N$, and suppose that $\underset{m\to\infty}{\lim }a_{m,n}=a_n\in \R$ uniformly in $n$, and $\limn a_{m,n}=a\in\R$ for all $m\in\N$. Then, $\limn a_n=a$. \end{replemma} \begin{proof} Let $\epsilon >0$ and let $m_0\in\N$ be such that $|a_{m_0,n}-a_n|<\epsilon $ for all $n\in\N$. Now let $n_0\in\N$ be such that $|a_{m_0,n}-a|<\epsilon $ for all $n\geq n_0$. It follows that $|a_n-a|<2\epsilon$ for all $n\geq n_0$. \end{proof} \end{appendix} \section*{Acknowledgements} We would like to thank Thorsten Joachims, Gabor Lugosi and Murad Taqqu for their remarks on previous versions of this manuscript. This research was supported in part through NSF Award IIS-1513692. \bibliographystyle{plainnat}
\section{Introduction} \input fig1 In a series of conference papers that appeared in the past two years (Bruzual 2004, 2005), I present evolutionary population synthesis models which are identical in all respects to the \citet{BC03} models, hereafter BC03, except in the stellar library used. BC03 use the STELIB library compiled by \citet{STELIB03}. In these papers I explore models built with libraries of higher spectral resolution, which improve upon STELIB in their coverage of the HRD by including, at all metallicities, a broader and more complete distribution of spectral types and luminosity classes. For reasons of space I present here only a summary of results. The reader is referred to the previous papers, available electronically, for details. The full implementation of the new libraries in the population synthesis models is in preparation by Charlot \& Bruzual (2007). STELIB includes observed spectra of 249 stars in a wide range of metallicities in the wavelength range from 3200 \AA\ to 9500 \AA\ at a resolution of 3 \AA\ FWHM (corresponding to a median resolving power of $\lambda / \Delta\lambda \approx 2000$), with a sampling interval of 1 \AA\ and a signal-to-noise ratio of typically 50 per pixel. HNGSL, Hubble's New Generation Spectral Library \citep{HL03}, contains spectra for a few hundred stars whose fundamental parameters, including chemical abundance, are well known from careful analysis of the visual spectrum. The spectra fully cover the wavelength range from 1700 \AA\ to 10,200 \AA. The advantage of this library over the ones listed below is the excellent coverage of the near-UV and the range from 9000 \AA\ to 10,200 \AA, which is generally noisy or absent in the other data sets. The IndoUS library \citep{VAL04} contains complete spectra over the entire 3460 \AA\ to 9464 \AA\ wavelength region for 885 stars obtained with the 0.9m Coud\'e Feed telescope at KPNO. The spectral resolution is $\approx$ 1 \AA\ and the dispersion 0.44 \AA\ pixel$^{-1}$.
The library includes data for an additional 388 stars, but only with partial spectral coverage. See \citet{GB05} for a discussion of the flux calibration problems in the IndoUS library. MILES, the Medium resolution INT Library of Empirical Spectra \citep{MILES06}, contains carefully calibrated spectra of homogeneous quality for 985 stars in the wavelength range 3525 \AA\ to 7500 \AA\ with 2.3 \AA\ spectral resolution and 0.9 \AA\ pixel$^{-1}$ sampling. The stars included in this library were chosen with the aim of sampling stellar atmospheric parameters as completely as possible. The ELODIE library is a stellar database of 1959 spectra for 1503 stars, observed with the \'echelle spectrograph ELODIE on the 193 cm telescope at the Observatoire de Haute Provence. The resolution of this library is $R = 42,000$ in the wavelength range from 4000 \AA\ to 6800 \AA\ \citep{PAS01A, PAS01B}. This library has been updated, extended, and used by \citet{LeB04} in version 2 of the population synthesis code PEGASE. The UVES Paranal Observatory Project \citep{UVES03} has produced a library of high resolution ($R = \lambda / \Delta\lambda \approx 80,000$) and high signal-to-noise ratio spectra for over 400 stars distributed throughout the HRD. For most of the spectra, the typical final SNR obtained in the V band is between 300 and 500. The UVES POP library is the richest available database of observed optical spectral lines. Progress in compiling libraries at IR wavelengths has been slower than in the optical range. The IRTF Spectral Library, \citet{RVC07}, provides high S/N spectra for stars covering most of the HR diagram: 333 stars with spectral types W-R through M, plus 14 L and T stars. The spectra cover the wavelength range from 0.8 to 4.2 $\mu m$ (in some cases out to 5.0 $\mu m$). The resolving power is R=2000 for the 0.8-2.5 $\mu m$ range and R=2500 for the 2.5-5.0 $\mu m$ range. The spectra are corrected for telluric absorption and then absolutely flux calibrated.
In addition, the spectra have been tied to the 2MASS magnitudes when possible. A good fraction of the data is available electronically and ready to be used in synthesis models. Preliminary descriptions of these data can be found in \citet{RJT03} and \citet{CRV05}. A parallel effort by \citet{MQC07} will produce a library of stellar spectra in the K band for a subset of the stars in the MILES library, ensuring adequate coverage of stellar physical parameters and non-solar abundance ratios. There are several ongoing projects to improve the existing grids of theoretical model atmospheres, including the computation of high resolution theoretical spectra for stars whose physical parameters are of interest for population synthesis. See, for example, \citet{COE07} and references therein. \section{Use of Different Libraries in Population Synthesis Models} The ``standard'' BC03 reference model represents a simple stellar population (SSP) computed using the Padova 1994 evolutionary tracks, the \cite{CHAB03} IMF truncated at 0.1 and 100 M$_\odot$, and either the STELIB or the BaSeL 3.1 \cite{WES02} spectral library (see BC03 for details). For illustration purposes I show in Fig. 1 the 13 Gyr spectral energy distribution (SED) in the optical range for the solar metallicity standard reference SSP model computed with the following spectral libraries (top to bottom in decreasing order of spectral resolution): \citet{COE07}, IndoUS, MILES, STELIB, HNGSL, \citet{PIC98}, and BaSeL 3.1. The higher resolution models do not provide new information with respect to color or color evolution, i.e., all the SEDs in Fig. 1 have the same overall shape. The major advantage of using high spectral resolution models is to study absorption features. The behavior of line strength indices defined in lower resolution spectra can be explored in detail in the higher resolution SEDs.
In some instances, e.g. some of the Lick indices, the high resolution spectra show clear evidence that the wavelength intervals defining these indices should be revised because of contamination by other chemical elements. More importantly, the high resolution spectra provide the opportunity to define new line strength indices that measure the intensity of absorption lines that are unnoticeable in low resolution spectra. \input fig2 \input fig3 Fig. 2 shows the behavior in time of several new spectral indices defined by \citet{MJS07} upon inspection of the IndoUS version of the BC03 models. For ages above 5 Gyr any of these indices is a good indicator of the metallicity of the stellar population. When combined with indices that are good age indicators, such as the I$_{200}$ and I$_{275}$ indices defined by \citet{VA99}, we obtain nearly orthogonal index-index diagrams in which the age-metallicity degeneracy is broken; these can be used to establish approximate ages for the stellar populations in early-type galaxies, cf. Fig. 3. The data points in Fig. 3 represent the values of these indices measured in very high S/N coadded SDSS spectra of early-type galaxies (J. Brinchmann, private communication) with velocity dispersion close to the value indicated in each frame. A tendency is observed for the most massive galaxies to be older and more metal rich. \subsection{Improved TP-AGB treatment} \input fig4 It has been pointed out by several authors, e.g. \citet{CM06}, \citet{KG07}, that the estimates of the age and mass of the stellar population present in a galaxy depend critically on the ingredients of the stellar population model used to fit the galaxy spectrum. \citet{CM06} have shown that the treatment of the thermally pulsing asymptotic giant branch (TP-AGB) phase of stellar evolution is a source of major discrepancy in the determination of the spectroscopic age and mass of high-z $(1.4<z<2.7)$ galaxies.
The mid-UV spectra of these galaxies indicate ages in the range 0.2-2 Gyr, at which the contribution of TP-AGB stars in the rest-frame near-IR sampled by Spitzer is expected to be at its maximum. \citet{CM06} find that in general the \citet{CM05} models (M05 hereafter) provide better fits than the BC03 and other models available in the literature, and indicate systematically lower ages and, on average, 60\% lower masses for the stellar populations sampled in these galaxies. According to \citet{CM06} the source of this discrepancy is primarily a consequence of the different treatment of the TP-AGB phase in the evolutionary models. \citet{PM07} have recently completed new calculations of the TP-AGB evolutionary phase for stars of different mass and metallicity. The evolution of the stars is now computed by accounting for the changes in the chemical composition of the envelopes. As a consequence of this prescription, the signature of TP-AGB stars around 1 Gyr, i.e. the red color of the integrated stellar population, becomes more important. Fig. 4 compares the B-V and V-K color evolution of models computed using the \citet{PM07} TP-AGB evolutionary tracks (Bruzual \& Charlot 2007, CB07 hereafter) with that of the BC03 and M05 models. In B-V the CB07 and BC03 models are identical at all ages. At early and late ages both sets of models have the same V-K color. At intermediate ages the CB07 models are considerably redder than the BC03 models. At late ages, the BC03 and CB07 models match very well the observations of nearby early-type galaxies, whereas the M05 models are too blue. \input fig5 Fig. 5 compares the evolution of the fraction of the K-band luminosity emitted by TP-AGB stars in a solar metallicity SSP for the CB07 and the BC03 models. At maximum, the TP-AGB contributes 60\% of the K-light in the CB07 model but only 40\% in the BC03 model.
The peak emission in the BC03 model occurs at around 1 Gyr whereas in the CB07 model it stays high and close to constant from 0.1 to 1 Gyr. The bottom frame of Fig. 5 shows that the stellar mass determined from the CB07 model can be up to 50\% lower than the mass determined from the BC03 model. The lower CB07 values are in agreement with the masses determined by \citet{CM06}. See \citet{GB06} and \citet{CB07} for details. \acknowledgements I thank Cesare Chiosi for his contribution to making all of this possible, Paola Marigo and Leo Girardi for providing their calculations of the TP-AGB evolutionary phase ahead of publication, and St\'ephane Charlot for allowing me to show results of a joint paper in preparation. \input bibliography \end{document}
\section{Introduction} Portfolio theory, pioneered by Markowitz in the 1950s \cite{markowitz1952portfolio}, is at the center of theoretical developments in finance. The mean-variance model says that investors should hold a portfolio on the efficient frontier, which trades off portfolio mean (return) against variance (risk). In practice, mean and variance are computed from the sample mean and the sample covariance matrix. However, estimation error in the sample mean and covariance significantly affects the accuracy of the resulting portfolio, which therefore performs poorly in practice (see \cite{jobson1981putting, michaud1989markowitz}). Quantitative results on how the sample covariance affects the performance are very limited. The bias in the sample portfolio weight is discussed in \cite{el2010high}, but no practical guidance is given on how large the bias is when the mean-variance model is used with sample data. In this work we obtain the order of magnitude of the error in the sample portfolio weight, which is large when the sample size $n$ is comparable to the number of assets $p$; the error decays at the rate $\sqrt{\frac{p}{n}}$ as $n$ increases. For this reason, many works have suggested different approaches to improve on standard mean-variance portfolio optimization. These suggestions include imposing portfolio constraints (see \cite{jagannathan2003risk, demiguel2009generalized, behr2013portfolio}), the use of factor models (\cite{chan1999portfolio}), modifying the objective to be more robust (\cite{demiguel2009portfolio}), and improving the estimation of the sample covariance matrix (\cite{ledoit2003improved}). Instead, in this work we use observations from random matrix theory to provide an alternative view of the error in the sample covariance matrix. We propose LoCoV, low dimension covariance voting, which effectively exploits the accurate low dimensional covariance to vote on different assets. It outperforms the standard sample portfolio by a large margin. We shall first set up the problem.
For simplicity, we only discuss minimum-variance portfolio optimization. Assume the true covariance admits the diagonalization \[ \Sigma = P^T D^2 P \] where $D$ is a non-negative definite diagonal matrix and $P$ is an orthogonal matrix. A data matrix (of asset returns) is then realized from an $n\times p$ random matrix $\cN$ with i.i.d. standard entries as \[ X = \cN D P \] and the sample covariance matrix is obtained as \[ \hat{\Sigma} = P^TD\frac{\cN^T \cN}{n} DP \] We define the minimum variance portfolio to be the optimizer of \begin{equation}\label{eqn:true portfolio optimization} \begin{split} \min_w \;& w^T \Sigma \; w \\ s.t. \;& w^T \mathbbm{1} =1 \end{split} \end{equation} where $\mathbbm{1}= \mat[ccc]{1 & \cdots & 1}^T$. In reality, $\Sigma$ is unknown, so it is replaced by an estimator $\hat{\Sigma}$ to obtain an approximately optimal portfolio. That is, we solve \begin{equation}\label{eqn:sample portfolio optimization} \begin{split} \min_w \;& w^T \hat{\Sigma}\; w \\ s.t. \;& w^T \mathbbm{1} =1 \end{split} \end{equation} \section{Universality of optimal portfolio weight and risk} We first derive the solution of the minimum-variance problem by the method of Lagrange multipliers, since a closed form is available. Based on the explicit form of the solution, we will then investigate probabilistic properties of the portfolio weight and risk. Observe that both $\Sigma$ and $\hat{\Sigma}$ take the form $A^T A$, where $A$ is $DP$ for the true covariance $\Sigma$ and $A$ is $\frac{1}{\sqrt{n}}\cN DP$ for the sample covariance matrix $\hat{\Sigma}$. We shall define the portfolio optimization in the general form \begin{equation}\label{eqn:general form portfolio optimization} \begin{split} \min_w \;& w^T A^T A \; w \\ s.t.
\;& w^T \mathbbm{1} =1 \end{split} \end{equation} Define the Lagrangian function \[ \cL(w) = w^T A^TA w - \lambda (w^T \1 -1) \] Taking the derivative with respect to the portfolio weight $w$ and setting the gradient to zero, \[ \nabla \cL = 2 w^TA^TA - \lambda \1^T =0 \] or, writing the gradient as a column vector, \[ 2A^TA w = \lambda \1 \] For real-life portfolio optimization, we can assume $A^TA$ ($\Sigma$ or $\hat{\Sigma}$) is invertible, since otherwise the optimal portfolio weight would have large error or ambiguity. We then find the optimal portfolio weight \[ w = \frac{\lambda}{2} (A^TA)^{-1}\1 \] The portfolio weights should be normalized so that they sum to 1; therefore $\lambda/2$ is essentially a normalizing factor. For convenience of notation, we make the following definition. \begin{definition} The \textbf{free (non-normalized) optimal weight} of portfolio optimization \ref{eqn:general form portfolio optimization} is \[ S= \mat[ccc]{s_1 & \cdots & s_p}^T = (A^TA)^{-1}\1 \] and we denote its sum by \[ \|S\|_s := \sum_{k=1}^p s_k \] \end{definition} Normalizing the vector $S$, we obtain the optimal portfolio weight \[ w^* = \frac{1}{\sum_{i=1}^p s_i} S = \frac{S}{\|S\|_s} \] It is easy to see that $\lambda^* = \frac{2}{\sum_{i=1}^p s_i} = 2 \|S\|_s^{-1}$.
Taking the dot product of $\nabla \cL$ and $w$, we find \[ 0= \nabla \cL^T w = 2 w^TA^TA w - \lambda \1^T w \] and, recalling $\1^T w =1$, we find the minimum portfolio risk \[ R(w^*) = w^{*T}A^TA w^* = \lambda^* /2 = \|S\|_s^{-1} \] We summarize the result as follows. \begin{proposition}\label{prop:optimal formulas} For the constrained optimization \ref{eqn:general form portfolio optimization}, the \textbf{free optimal weight} is \begin{equation}\label{eqn:free optimal weight} S= \mat[c]{s_1\\\vdots \\ s_p} = (A^TA)^{-1}\1 \end{equation} Normalizing $S$, we obtain the \textbf{optimal portfolio weight} \begin{equation}\label{eqn:optimal weight} w^* = \|S\|_s^{-1} S \end{equation} and the \textbf{minimum portfolio risk} is \begin{equation}\label{eqn:optimal risk} R(w^*) = \|S\|_s^{-1} \end{equation} where $\|S\|_s = \sum_{k=1}^p s_k$. \end{proposition} \subsection{Behavior of sample portfolio} Assume the diagonalization of the true covariance matrix \[ \Sigma =P^T D^2P =P^T diag (\sigma_1^2, \cdots, \sigma_p^2)P \] By proposition \ref{prop:optimal formulas}, plugging in $A^TA = \Sigma$, we find that the \textbf{true free optimal weight} and \textbf{true optimal portfolio weight} of \ref{eqn:true portfolio optimization} are \begin{equation}\label{eqn:true weight formula} S_{\Sigma} = \Sigma^{-1}\1 = P^T D^{-2}P \1 , \qquad w^* = \|S_{\Sigma} \|_s^{-1} S_{\Sigma} \end{equation} Recall that the return (data matrix) is generated as $X= \cN DP$, where $\cN$ is an $n\times p$ matrix with i.i.d. standard random variables (mean zero and variance one).
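As a quick numerical sanity check of the closed-form solution above, the following minimal sketch (our own illustration; the data and parameters are arbitrary and not part of the model) verifies the three formulas of Proposition \ref{prop:optimal formulas}:

```python
import numpy as np

# Check: for an SPD matrix A^T A, the free optimal weight is
# S = (A^T A)^{-1} 1, the optimal portfolio is w* = S / ||S||_s, and the
# minimum risk is R(w*) = ||S||_s^{-1}.  (Illustrative values only.)
rng = np.random.default_rng(0)
p = 5
A = rng.standard_normal((20, p))
AtA = A.T @ A / 20                       # plays the role of Sigma (or Sigma-hat)

S = np.linalg.solve(AtA, np.ones(p))     # free optimal weight S = (A^T A)^{-1} 1
w = S / S.sum()                          # optimal portfolio weight w*
risk = 1.0 / S.sum()                     # minimum risk ||S||_s^{-1}

assert abs(w.sum() - 1.0) < 1e-9         # feasibility: weights sum to one
assert abs(w @ AtA @ w - risk) < 1e-9    # w*^T (A^T A) w* equals ||S||_s^{-1}

# any other feasible portfolio carries at least this much variance
v = rng.standard_normal(p)
v /= v.sum()
assert v @ AtA @ v >= risk - 1e-9
```

The last assertion reflects that $w^*$ is the global minimizer of the constrained problem when $A^TA$ is positive definite.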
The return model $X = \cN DP$ leads to the sample covariance matrix \[ \hat{\Sigma} = P^T D\frac{\cN^T\cN}{n} D P \] Plugging $A^TA = \hat{\Sigma}$ into proposition \ref{prop:optimal formulas}, we obtain the \textbf{sample free optimal weight} and \textbf{sample optimal portfolio weight} of \ref{eqn:sample portfolio optimization} \begin{equation}\label{eqn:sample weight formula} S_{\hat{\Sigma}} = \hat{\Sigma}^{-1} \1 = P^T D^{-1} \left( \frac{\cN^T\cN}{n}\right)^{-1} D^{-1}P \1 , \qquad \hat{w}^* = \|S_{\hat{\Sigma}}\|_s^{-1} S_{\hat{\Sigma}} \end{equation} The difference between $S_{\hat{\Sigma}}$ and $S_{\Sigma}$ depends on the random matrix (inverse of the sample covariance) $M:=\left( \frac{\cN^T\cN}{n}\right)^{-1}$, the diagonal matrix $D$, and the orthogonal matrix $P$. $M$ is the inverse of a sample covariance matrix; it is possible to analyze it directly using the formula for the inverse from Cramer's rule and to show that $\E M$ is close to the identity for large $n$. Since this work mainly focuses on improving the accuracy of the portfolio, we will not pursue the probabilistic properties here (they will be discussed elsewhere). Instead we use several experiments to show that the sample portfolio weight $\hat{w}^*$ is centered around the true portfolio weight $w^*$. \subsection{First example: sample portfolio of independent assets} We shall start with the simplest case, in which all assets are independent, i.e. the matrix $P$ is the identity. This means the true covariance matrix is a diagonal matrix $ \Sigma =D^2 $.
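Before writing the general formulas for this diagonal case, here is a two-asset toy instance (our own numbers, $\sigma_1^2=1$, $\sigma_2^2=4$, not taken from the experiments below):

```latex
% Toy diagonal case with p = 2, sigma_1^2 = 1, sigma_2^2 = 4:
\Sigma = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix},
\qquad
S_{\Sigma} = \Sigma^{-1}\mathbbm{1} = \begin{pmatrix} 1 \\ 1/4 \end{pmatrix},
\qquad
w^* = \frac{S_{\Sigma}}{\|S_{\Sigma}\|_s} = \begin{pmatrix} 4/5 \\ 1/5 \end{pmatrix},
\qquad
R(w^*) = \|S_{\Sigma}\|_s^{-1} = \frac{4}{5}.
```

The less volatile asset receives the larger weight, and one checks directly that $w^{*T}\Sigma w^* = 16/25 + 4/25 = 4/5$.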
Then by \ref{eqn:true weight formula} the \textbf{true free optimal weight} and \textbf{true optimal portfolio weight} are \[ S_{\Sigma} = \Sigma^{-1}\1 = D^{-1}D^{-1}\1 = \mat[ccc]{\sigma_1^{-2} & \cdots & \sigma_p^{-2}}^T, \qquad w^* = \|S_{\Sigma} \|_s^{-1} S_{\Sigma} \] Similarly by \ref{eqn:sample weight formula} the \textbf{sample free optimal weight} and \textbf{sample optimal portfolio weight} are \[ S_{\hat{\Sigma}} = \hat{\Sigma}^{-1}\1 = D^{-1} \left( \frac{\cN^T\cN}{n}\right)^{-1} D^{-1}\1 , \qquad \hat{w}^* = \|S_{\hat{\Sigma}}\|_s^{-1} S_{\hat{\Sigma}} \] \begin{figure}[H] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=1\linewidth]{portfolio_diagonal.eps} \end{subfigure} \end{figure} \begin{figure}[ht]\ContinuedFloat \begin{subfigure}[ht]{\textwidth} \centering \includegraphics[width=1\linewidth]{portfolio_diagonal100.eps} \end{subfigure} \caption{We select the eigenvalues of $\Sigma$ equally spaced between 1 and 30, namely $\sigma_k^2=k$, $1\le k\le 30$. We generate 300 samples for each of the two settings $(n,p)= (30,30)$ and $(3000,30)$. When $\frac{p}{n}=1$, the error of the portfolio weight is $O(1)$; when $\frac{p}{n}=1/100$, the error of the portfolio weight is $O(1/10)$.} \label{fig:diagonal} \end{figure} In the left panels of figure \ref{fig:diagonal}, the true optimal weight (red line) is closely aligned with the mean value of the sample optimal weights (blue connected dashed line). The standard deviation of the sample portfolio weight is $O(\sqrt{p/n})$. As $p/n$ decreases, the sample portfolio weight becomes less volatile around the true portfolio weight. In the right panels, the sample optimal risk tends to underestimate the true optimal risk. As $p/n$ decreases, the sample portfolio risk becomes less volatile and more centered around the true portfolio risk.
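The experiment behind figure \ref{fig:diagonal} is easy to reproduce. The sketch below (our own, smaller parameters: $p=10$, $\sigma_k^2=k$, 200 trials) only checks the qualitative claim that the average weight error shrinks as $n$ grows, consistent with the $\sqrt{p/n}$ rate:

```python
import numpy as np

# Independent assets: Sigma = diag(1, ..., p), P = I.  Compare the average
# max-abs error of the sample weight w-hat at a small and a large sample size.
rng = np.random.default_rng(1)
p = 10
D = np.sqrt(np.arange(1.0, p + 1))       # sigma_k = sqrt(k)
S_true = 1.0 / D**2                      # S_Sigma = (sigma_1^-2, ..., sigma_p^-2)
w_true = S_true / S_true.sum()

def mean_weight_error(n, trials=200):
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, p)) * D          # X = N D  (P = I)
        S_hat = np.linalg.solve(X.T @ X / n, np.ones(p))
        w_hat = S_hat / S_hat.sum()
        errs.append(np.abs(w_hat - w_true).max())
    return float(np.mean(errs))

e_small, e_big = mean_weight_error(20), mean_weight_error(2000)
assert e_big < e_small                   # error decays as n grows (~ sqrt(p/n))
```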
\subsection{Second example: sample portfolio of dependent assets} For general assets with dependence, \ref{eqn:true weight formula} and \ref{eqn:sample weight formula} provide the formulas. Again we will only use experiments to show the relation between the sample portfolio weight $\hat{w}^*$ and the true portfolio weight $w^*$. \begin{figure}[H] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=\linewidth]{portfolio_dense.eps} \end{subfigure} \end{figure} \begin{figure}[ht]\ContinuedFloat \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=1\linewidth]{portfolio_dense100.eps} \end{subfigure} \caption{We again select the eigenvalues of $\Sigma$ equally spaced between 1 and 30, namely $\sigma_k^2=k$, $1\le k\le 30$. We now select $P$ to be a random orthogonal matrix distributed according to the Haar measure. } \label{fig:dense} \end{figure} Since we are using a non-identity orthogonal matrix $P$ to create dependence among the assets, the true optimal portfolio weight is not ordered. The concentration and deviation properties of the sample portfolio weight have not changed. In the left panels of figure \ref{fig:dense}, the true optimal weight (red line) is still closely aligned with the mean value of the sample optimal weights (blue connected dashed line). The standard deviation of the sample portfolio weight is $O(\sqrt{p/n})$. In the right panels, the sample optimal risk tends to underestimate the true optimal risk. As $p/n$ decreases, both the sample weight and the sample risk become more accurate.
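Both examples rely on the factorized form of the sample free optimal weight. As a deterministic sanity check (a small instance of our own), one can verify numerically that $\hat{\Sigma}^{-1}\mathbbm{1}$ coincides with $P^T D^{-1}\left(\cN^T\cN/n\right)^{-1} D^{-1} P\,\mathbbm{1}$:

```python
import numpy as np

# Exact identity check: Sigma_hat = P^T D (N^T N / n) D P implies
# Sigma_hat^{-1} 1 = P^T D^{-1} (N^T N / n)^{-1} D^{-1} P 1.
# (Sizes and seeds are our own illustrative choices.)
rng = np.random.default_rng(3)
n, p = 50, 4
Q, R = np.linalg.qr(rng.standard_normal((p, p)))
P = Q * np.sign(np.diag(R))              # Haar-distributed orthogonal matrix
D = np.diag(np.sqrt(np.arange(1.0, p + 1)))
N = rng.standard_normal((n, p))
X = N @ D @ P                            # asset returns X = N D P
Sigma_hat = X.T @ X / n

ones = np.ones(p)
S_direct = np.linalg.solve(Sigma_hat, ones)
M_inv = np.linalg.inv(N.T @ N / n)
S_factored = P.T @ np.linalg.inv(D) @ M_inv @ np.linalg.inv(D) @ P @ ones
assert np.allclose(S_direct, S_factored)
```

The sign correction after the QR factorization is the standard way to make $P$ Haar-distributed, matching the construction used in figure \ref{fig:dense}.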
\subsection{The order of error in sample optimal portfolio} We summarize our findings from the previous examples and experiments as the following conjecture. \begin{conjecture} Error estimate for $\hat{w}^*$ (\ref{eqn:sample weight formula}) compared with $w^*$ (\ref{eqn:true weight formula}): if the eigenvalues of the true covariance matrix $\Sigma$ are $\sigma_k^2$, then \begin{equation*} \E| \hat{w}^*_k - w^*_k| = O\left( \sigma_k \times \sqrt{\frac{p}{n}} \right),\quad \forall 1\le k \le p \end{equation*} The constant in the order depends on the smallest and largest eigenvalues of $\Sigma$. \end{conjecture} Even though we cannot prove this in full generality, we can show the following. \begin{theorem} Assume the true covariance of the assets has the diagonalization $\Sigma=P^TD^2P$ with $D= diag (\sigma_1,\cdots \sigma_p)$, and the asset return data is $X= \cN DP$, where $\cN$ is an $n\times p$ matrix with i.i.d. standard Gaussian random variables (mean zero and variance one), so that the sample covariance matrix is \[ \hat{\Sigma} = P^T D\frac{\cN^T\cN}{n} D P \] Then the error in the sample free optimal weight $S_{\hat{\Sigma}}$ of \ref{eqn:sample weight formula} satisfies, with high probability, the bound \[ \E \|S_{\hat{\Sigma}} - S_{{\Sigma}}\|_2 \le O\left( p \; \sigma_{max}\; \sigma_{min}^{-1} \sqrt{\frac{p}{n}} \right) \] where $\| \cdot \|_2$ is the 2-norm. \end{theorem} \begin{proof} From \ref{eqn:true weight formula} and \ref{eqn:sample weight formula}, we know that the free optimal weights $S_{\Sigma}$ and $S_{\hat{\Sigma}}$ solve the linear systems \begin{align*} \Sigma S_{\Sigma} & = P^T D^{2}P S_{\Sigma} = \1 \\ {\hat{\Sigma}} S_{\hat{\Sigma}} & = P^T D \left( \frac{\cN^T\cN}{n}\right) DP S_{\hat{\Sigma}} = \1 \end{align*} To compare $S_{\Sigma}$ and $S_{\hat{\Sigma}}$, we use perturbation theory for linear systems. Consider a linear system $Ax=b$ and its perturbed version $(A+B) \hat{x} =b$.
Then \begin{align*} (A+B)(\hat{x}-x +x) & = b \\ (A+B)(\hat{x}-x) & = Ax- (A+B)x = -Bx \\ \hat{x}- x &= -(A+B)^{-1}Bx \end{align*} Therefore, for any induced norm $\|\cdot \|$, \[ \|\hat{x} -x\| \le \|(A+B)^{-1} B \| \|x \| \] Substituting $A= P^TD^2P$ and $A+B=P^T D \left( \frac{\cN^T\cN}{n}\right) DP$, we find \begin{align*} \|S_{\Sigma} - S_{\hat{\Sigma}} \|_2 & \le \left\| P^T D^{-1} \left( \frac{\cN^T\cN}{n}\right)^{-1} D^{-1} P P^T D \left( \frac{\cN^T\cN}{n}-I\right) DP \right\|_2 \|S_{\Sigma}\|_2 \\ & \le \left\| P^T D^{-1} \left( \frac{\cN^T\cN}{n}\right)^{-1} \left( \frac{\cN^T\cN}{n}-I\right) DP \right\|_2 \|S_{\Sigma}\|_2 \\ \end{align*} Since $D$ is diagonal and $P$ is orthogonal, we have \[ \|D\|_2= \sigma_{max}, \; \|D^{-1}\|_2= \sigma_{min}^{-1}, \; \|P\|_2 =1 \] Denote $M:= \frac{\cN^T\cN}{n}$. Therefore we have the bound \[ \|S_{\Sigma} - S_{\hat{\Sigma}} \|_2 \le \|S_{\Sigma}\|_2 \sigma_{min}^{-1} \sigma_{max} \left\| I-M^{-1} \right\|_2 \] Notice that, since with high probability $\lambda_{min}(M)\le 1\le \lambda_{max}(M)$, \begin{align*} \|I-M^{-1}\|_2 & = \max (|1-\lambda_{min}^{-1}|, |1-\lambda_{max}^{-1}|) \\ & \le |1-\lambda_{min}^{-1}|+ |1-\lambda_{max}^{-1}| \\ & = \lambda_{min}^{-1} - \lambda_{max}^{-1} \end{align*} From random matrix theory, the eigenvalues of $M$ follow the Marchenko-Pastur distribution.
Moreover, the smallest and largest eigenvalues of $M$ satisfy (see \cite{rudelson2010non}) \[ \E \lambda_{max}(M) \le \left(1+\sqrt{\frac{p}{n}}\right)^2, \; \E \lambda_{min}(M) \ge \left(1-\sqrt{\frac{p}{n}}\right)^2 \] It is known that the non-asymptotic behavior of $\lambda_{max}$ and $\lambda_{min}$ exhibits sub-exponential tails, \[ \p \left( \left(1-\sqrt{\frac{p}{n}}\right)^2 -t\le \lambda_{min}(M) \le \lambda_{max}(M) \le \left(1+\sqrt{\frac{p}{n}}\right)^2 +t\right) \ge 1- 2e^{-\sqrt{n}t} \] The sub-exponential tails imply that, with high probability ($1-O(n^{-c})$), $\lambda_{min}\ge 1-O(\sqrt{\frac{p}{n}})$ and $\lambda_{max}\le 1+O(\sqrt{\frac{p}{n}})$, so that $\|I-M^{-1}\|_2$ concentrates around $(\E \lambda_{min})^{-1}- (\E \lambda_{max})^{-1} $. Then with high probability \begin{align*} \E \|I-M^{-1}\|_2 &\le \E \lambda_{min}^{-1} - \E \lambda_{max}^{-1} \\ & \le C\left[\frac{1}{1-O(\sqrt{\frac{p}{n}})} - \frac{1}{1+O(\sqrt{\frac{p}{n}})} \right] \\ &= O(\sqrt{\frac{p}{n}}) \end{align*} Therefore we conclude \[ \|S_{\Sigma} - S_{\hat{\Sigma}} \|_2 \le \|S_{\Sigma}\|_2 \; \sigma_{min}^{-1} \sigma_{max} O(\sqrt{\frac{p}{n}}) \] \end{proof} Notice that this result is closely related to how $\hat{w}^*$ behaves. For instance, if we assume $D=P=I$, then $S_{\Sigma}=\1$ and $\|S_{\Sigma}\|_2 = \sqrt{p}$, and we see \[ \|w^* - \hat{w}^* \|_2 \le \|\hat{w}^* - S_{\hat{\Sigma}}/p \|_2 + \sigma_{min}^{-1} \sigma_{max} O(\sqrt{\frac{p}{n}}) \] \section{LoCoV: low dimension covariance voting} So far we have seen that large errors arise when \ref{eqn:sample portfolio optimization} is used to approximate \ref{eqn:true portfolio optimization}, especially when $p/n$ is not small. The natural question is whether there is a remedy that reduces the errors when $p$ and $n$ are comparable. The answer is positive: we propose the LoCoV algorithm, low dimension covariance voting, which consistently outperforms the sample optimal portfolio $\hat{w}^*$. Let us start with the motivation behind LoCoV.
From random matrix theory, the sample covariance approaches the true covariance as $p/n \to 0$. Suppose we have $n=30$ samples for $p=30$ assets. Then for any two assets $X_k$ and $X_t$, the $2 \times 2$ sample covariance matrix $\hat{\Sigma}_{kt}$ of $X_k$ and $X_t$ is estimated from 30 samples, so its feature-to-sample ratio $2/30$ is much smaller than the ratio $30/30$ of the sample covariance matrix $\hat{\Sigma}$ of all 30 assets. On the other hand, portfolio optimization is, philosophically, about comparing different assets and finding proper investment hedges (ratios). Since the sample covariance matrix $\hat{\Sigma}_{kt}$ of assets $X_k$ and $X_t$ is very accurate, we can find accurate relative investment weights $(u_k, u_t)$, investing $u_k$ in asset $X_k$ and $u_t$ in asset $X_t$, by solving \ref{eqn:sample portfolio optimization}. Repeating this process for every pair of assets, we use these low dimension covariance matrices $\hat{\Sigma}_{kt}$ to construct accurate ratios $(u_k, u_t)$, and then let the ratios from all pairs vote on each asset to obtain a final portfolio weight vector. \begin{algorithm}[H] \DontPrintSemicolon \caption{\textbf{`LoCoV-$2$'} }\label{alg:locov-2} \KwData{centered asset return $X\in \R^{n\times p}$, $n, p > 0$} Compute sample covariance matrix $ \hat{\Sigma} \gets \frac{1}{n}X^T X $ \textbf{Initialization:} $U \gets \frac{1}{2} I$, $V \gets 0$.\\ \tcp*{$U$ is $p\times p$ relative-weight matrix, $V$ is $p\times 1$ free-weight vector} \For{$i\gets1$ \KwTo $p$}{ \tcc{\emph{1. For asset $i$ find relative-weights}} \For{$j\gets i+1$ \KwTo $p$}{ Extract $2\times 2$ sub-matrix $\hat{\Sigma}_{i,j}$, and solve the 2-asset portfolio optimization \begin{align*} \begin{split} &\min_u \; u^T \hat{\Sigma}_{i,j}\; u \\ & s.t. \; u^T \mathbbm{1} =1 \end{split} \end{align*} or use the formula $u=(u_1,u_2) = \hat{\Sigma}_{i,j}^{-1} \1$.
$U_{i,j}\gets u_1$ \tcp*{invest $u_1$ in asset $i$ } $U_{j,i} \gets u_2$ \tcp*{invest $u_2$ in asset $j$ } } \tcc{\emph{2. Voting}} Compute free-weight by uniform voting \[ V_i \gets \frac{1}{p} \sum_{j=1}^p U_{i,j} \] } Normalize $V$ \[ w \gets \frac{V}{\|V\|_{s}} = \frac{V}{\sum_{i=1}^p V_i} \] \KwOut{$w$} \end{algorithm} This algorithm generalizes easily: instead of $2\times 2$ low dimensional covariance matrices one can use $k\times k$ covariance sub-matrices and solve the corresponding \ref{eqn:sample portfolio optimization} for $k$ assets. We therefore propose the following `LoCoV-$k$' algorithm. \begin{algorithm}[H] \DontPrintSemicolon \caption{\textbf{`LoCoV-$k$'} ($k\ge 3$) }\label{alg:locov-k} \KwData{centered asset return $X\in \R^{n\times p}$, $n, p > 0$} Compute sample covariance matrix $ \hat{\Sigma} \gets \frac{1}{n}X^T X $ \textbf{Initialization:} $U \gets \frac{1}{k} \1 \1^T$, $V \gets 0$.\\ \tcp*{$\1$ is $p\times 1$ vector of all ones, $V$ is $p\times 1$ free-weight vector} \For{$i\gets1$ \KwTo $p$}{ \tcc{\emph{1. For asset $i$ find relative-weights}} \For{$j\gets 1$ \KwTo $p$}{ Generate index set $I=\{i,l_1,\cdots,l_{k-1}\}$ where $l_1,\cdots, l_{k-1}$ are drawn uniformly at random from $ \{1,\cdots,p\}\setminus\{i\}$. Extract the $k\times k$ sub-matrix $\hat{\Sigma}_{I}$, and solve the $k$-asset portfolio optimization \begin{align*} \begin{split} &\min_u \; u^T \hat{\Sigma}_{I}\; u \\ & s.t. \; u^T \mathbbm{1} =1 \end{split} \end{align*} or use the formula $u=(u_0, u_1,\cdots, u_{k-1}) = \hat{\Sigma}_{I}^{-1} \1$. $U_{i,j}\gets \frac{1}{2}u_0+\frac{1}{2}U_{i,j}$ \tcp*{invest $u_0$ in asset $i$ } $U_{l_t,i} \gets \frac{1}{2}u_{t}+ \frac{1}{2}U_{l_t,i}, \quad \forall 1\le t \le k-1 $ \tcp*{invest $u_{t}$ in asset $l_t$ } } \tcc{\emph{2.
Voting}} Compute free-weight by uniform voting \[ V_i \gets \frac{1}{p} \sum_{j=1}^p U_{i,j} \] } Normalize $V$ \[ w \gets \frac{V}{\|V\|_{s}} = \frac{V}{\sum_{i=1}^p V_i} \] \KwOut{$w$} \end{algorithm} \medskip In LoCoV-$k$, several tweaks relative to LoCoV-$2$ are needed to adapt to $k$ assets. Every time we solve a $k$-asset portfolio optimization problem, we obtain $k$ relative weights. In order to use all $k$ weights, we initialize the relative-weight matrix $U$ with all entries equal to $\frac{1}{k}$. Whenever a new weight is generated, we replace the existing entry by the average of the existing weight and the new one. This update gradually diminishes old weights; it is chosen only to keep the algorithm easy to read and understand. One could use a more delicate update of the entries of $U$, for example keeping track of the total number of weights generated for each entry and then updating with the average of all of them. \section{Simulations} We run three experiments, selecting $\Sigma= I, D^2, P^TD^2P$. For each experiment, we generate 300 samples of $\hat{\Sigma}$ and compute the corresponding $\hat{w}^*$ and the LoCoV estimator. We plot $\hat{w}^*$ in green and the LoCoV weight in black. The experiments show that LoCoV consistently outperforms the sample optimal portfolio. \begin{figure}[H] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=\linewidth]{locov_identity.eps} \end{subfigure} \caption{$\Sigma=I$} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=1\linewidth]{locov_diagonal.eps} \end{subfigure} \caption{$\Sigma=D^2$ with eigenvalues of $\Sigma$ equally spaced between 1 and 30, namely $\sigma_k^2=k, 1\le k\le 30$.} \end{figure} \begin{figure}[H] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=1\linewidth]{locov_dense.eps} \end{subfigure} \caption{$\Sigma=P^TD^2P$ with eigenvalues of $\Sigma$ equally spaced between 1 and 30, namely $\sigma_k^2=k, 1\le k\le 30$.
$P$ is a random orthogonal matrix distributed according to the Haar measure. } \label{fig:simulation} \end{figure} \section{Conclusion and open question} We analyzed the minimum variance portfolio question taking into account the randomness of the sample covariance matrix. In light of random matrix theory, we used experiments to show that the error in the sample optimal portfolio is governed by the assets-to-sample ratio $p/n$. When the number of assets $p$ is not considerably smaller than the number of samples $n$, the sample optimal portfolio fails to provide an accurate estimate of the true optimal portfolio. We therefore proposed the LoCoV method, which exploits the fact that a $k$-dimensional sub-covariance matrix is more accurate and can thus be used to produce relative weights among $k$ assets. Using the relative weights to vote uniformly on the given assets dramatically improves the performance of the portfolio. \subsection{Adapt LoCoV to general mean-variance portfolio} We have not discussed the role of the mean return, and we assumed that our data are centered. To adapt to the general non-centered mean-variance portfolio optimization, one must modify the $k$-asset optimization sub-problem. Namely, one has to compute the sample mean $\hat{\mu} = \frac{1}{n} \sum_{i=1}^n X_{i\cdot}$ and then solve the $k$-asset portfolio optimization \begin{align*} \begin{split} \min_u \; & u^T \hat{\Sigma}_{I}\; u \\ s.t. \; & u^T \mathbbm{1} =1 \\ & \hat{\mu} u\ge r_0 \end{split} \end{align*} where $r_0$ is the lower bound on the expected return. However, the weight $w$ produced by the voting procedure is not guaranteed to achieve the mean return $\hat{\mu} w \ge r_0$. Of course, one can first apply LoCoV and then check whether the mean return is above the threshold $r_0$; if not, repeating the process of updating the relative-weight matrix $U$ will likely improve it. \medskip
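For readers who prefer code to pseudocode, Algorithm~\ref{alg:locov-2} can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation; in particular, the two-asset solution is normalized here so that $u^T\1=1$ before voting, which matches the constrained optimization in the algorithm:

```python
import numpy as np

def locov2(X):
    """Sketch of `LoCoV-2': 2x2 sub-covariance voting.

    X is an (n, p) matrix of centered asset returns.  Each pair of assets
    votes through its own 2-asset minimum-variance weights.
    """
    n, p = X.shape
    cov = X.T @ X / n                 # sample covariance
    U = 0.5 * np.eye(p)               # relative-weight matrix, U[i, i] = 1/2
    for i in range(p):
        for j in range(i + 1, p):
            sub = cov[np.ix_([i, j], [i, j])]
            u = np.linalg.solve(sub, np.ones(2))
            u /= u.sum()              # enforce u^T 1 = 1 (two-asset optimum)
            U[i, j], U[j, i] = u[0], u[1]
    V = U.mean(axis=1)                # uniform voting over the p columns
    return V / V.sum()                # final normalized portfolio weights
```

For $\Sigma=I$ the output is close to the uniform portfolio, as in the first simulation.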
\section{Introduction} Detecting signals embedded in noise is a challenge in many research and engineering areas. In the common case of Gaussian noise the standard tool is the matched filter (MF) \citep{tuz01, min02, osc02, bri04, lev08, yao13}. According to the Neyman-Pearson theorem, the MF provides the greatest probability of true detection under the condition of a constant probability of false detection \citep[e.g.][]{kay98}. In non-Gaussian cases it is difficult to derive an MF and even if available it is more cumbersome to use. In particular, it is complex to compute the probability of false detection or false alarm (PFA). In astronomy this situation occurs, for example, in the search for high-energy point sources in the presence of Poisson (i.e. non-additive) noise and where signal and noise together consist of only a few counts per pixel. For this reason, in the past alternative procedures based on heuristic or semi-heuristic arguments have been preferred \citep[][ and reference therein]{ste06}. In a recent work \citet{ofe17} provide an analytical form of the MF with the PFA computed from numerical simulations. This approach lacks flexibility and is time-consuming. In the present paper we propose a method to compute the PFA based on the saddle-point approximation (SA) which is fast, flexible and provides accurate results. In Section~\ref{sec:formalization} the problem of the detection of a signal in Poisson noise is formalized whereas the SA method is described in Sect.~\ref{sec:spa} and its application illustrated in Sect.~\ref{sec:spapoiss}. The results of a few numerical experiments are given in Sects.~\ref{sec:numerical}-\ref{sec:psf} and the limitations of the MF in practical situations are discussed in Sect.~\ref{sec:practical}. Finally, the conclusions are given in Sect.~\ref{sec:conclusions}. \section{The mathematics of the problem} \label{sec:formalization} In this section we describe the basic properties of the MF in the case of Poisson noise. 
To allow a better understanding we develop the main arguments in one dimension. The extension to higher dimension cases is straightforward and can be done substituting the coordinate system with a multi-dimensional one \footnote{In the case of a two-dimensional map $\boldsymbol{X}$, a one-dimensional signal $\boldsymbol{x}$ is obtained through $\boldsymbol{x} = {\rm VEC}[\boldsymbol{X}]$, with ${\rm VEC}[.]$ the operator that transforms a matrix into a column array by stacking its columns one underneath the other.}. To illustrate the procedure of detection of a deterministic and discrete signal of known structure $\boldsymbol{s} = [s(0), s(1), \ldots, s(N-1)]^T$, with length $N$, and the symbol $^T$ denoting a vector or matrix transpose, we assume the following: \begin{itemize} \item The searched signal takes the form $\boldsymbol{s} = a \boldsymbol{g}$ with $a$ a positive scalar quantity (amplitude) and $\boldsymbol{g}$ typically a smooth function often somehow normalized (e.g., $\sum_{i=0}^{N-1} g(i) = 1$); \\ \item The signal $\boldsymbol{s}$ is contaminated by a Poisson noise, i.e. the observed signal $\boldsymbol{x}$ is given by $\boldsymbol{x} = \mathcal{P}(\boldsymbol{s} + \lambda \boldsymbol{1} )$. Here, $\boldsymbol{1} = [1, 1, \ldots, 1]^T$, the scalar $\lambda$ represents the intensity parameter of the noise background and $\mathcal{P}(\mu \boldsymbol{1})$ denotes a Poisson random vector with independent entries and expected value (i.e. mean) ${\rm E}[\boldsymbol{x}]=\mu$. Although not necessary in what follows, this implies that $\lambda$ is constant across $\boldsymbol{x}$. \end{itemize} Under these conditions, the detection problem consists of deciding whether $\boldsymbol{x} = \boldsymbol{n} = \mathcal{P}(\lambda \boldsymbol{1})$, i.e. it is pure noise (hypothesis $H_0$), or it does contain a contribution from a signal $\boldsymbol{s}$ (hypothesis $H_1$). 
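For concreteness, synthetic realizations of the two hypotheses can be generated as follows. This is a sketch only; the $13\times 13$ grid, the unit-volume circular Gaussian template with $\sigma=2$ pixels, and the values of $a$ and $\lambda$ are illustrative choices matching the numerical experiments presented later:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unit-volume circular Gaussian template g on a 13x13 grid, sigma = 2 pixels
yy, xx = np.mgrid[:13, :13]
g = np.exp(-((xx - 6.0) ** 2 + (yy - 6.0) ** 2) / (2.0 * 2.0 ** 2))
g = (g / g.sum()).ravel()              # flattened to 1-D, sum_i g(i) = 1

lam = 0.1                              # background intensity (counts / pixel)
a = 5.0                                # signal amplitude, s = a * g

x_h0 = rng.poisson(lam, size=g.size)   # H0: pure Poisson noise
x_h1 = rng.poisson(a * g + lam)        # H1: Poisson(s + lam * 1)
```

The two arrays `x_h0` and `x_h1` are single realizations of $\mathcal{P}(\lambda\boldsymbol{1})$ and $\mathcal{P}(\boldsymbol{s}+\lambda\boldsymbol{1})$, respectively.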
In this way, it is equivalent to a decision problem between the two hypotheses \begin{equation} \label{eq:decision} \left\{ \begin{array}{ll} \mathcal{H}_0: & \quad \boldsymbol{x} = \mathcal{P}(\lambda \boldsymbol{1}); \\ \mathcal{H}_1: & \quad \boldsymbol{x} = \mathcal{P}(\boldsymbol{s} + \lambda \boldsymbol{1}). \end{array} \right. \end{equation} Under $\mathcal{H}_0$ the probability density function (PDF) of $\boldsymbol{x}$ is given by $p(\boldsymbol{x}| \mathcal{H}_0)$ whereas under $\mathcal{H}_1$ by $p(\boldsymbol{x}| \mathcal{H}_1)$. Deciding between these two alternatives requires fixing the detection criterion. A common and effective criterion consists in maximizing the probability of detection ($\rm PD$) under the constraint that the $\rm PFA$ (i.e., the probability of a false detection) does not exceed a fixed value $\alpha$. The Neyman-Pearson theorem \citep[e.g., see ][]{kay98} allows designing a decision process that pursues this aim: to maximize $\rm PD$ for a given $\rm PFA=\alpha$, choose $\mathcal{H}_1$ if the likelihood ratio \begin{equation} \label{eq:ratio} L(\boldsymbol{x}) = \frac{p(\boldsymbol{x}| \mathcal{H}_1)}{p(\boldsymbol{x}| \mathcal{H}_0)} > \gamma, \end{equation} where the threshold $\gamma$ is found from \begin{equation} \label{eq:p1} \rm PFA = \int_{\{\boldsymbol{x}: L(\boldsymbol{x}) > \gamma\}} p(\boldsymbol{x}| \mathcal{H}_0) d\boldsymbol{x} = \alpha. \end{equation} In a recent work \citet{ofe17} show that criteria~\eqref{eq:ratio} and \eqref{eq:p1} lead to the test \begin{equation} \label{eq:T} T(\boldsymbol{x}) = \boldsymbol{x}^T \boldsymbol{{\mathfrak f}} > \gamma, \end{equation} where \begin{equation} \label{eq:mf} \boldsymbol{{\mathfrak f}}=\ln{\left(\boldsymbol{1} + \frac{\boldsymbol{s}}{\lambda}\right)}. \end{equation} In practice, the test consists of filtering the signal $\boldsymbol{x}$ with $\boldsymbol{{\mathfrak f}}$ (i.e. 
the matched filter) and checking whether the statistic $T(\boldsymbol{x})$ exceeds the threshold $\gamma$. From Eq.~\eqref{eq:mf} it appears that the form of $\boldsymbol{{\mathfrak f}}$ depends on the signal $\boldsymbol{s}$. The issue here is that in many practical applications (e.g. detection of point sources in sky maps) only the template $\boldsymbol{g}$ of the signal is known but not its amplitude $a$. The consequence of using the MF with an incorrect amplitude $a$ is to reduce, for a fixed value of the $\rm PFA$, the $\rm PD$, i.e. to make the MF less efficient. \citet{ofe17} have shown that the results provided by the MF are not very sensitive to the precise value of $a$ (see also below). This is not surprising given the logarithmic dependence of $\boldsymbol{{\mathfrak f}}$ on $a$. However, even for a specific value of $a$, the PDF of $T(\boldsymbol{x})$ under the hypothesis $\mathcal{H}_0$ is not available in analytical form. The reason is that, although $T(\boldsymbol{x})$ is given by a linear combination of Poisson random variables, its PDF is not Poissonian. This does not allow fixing the threshold $\gamma$ corresponding to a prefixed value $\alpha$ of the $\rm PFA$. \citet{ofe17} bypass this problem using numerical simulations. Such a method, however, is not flexible and is time-consuming. An alternative to numerical simulations is approximating the unknown PDF of $T(\boldsymbol{x})$ with another PDF. For example, in high-number count noise regimes the Gaussian PDF would be a good choice. The same does not hold in low-number count regimes. In this case the SA represents an effective solution. \section{Saddlepoint approximation basics} \label{sec:spa} The SA is a powerful tool able to provide accurate expressions of the PDFs and the corresponding cumulative distribution functions (CDF). Its derivation is rather technical. For this reason, we provide here only a basic introduction useful for practical applications.
A simple and informal derivation is outlined in appendix~\ref{sec:appB}, whereas a rigorous derivation is given in \citet{but07}. The SA to the PDF $f_X(x)$ of a continuous random variable $X$ is given by \begin{equation} \label{eq:aPDF} \hat{f}_X(x) = \frac{1}{\sqrt{2 \pi K_X^{(2)}(\hat{s})}} \exp{(K_X(\hat{s}) - \hat{s} x)}, \end{equation} where $K_X(s)$ is the cumulant generating function (CGF) of $f_X(x)$, \begin{equation} K_X(s) = \ln{ M_X(s)} , \end{equation} and $M_X(s)$ the corresponding moment generating function (MGF) (see appendix~\ref{sec:appA}). Moreover, $\hat{s}$ is the unique solution to the equation \begin{equation} \label{eq:K1} K_X^{(1)}(\hat{s}) = x, \end{equation} with $K_X^{(j)}$ denoting the $j$th derivative with respect to $s$. $\hat{f}_X(x)$ will not, in general, integrate to one, although it will usually not be far off. Therefore, it has to be numerically normalized. The SA is particularly useful to approximate PDFs that are not obtainable in analytical form but whose CGF is available. This situation is typical of a random variable $X$ given by the sum of a set of independent random variables $\{ X_i \}$, $i=1,\ldots, N$. Indeed, except in special cases (e.g., the Gaussian one), it is not possible to obtain $f_X(x)$ in closed form even when the random variables $\{ X_i \}$ share the same PDF. As explained in appendix~\ref{sec:appA}, the CGF of a sum of independent random variables is given by the sum of the respective CGFs. Hence, if the CGFs of the random variables $\{ X_i \}$ are available, the SA can be applied. Once $\hat{f}_X(x) $ is available, the corresponding CDF $\hat{F}_X(x)$ can be obtained via numerical integration.
However, a simple expression, able to provide excellent results, has been proposed by \citet{lug80} \begin{equation} \label{eq:aCDF} \hat{F}_X(x)= \begin{cases} \Phi(\hat{w}) + \phi(\hat{w}) \left(1/\hat{w} - 1/\hat{u} \right) & \text{if } x \neq \mu, \\ \frac{1}{2} + \frac{K_X^{(3)}(0)}{6 \sqrt{2 \pi} K_X^{(2)}(0)^{3/2}} & \text{if } x = \mu. \end{cases} \end{equation} Here, $\phi(.)$ and $\Phi(.)$ represent the standard Gaussian PDF and CDF, respectively, whereas $\hat{w}$ and $\hat{u}$ are given by \begin{equation} \label{eq:wt} \hat{w}={\rm sign}(\hat{s}) \sqrt{ 2 [\hat{s} x - K_X(\hat{s})]}, \end{equation} with ${\rm sign}(y)$ providing the sign of $y$, and \begin{equation} \hat{u}=\hat{s} \sqrt{ K_X^{(2)}(\hat{s})}. \end{equation} The only difficulty in using $\hat{F}_X(x)$ concerns its evaluation when $x$ is close to the expected value $\rm E[X]=\mu$ of $X$. In this case the computation of $\hat{F}_X(x)$ is tricky since $K_X^{(1)}(0)={\rm E}[X]$ and then the solution of Eq.~\eqref{eq:K1} becomes $\hat{s} = 0$. As a consequence, it happens that $K_X(0)=0$ and accordingly, since $\hat{w} = 0$ from Eq.~\eqref{eq:wt}, the first equation in the system~\eqref{eq:aCDF} becomes useless. This is the reason for the second equation in the system~\eqref{eq:aCDF}. For practical use, however, it is numerically more advantageous to use a linear interpolation based on the SA to ${\rm E}[X] \pm \epsilon$, where $\epsilon$ is chosen small enough to ensure high accuracy, but large enough to ensure numerical stability. If $X$ is a discrete random variable, the same equations hold with $x$ substituted by $k$, which takes values in the set of the integer numbers, and keeping in mind that $1- \hat{F}_X(k)$ provides the probability that $X \ge k$.
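As a quick sanity check of Eqs.~\eqref{eq:aPDF}-\eqref{eq:aCDF}, the approximation can be tried on a single Poisson random variable, whose CGF $K(s)=\lambda({\rm e}^s-1)$ gives the saddlepoint in closed form, $\hat{s}=\ln(k/\lambda)$. This is only a sketch: for a discrete variable the uncorrected continuous formula is an approximation, so here we merely check that it falls between neighboring values of the exact CDF:

```python
import math

def poisson_cdf_sa(k, lam):
    """Lugannani-Rice CDF approximation for X ~ Poisson(lam), at k != lam.

    CGF: K(s) = lam*(exp(s) - 1); saddle equation K'(s_hat) = k gives
    s_hat = log(k/lam), and K''(s_hat) = k.
    """
    s_hat = math.log(k / lam)
    K = lam * (math.exp(s_hat) - 1.0)                 # = k - lam
    w = math.copysign(math.sqrt(2.0 * (s_hat * k - K)), s_hat)
    u = s_hat * math.sqrt(k)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))  # standard Gaussian CDF
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    return Phi + phi * (1.0 / w - 1.0 / u)

def poisson_cdf_exact(k, lam):
    # Direct summation of the Poisson probabilities up to k
    term, total = math.exp(-lam), math.exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

lam = 2.0
for k in (4, 6, 8):
    print(k, poisson_cdf_exact(k, lam), poisson_cdf_sa(k, lam))
```

For each $k$ the saddlepoint value lies between the exact $P(X\le k-1)$ and $P(X\le k)$, consistent with the discrete-case reading of $1-\hat{F}_X(k)$ discussed above.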
Although the expressions~\eqref{eq:aPDF} and \eqref{eq:aCDF} are computable for any value of $x$, whether real or integer-valued, $\hat{f}_X(k)$ and $\hat{F}_X(k)$ are meaningful approximations of $f_X(k)$ and $F_X(k)$ only for integer-valued arguments. \section{Signal detection in the Poisson noise regime} \label{sec:spapoiss} The SA to the PDF and the CDF of $T(\boldsymbol{x})$ in the Poisson case can be easily computed. Indeed, under the hypothesis $\mathcal{H}_0$, $T(\boldsymbol{x})$ is given by a linear combination (i.e. a weighted sum) of Poisson random variables $X_i$ with common parameter $\lambda$, \begin{equation} \label{eq:Tx} T(\boldsymbol{x}) = y = \sum_{i=0}^{N-1} \mathfrak{f}_i x_i. \end{equation} Hence, its CGF $K_Y(s)$ is given by \begin{align} K_Y(s) &= \sum_{i=0}^{N-1} K_{X_i}(s) \\ &= \lambda \sum_{i=0}^{N-1} ({\rm e}^{\mathfrak{f}_i s} -1), \end{align} with $s \in (-\infty, + \infty)$. It is elementary to see that $K_Y^{(j)}(s)$ is given by \begin{equation} \label{eq:seq} K_Y^{(j)}(s) = \lambda \sum_{i=0}^{N-1} \mathfrak{f}_i^j {\rm e}^{\mathfrak{f}_i s}. \end{equation} In order to be used in Eqs.~\eqref{eq:aPDF} and \eqref{eq:aCDF}, these functions have to be computed for $s=\hat{s}$. This step requires the numerical solution of Eq.~\eqref{eq:K1} which, however, does not present particular difficulties given that, as explained in appendix~\ref{sec:appB}, $K_X^{(1)}(s)$ is an increasing function for $s \in (-\infty, + \infty)$. It is important to stress that, since Eq.~\eqref{eq:K1} cannot be solved when $x=0$, $\hat{f}_X(x)$ is defined only for $x > 0$. But it is easy to see that for $x=0$ it is $\hat{f}_X(0) = \exp{(- \lambda N)}$. The latter is an exact result, not merely an approximation. When using the SA in the present context, it is necessary to consider that the PDF of $T(\boldsymbol{x})$ is of discrete type but is not defined on a lattice (i.e. $y$ does not take values on a regular grid of numbers).
The point is that the arguments presented above hold only for integer values of $k$ or, with minor modifications, when $k$ takes values on a regular grid of numbers. However, except for extremely small values of $\lambda$, this PDF can be considered ``almost continuous''. Indeed, in the case of two-dimensional maps, from combinatorial arguments it can be realized that, already with values of $\lambda$ as small as $0.01$ and $\boldsymbol{{\mathfrak f}}$ given by a circular Gaussian with a standard deviation $\sigma$ of only $2$ pixels, the number of different values that $y$ can take is of the order of several thousands. In any case, there is considerable empirical evidence that the SA is useful and maintains most of its accuracy even with discrete PDFs not defined on a lattice \citep[cf. pag. 27 in ][]{but07}. The numerical simulations presented below confirm this result. \section{Numerical experiments} \label{sec:numerical} Figure~\ref{fig:fig_pdf} compares the histogram $H(y)$ with the SA $\hat{f}_Y(y)$ (normalized to unit area) to the PDF $f_Y(y)$ of the statistic $T(\boldsymbol{x})$ for the central pixel of a set of $10^5$ MF filtered random realizations of a Poisson $13 \times 13$ pixels noise process. Four values of the parameter $\lambda$, namely $0.01$, $0.025$, $0.05$, and $0.1$ (units in counts pix$^{-1}$), have been considered. The MF has been computed assuming $\boldsymbol{s}$ to be a circular Gaussian with standard deviation $\sigma=2$ pixels normalized to unit volume. For comparison, the Gaussian best fits $\phi(y)$ are also shown. From these figures it is visible that the quality of the approximation improves for increasing values of $\lambda$. This is not an unexpected result since small values of $\lambda$ imply that most of the pixels have zero counts, a few have one count and very few have larger counts. Especially for ``narrow'' signals $\boldsymbol{s}$, the consequence is a rough PDF for the statistic $T(\boldsymbol{x})$.
However, in the case of $\lambda=0.01$, the top-right panel of Fig.~\ref{fig:fig_pdf} shows that, although the SA is not able to reproduce all the details of $H(y)$, it provides a good envelope resulting in a good approximation to the corresponding CDF. This is supported by Fig.~\ref{fig:fig_cdf}, which compares the sample CDF $\tilde{F}_Y(y)$ with the $\hat{F}_Y(y)$ corresponding to the PDFs in Fig.~\ref{fig:fig_pdf}. The agreement is excellent. This is more evident in Fig.~\ref{fig:fig_err}, where the relative errors $(\hat{F}_Y(y)-\tilde{F}_Y(y))/\tilde{F}_Y(y)$ are plotted versus $\tilde{F}_Y(y)$ for values of $y$ such that $\tilde{F}_Y(y) > 0.7$. This is a useful fact since in signal detection problems it is the complementary CDF $1-F_Y(y)$, and not the PDF $f_Y(y)$, which matters. The SA does not work with very small values of $\lambda$. However, it is questionable whether the MF is a useful approach in very low-number count regimes. For example, in the case of a $1000 \times 1000$ pixels map and $\lambda=0.001$, only $N_{\emptyset}=1000$ pixels are expected with values greater than zero, i.e. only one pixel in each area of $30 \times 30$ pixels. Under these conditions the use of the MF does not make sense since there is nothing to filter out. Moreover, the expected noise background would consist of $1000$ bumps all with the same shape and amplitude. A more effective approach is based on the observation that the probability of having two counts in a pixel is of order of $5 \times 10^{-7}$. Therefore, all the non-zero pixels are expected to have only one count. In addition, given that the probability of two pixels occupying adjacent positions is of order of $4 N_{\emptyset}/N_ {\rm pix}$, only four of them are expected to be neighboring. In other words, the detection of a signal can be claimed with high confidence in the presence of pixels with counts greater than one and possibly contiguous with other non-zero pixels.
The conclusion is that, apart from extremely low levels of the noise background, the SA is able to provide excellent results. As visible in Figs.~\ref{fig:fig_pdf}-\ref{fig:fig_err}, this is not true for the Gaussian approximation. Hence, to test whether, after the MF filtering, the value $y$ of a pixel is due only to the noise, it is sufficient to check whether $1-\hat{F}_Y(y) < \alpha$, with $\alpha$ a prefixed $\rm PFA$. \section{Expected performances of the MF} \label{sec:psf} The excellent computational efficiency, but above all the flexibility, of the SA in the calculation of the $\rm PFA$ for different noise levels and functional forms of $\boldsymbol{s}$, allows us to explore the expected performances of the MF in better detail than is possible with an approach based on numerical simulations. In particular, in \citet{ofe17} the performances of the MF are compared to those of its most important competitor, the point spread function filter (PSFF) technique, where it is assumed that $\boldsymbol{{\mathfrak f}}=\boldsymbol{g}$. One of the main reasons for such a comparison lies in the fact that in the case of a Gaussian additive white noise the PSFF and the MF coincide. Another reason is that even for relatively small values of $\lambda$ the PDF of the Poisson noise can be reasonably approximated with a Gaussian. Moreover, under our working conditions, the Poisson noise level is constant across the background area and changes only close to the signal position. However, since the MF is often used in situations where the amplitude of the signal is smaller than the level of the noise, to first approximation it can be assumed that $\boldsymbol{s} + \lambda \approx \lambda$. Hence, the noise can be considered of additive type with a constant level everywhere. In conclusion, there are situations where the PSFF $\boldsymbol{g}$ can be an acceptable approximation to the MF.
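The closeness of the two filters can be checked directly: for $a\boldsymbol{g}/\lambda \ll 1$ the MF $\ln(\boldsymbol{1}+a\boldsymbol{g}/\lambda) \approx (a/\lambda)\,\boldsymbol{g}$ is proportional to the PSFF, whereas for large amplitudes the logarithm flattens the filter. The sketch below uses an illustrative Gaussian template; the specific values of $a$ and $\lambda$ are arbitrary:

```python
import numpy as np

# Unit-volume circular Gaussian template (sigma = 2 pixels) on a 13x13 grid
yy, xx = np.mgrid[:13, :13]
g = np.exp(-((xx - 6.0) ** 2 + (yy - 6.0) ** 2) / (2.0 * 2.0 ** 2))
g /= g.sum()

def mf_deviation(a, lam):
    """Max deviation between the shape-normalized MF and the PSFF g."""
    mf = np.log1p(a * g / lam)        # f = ln(1 + a g / lam)
    return np.abs(mf / mf.sum() - g).max()

# Weak signal: MF shape ~ PSFF.  Strong signal: the MF is visibly flatter.
dev_weak, dev_strong = mf_deviation(0.01, 1.0), mf_deviation(10.0, 0.1)
print(dev_weak, dev_strong)
```

The weak-signal deviation is negligible, confirming that the PSFF can be an acceptable approximation to the MF in that regime.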
The results obtained by \citet{ofe17} indicate that in general the MF is superior to the PSFF for small values of $\lambda$ and similar for larger values of this parameter. But, because of the very large number of numerical simulations necessary to fix a reliable detection threshold for each different experimental situation, their analysis is based on a few cases only. Here, we carry out a set of similar simulations with a larger combination of noise levels and signal intensities. Our numerical experiments confirm the results of \citet{ofe17}. This is shown by Fig.~\ref{fig:fig_comparison}, where the completeness (i.e. the fraction of correctly detected signals) of the MF is compared with that of the PSFF for various combinations of $\lambda$ and $a$. It is visible that the results provided by the MF are indeed superior to those of the PSFF. However, the performances become closer and closer for increasing values of $\lambda$. In the experiments $\boldsymbol{g}$ is a circular Gaussian with $\sigma = 2$ pixels, and the completeness has been estimated on the basis of $10^4$ simulated $13 \times 13$ pixels maps assuming a detection threshold corresponding to $\alpha = 10^{-3}$. Although indicative, such experiments, as well as those by \citet{ofe17}, are affected by an important limitation: the comparison between the MF and the PSFF is done assuming that $a$ in the MF takes its true value. But in real situations this information is not available. An example is sky maps containing point sources with different amplitudes. For this reason, we have carried out another set of numerical experiments similar to the previous ones but using incorrect signal amplitudes $a^*$. The results are shown in Figs.~\ref{fig:fig_comparison_f_01}-\ref{fig:fig_comparison_f_100} for $a^*=0.1$, $10$, and $100$.
The indication that comes out from these figures is that, for a given value of $\lambda$, the performances of the MF effectively do not depend critically on the exact value of $a$ and, although less markedly, tend to remain superior to those provided by the PSFF. Only values of $a^*$ very different from the true one cause an appreciable degradation of the results. Finally, another indication from Figs.~\ref{fig:fig_comparison_f_01}-\ref{fig:fig_comparison_f_100} is that it appears less harmful to use values of $a^*$ greater, rather than smaller, than the true value. The reason for this can be deduced from Fig.~\ref{fig:fig_filters}, which shows a 1-D cut of the MF for different combinations of $\lambda$ and $a$. With $\lambda$ fixed, it is evident that the filtering action of the MF becomes stronger when $a$ increases. This results in an over-filtered signal but also in a more robust attenuation of the noise. On the contrary, using an $a^*$ smaller than $a$ results in an insufficiently filtered noise. The former situation appears preferable. \section{Shortcomings of the MF with Poisson noise} \label{sec:practical} Thanks to the SA it is possible to find a partial solution to a still unsolved problem. In particular, the performances of the MF~\eqref{eq:mf} have been tested under the implicit assumption that the position of $\boldsymbol{s}$ within $\boldsymbol{x}$ is known. In real situations this condition is rarely met (e.g. point sources in a two-dimensional map). The standard procedure to circumvent this issue is to cross-correlate $\boldsymbol{x}$ with the MF and then to apply the detection test~\eqref{eq:T} to the resulting most prominent peaks. However, as shown in \citet{vio16} and \citet{vio17} for the Gaussian case, the PDF of the peaks $\{ z \}$ of a random field is different from that of its generic points. Of course, the same holds also for the Poisson case.
This is shown by Fig.~\ref{fig:fig_pdf_peaks}, where the PDF $\hat{f}_Y(y)$ of a $2000 \times 2000$ pixel Poisson random field with $\lambda=0.1$, filtered by means of the MF in Eq.~\eqref{eq:mf} with $\boldsymbol{s}$ a circular Gaussian with $\sigma=2$ pixels, is compared with the histogram $H(z)$ of its peaks. It is evident that working with the peaks while assuming the PDF $\hat{f}_Y(y)$ for the statistic $T(z)$ in Eq.~\eqref{eq:Tx} may severely underestimate the $\rm PFA$, with the risk of assigning statistical significance to features that belong to the noise. Contrary to the Gaussian case \citep{che15a, che15b}, the PDF of the peaks is not available for Poisson noise. Without it, a precise computation of the $\rm PFA$ is not possible. Moreover, there is the additional difficulty that, if $N_p$ peaks are present in an MF-filtered map, then a number $\alpha \times N_p$ of them is expected to exceed, by chance, the prefixed detection threshold. For example, if $N_p = 1000$, then there is a high probability that a detection with a nominal $\rm PFA$ equal to $10^{-3}$ is spurious. Hence, the true $\rm PFA$ has to depend on $N_p$ (look-elsewhere effect). The popular way to get around this issue is to assume $f_Y(y)$ as the PDF of the peaks (i.e. the peaks are regarded as generic points of the random field) and then to set $\rm PFA = \alpha / N^*$, with $N^*$ the number of independent pixels. If the noise is coloured, as happens after the filtering operation, pixels are correlated with each other. Therefore, $N^*$ typically is smaller than $N$ and has to be estimated. Usually the estimation of $N^*$ is based on the correlation length of the template $\boldsymbol{g}$. The rationale is that pixels with a mutual distance greater than the correlation length can be considered independent.
For instance, in the case of a two-dimensional map and $\boldsymbol{g}$ given by a circular Gaussian function with dispersion $\sigma$, \citet{ofe17} suggest that $N^* \approx N/\sigma^2$. It has to be stressed that such an approach provides results which are correct only as an order of magnitude, but no alternative is available if additional a priori information is missing. In the present context, such a procedure can be efficiently implemented by means of order statistics, in particular by exploiting the statistical characteristics of the greatest value of a finite sample of independent and identically distributed (iid) random variables from a given PDF \citep{hog13}. Under the iid condition, the PDF $\psi_{Q}(q)$ and the CDF $\Psi_Q(q)$ of the greatest value $q = y_{\max}$ among a set of $N^*$ independent pixels with PDF $\hat{f}_Y(y)$ are given by \begin{equation} \label{eq:gz} \psi_Q(q) = N^* \left[ \hat{F}_Y(q) \right]^{N^*-1} \hat{f}_Y(q), \end{equation} and \begin{equation} \label{eq:Gz} \Psi_Q(q) = \left[ \hat{F}_Y(q) \right]^{N^*}, \end{equation} respectively. Hence, by assuming that the greatest value among a set of pixels coincides with that of a peak, a detection can be claimed when for a given peak $1-\Psi_Q(q) < \alpha$. This is an alternative, though equivalent, way to threshold the data which does not require the inversion of $\Psi_Q(q)$ to fix the parameter $\gamma$ corresponding to a given $\alpha$. In \citet{vio17} the quantity $1-\Psi_Q(q)$ is called the specific probability of false alarm (SPFA). The PDF $\psi_Q(q)$ for the numerical experiment of this section is also shown in Fig.~\ref{fig:fig_pdf_peaks}, as well as the $\rm SPFA$ corresponding to the largest peak observed in the map. We stress that in the case of large maps an alternative based on numerical simulations is not viable.
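The SPFA-based thresholding can be sketched as follows. Since the saddlepoint CDF $\hat F_Y$ is not reproduced in this sketch, a standard Gaussian CDF is used as a stand-in; in practice it would be replaced by the saddlepoint approximation discussed earlier.

```python
import math

def gauss_cdf(y):
    """Stand-in CDF (standard Gaussian). In practice, replace this with
    the saddlepoint approximation of the CDF F_Y of the filtered map."""
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def n_star(n_pixels, sigma):
    """Effective number of independent pixels, N* ~ N / sigma^2."""
    return max(1, int(n_pixels / sigma ** 2))

def spfa(q, cdf, n_eff):
    """Specific probability of false alarm for the largest of n_eff
    iid pixels: 1 - Psi_Q(q) = 1 - F_Y(q) ** n_eff."""
    return 1.0 - cdf(q) ** n_eff
```

A peak of height $q$ is accepted as a detection when `spfa(q, cdf, n_star(...)) < alpha`. Note how a peak value that is highly significant for a single pixel can become unremarkable once $N^*$ is accounted for, which is precisely the look-elsewhere effect described above.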
\section{Conclusions} \label{sec:conclusions} In this paper we have introduced an efficient and effective implementation of the matched filter for the low-number count Poisson noise regime. We have shown that, although the probability distribution and the cumulative distribution functions of the pixel counts after the MF filtering are not available in analytical form, they can be approximated with excellent results by means of the saddlepoint approximation method. With such techniques, more accurate estimates of the probability of false detection or false alarm are obtained without resorting to empirical or numerical methods. \begin{acknowledgements} The authors warmly thank Martine Pelzer for her careful reading of the paper and for correcting the English. \end{acknowledgements}
\section{Introduction} It has become a clich\'e to say that coherent states abound in quantum physics \cite{1}. Moreover, it turns out that they can also be applied in the theory of quantum deformations \cite{2} and even in the theory of classical dynamical systems \cite{3}. In spite of the fact that the problem of the quantization of a particle motion on a sphere is at least seventy years old, there still remains an open question concerning the coherent states for a particle on a sphere. Indeed, the celebrated spin coherent states introduced by Radcliffe \cite{4} and Perelomov \cite{5} are labelled by points of a sphere, i.e., the elements of the configuration space. On the other hand, it seems that, as with the standard coherent states, the coherent states for a particle on a sphere should be labelled by points of the phase space rather than of the configuration space. The aim of this work is to introduce the coherent states for a quantum particle on the sphere $S^2$, labelled by points of the phase space, that is the cotangent bundle $T^*S^2$. The construction follows the general scheme introduced in \cite{6} for the case of the motion on a circle, based on the polar decomposition of the operator that defines the coherent states via an eigenvalue equation. From the technical point of view our treatment utilizes both the Barut-Girardello \cite{7} and the Perelomov \cite{5} approaches. Namely, as in the Barut-Girardello approach, the coherent states are defined as the eigenvectors of some non-Hermitian operators. On the other hand, in analogy to the Perelomov formalism, those states are generated from some ``vacuum vector''; nevertheless, in contrast to the Perelomov group-theoretic construction, the coherent states are obtained by means of a non-unitary action. In section 2 we recall the construction of the coherent states for a particle on a circle.
Sections 3--6 are devoted to the definition of the coherent states for a particle on a sphere and a discussion of their most important properties. For an easy illustration of the introduced approach we study in section 7 the case of free motion on a sphere. \section{Coherent states for a particle on a circle} In this section we recall the basic properties of the coherent states for a particle on a circle introduced in \cite{6}. Consider the case of free motion on a circle. For the sake of simplicity we assume that the particle has unit mass and moves on a unit circle. The classical Lagrangian is \begin{equation} L = \hbox{$\scriptstyle1\over2$}\dot \varphi^2, \end{equation} so the angular momentum canonically conjugate to the angle $\varphi$ is given by \begin{equation} J = \frac{\partial L}{\partial \dot \varphi}=\dot \varphi, \end{equation} and the Hamiltonian can be written as \begin{equation} H = \hbox{$\scriptstyle1\over2$}J^2. \end{equation} Evidently, we have the Poisson bracket of the form \begin{equation} \{\varphi,J\} = 1, \end{equation} implying, according to the rules of canonical quantization, the commutator \begin{equation} [\hat\varphi,\hat J] = i, \end{equation} where we set $\hbar=1$. The operator $\hat\varphi$ does not take into account the topology of the circle and (2.5) requires a very subtle analysis. A better candidate to represent the position of the quantum particle on the unit circle is the unitary operator $U$ \begin{equation} U = e^{i\hat\varphi}. \end{equation} Indeed, the substitution $\hat\varphi\to\hat\varphi+2n\pi$ does not change $U$, i.e.\ $U$ preserves the topology of the circle. The operator $U$ leads to the algebra \begin{equation} [\hat J,U] = U, \end{equation} where $U$ is unitary. Consider the eigenvalue equation \begin{equation} \hat J|j\rangle = j|j\rangle.
\end{equation} Using (2.7) and (2.8) we find that the operators $U$ and $U^\dagger $ are ladder operators, namely \numparts \begin{eqnarray} U|j\rangle &=& |j+1\rangle,\\ U^\dagger |j\rangle &=& |j-1\rangle. \end{eqnarray} \endnumparts Demanding the time-reversal invariance of representations of the algebra (2.7) we conclude \cite{6} that the eigenvalues $j$ of the operator $\hat J$ can only be integer (boson case) or half-integer (fermion case). We define the coherent states $|\xi\rangle$ for a particle on a circle by means of the eigenvalue equation \begin{equation} Z|\xi\rangle = \xi|\xi\rangle, \end{equation} where $\xi$ is complex. In analogy to the eigenvalue equation satisfied by the standard coherent states $|z\rangle$ \cite{8,9} with complex $z$, of the form \begin{equation} e^{i\hat a}|z\rangle = e^{iz}|z\rangle, \end{equation} where $\hat a\sim \hat q+i\hat p$ is the standard Bose annihilation operator and $\hat q$ and $\hat p$ are the position and momentum operators, respectively, we set \begin{equation} Z := e^{i(\hat \varphi + i\hat J)}. \end{equation} Hence, making use of the Baker-Hausdorff formula we get \begin{equation} Z = e^{-\hat J + \hbox{$\frac{1}{2}$}}U. \end{equation} We remark that the complex number $\xi$ should parametrize the cylinder, which is the classical phase space for a particle moving on a circle. The convenient parametrization of $\xi$, consistent with the form of the operator $Z$, \begin{equation} \xi = e^{-l + i\varphi}, \end{equation} arises from the deformation of the circular cylinder by means of the transformation \begin{equation} x=e^{-l}\cos\varphi,\qquad y=e^{-l}\sin\varphi,\qquad z=l. \end{equation} The coherent states $|\xi\rangle$ can be represented as \begin{equation} |\xi\rangle = e^{-(\ln\xi) \hat J}|1\rangle, \end{equation} where \begin{equation} |1\rangle = \sum_{j=-\infty}^{\infty}e^{-\frac{j^2}{2}}|j\rangle.
\end{equation} The coherent states satisfy \begin{equation} \frac{\langle \xi|\hat J|\xi\rangle}{\langle \xi|\xi\rangle}\approx l, \end{equation} where the maximal error arising in the case $l\to0$ is of order $0.1$ per cent and we have the exact equality in the case of $l$ integer or half-integer. Therefore, $l$ can be identified with the classical angular momentum. Furthermore, we have \begin{equation} \frac{\langle \xi|U|\xi\rangle}{\langle \xi|\xi\rangle} \approx e^{-\frac{1}{4}}e^{i\varphi}. \end{equation} It thus appears that the average value of $U$ in the normalized coherent state does not belong to the unit circle. On introducing the relative average of $U$ of the form \begin{equation} \frac{\langle U\rangle_{\xi}}{\langle U\rangle_{\eta}} := \frac{\langle \xi|U|\xi\rangle}{\langle \eta|U|\eta\rangle}, \end{equation} where $|\xi\rangle$ and $|\eta\rangle$ are the normalized coherent states, we find \begin{equation} \frac{\langle U\rangle_{\xi}}{\langle U\rangle_1} \approx e^{i\varphi}. \end{equation} From (2.21) it follows that the relative expectation value $\langle U\rangle_{\xi}/\langle U\rangle_1$ is the most natural candidate to describe the average position of a particle on a circle and $\varphi$ can be regarded as the classical angle. We remark that the coherent states on the circle have been recently discussed by Gonz\'ales {\em et al\/} \cite{10}. In spite of the fact that they formally generalize the coherent states described above, the ambiguity of the definition of those states, manifesting itself in their dependence on some extra parameter, can be avoided only by demanding the time-reversal invariance mentioned earlier, which leads precisely to the coherent states introduced in \cite{6}.
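As an aside, the approximate equality (2.18) can be traced to a short calculation (a sketch not spelled out in the text). From (2.16) and (2.17) one has $|\xi\rangle=\sum_j \xi^{-j}e^{-j^2/2}|j\rangle$ with $\xi=e^{-l+i\varphi}$, so that $|\langle j|\xi\rangle|^2=e^{2lj-j^2}$ and

```latex
\begin{equation*}
  \frac{\langle\xi|\hat J|\xi\rangle}{\langle\xi|\xi\rangle}
  =\frac{\sum_{j=-\infty}^{\infty} j\,e^{2lj-j^{2}}}
        {\sum_{j=-\infty}^{\infty} e^{2lj-j^{2}}}
  =\frac{\sum_{j} j\,e^{-(j-l)^{2}}}{\sum_{j} e^{-(j-l)^{2}}}
  \approx l .
\end{equation*}
```

The Gaussian weights are sharply peaked around $j=l$; for $l$ integer or half-integer they are exactly symmetric under $j\mapsto 2l-j$, which explains the exact equality quoted in the text.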
Since the time-reversal symmetry seems to be a fundamental one for the motion of a classical particle on a circle and makes the quantization unique, the generalization of the coherent states discussed in \cite{10}, which does not preserve that symmetry, is of interest rather from the mathematical point of view. Having in mind the properties of the standard coherent states, one may ask about the minimization of the Heisenberg uncertainty relations by the introduced coherent states for a particle on a circle. In our opinion, in the case of compact manifolds the minimization of the Heisenberg uncertainty relations is not an adequate tool for the definition of the coherent states. A counterexample can be easily deduced from (2.7), (2.8) and (2.9). Indeed, taking into account (2.8) and (2.9) we find that for the eigenvectors $|j\rangle$'s of the angular momentum $\hat J$ the equality sign is attained in the Heisenberg uncertainty relations implied by (2.7) such that \begin{equation} (\Delta \hat J)^2\ge\frac{1}{4}\frac{|\langle U \rangle|^2}{1-|\langle U \rangle|^2}. \end{equation} More precisely, for these states (2.22) takes the form $0=0$. On the other hand, the vectors $|j\rangle$'s are clearly rather poor candidates for the coherent states. In our opinion the fact that the coherent states are ``the most classical'' ones is better described by the following easily proven formulae: \begin{eqnarray} (\Delta \hat J)^2&\approx& {\rm const},\\ \frac{\langle U^2\rangle}{\langle U\rangle^2}&\approx&{\rm const}, \end{eqnarray} where the approximations are very good ones. In fact, these relations mean that the quantum variables $\hat J$ and $U$ are at practically constant ``distance'' from their classical counterparts $\langle \hat J\rangle$ and $\langle U\rangle$, respectively, and therefore the quantum observables and the corresponding expectation values connected to the classical phase space are mutually related.
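For completeness, we note how the bound (2.22) follows directly from the algebra (2.7); this short derivation is a sketch added here. Setting $X_1=(U+U^\dagger)/2$ and $X_2=(U-U^\dagger)/2{\rm i}$, the commutator (2.7) implies $[\hat J,X_1]={\rm i}X_2$ and $[\hat J,X_2]=-{\rm i}X_1$, so the Robertson uncertainty relations, squared and added, give

```latex
\begin{equation*}
  (\Delta\hat J)^2\left[(\Delta X_1)^2+(\Delta X_2)^2\right]
  \ge\frac{1}{4}\left(\langle X_1\rangle^2+\langle X_2\rangle^2\right)
  =\frac{1}{4}\,|\langle U\rangle|^2 .
\end{equation*}
```

Since $X_1^2+X_2^2=\frac{1}{2}(UU^\dagger+U^\dagger U)=1$, one has $(\Delta X_1)^2+(\Delta X_2)^2=1-|\langle U\rangle|^2$, which reproduces (2.22).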
We point out that in the case of the standard coherent states for a particle on a real line we have the exact formulae \begin{eqnarray} (\Delta \hat p)^2&=&{\rm const},\\ (\Delta \hat q)^2&=&{\rm const}. \end{eqnarray} It seems to us that the approximate nature of the relations (2.23) and (2.24) is related to the compactness of the circle. \section{Unitary representations of the $e(3)$ algebra and quantum mechanics on a sphere} Our experience with the case of the circle discussed in the previous section indicates that in order to introduce the coherent states we should first identify the algebra adequate for the study of the motion on a sphere. The fact that the algebra (2.7) referring to the case of the circle $S^1$ is equivalent to the $e(2)$ algebra, where $E(2)$ is the Euclidean group of the plane, consisting of translations and rotations, \begin{equation} [\hat J,X_\alpha]={\rm i}\varepsilon_{\alpha\beta}X_\beta, \qquad [X_\alpha,X_\beta]=0,\qquad \alpha,\,\beta=1,\,2, \end{equation} realized in a unitary irreducible representation by Hermitian operators \begin{equation} X_1=r(U+U^\dagger)/2,\qquad X_2=r(U-U^\dagger)/2{\rm i}, \end{equation} where the Casimir is \begin{equation} X_1^2+X_2^2=r^2, \end{equation} and $\varepsilon_{\alpha\beta}$ is the anti-symmetric tensor, indicates that the most natural algebra for the case of the sphere $S^2$ is the $e(3)$ algebra such that \begin{equation} [J_i,J_j]={\rm i}\varepsilon_{ijk}J_k,\qquad [J_i,X_j]={\rm i} \varepsilon_{ijk}X_k,\qquad [X_i,X_j]=0,\qquad i,\,j,\,k=1,\,2,\,3. \end{equation} Indeed, the algebra (3.4) has two Casimir operators given in a unitary irreducible representation by \begin{equation} {\bi X}^2=r^2,\qquad {\bi J}\bdot{\bi X}=\lambda, \end{equation} where dot designates the scalar product.
Therefore, as with the generators $X_\alpha $, $\alpha=1,\,2$, describing the position of a particle on the circle, the generators $X_i$, $i=1,\,2,\,3$, can be regarded as quantum counterparts of the Cartesian coordinates of the points of the sphere $S^2$ with radius $r$. We point out that unitary irreducible representations of (3.4) can be labelled by $r$ and the new scale-invariant parameter $\zeta =\frac{\lambda }{r}$. It is clear that $\zeta $ is simply the projection of the angular momentum ${\bi J}$ on the direction of the radius vector of a particle. Since we did not find any name for such a quantity in the literature, we have decided to call $\zeta $ the {\em twist\/} of a particle. Let us now recall the basic properties of the unitary representations of the $e(3)$ algebra. The $e(3)$ algebra expressed with the help of operators $J_3$, $J_\pm=J_1\pm {\rm i}J_2$, $X_3$ and $X_\pm=X_1\pm {\rm i}X_2$, takes the form \numparts \begin{eqnarray} [J_+,J_-] &=& 2J_3,\qquad [J_3,J_\pm]=\pm J_\pm,\\ {}[J_\pm ,X_\mp] &=& \pm 2X_3,\qquad [J_\pm,X_\pm]=0,\qquad [J_\pm ,X_3]=\mp X_\pm,\\ {}[J_3,X_\pm] &=& \pm X_\pm,\qquad [J_3,X_3]=0,\\ {}[X_+,X_-] &=& [X_\pm,X_3]=0. \end{eqnarray} \endnumparts Consider the irreducible representation of the above algebra in the angular momentum basis spanned by the common eigenvectors $|j,m;r,\zeta\rangle$ of the operators ${\bi J}^2=J_+J_-+J_3^2-J_3$, $J_3$, ${\bi X}^2$ and ${\bi J}\bdot{\bi X}/r$ \numparts \begin{eqnarray} &&{\bi J}^2 |j,m;r,\zeta\rangle = j(j+1) |j,m;r,\zeta\rangle,\qquad J_3 |j,m;r,\zeta\rangle=m|j,m;r,\zeta\rangle,\\ &&{\bi X}^2 |j,m;r,\zeta\rangle=r^2 |j,m;r,\zeta\rangle,\qquad ({\bi J}\bdot{\bi X}/r) |j,m;r,\zeta\rangle=\zeta|j,m;r,\zeta\rangle, \end{eqnarray} \endnumparts where $-j\le m\le j$. Recall that \begin{equation} J_\pm |j,m;r,\zeta\rangle=\sqrt{(j\mp m)(j\pm m+1)}\,|j,m\pm 1;r,\zeta\rangle.
\end{equation} The operators $X_\pm$ and $X_3$ act on the vectors $|j,m;r,\zeta\rangle$ in the following way: \numparts \begin{eqnarray} X_+ |j,m;r,\zeta\rangle &=&-\frac{r\sqrt{(j+1)^2-\zeta^2}\sqrt{(j+m+1)(j+m+2)}} {(j+1)\sqrt{(2j+1)(2j+3)}}|j+1,m+1;r,\zeta\rangle\nonumber\\ &&{}+\frac{\zeta r\sqrt{(j-m)(j+m+1)}}{j(j+1)}|j,m+1;r,\zeta\rangle\nonumber\\ &&{}+\frac{r\sqrt{j^2-\zeta^2}\sqrt{(j-m-1)(j-m)}}{j\sqrt{(2j-1)(2j+1)}} |j-1,m+1;r,\zeta\rangle,\\ X_- |j,m;r,\zeta\rangle &=&\frac{r\sqrt{(j+1)^2-\zeta^2}\sqrt{(j-m+1)(j-m+2)}} {(j+1)\sqrt{(2j+1)(2j+3)}}|j+1,m-1;r,\zeta\rangle\nonumber\\ &&{}+\frac{\zeta r\sqrt{(j-m+1)(j+m)}}{j(j+1)}|j,m-1;r,\zeta\rangle\nonumber\\ &&{}-\frac{r\sqrt{j^2-\zeta^2}\sqrt{(j+m-1)(j+m)}}{j\sqrt{(2j-1)(2j+1)}} |j-1,m-1;r,\zeta\rangle,\\ X_3 |j,m;r,\zeta\rangle &=&\frac{r\sqrt{(j+1)^2-\zeta^2}\sqrt{(j-m+1)(j+m+1)}} {(j+1)\sqrt{(2j+1)(2j+3)}}|j+1,m;r,\zeta\rangle\nonumber\\ &&\fl\fl{}+\frac{\zeta rm}{j(j+1)}|j,m;r,\zeta\rangle+ \frac{r\sqrt{j^2-\zeta^2}\sqrt{(j-m)(j+m)}}{j\sqrt{(2j-1)(2j+1)}}|j-1,m;r,\zeta \rangle. \end{eqnarray} \endnumparts An immediate consequence of (3.9) is the existence of the minimal $j=j_{\rm min}$ satisfying \begin{equation} j_{\rm min}=|\zeta| . \end{equation} Thus, it turns out that in the representation defined by (3.9) the twist $\zeta $ can be only integer or half integer. We finally write down the orthogonality and completeness conditions satisfied by the vectors $|j,m;r,\zeta\rangle$ such that \begin{eqnarray} &&\langle j,m;r,\zeta|j',m';r,\zeta\rangle=\delta_{jj'}\delta_{mm'},\\ &&\sum_{j=|\zeta|}^{\infty}\sum_{m=-j}^{j} |j,m;r,\zeta\rangle\langle j,m;r,\zeta|=I, \end{eqnarray} where $I$ is the identity operator. \section{Definition of coherent states for a particle on a sphere} Now, an experience with the circle indicates that one should identify by means of the $e(3)$ algebra an analogue of the unitary operator $U$ (2.6), representing the position of a particle on a sphere. 
To do this, let us recall that a counterpart of the ``position'' $e^{{\rm i}\varphi}$ on the circle $S^1$ is a unit length imaginary quaternion which can be represented with the help of the Pauli matrices $\sigma_i$, $i=1,\,2,\,3$, as \begin{equation} \eta = {\rm i}{\bi n}\bdot{\bsigma}, \end{equation} where ${\bi n}^2=1$. Notice that $\eta$ is simply an element of the $SU(2)$ group and it is related to the $S^2\approx SU(2)/U(1)$ quotient space. Therefore the most natural choice for the ``position operator'' of a particle on a sphere is to set \begin{equation} V=\hbox{$\scriptstyle 1\over r$}\bsigma\bdot{\bi X}, \end{equation} where $X_i$, $i=1,\,2,\,3$ obey (3.4) and (3.9) and we have omitted for convenience the imaginary factor i. Furthermore, let us introduce a version of the Dirac matrix operator \cite{11} \begin{equation} K := -(\bsigma\bdot{\bi J}+1). \end{equation} Observe that \begin{equation} V^\dagger=V,\qquad K^\dagger=K. \end{equation} Making use of the operators $V$ and $K$ we can write the relations defining the $e(3)$ algebra in the space of the unitary irreducible representation introduced above as \numparts \begin{eqnarray} ({\rm Tr}\bsigma K)^2 &=& 4K(K+1),\\ {}[K,V]_+ &=& {\rm Tr}KV,\\ V^2&=&I, \end{eqnarray} \endnumparts where ${\rm Tr}A=A_{11}+A_{22}$, and the subscript ``+'' designates the anti-commutator. In particular, \begin{equation} {\rm Tr}KV=-2{\bi J}\bdot{\bi X}/r=-2\zeta . \end{equation} It should also be noted that in view of (4.4) and (4.5{\em c}) $V$ satisfies the unitarity condition $V^\dagger V=I$. We now introduce the vector operator ${\bi Z}$ generating, via the eigenvalue equation analogous to (2.10), the coherent states for a particle on a sphere $S^2$. The experience with the circle (see eq.\ (2.13)) suggests the following form of the ``polar decomposition'' for the matrix operator counterpart $Z$ of the operator ${\bi Z}$: \begin{equation} Z=e^{-K}V. 
\end{equation} Indeed, it is easy to see that in the case of the circular motion along the equator, defined semiclassically by $J_1=J_2=0$ and $X_3=0$, $Z$ reduces to the diagonal matrix operator with $Z$ given by (2.13) and its Hermitian conjugate on the diagonal. Furthermore, using (4.5{\em b}) we find \begin{equation} Z-Z^{-1} = 2\zeta K^{-1}\sinh K. \end{equation} Motivated by the complexity of the problem, in the following we confine ourselves to the simplest case of the twist $\zeta=0$, when (4.8) takes the form \begin{equation} Z^2=I. \end{equation} The general case with arbitrary $\zeta\ne0$ will be discussed in a separate work. Besides (4.9), for $\zeta=0$ we also have the remarkably simple form of the relation (4.5{\em b}) such that \begin{equation} [K,V]_+ = 0. \end{equation} Notice that the case $\zeta=0$ is the ``most classical'' one. Indeed, the projection of the angular momentum onto the direction of the radius vector should vanish for a classical particle on a sphere. It should also be noted that, in view of (3.10), the $j$'s and $m$'s labelling the basis vectors $|j,m;r,\zeta\rangle$ are integer in the case of the twist $\zeta =0$. We finally point out that the condition $\zeta=0$ ensures the invariance of the irreducible representation of the $e(3)$ algebra under time inversions and parity transformations, which change the sign of the product ${\bi J}\bdot{\bi X}$. Clearly, demanding time-reversal or parity invariance when $\zeta\ne0$, one should work with representations involving both $\zeta$ and $-\zeta$. We now return to (4.7). Making use of (4.10) and the fact that the matrix operator $V$, in view of (4.2), is traceless, we obtain for $\zeta=0$ \begin{equation} {\rm Tr}Z=0. \end{equation} Hence, \begin{equation} Z = \bsigma\bdot{\bi Z}. \end{equation} Taking into account (4.9) we get from (4.12) \begin{equation} {\bi Z}^2=1, \end{equation} and \begin{equation} [Z_i,Z_j]=0,\qquad i,j=1,\,2,\,3.
\end{equation} As with (4.2), describing in the matrix language the position of a quantum particle on a sphere, the matrix operator (4.12) can only be interpreted as a convenient arrangement of the operators $Z_i$ generating the coherent states, simplifying the algebraic analysis of the problem. Accordingly, we define the coherent states for the quantum mechanics on a sphere in terms of the operators $Z_i$, as the solutions of the eigenvalue equation such that \begin{equation} {\bi Z} |{\bi z}\rangle = {\bi z} |{\bi z}\rangle, \end{equation} where in view of (4.13) ${\bi z}^2=1$. What is ${\bi Z}$? Using (4.7), (4.2), (4.3) and setting $\zeta=0$, we find after some calculation \begin{eqnarray} {\bi Z} &=&\left(\frac{e^{\frac{1}{2}}}{\sqrt{1+4{\bi J}^2}}{\rm sinh}\hbox{$\scriptstyle 1\over2 $}\sqrt{1+4{\bi J}^2}+e^{\frac{1}{2}}{\rm cosh}\hbox{$\scriptstyle 1\over2 $} \sqrt{1+4{\bi J}^2}\right){{\bi X}\over r}\nonumber\\ &&{}+{\rm i}\left(\frac{2e^{\frac{1}{2}}}{\sqrt{1+4{\bi J}^2}}{\rm sinh} \hbox{$\scriptstyle 1\over2 $}\sqrt{1+4{\bi J}^2}\right){\bi J}\times{{\bi X}\over r}. \end{eqnarray} We remark that the $Z_i$ have a structure resembling that of the standard annihilation operators. In fact, one can easily check that ${\bi Z}$ can be written as a combination \begin{equation} {\bi Z}=a{\bi X}+{\rm i}b{\bi P}, \end{equation} of the ``position operator'' ${\bi X}$ and the ``momentum'' ${\bi P}$, where the coefficients $a$ and $b$ are functions of ${\bi J}^2$. We finally point out that the derivation of the operator ${\bi Z}$ (4.16) without the knowledge of the matrix operator $Z$ seems to be a very difficult task. \section{Construction of the coherent states} In this section we construct the coherent states specified by the eigenvalue equation (4.15).
On projecting (4.15) on the basis vectors $|j,m;r\rangle\equiv|j,m;r,0\rangle$ and using (3.7{\em a}), (3.8) and (3.9) with $\zeta=0$ we arrive at a system of linear difference equations satisfied by the Fourier coefficients of the expansion of the coherent state $|{\bi z}\rangle$ in the basis $|j,m;r\rangle$. The direct solution of such a system in the general case seems to be a difficult task. Therefore, we adopt the following technique. We first solve the eigenvalue equation for ${\bi z}= {\bi n}_3=(0,0,1)$, and then generate the coherent states from the state $|{\bi n}_3\rangle$ using the fact (see (4.16)) that ${\bi Z}$ is a vector operator. As demonstrated in the next section, the case with ${\bi z}={\bi n}_3$ refers to ${\bi x}=(0,0,r)$ and ${\bi l}={\bf 0}$, where ${\bi x}$ is the position and ${\bi l}$ the angular momentum, respectively, i.e., the particle resting at the ``North Pole'' of the sphere. Let us write down the eigenvalue equation (4.15) for ${\bi z}={\bi n}_3$ \begin{equation} {\bi Z} |{\bi n}_3\rangle={\bi n}_3 |{\bi n}_3\rangle.
\end{equation} Using the following relations which can be easily derived with the help of (4.16), (3.7{\em a}), (3.8) and (3.9) with $\zeta =0$: \numparts \begin{eqnarray} Z_1 |j,m;r\rangle &=&-\frac{1}{2}e^{-j-1}\sqrt{\frac{(j+m+1)(j+m+2)}{(2j+1)(2j+3)}} |j+1,m+1;r\rangle\nonumber\\ &&{}+\frac{1}{2}e^j\sqrt{\frac{(j-m-1)(j-m)}{(2j-1)(2j+1)}} |j-1,m+1;r\rangle\nonumber\\ &&+\frac{1}{2}e^{-j-1}\sqrt{\frac{(j-m+1)(j-m+2)}{(2j+1)(2j+3)}} |j+1,m-1;r\rangle\nonumber\\ &&{}-\frac{1}{2}e^j\sqrt{\frac{(j+m-1)(j+m)}{(2j-1)(2j+1)}} |j-1,m-1;r\rangle,\\ Z_2 |j,m;r\rangle &=&\frac{{\rm i}}{2}e^{-j-1}\sqrt{\frac{(j+m+1)(j+m+2)}{(2j+1)(2j+3)}} |j+1,m+1;r\rangle\nonumber\\ &&{}-\frac{{\rm i}}{2}e^j\sqrt{\frac{(j-m-1)(j-m)}{(2j-1)(2j+1)}} |j-1,m+1;r\rangle\nonumber\\ &&+\frac{{\rm i}}{2}e^{-j-1}\sqrt{\frac{(j-m+1)(j-m+2)}{(2j+1)(2j+3)}} |j+1,m-1;r\rangle\nonumber\\ &&{}-\frac{{\rm i}}{2}e^j\sqrt{\frac{(j+m-1)(j+m)}{(2j-1)(2j+1)}} |j-1,m-1;r\rangle,\\ Z_3 |j,m;r\rangle &=&e^{-j-1}\sqrt{\frac{(j-m+1)(j+m+1)}{(2j+1)(2j+3)}} |j+1,m;r\rangle\nonumber\\ &&{}+e^j\sqrt{\frac{(j-m)(j+m)}{(2j-1)(2j+1)}}|j-1,m;r\rangle, \end{eqnarray} \endnumparts it can be easily checked that the solution to (5.1) is given by \begin{equation} |{\bi n}_3\rangle=\sum_{j=0}^{\infty}e^{-\frac{1}{2}j(j+1)}\sqrt{2j+1}|j,0;r\rangle. \end{equation} Now, using the commutator \begin{equation} [{\bi w}\bdot{\bi J},{\bi Z}]=-{\rm i}{\bi w}\times{\bi Z}, \end{equation} where ${\bi w}\in{\Bbb C}^3$, we generate the complex rotation of ${\bi Z}$ \begin{equation} e^{{\bi w}\bdot{\bi J}}{\bi Z}e^{-{\bi w}\bdot{\bi J}}= \cosh\sqrt{{\bi w}^2}\,{\bi Z}-{\rm i}\frac{\sinh\sqrt{{\bi w}^2}} {\sqrt{{\bi w}^2}} {\bi w}\times{\bi Z}+\frac{1-\cosh\sqrt{{\bi w}^2}}{{\bi w}^2}{\bi w} ({\bi w}\bdot{\bi Z}). 
\end{equation} Taking into account (5.5) and (4.15) we find that the coherent states can be expressed by \begin{equation} |{\bi z}\rangle = e^{{\bi w}\bdot{\bi J}}|{\bi n}_3\rangle, \end{equation} where ${\bi w}$ is given by \begin{equation} {\bi w}=\frac{{\rm arccosh}z_3}{\sqrt{1-z_3^2}}{\bi z}\times{\bi n}_3. \end{equation} It thus appears that the coherent states can be written as \begin{equation} |{\bi z}\rangle = \exp\left[\frac{{\rm arccosh}z_3}{\sqrt{1-z_3^2}} ({\bi z}\times{\bi n}_3)\bdot{\bi J}\right] |{\bi n}_3\rangle. \end{equation} We remark that the coherent states are generated analogously to the case of the circle, described by equation (2.16). The formula (5.8) can furthermore be written in the form \begin{equation} |{\bi z}\rangle = e^{\mu J_-}e^{\gamma J_3}e^{\nu J_+} |{\bi n}_3\rangle, \end{equation} where \begin{equation} \mu =\frac{z_1+{\rm i}z_2}{1+z_3},\qquad \nu=\frac{-z_1+{\rm i}z_2}{1+z_3},\qquad \gamma =\ln\frac{1+z_3}{2}. \end{equation} Finally, eqs.\ (5.9), (5.3), (3.7{\em a}) and (3.8) taken together yield the following formula for the coherent states: \begin{equation} \fl |{\bi z}\rangle =\sum_{j=0}^{\infty}e^{-\frac{1}{2}j(j+1)} \sqrt{2j+1}\sum_{m=0}^{j}\frac{\nu^m}{m!}\frac{(j+m)!}{(j-m)!} e^{\gamma m}\sum_{k=0}^{j+m}\frac{\mu^k}{k!} \sqrt{\frac{(j-m+k)!}{(j+m-k)!}} |j,m-k;r\rangle, \end{equation} where $\mu ,\,\nu$ and $\gamma$ are expressed by (5.10) and ${\bi z}^2=1$.
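As a consistency check (not spelled out in the text), one can verify directly that (5.3) solves (5.1). Writing $|{\bi n}_3\rangle=\sum_j c_j|j,0;r\rangle$ with $c_j=e^{-\frac{1}{2}j(j+1)}\sqrt{2j+1}$, the relation (5.2{\em c}) gives for the coefficient of $|j,0;r\rangle$ in $Z_3|{\bi n}_3\rangle$

```latex
\begin{equation*}
  c_{j-1}\,e^{-j}\,\frac{j}{\sqrt{(2j-1)(2j+1)}}
  +c_{j+1}\,e^{\,j+1}\,\frac{j+1}{\sqrt{(2j+1)(2j+3)}}
  =e^{-\frac{1}{2}j(j+1)}\,\frac{j+(j+1)}{\sqrt{2j+1}}=c_j ,
\end{equation*}
```

while the $m=\pm1$ components generated by $Z_1$ and $Z_2$ according to (5.2{\em a}) and (5.2{\em b}) cancel pairwise, so that indeed ${\bi Z}|{\bi n}_3\rangle={\bi n}_3|{\bi n}_3\rangle$.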
Taking into account the identities \begin{equation} \sum_{s=0}^{n}\frac{(s+k)!}{(s+m)!s!(n-s)!}z^s=\frac{k!}{m!n!}\,\, {}_2F_1(-n,k+1,m+1;-z), \end{equation} \begin{equation} C_n^\alpha(x) =\frac{\Gamma(n+2\alpha)}{\Gamma(n+1)\Gamma(2\alpha)} \,{}_2F_1(-n,n+2\alpha,\alpha+\hbox{$\scriptstyle 1\over2 $}; \hbox{$\scriptstyle 1\over2 $}(1-x)), \end{equation} where ${}_2F_1(a,b,c;z)$ is the hypergeometric function, $C_n^\alpha(x)$ are the Gegenbauer polynomials and $\Gamma(x)$ is the gamma function, we obtain \begin{equation} \fl \langle j,m;r|{\bi z}\rangle = e^{-\frac{1}{2}j(j+1)}\sqrt{2j+1}\, \frac{(2|m|)!}{|m|!}\sqrt{\frac{(j-|m|)!}{(j+|m|)!}}\left( \frac{-\varepsilon(m)z_1+{\rm i}z_2}{2}\right)^{|m|} C_{j-|m|}^{|m|+\frac{1}{2}} (z_3), \end{equation} where $\varepsilon(m)$ is the sign of $m$. Let us recall, in the context of the relation (5.14), that the polynomial dependence of the projections of the coherent states onto the discrete basis vectors on the complex numbers parametrizing those states is one of their most characteristic properties. Clearly, the polynomials (5.14) should span, via the ``resolution of the identity operator'', the Fock-Bargmann representation. We recall that the existence of such a representation is one of the most important properties of coherent states. The problem of finding the Fock-Bargmann representation in the present case of the coherent states for a particle on a sphere is technically complicated and will be discussed in a separate work. Finally, notice that the coherent states $|{\bi z}\rangle$ are evidently stable under rotations. \section{Coherent states and the classical phase space} We now show that the introduced coherent states for a quantum particle on a sphere are labelled by points of the classical phase space, that is $T^*S^2$.
Referring back to eq.\ (4.16) and taking into account the fact that the classical limit corresponds to large $j$'s, we arrive at the following parametrization of ${\bi z}$ by points of the phase space: \begin{equation} {\bi z}=\cosh|{\bi l}|\,\frac{{\bi x}}{r}+{\rm i}\frac{\sinh|{\bi l}|} {|{\bi l}|}\,{\bi l}\times \frac{{\bi x}}{r}, \end{equation} where the vectors ${\bi l},\,{\bi x}\in{\Bbb R}^3$ fulfil ${\bi x}^2=r^2$ and ${\bi l}\bdot{\bi x}=0$, i.e., we assume that ${\bi l}$ is the classical angular momentum and ${\bi x}$ is the radius vector of a particle on a sphere. In accordance with the formulae (4.15) and (4.13) the vector ${\bi z}$ satisfies ${\bi z}^2=1$. Thus, the vector ${\bi z}$ is really parametrized by the points $({\bi x},{\bi l})$ of the classical phase space $T^*S^2$. Consider now the expectation value of the angular momentum operator ${\bi J}$ in a coherent state. The explicit formulae which can be derived with the help of (3.7{\em a}), (3.8), (3.12) and (5.14) are too complicated to reproduce herein. From computer simulations it follows that \begin{equation} \langle{\bi J}\rangle_{\bi z} =\frac{\langle {\bi z}|{\bi J}|{\bi z}\rangle}{\langle {\bi z}|{\bi z}\rangle}\approx{\bi l}. \end{equation} Nevertheless, in contrast to the case of the circular motion, the approximate relation (6.2) does not hold down to arbitrarily small $|{\bi l}|$. Namely, we have found that whenever $|{\bi l}|\sim1$, then (6.2) is not valid. Note that, restoring dimensional quantities, in formulae like (3.6) we measure $|{\bi l}|$ in units of $\hbar$, so in physical units we deal rather with ${\bi L}=\hbar {\bi l}$. For $|{\bi l}|\ge10$ the relative error $|(\langle J_i\rangle_{\bi z}-l_i)/\langle J_i\rangle_{\bi z}|$, $i=1,\,2,\,3$, is small. More precisely, if $|{\bi l}|\sim10$, then $|(\langle J_i\rangle_{\bi z}-l_i)/\langle J_i\rangle_{\bi z}|\sim$1 per cent.
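Returning to the parametrization (6.1), it is straightforward to check explicitly (as the text states) that it is consistent with ${\bi z}^2=1$. Using ${\bi x}^2=r^2$, ${\bi l}\bdot{\bi x}=0$ and ${\bi x}\bdot({\bi l}\times{\bi x})=0$, one finds

```latex
\begin{equation*}
  {\bi z}\bdot{\bi z}
  =\cosh^2\!|{\bi l}|\,\frac{{\bi x}^2}{r^2}
   -\frac{\sinh^2\!|{\bi l}|}{|{\bi l}|^2}\,
    \frac{|{\bi l}\times{\bi x}|^2}{r^2}
  =\cosh^2\!|{\bi l}|-\sinh^2\!|{\bi l}|=1 ,
\end{equation*}
```

since $|{\bi l}\times{\bi x}|^2=|{\bi l}|^2 r^2$ when ${\bi l}\bdot{\bi x}=0$.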
In other words, in the case of the motion on a sphere, the quantum fluctuations are not negligible for $|{\bi L}|\sim$1 $\hbar$ and the description based on the concept of the classical phase space is then not adequate. However, it must be borne in mind that the condition $|{\bi L}|\ge$ 10 $\hbar$, under which (6.2) holds, is not the same as the classical limit $|{\bi l}|\to\infty$. We only point out that $10\,\hbar\approx 10^{-33}\,{\rm J}\cdot{\rm s}$. It thus appears that the parameter ${\bi l}$ in (6.2) can be identified with the classical angular momentum divided by $\hbar$. We now study the role of the parameter ${\bi x}$ in (6.1). As with the angular momentum operator ${\bi J}$, the explicit relations obtained by means of (3.9) with $\zeta=0$, (3.12) and (5.14) are too complicated to write down herein. The computer simulations indicate that \begin{equation} \langle{\bi X}\rangle_{\bi z}=\frac{\langle{\bi z}|{\bi X}|{\bi z}\rangle} {\langle {\bi z}|{\bi z}\rangle}\approx e^{-\frac{1}{4}}{\bi x}. \end{equation} It seems that the formal resemblance of the formulae (6.3) and (2.19), the latter referring to the case of the circular motion, is not accidental. The range of application of (6.3) is the same as for (6.2), i.e., $|{\bi l}|\ge10$. Because of the factor $e^{-\frac{1}{4}}$, the average value of ${\bi X}$ does not belong to the sphere with radius $r$. Proceeding analogously to the case of the circle, we introduce the relative average value of ${\bi X}$ of the form \begin{equation} \langle\!\langle X_i\rangle\!\rangle_{\bi z}=\frac{\langle X_i\rangle_{\bi z}} {\langle X_i\rangle_{{\bi w}_i}},\qquad i=1,\,2,\,3, \end{equation} where $|{\bi w}_i\rangle$ is a coherent state with \begin{equation} {\bi w}_k=\cosh|{\bi l}|{\bi n}_k+{\rm i}\frac{\sinh|{\bi l}|}{|{\bi l}|} {\bi l}\times{\bi n}_k,\qquad k=1,\,2,\,3, \end{equation} where ${\bi n}_k$ is the unit vector along the $k$th coordinate axis and ${\bi l}$ is the same as in (6.1).
In view of (6.3) and (6.4) we have \begin{equation} \langle\!\langle {\bi X}\rangle\!\rangle_{\bi z}\approx{\bi x}. \end{equation} Therefore, the relative expectation value $\langle\!\langle {\bi X}\rangle\! \rangle_{\bi z}$ seems to be the most natural quantity to describe the average position of a particle on a sphere. We have thus shown that the parameter ${\bi x}$ can be immediately related to the classical radius vector of a particle on a sphere. As in the case of the circular motion (see formulae (2.18) and (2.21)), we interpret the relations (6.2) and (6.6) as the best possible approximation of the classical phase space. In this sense the coherent states labelled by points of such a deformed phase space are closest to the classical ones. The quantum fluctuations, which are the reason for the approximate nature of (6.2) and (6.6), are in our opinion a characteristic feature of quantum mechanics on a sphere. We finally remark that a discussion of the Heisenberg uncertainty relations analogous to that referring to the circle (see section 2) can also be performed in the case of the coherent states for a particle on a sphere. For example, a counterpart of the formula (2.22) is \begin{equation} (\Delta {\bi J})^2\ge\frac{1}{2}\frac{\frac{1}{2}{\rm Tr}\langle V\rangle^2}{1-\frac{1}{2}{\rm Tr}\langle V\rangle^2}, \end{equation} where according to eq.\ (4.2) we have $\langle V\rangle=\frac{1}{r} \bsigma\bdot\langle {\bi X}\rangle$. Such a discussion, as well as the detailed analysis of the Heisenberg uncertainty relations for quantum mechanics on a compact manifold, will be the subject of a separate paper which is in preparation. \section{Simple application: the rotator} We now illustrate the actual treatment by the example of a free twist-0 particle on a sphere, i.e.\ the rotator. The corresponding Hamiltonian is given by \begin{equation} \hat H=\hbox{$\scriptstyle 1\over2 $}{\bi J}^2.
\end{equation} By (3.7{\em a}) the normalized solution of the Schr\"odinger equation \begin{equation} \hat H |E\rangle = E |E\rangle \end{equation} can be expressed as \begin{equation} |E\rangle= |j,m;r\rangle,\qquad E=\hbox{$\scriptstyle 1\over2 $}j(j+1). \end{equation} We now discuss the distribution of the energies in the coherent state. The computer simulations indicate that the function \begin{equation} p_{j,m}({\bi x},{\bi l})=\frac{|\langle j,m;r|{\bi z}\rangle|^2}{\langle {\bi z}|{\bi z}\rangle},\qquad -j\le m\le j, \end{equation} determined by (5.14) and (6.1), which gives the probability of finding the system in the state $|j,m;r\rangle$ when the system is in the normalized coherent state $|{\bi z}\rangle/\sqrt{\langle{\bi z}|{\bi z}\rangle}$, has the following properties. For fixed integer $m=l_3$ the function $p_{j,m}$ has a maximum at $j_{\rm max}$ coinciding with the integer nearest to the positive root of the equation \begin{equation} j(j+1)={\bi l}^2, \end{equation} (see Fig.\ 1). Thus, it turns out that the parameter $\frac{1}{2}{\bi l}^2$ can be regarded as the energy of the particle. Further, for fixed integer $j$ such that (7.5) holds, the function $p_{j,m}$ (see Fig.\ 2) has a maximum at $m_{\rm max}$ coinciding with the integer nearest to $l_3$. It thus appears that the parameter $l_3$ can be identified with the projection of the angular momentum on the $x_3$ axis. \section{Conclusion} In this work we have introduced coherent states for a quantum particle on a sphere. An advantage of the formalism used is that the coherent states are labelled by points of the classical phase space. The authors have not found alternative constructions of coherent states for quantum mechanics on a sphere preserving this fundamental property of coherent states. As pointed out in Sec.\ 6, the quantum fluctuations arising in the case of the motion on a sphere are bigger than those taking place for the circular motion.
This observation is consistent with the appearance of the additional degree of freedom for the motion on a sphere. We remark that, as with the particle on a circle, we deal within the actual treatment with a deformation of the classical phase space expressed by the approximate relations (6.2) and (6.6). We also point out that, besides (6.2) and (6.6), the quasi-classical character of the introduced coherent states is confirmed by the behaviour of the distribution of the energies investigated in section 7. It seems that the approach introduced in this paper is not restricted to the study of the quasi-classical aspects of the quantum motion on a sphere. For example, the results of this work would be of importance in the theory of quantum chaos. In fact, in this theory the kicked rotator is one of the most popular model systems. Because of the well-known difficulties in the analysis of the Heisenberg uncertainty relations occurring in the case of observables having compact spectrum, like the position operator ${\bi X}$ satisfying the $e(3)$ algebra (3.4), we have not studied them herein. The analysis of the Heisenberg uncertainty relations as well as the discussion of the case with a nonvanishing twist will be performed in future work. \section*{References}
\section{Introduction} \label{Introduction} The study of matchings in cubic graphs has a long history in combinatorics, dating back to Petersen's theorem~\cite{konig}. Recently, the problem has found several applications in computer graphics and geographic information systems~\cite{biedl01,gopi04,remacle11,daniels11}. Before presenting the contributions of this paper, we consider the following motivating example in the area of computer graphics. Triangle meshes are often used to model solid objects. Nevertheless, quadrangulations are more appropriate than triangulations for some applications~\cite{daniels11, tri2quad}. In such situations, we can convert a triangulation into a quadrangulation by merging pairs of adjacent triangles (Figure~\ref{fig:bunny}). Hence, the problem can be modeled as a matching problem by considering the dual graph of the triangulation, where each triangle corresponds to a vertex and edges exist between adjacent triangles. The dual graph of a triangle mesh is a bridgeless cubic graph, for which Petersen's theorem guarantees that a perfect matching always exists~\cite{bm,biedl01}. Also, such a matching can be computed in $O(n \log^2 n)$ time~\cite{diks10}. \begin{figure}[t] \centering \includegraphics[width = .7\textwidth]{bunny_detail2} \caption{Stanford Bunny Model: triangular mesh (left) and two quadrangular meshes.} \label{fig:bunny} \end{figure} Unfortunately, from the computer graphics perspective, some pairs of triangles lead to undesirable quadrilaterals (for example, when the triangles are skinny or lie on very different planes). A natural extension to the cubic graph model assigns a weight to each edge (i.e., to each pair of adjacent triangles), which expresses how desirable the corresponding quadrilateral is. 
In Figure~\ref{fig:bunny} (middle and right) we can compare the results when two different weight functions are used to create quadrangular meshes; observe that the middle one has more skinny quadrilaterals than the right one. However, even when using good weight functions, an inherent difficulty arises: The maximum weight matching may not be a perfect matching. In this paper, we study the relationship between these two types of matchings, in order to understand how much worse (in terms of total weight) we do by selecting the maximum weight perfect matching instead of the maximum weight matching. We provide bounds for the ratio between the maximum weight of a perfect matching and the maximum weight of a matching. We take advantage of the existing rich literature about bridgeless cubic graphs, a historical graph class much studied in the context of important graph theory conjectures, such as the Four Color Conjecture~\cite{Appel}, the Berge-Fulkerson Conjecture, and the Cycle Double Cover Conjecture~\cite{Celmins}. We formalize the aforementioned concepts in the next paragraphs, after some definitions. Let $G=(V,E)$ be a connected undirected graph. A \emph{bridge} is an edge $uv \in E$ such that all paths between $u$ and $v$ go through $uv$. A graph is \emph{bridgeless} if it has no bridges. A graph is \emph{cubic} if every vertex has degree exactly $3$. A cubic graph is bridgeless if and only if it is biconnected~\cite{bm}. A \emph{matching} in $G$ is a set $M \subset E$ such that no two edges in $M$ share a common vertex. Recall that given a matching $M$ in a graph $G$, we say that $M$ \emph{saturates} a vertex $v$ and that vertex $v$ is \emph{$M$-saturated}, if some edge of $M$ is incident to $v$~\cite{bm}. A matching $P$ is \emph{perfect} if $|P| = |V|/2$. A matching is \emph{maximal} if it is not a subset of any other matching and is \emph{maximum} if it has maximum cardinality.
A cubic graph $G$ is \emph{Tait-colorable} if the edges of $G$ can be partitioned into three perfect matchings. All Tait-colorable graphs are bridgeless~\cite{bm}. Let $w : E \rightarrow \RE^+$ be the \emph{weight} of the edges. It will be convenient to allow the weight of some edges to be zero as long as there is at least one edge with nonzero weight. Given a subset $E' \subseteq E$, we refer to the quantity $w(E') = \sum_{e \in E'} w(e)$ as the \emph{weight} of $E'$. A \emph{maximum weight matching} is a matching $M^*(G)$ of maximum possible weight in $G$. A \emph{maximum weight perfect matching} is a perfect matching $P^*(G)$ of maximum possible weight (among all perfect matchings of $G$). Given a graph $G$ which admits a perfect matching, we define $$\eta(G) = \min_{w:E\rightarrow \RE^+} \frac{w(P^*(G))}{w(M^*(G))}.$$ The value of $\eta(G)$ can be as small as $0$. To see that, consider the path of length $3$ where the middle edge has weight $1$ and the two remaining edges have weight $0$. This path has a single perfect matching $P$ with weight $w(P)=0$, while there is a non-perfect matching with weight $1$. Note that we allow edge weights to be $0$, for otherwise $\eta(G)$ could be made arbitrarily small as the weights approach $0$. A graph $G$ with $\eta(G)=0$ represents one extreme of the problem. In this case, requiring a matching to be perfect may result in a matching with zero weight, where a matching with arbitrarily high weight may exist. In the other extreme, we have graphs $G$ with $\eta(G) = 1$. In this case, for every $w$ there is a perfect matching with the same weight as the maximum weight matching. Our first result consists of a precise characterization of these two extremes (Section~\ref{extreme}). Consider a graph $G$ that is known to be a member of a graph class $\GG$. Since $\eta(G)$ is only defined for graphs that admit a perfect matching, we assume that all graphs in $\GG$ admit perfect matchings.
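The zero-ratio path example is easy to reproduce by exhaustive enumeration; the sketch below (illustrative helper code, not part of the paper) lists all matchings of the 3-edge path with the weights used above.

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3)]             # path of length 3
weight = {(0, 1): 0, (1, 2): 1, (2, 3): 0}   # only the middle edge has weight
vertices = {v for e in edges for v in e}

def matchings(edges):
    """Yield every subset of pairwise vertex-disjoint edges."""
    for size in range(len(edges) + 1):
        for sub in combinations(edges, size):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):
                yield sub

all_m = list(matchings(edges))
perfect = [m for m in all_m if 2 * len(m) == len(vertices)]

def w(m):
    return sum(weight[e] for e in m)

best_matching = max(map(w, all_m))    # the middle edge alone has weight 1
best_perfect = max(map(w, perfect))   # the unique perfect matching has weight 0
print(best_perfect / best_matching)   # ratio 0, matching eta(G) = 0
```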
Different graphs $G,G' \in \GG$ may have $\eta(G) \neq \eta(G')$. In the worst case scenario, in terms of our motivation of approximating a maximum weight matching with a perfect matching, we have a graph $G$ with a small value of $\eta(G)$. Therefore, we define the value of $\eta(\GG)$ in terms of this worst case behavior: $$\eta(\GG) = \min_{G \in \GG} \eta(G).$$ Sometimes, when the graph $G$ or the graph class $\GG$ is clear from the context, we refer to $\eta(G)$ or $\eta(\GG)$ simply as $\eta$. An immediate consequence of this definition is that given two graph classes $\GG_1 \subseteq \GG_2$, we have $\eta(\GG_1) \geq \eta(\GG_2)$. Therefore, in order to prove bounds on $\eta$ that apply to as many graph classes as possible, it is useful to obtain lower bounds on $\eta$ for ``large'' graph classes and upper bounds on $\eta$ for ``small'' graph classes. It is important to remark that in many applications it is not possible to know a priori all graphs that will be given, thus the worst-case scenario is very useful information. Therefore, we will treat the upper bounds as valuable knowledge about the whole graph class even when they are proved only for a single example, although most of the obtained upper bounds were extended to infinite families of cubic graphs. We show that $\eta(\GG) \geq 1/3$, where $\GG$ is the target class of bridgeless cubic graphs; the bound therefore extends to all its subclasses. We show in Section~\ref{cubic} that $\eta(G) = 1/3$ for a particular planar hamiltonian cubic graph $G$, and hence for all the classes that contain it. Note that both planar bridgeless cubic graphs and hamiltonian cubic graphs are Tait-colorable~\cite{bm}. We also show that the Petersen graph has $\eta = 1/3$, and hence the class of generalized Petersen graphs has $\eta = 1/3$. For the class of bipartite cubic graphs, we present an interesting gap. We show that the smallest bipartite planar nonhamiltonian bridgeless cubic graph~\cite{asano82} has $\eta \leq 1/2$.
Therefore, the class of nonhamiltonian bipartite cubic graphs has $1/3\leq \eta \leq 1/2$. Finally, we investigate the two well-known families of Blanu\v{s}a First and Blanu\v{s}a Second snarks and present an infinite family of snarks starting from the Petersen graph for which all members satisfy $\eta = 1/3$. \section{Extreme Cases} \label{extreme} We defined $\eta(G)$ for a graph $G$ which has a perfect matching. By definition $0 \leq \eta(G) \leq 1$. In order to get used to the definition of $\eta$, we characterize the graphs that attain the extreme values of $\eta$. We start by characterizing the graphs $G$ with $\eta(G) = 1$. \begin{thm} A graph $G$ has $\eta(G) = 1$ if and only if every maximal matching of $G$ is a perfect matching. \end{thm} \begin{proof} First, we prove that if $\eta(G) = 1$, then every maximal matching of $G$ is a perfect matching. For the sake of a contradiction, suppose $\eta(G) = 1$ and let $M$ be a maximal matching that is not a perfect matching. Define $w(e) = 1$ if $e \in M$ and $w(e) = 0$ otherwise. For any perfect matching $P$ of $G$ it holds that $M \not\subseteq P$. Thus, there is at least one edge $e \in M$ such that $e \not \in P$. Therefore, $w(M) > w(P)$ and consequently $\eta(G) < 1$. Second, we prove that if every maximal matching of $G$ is a perfect matching, then $\eta(G) = 1$. For a fixed weight function $w : E \rightarrow \RE^+$, let $M^*(G)$ be the matching of maximum weight. If there are edges with zero weight, it is possible that $M^*(G)$ is not a perfect matching. Nevertheless, there is a perfect matching of maximum weight $P^*(G) \supseteq M^*(G)$ which can be obtained from $M^*(G)$ by including edges of zero weight. Consequently, $w(P^*(G)) = w(M^*(G))$ and $\eta(G)~=~1$. \end{proof} Note that, if we allow only positive nonzero weights, then every matching of maximum weight is a maximal matching. 
The condition of Theorem 2.1 that every maximal matching is a perfect matching now implies that every matching of maximum weight is actually perfect, the sets of matchings of type M* and P* are equal, and sufficiency is immediate. However, if we allow negative weights, then sufficiency does not hold. On the other hand, necessity holds regardless of allowing zero or negative weights. The previous theorem implies, for example, that every balanced complete bipartite graph $K_{n,n}$ has $\eta(K_{n,n}) = 1$. Next, we characterize graphs $G$ with $\eta(G) = 0$. \begin{thm} A graph $G$ has $\eta(G) = 0$ if and only if there is an edge $e \in E$ that is not contained in any perfect matching. \end{thm} \begin{proof} First, we prove that if for every edge $e \in E$ there is a perfect matching that contains $e$, then $\eta(G) > 0$. We remind the reader that by definition of the weight function, at least one edge $e$ has $w(e) > 0$. Therefore, there is a perfect matching $P$ that contains $e$ and has $w(P) > 0$. Consequently, $\eta(G) > 0$. Second, we prove that if there is an edge $e \in E$ that is not in any perfect matching, then $\eta(G) = 0$. Let the weight of $e$ be $1$ and the weight of all other edges be $0$. In this case, all perfect matchings have weight $0$ and the maximum weight matching has weight $1$. Consequently, $\eta(G) = 0$. \end{proof} The previous theorem implies, for example, that $\eta(G) = 0$ for every cubic graph $G$ which contains a bridge and admits a perfect matching, since an edge that is adjacent to the bridge is not in any perfect matching. Cubic graphs with $1$ or $2$ bridges always admit a perfect matching~\cite{bm}. \section{Bounds for Bridgeless Cubic Graphs} \label{cubic} In this section, we provide upper and lower bounds on $\eta$ for our target class of bridgeless cubic graphs. We start with a lower bound for arbitrary bridgeless cubic graphs. Remark that the lower bound extends to all subclasses.
Clearly, if $G$ is Tait-colorable, then each edge is contained in a perfect matching, which implies that $\eta(G)>0$. Actually, we get a better lower bound, since a Tait-colorable graph $G$ admits 3 perfect matchings so that each edge is covered precisely once, which gives $\eta(G)\geq\frac{1}{3}$. The famous Berge-Fulkerson Conjecture~\cite{Fulkerson1971,Giuseppe} says that every bridgeless cubic graph admits a family of 6 perfect matchings such that each edge is covered precisely twice. The proof of Lemma~\ref{lem:lb} establishes the lower bound of $\eta(G) \geq 1/3$, for an arbitrary bridgeless cubic graph, by using a property~\cite{conjecture1} more general than a Tait-coloring but weaker than the Berge-Fulkerson Conjecture. \begin{lem} \label{lem:lb} Let $G$ be a bridgeless cubic graph. Then, $\eta(G) \geq 1/3$. \end{lem} \begin{proof} It is known that given a bridgeless cubic graph $G$, there is an integer $k$ (depending on $G$) such that $G$ has a family of $3k$ perfect matchings that cover each edge of $G$ exactly $k$ times~\cite{conjecture1}. Let $P_1,\ldots,P_{3k}$ denote such perfect matchings. Assume without loss of generality that $w(P_1) \geq \cdots \geq w(P_{3k})$. Let $M^*(G)$ be the maximum weight matching. Since each edge of $G$ is covered exactly $k$ times, we have $w(P_1) + \cdots + w(P_{3k}) = k\,w(E) \geq k\,w(M^*(G))$, and therefore \[w(M^*(G)) \leq \frac{w(P_1) + \cdots + w(P_{3k})}{k} \leq 3\;w(P_1) \leq 3\;w(P^{*}(G)),\] which gives $\eta(G) \geq 1/3$. \end{proof} Since upper bounds on $\eta$ extend to superclasses, it is useful to prove upper bounds for graphs that are contained in several relevant classes. A particular subclass of bridgeless cubic graphs is the class of Tait-colorable graphs. Two subclasses of Tait-colorable graphs are planar bridgeless cubic graphs and hamiltonian cubic graphs. We start by proving a tight bound for the intersection of the two aforementioned classes. \begin{lem} \label{lem:ub} There are infinitely many planar hamiltonian cubic graphs $G$ with $\eta(G) = 1/3$.
\end{lem} \begin{proof} First, let $G$ be the cubic graph represented in Figure~\ref{fig:tait13}(a). Note that $G$ is planar and hamiltonian (see Figure~\ref{fig:tait13}(b)). By Lemma~\ref{lem:lb}, $\eta(G) \geq 1/3$. We now show that $\eta(G) \leq 1/3$. Let $e_1,e_2,e_3$ and $v_{1,2},v_{2,3},v_{1,3}$ be the edges and vertices labeled in Figure~\ref{fig:tait13}(c). We can set $w(e_1)=w(e_2)=w(e_3)=1$ and set all other edge weights to $0$. A perfect matching may contain at most one of $e_1,e_2,e_3$. To see that, note that if a matching contains two such edges $e_i,e_j$, then vertex $v_{i,j}$ indicated in Figure~\ref{fig:tait13}(c) cannot be saturated by any edge of the matching. Therefore, the matching $\{e_1,e_2,e_3\}$ has weight $3$ while a perfect matching may have weight at most~$1$, which implies that $\eta(G) \leq 1/3$. One way to obtain infinitely many such graphs is to remove the central vertex of the graph in Figure~\ref{fig:tait13} and connect the remaining graph through a matching of size 3 to another planar hamiltonian cubic graph with one vertex removed. Another way to obtain infinitely many such graphs is to remove an edge incident to the central vertex of this graph and connect the remaining graph through a matching of size 2 to another planar hamiltonian cubic graph with one edge removed. Both constructions are classical in the theory of cubic graphs~\cite{isaacs75}. \end{proof} \begin{figure}[ht] \centering \includegraphics[scale = .35]{Tait_atual} \caption{(a)~Tait-colorable cubic graph $G$ used in the proof of Lemma~\ref{lem:ub}. (b)~Tait-coloring and hamiltonian cycle of $G$. (c)~Edges and vertices used in the proof of Lemma~\ref{lem:ub}.} \label{fig:tait13} \end{figure} The following upper bound for bipartite cubic graphs uses an easy but powerful counting argument that will be generalized next. \begin{proposition} \label{prop:cube} Let $Q$ be the cube graph. Then, $\eta(Q) \leq 2/3$.
\end{proposition} \begin{proof} Let $e_1,e_2,e_3$ be the edges labeled in Figure~\ref{fig:cube}. We can set $w(e_1)=w(e_2)=w(e_3)=1$ and set all other edge weights to $0$. Since the two vertices of $Q$ not saturated by $\{e_1,e_2,e_3\}$ form an independent set, any perfect matching must use two of its four edges to saturate them, and hence may contain at most two edges among $e_1,e_2,e_3$; the proposition follows. \end{proof} \begin{figure}[ht] \centering \includegraphics[width = .15\linewidth]{Cubo} \caption{Cube graph $Q$ with edges used in the proof of Proposition~\ref{prop:cube} marked. } \label{fig:cube} \end{figure} Next, we introduce a lemma that generalizes Proposition~\ref{prop:cube} and will help us to prove upper bounds for two cubic graphs in the well-known class of generalized Petersen graphs~\cite{Watkins_gen}. \begin{lem}\label{maxmatchings} Let $M$ be a maximal matching of a bridgeless cubic graph $G$ and $S=V \smallsetminus \{M$-saturated vertices$\}$ be the corresponding independent set. We have the upper bound: $$\eta(G)\leq \frac{\left|V\right|-2|S|}{|V|-|S|}$$ \end{lem} \begin{proof} Since $S$ is an independent set, any perfect matching must have at least $|S|$ edges not of $M$, each vertex of $S$ being saturated by its own such edge, and therefore at most $\frac{|V|}{2}-|S|$ edges of any perfect matching are in $M$. Every cubic graph has an even number of vertices, which implies that $|S|$ is even and $|M|=\frac{|V|}{2}-\frac{|S|}{2}$; the lemma follows by setting $w(e)=1$ if $e \in M$ and $w(e)=0$ otherwise. \end{proof} A generalized Petersen graph $G(n,k)$, for $n \geq 3$ and $1 \leq k \leq \lfloor (n-1)/2 \rfloor$, is a graph with vertex set $\{u_0,\ldots,u_{n-1},v_0,\ldots,v_{n-1}\}$, and the following three types of edges for $0 \leq i \leq n-1$: $u_iu_{i+1}$, $u_iv_i$, and $v_iv_{i+k}$, with subscripts modulo $n$~\cite{Watkins_gen}. The class of generalized Petersen graphs does not contain the graph in Figure~\ref{fig:tait13} but contains the graph of Figure~\ref{fig:cube}, since $Q=G(4,1)$.
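Lemma~\ref{maxmatchings} can be checked by brute force on the cube; the sketch below uses one concrete maximal matching of size 3 (an assumption standing in for the matching of Figure~\ref{fig:cube}, which is not reproduced here) whose two unsaturated vertices are antipodal, giving the bound $(8-2\cdot 2)/(8-2)=2/3$ of Proposition~\ref{prop:cube}.

```python
from itertools import combinations

# 3-cube Q: vertices are 3-bit labels, edges join labels differing in one bit.
V = range(8)
E = [(u, v) for u, v in combinations(V, 2) if bin(u ^ v).count("1") == 1]

# One concrete maximal matching of size 3: its two unsaturated vertices
# 0 and 7 are antipodal, hence nonadjacent, so S = {0, 7} is independent.
M = {(1, 3), (2, 6), (4, 5)}
S = {0, 7}

# All perfect matchings of Q: 4 pairwise disjoint edges covering all 8 vertices
# (the 3-cube has 9 of them).
perfect = [P for P in combinations(E, 4)
           if len({v for e in P for v in e}) == 8]

bound = (8 - 2 * len(S)) / (8 - len(S))   # Lemma bound: 2/3
print(len(perfect), max(len(M & set(P)) for P in perfect), bound)
```

Every perfect matching found indeed contains at most two edges of $M$, matching the counting argument of the proof.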
We consider the Nauru graph $N=G(12,5)$, which has several possible maximal matchings to be studied, and the famous Petersen graph $R=G(5,2)$, the only non-Tait-colorable graph in the class of generalized Petersen graphs~\cite{Castagna1972}. \begin{proposition} \label{prop:bipartiteubNauru} Let $N$ be the Nauru graph. Then, $\eta(N) \leq 1/2$. \end{proposition} \begin{proof} Let $M=\{e_1,\dots,e_8\}$ be a maximal matching of $N$ presented in Figure~\ref{fig:nauru}. The set $S$ of 8 vertices that are not end vertices of $e_1,\dots,e_8$ is an independent set of the graph. Hence, $M$ is a maximal matching that is not perfect. Moreover, each perfect matching of the graph has exactly 12 edges, and 8 of these edges must each saturate exactly one vertex of the independent set $S$. Therefore, each perfect matching of $N$ must have at most 4 edges in $M$. So, Lemma~\ref{maxmatchings} gives the upper bound. \end{proof} \begin{figure}[ht] \centering \includegraphics[width = .3\linewidth]{Nauru_atual} \caption{Nauru graph $N$ with the maximal matching considered in the proof of Proposition~\ref{prop:bipartiteubNauru}.} \label{fig:nauru} \end{figure} Similarly, we show that the Petersen graph $R = G(5,2)$ has $\eta(R) = 1/3$. \begin{proposition} \label{lem:petersenub} Let $R$ be the Petersen graph. Then, $\eta(R) = 1/3$. \end{proposition} \begin{proof} By Lemma~\ref{lem:lb}, $\eta(R) \geq 1/3$. We now show that $\eta(R) \leq 1/3$. Let $M=\{e_1,e_2,e_3\}$ be the maximal matching presented in Figure~\ref{fig:petersen}. The set $S$ of 4 vertices that are not end vertices of $e_1,e_2,e_3$ is an independent set of the graph. Hence, $M$ is a maximal matching that is not perfect. Moreover, each perfect matching of the graph has exactly 5 edges, and 4 of these edges must each saturate exactly one vertex of the independent set $S$. Therefore, each perfect matching of $R$ must have at most 1 edge in $M$. So, Lemma~\ref{maxmatchings} again gives the upper bound.
\end{proof} \begin{figure}[ht] \centering \includegraphics[width = .2\linewidth]{PetersenR} \caption{Petersen graph $R$ with the maximal matching considered in the proof of Proposition~\ref{lem:petersenub}.} \label{fig:petersen} \end{figure} The smallest planar nonhamiltonian bipartite bridgeless cubic graph satisfies $\eta \leq 1/2$, by a construction that actually can be applied to an infinite family. \begin{lem} \label{lem:bipartiteub} There are infinitely many nonhamiltonian cubic graphs $G$ with $\eta(G) \leq 1/2$. \end{lem} \begin{proof} Let $e_1,e_2$ be the edges labeled in Figure~\ref{fig:spbn}(a). We can set $w(e_1)=w(e_2)=1$ and set all other edge weights to $0$. Observe in Figure~\ref{fig:spbn}(b) that there are~3 connected components after removing $e_1$ and $e_2$, two of which are odd; therefore a perfect matching may contain at most one of $e_1,e_2$, and the lemma follows. To obtain infinitely many such graphs, we can remove any vertex which is incident to neither $e_1$ nor $e_2$ and connect the remaining graph through a matching of size 3 to another (planar or not) cubic graph with one vertex removed. Note that such a construction does not affect the key property in Figure~\ref{fig:spbn}(b): two components of the obtained graph after removing $e_1$ and $e_2$ are still odd. \end{proof} \begin{figure}[ht] \centering \includegraphics[scale = .4]{planar_bip_nonha} \caption{(a) The smallest bipartite planar nonhamiltonian bridgeless cubic graph with edges used in the proof of Lemma~\ref{lem:bipartiteub} marked. (b) The three connected components after removing $e_1$ and $e_2$; observe that there are two odd components, therefore there is no perfect matching containing both $e_1$ and $e_2$.} \label{fig:spbn} \end{figure} A \emph{snark} is a cubic bridgeless graph that is not Tait-colorable; the smallest snark is the Petersen graph $R$~\cite{bm}.
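The Petersen computation can also be checked by brute force; the sketch below uses one concrete maximal matching of size 3 in the standard labelling of $G(5,2)$ (an assumed stand-in for the matching of Figure~\ref{fig:petersen}, which is not reproduced here) and confirms that each of the 6 perfect matchings of $R$ contains at most one of its edges.

```python
from itertools import combinations

# Petersen graph G(5,2): outer vertices 0..4, inner vertices 5..9.
E = ([(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
     + [(i, i + 5) for i in range(5)]                 # spokes
     + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # inner pentagram
E = [tuple(sorted(e)) for e in E]

# One concrete maximal matching of size 3: vertices 0, 2, 8, 9 stay
# unsaturated and form an independent set.
M = {(3, 4), (1, 6), (5, 7)}

# All perfect matchings: 5 pairwise disjoint edges covering all 10 vertices
# (the Petersen graph has exactly 6 of them).
perfect = [P for P in combinations(E, 5)
           if len({v for e in P for v in e}) == 10]

print(len(perfect))                            # 6 perfect matchings
print(max(len(M & set(P)) for P in perfect))   # at most 1 edge of M each
```

Giving the three edges of $M$ weight 1 and all others weight 0 then yields a matching of weight 3 against perfect matchings of weight at most 1, i.e.\ the upper bound $1/3$.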
A \emph{dot-product} of two cubic graphs $G$ and $H$ is any cubic graph obtained from $G\smallsetminus \{x,y\}$ and $H\smallsetminus \{aa', bb'\}$, where $x$ and $y$ are two adjacent vertices of $G$ and $\{aa', bb'\}$ is a matching of $H$. Let $a,a',b,b'$ be the four vertices of degree 2 in $H\smallsetminus \{aa', bb'\}$, and $x_1,x_2,y_1,y_2$ be the four vertices of degree 2 in $G\smallsetminus \{x,y\}$. We connect $H\smallsetminus \{aa', bb'\}$ to $G\smallsetminus \{x,y\}$ through a matching of size 4 in the resulting graph: $ax_1,a'x_2,by_1,b'y_2$. The dot-product is a famous operation for constructing infinitely many snarks, since the dot-product of two snarks is a snark. The two Blanu\v{s}a snarks $B^1$ and $B^2$ of order 18 were obtained by considering $G=H=R$, the Petersen graph~\cite{Blanusa}. Two infinite families Blanu\v{s}a First and Blanu\v{s}a Second (see Figure~\ref{fig:B1}) were subsequently defined by recursively applying the dot-product with $R$, starting respectively with $B^1$ and $B^2$~\cite{Watkins}. \begin{proposition}\label{B1} Let $B^1$ be the first member of the Blanu\v{s}a First family. Then, $\eta(B^1) \leq 2/5$. \end{proposition} \begin{proof} Let $M=\{e_1,e_2,e_3,e_4,e_5\}$ be the matching of $B^1$ shown in Figure~\ref{fig:B1}. We claim that a perfect matching of $B^1$ can contain at most two edges of $M$. So, setting $w(e_1)=w(e_2)=w(e_3)=w(e_4)=w(e_5)=1$ and all other edge weights to $0$, we have the upper bound. Indeed, to prove the claim, note first that the removal of $M$ from $B^1$ leaves four isolated vertices on the left, say $v_{2,5}, v_{3,5}, v_{2,4} $ and $v_{3,4}$, and a matching of size two on the right. If a matching $M'$ of $B^1$ contains $e_1$, and two other edges of the set $\{e_2, e_3, e_4, e_5\}$, then one of the vertices $v_{2,5}, v_{3,5}, v_{2,4} $ and $v_{3,4}$ cannot be saturated by $M'$.
If a perfect matching of $B^1$ contains three edges of the set $\{e_2, e_3, e_4, e_5\}$, then it has to contain the fourth edge, as otherwise the five remaining vertices on the right cannot all be saturated. But then the remaining six vertices on the left cannot all be saturated. \end{proof} \begin{figure}[ht] \centering \includegraphics[scale = .3]{Blanusa_1} \hspace{.5cm} \includegraphics[scale=.3]{Blanusa_2_atual} \caption{Both Blanu\v{s}a snarks with the relevant matchings.} \label{fig:B1} \end{figure} Note that a snark obtained by the dot-product of the Petersen graph (with two nonadjacent edges at distance 2 removed) and another snark (with two adjacent vertices removed) satisfies this upper bound, which applies to infinite families of snarks, including the historical Blanu\v{s}a First. Unfortunately, we were not able to establish the same upper bound $2/5$ for the other Blanu\v{s}a snark, but Lemma~\ref{maxmatchings} provides the upper bound of $1/2$. \begin{proposition}\label{B2} Let $B^2$ be the first member of the Blanu\v{s}a Second family. Then, $\eta(B^2) \leq 1/2$. \end{proposition} \begin{proof} Let $M=\{e_1,e_2,e_3,e_4,e_5,e_6\}$ be the maximal matching of $B^2$ shown in Figure~\ref{fig:B1}. The set $S$ of 6 vertices that are not end vertices of $e_1,e_2,e_3,e_4,e_5,e_6$ is an independent set of the graph. Hence, $M$ is a maximal matching that is not perfect. Moreover, each perfect matching of the graph has exactly 9 edges, and 6 of these edges must each saturate exactly one vertex of the independent set $S$. Therefore, each perfect matching of $B^2$ must have at most 3 edges in $M$. So Lemma~\ref{maxmatchings} gives the upper bound. \end{proof} The weaker bound obtained next applies to infinite families of snarks, including the historical Blanu\v{s}a Second.
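As an aside, the dot-product construction described above can be exercised in code. The sketch below glues two copies of the Petersen graph with one concrete (and here hypothetical) choice of $x$, $y$, $aa'$ and $bb'$, then checks that the result is a bridgeless cubic graph of order 18 that admits no Tait-coloring, i.e.\ a snark; which of the two Blanu\v{s}a snarks arises depends on the choices made, and the coloring search is a naive backtracking sketch.

```python
def petersen(offset=0):
    """Edge list of the Petersen graph on vertices offset..offset+9."""
    E = ([(i, (i + 1) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
    return [(u + offset, v + offset) for u, v in E]

# G = Petersen on 0..9: remove the adjacent vertices x = 0 and y = 1.
G = [e for e in petersen() if 0 not in e and 1 not in e]
# H = Petersen on 10..19: remove the disjoint edges (10,11) and (12,13).
H = [e for e in petersen(10) if set(e) not in ({10, 11}, {12, 13})]
# Join a, a' to the old neighbours of x and b, b' to those of y; placing
# the joining edges before H helps the backtracking prune early.
dot = G + [(10, 4), (11, 5), (12, 2), (13, 6)] + H

verts = sorted({v for e in dot for v in e})
deg = {v: sum(v in e for e in dot) for v in verts}

def connected(edges, vs):
    """Breadth-first reachability check."""
    seen, todo = {vs[0]}, [vs[0]]
    while todo:
        u = todo.pop()
        for a, b in edges:
            for w in ((b,) if a == u else (a,) if b == u else ()):
                if w not in seen:
                    seen.add(w)
                    todo.append(w)
    return len(seen) == len(vs)

bridgeless = all(connected([f for f in dot if f != e], verts) for e in dot)

def colorable(edges, i=0, col=None):
    """Backtracking search for a proper 3-edge-coloring."""
    col = col if col is not None else {}
    if i == len(edges):
        return True
    u, v = edges[i]
    for c in (0, 1, 2):
        if all(col.get(f) != c for f in edges[:i] if u in f or v in f):
            col[edges[i]] = c
            if colorable(edges, i + 1, col):
                return True
            del col[edges[i]]
    return False

print(len(verts), set(deg.values()), bridgeless, colorable(dot))
```

The search confirms that no 3-edge-coloring exists, in line with the fact that the dot-product of two snarks is a snark.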
\begin{proposition}\label{B2geral} Every snark obtained by the dot-product of the Petersen graph (with two nonadjacent edges at distance 1 removed) and another snark (with two adjacent vertices removed) satisfies the upper bound of $2/3$. \end{proposition} \begin{proof} Let $M=\{e_1,e_2,e_3,e_4,e_5,e_6\}$ be the matching of $B^2$ shown in Figure~\ref{fig:B2}. We claim that a perfect matching of $B^2$ can contain at most four edges of $M$. Hence, setting $w(e_1)=w(e_2)=w(e_3)=w(e_4)=w(e_5)=w(e_6)=1$ and all other edge weights to $0$, we obtain the upper bound. Indeed, to prove the claim, note first that the removal of $M$ from $B^2$ leaves two isolated vertices on the left, say $a$ and $b$, and a matching of size two on the right. If a matching $M'$ of $B^2$ contains five edges of $M$, namely $\{e_3,e_4,e_5,e_6\}$ together with exactly one edge of $\{e_1, e_2\}$, then one of the vertices $a, b$ cannot be saturated by $M'$. If a perfect matching of $B^2$ contains three edges of the set $\{e_3,e_4,e_5,e_6\}$ and also $e_1$ and $e_2$, then it has to contain the fourth edge of the set $\{e_3,e_4,e_5,e_6\}$, as otherwise the five remaining vertices on the right cannot all be saturated. But then the two remaining vertices $a$ and $b$ on the left cannot both be saturated. \end{proof} \begin{figure}[ht] \centering \includegraphics[scale = .3]{Blanusa_2} \caption{The Blanu\v{s}a snark $B^2$ with the matching considered in Proposition~\ref{B2geral}.} \label{fig:B2} \end{figure} \section*{Infinitely many snarks with $\eta = 1/3$} Starting from the Petersen graph $R$ with its matching $M$ from Figure~\ref{fig:petersen}, it is possible to obtain an infinite family of snarks with the same upper bound of $1/3$ using a classical construction in the theory of cubic graphs~\cite{isaacs75}, as follows.
To construct a new member $G$ of the family with a matching $M_G$, start from two already constructed graphs $H_1$ and $H_2$ with their matchings $M_{H_1}$ and $M_{H_2}$, and remove from them the edges $uv \not\in M_{H_1}$ and $xy \not\in M_{H_2}$. Without loss of generality, we can assume that $u$ is $M_{H_1}$-saturated and $v$ is not, and likewise that $x$ is $M_{H_2}$-saturated and $y$ is not. Now we let $G$ be the graph obtained from these graphs by joining $u$ with $y$ and $v$ with $x$. Finally, let $M_G$ be the union of $M_{H_1}$ and $M_{H_2}$, and note that in $G$ it is also true that any edge not in $M_G$ has one end that is $M_G$-saturated and one that is not. Any graph $G$ with a matching $M_G$ obtained by this construction will have $|M_G| = 3|V(G)|/10$, and from Lemma~\ref{maxmatchings} we have $\eta(G) \leq 1/3$. \section*{} The following theorem combines the upper and lower bounds on $\eta$ for several graph classes, and follows immediately from the previous lemmas. \begin{thm} The following graph classes have $\eta = 1/3$: Tait-colorable graphs, planar bridgeless cubic graphs, hamiltonian cubic graphs, generalized Petersen graphs, and all members of an infinite family of snarks constructed starting from the Petersen graph. Also, the class $\BB$ of bipartite nonhamiltonian cubic graphs, all of which are Tait-colorable, satisfies $1/3 \leq \eta(\BB) \leq 1/2$. \end{thm} \section{Conclusion} We introduced the parameter $\eta$ to quantify the cost of perfection for matchings. We characterized the graphs with extreme values of $\eta$ and provided tight bounds on $\eta(\GG)$ for relevant graph classes $\GG$. The dual graph of a triangulation is a bridgeless cubic graph, and many recent works have been devoted to quadrangulations~\cite{tri2quad,gopi04,lizier10a,remacle11,daniels11}. The specific classes studied here are related to triangulations that are important for computer-graphics applications.
For instance, hamiltonian meshes are used to accelerate the graphics pipeline~\cite{arkin96,eppstein04}, and bipartite cubic graphs (planar or not) can be used to improve the rendering of 3D geometric models~\cite{sander08}. The obtained bounds aid the decision of whether to use a perfect matching or, alternatively, a two-step quadrangulation method~\cite{tarini10}, which first obtains a maximum weight matching and then deals with the unmatched triangles. Many open problems still remain. For the class of bipartite nonhamiltonian cubic graphs, all we know is that $1/3 \leq \eta \leq 1/2$. We propose to extend the construction that gives $\eta = 1/3$ for the Petersen graph to other infinite families of snarks. Another possible direction of work consists of calculating $\eta$ for cubic graphs that are duals of 4-8~meshes~\cite{velho01,velho03,velho04} (Figure~\ref{fig:48mesh}). Such meshes have received a lot of attention recently~\cite{amorim12,goes08,weber07,goldenstein05}. Furthermore, the problem of bounding $\eta$ is interesting \emph{per se}; it is therefore natural to investigate the value of $\eta$ for other graph classes whose graphs admit a perfect matching, such as regular bipartite graphs. \begin{figure}[ht] \centering \includegraphics[width = .5\linewidth]{Mesh48} \caption{A 4-8~mesh (left) and its dual bipartite cubic graph (right).} \label{fig:48mesh} \end{figure} \section*{Acknowledgments} The authors would like to thank Hugo Nobrega for the insightful discussions, Vahan Mkrtchyan for the proof of Lemma~\ref{lem:lb}, and the Stanford Computer Graphics Laboratory for the bunny model. \bibliographystyle{plain}
\section{Introduction} The various versions of the two-dimensional sigma model are among the most important and best studied models of quantum field theory. On the one hand, such models possess important symmetries, in particular conformal invariance. On the other hand, they can be analyzed in detail with difficult, but currently available mathematical methods. Here, we shall investigate what is probably its most general and, both physically and mathematically, richest version: the two-dimensional supersymmetric nonlinear sigma model, introduced in~\cite{brink1976locally, deser1976complete}. This model possesses a subtle mathematical structure, see~\cite{deligne1999quantum, jost2009geometry}. The physical and mathematical structure of the model depends on the symmetries it possesses. These include generalized conformal invariance, super Weyl symmetry, and supersymmetry, hence the name of the model. While supersymmetry requires anti-commuting variables, a version of this model with all fields commuting has been intensively studied by mathematicians in the last decade. The mathematical analysis started with various reduced forms of this model. The simplest instances are harmonic functions, which correspond to the linear sigma model, and they have played an important role in analysis and geometry for a long time. The nonlinear version leads to harmonic maps instead of functions, and these are likewise well-studied objects with many applications in geometric analysis. In the super version, the map gets coupled with a super partner, a vector spinor. Chen--Jost--Li--Wang~\cite{chen2006dirac, chen2005regularity} initiated the analysis of such coupled fields, which they called Dirac-harmonic maps. The full physical model contains still further terms, some of which were considered in~\cite{chen2007, branding2015some, branding2015energy, branding2016, jost2015geometric}. Based on those works, we are now in a position to address the full model, including the gravitino terms.
The supersymmetric action functional has been mathematically studied from an algebraic and geometric perspective in a systematic way in~\cite{jost2014super}. Here we shall start to explore the analytic aspects. Let $(M,g)$ be a closed, oriented surface and $(N,h)$ a closed Riemannian manifold. We will study the super action functional $\mathbb{A}$ defined on the space \begin{equation} \mathcal{X}^{1,2}_{1,4/3}(M,N)=\{(\phi,\psi)\big| \phi\in W^{1,2}(M,N), \psi\in\Gamma^{1,4/3}(S\otimes\phi^*TN)\}, \end{equation} where by $\Gamma^{1,4/3}(S\otimes\phi^*TN)$ we mean the space of $W^{1,4/3}$ sections of the twisted spinor bundle \(S\otimes \phi^*TN\). Furthermore, in this paper the Riemannian metric \(g\) and the gravitino \(\chi\) are considered as parameters of the functional. Even though an \(L^4\)-integrability condition suffices for the finiteness of \(\mathbb{A}\), we will always assume that the gravitino \(\chi\) is a smooth section of \(S\otimes TM\). The action functional is \begin{equation}% \label{A-intro} \begin{split} \mathbb{A}(\phi, \psi;g, \chi)\coloneqq \int_M & |\mathop{}\!\mathrm{d} \phi|_{T^*M\otimes \phi^*TN}^2 + \langle \psi, \slashed{D} \psi \rangle_{S\otimes \phi^*TN} \\ & -4\langle (\mathds{1}\otimes\phi_*)(Q\chi), \psi \rangle_{S\otimes\phi^*TN} -|Q\chi|^2_{S\otimes TM} |\psi|^2_{S\otimes \phi^*TN} -\frac{1}{6} \Rm^{N}(\psi) \mathop{}\!\mathrm{d} vol_g. \end{split} \end{equation} Here $Q$ is a projection operator mapping onto a subspace of \(S\otimes TM\), \(\mathds{1}\otimes\phi_*\colon S\otimes TM\to S\otimes \phi^*TN\) denotes the map induced by the tangent map of \(\phi\), and \(\Rm^{N}(\psi)\) is a contraction of the pullback along \(\phi\) of the curvature of \(N\) with four factors of the field \(\psi\). While the precise geometric setup will be explained in Section~\ref{Sec:Preliminaries}, we already give local expressions for the third and fifth summands. Let \(\{e_\alpha\}\) be a local orthonormal frame of \(TM\) and \(\{y^i\}\) local coordinates on \(N\).
Writing \(\chi = \chi^\alpha\otimes e_\alpha\) and \(\psi = \psi^i\otimes \phi^*\left(\partial_{y^i}\right)\), it holds that \begin{align} -4\langle (\mathds{1}\otimes\phi_*)(Q\chi), \psi \rangle_{S\otimes\phi^*TN} &=2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \phi_* e_\beta, \psi \rangle_{S\otimes \phi^*TN}, \\ -\frac{1}{6}\Rm^{N}(\psi) &=-\frac{1}{6}\Rm^{N}_{ijkl}\langle \psi^i, \psi^k \rangle_S \langle \psi^j,\psi^l \rangle_S. \end{align} Since the action functional is somewhat involved, contains many different fields, and at the same time possesses rich symmetries, the derivation of the associated Euler--Lagrange equations requires substantial computations. This will be the first achievement of this paper. The result is: \begin{thm}[restate=ELThm, label=thm:EL] The Euler--Lagrange equations for the super action functional $\mathbb{A}$ are given by \begin{equation} \label{EL-eq} \begin{split} \tau(\phi)=&\frac{1}{2}\Rm^{\phi^*TN}(\psi, e_\alpha\cdot\psi)\phi_* e_\alpha-\frac{1}{12}S\nabla R(\psi) \\ & -(\langle \nabla^S_{e_\beta}(e_\alpha \cdot e_\beta \cdot \chi^\alpha), \psi \rangle_S + \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \nabla^{S\otimes\phi^*TN}_{e_\beta} \psi \rangle_S), \\ \slashed{D}\psi =& |Q\chi|^2\psi +\frac{1}{3}SR(\psi)+2(\mathds{1}\otimes \phi_*)Q\chi. \end{split} \end{equation} \end{thm} These equations already make transparent the orders of growth with which the various fields enter. $SR(\psi)$ stands for a term involving the curvature of the target $N$ that is cubic in $\psi$, see~\eqref{def-SR}, and $S\nabla R(\psi)$ involves derivatives of that curvature and is quartic in $\psi$, see~\eqref{def-SNR}. We shall then turn to the properties of the solutions of these equations. More precisely, we want to show the regularity of weak solutions, that is, of those that satisfy the Euler--Lagrange equations in the sense of distributions. The basic issues in geometric analysis are the existence, uniqueness and smoothness of nontrivial critical points.
That is, one wishes to show the existence of weak solutions and then their uniqueness and regularity. In this paper, we settle the smoothness. The Euler--Lagrange equations~\eqref{EL-eq} of this action functional turn out to be critical for the Sobolev framework, in the sense that, with initial data assumed to lie in some Sobolev spaces, the classical bootstrap arguments are not strong enough to improve the regularity. That is, the powerful scheme of elliptic regularity theory does not directly apply, and we need to utilize the structure of the equations, and in particular their symmetries, in a subtler way. Our analytical tools are the Morrey spaces, which can be viewed as finer subspaces of the Lebesgue spaces. With estimates on Riesz potentials, we can then iteratively improve the regularity, and get the system away from the critical case. Related methods have been used in~\cite{wang2010remark, sharp2016regularity, branding2015some}. Then the Rivi\`ere regularity theory (see e.g.~\cite{riviere, riviere2010conformally, sharp2013decay}) can be applied to the map component of the critical pairs. Finally, we can show that \begin{thm}% \label{theorem 1} The critical points of the super action functional \begin{equation} \begin{split} \mathbb{A}\colon\mathcal{X}^{1,2}_{1,4/3}(M,N) & \to \mathbb{R}, \\ (\phi,\psi)& \mapsto \mathbb{A}(\phi,\psi;g,\chi), \end{split} \end{equation} are smooth, provided \(g\) and \(\chi\) are smooth. \end{thm} This result should also help in finding solutions of its associated Euler--Lagrange equations. Moreover, our method is of interest in its own right, as we shall explain later. Further geometric and analytic aspects of this model will be addressed in subsequent work. As in the aforementioned works, we shall work with the version of the model that only has commuting fields. As explained in~\cite{chen2011boundary}, this depends on an appropriate representation of the Clifford algebra involved. 
Thus, in contrast to~\cite{jost2014super}, we shall not have to work in the category of supermanifolds, but can confine ourselves to the setting of Riemannian geometry. Yet, in the framework of supermanifolds, the action functional~\eqref{A-intro} and its symmetries obtain a natural geometric interpretation. In~\cite{jost2014super} it was shown that the fields \(g\) and \(\chi\) determine a \emph{super Riemann surface}, a super geometric generalization of a Riemann surface. Recall that Teichmüller theory can be developed with the help of the harmonic action functional. The functional \(\mathbb{A}\) can be seen as a super analogue of the harmonic action functional. Hence it is expected that an understanding of the solution space of the Euler--Lagrange equations~\eqref{EL-eq} helps to study geometric properties of the moduli space of super Riemann surfaces. Concerning the organization of this paper, we shall first set up the geometric background for the model and introduce the action functional as well as its basic properties. Then we shall derive its Euler--Lagrange equations. For our regularity scheme, we need to bring the equations into a suitable form. This treatment of the Euler--Lagrange equations, which builds upon~\cite{zhu2009regularity, wang2009regularity, chen2011boundary, sharp2016regularity, branding2015some}, is crucial for our paper, and we hope that it will also be useful for the further mathematical investigation of the model. We can then finally show the regularity of weak solutions of the Euler--Lagrange equations. The main lemma in improving the regularity appears in the last section in a somewhat more general form than needed for our present purposes. \section{Preliminaries}% \label{Sec:Preliminaries} In this section we summarize the geometrical background and thereby also fix the notation used in the subsequent sections.
The main purpose of this section is to provide a geometrical setup such that the action functional~\eqref{A-intro} can be seen as a real-valued action functional with non-vanishing Dirac-action. Those two requirements will be satisfied using a real four-dimensional spinor representation. In contrast, in the description of non-linear sigma models on two-dimensional manifolds, two-dimensional real or complex spinor representations are usually taken into account, see for example~\cite{chen2006dirac, jost2014super}. For the convenience of the reader we add some comments on how these different geometrical settings are related. \subsection{} Let \((M,g)\) be a closed, oriented, two-dimensional Riemannian spin manifold with fixed spin structure. The corresponding \(\operatorname{Spin}(2)\) principal bundle is denoted by \(P_{\SpinGroup}\). For any bilinear form \(b\) on \(TM\) we denote by \(\operatorname{Cl}(M, b)\) the corresponding Clifford algebra bundle, which is isomorphic to the quotient of the tensor algebra by the two-sided ideal generated by \begin{equation} X\otimes Y + Y\otimes X - 2b(X, Y), \end{equation} where \(X, Y\in\Gamma(TM)\). In the following we will only use \(b=\pm g\). The typical fiber of \({\operatorname{Cl}\left(M, g\right)}\), denoted by \(\operatorname{Cl}_{2,0}\), is a simple algebra and isomorphic to \(\mathfrak{gl}(2,\mathbb{R})\). We denote this isomorphism by \(\gamma^+\colon \operatorname{Cl}_{2,0}\to\mathfrak{gl}(2,\mathbb{R})\). Hence, the spinor bundle of \({\operatorname{Cl}\left(M, g\right)}\) is given by \(\Sigma=P_{\SpinGroup}\times_{\gamma^+} \mathbb{R}^2\) where \(\operatorname{Spin}(2)\subset\mathfrak{gl}(2,\mathbb{R})\) acts by left-multiplication on \(\mathbb{R}^2\). We denote the Clifford multiplication of a tangent vector \(X\) with \(s\in\Gamma(\Sigma)\) by \(\gamma^+(X)s\) or simply by \(X\cdot s\) if no confusion arises. 
By its construction as an associated bundle to \(P_{\SpinGroup}\), the bundle \(\Sigma\) possesses a natural fiber metric~\(g_{\SpinorBundleP}\) such that the Clifford action by tangent vectors is symmetric. The Levi-Civita connection on~\(TM\) lifts to the spin connection \(\nabla^{\Sigma}\) on \(\Sigma\). The spin Dirac operator is defined with respect to a local \(g\)-orthonormal frame \(e_a\) by \({\pd_{\Sigma}} s = e_a \cdot \nabla^{\Sigma}_{e_a} s\) for a section \(s\) of \(\Sigma\). It is easy to see that \({\pd_{\Sigma}}\) is antisymmetric and hence for any spinor~\(s\) the Dirac action vanishes, that is, \begin{equation}% \label{eq:Diracvanish} \int_M g_{\SpinorBundleP}\left(s, {\pd_{\Sigma}} s\right) \mathop{}\!\mathrm{d}{vol}_g = 0. \end{equation} In order to avoid the vanishing of the Dirac action one may work with anti-commuting spinors, see for example~\cite{jost2014super} and references therein. Another possibility to obtain a non-vanishing Dirac action is to consider the complexification \(\Sigma^{\mathbb{C}}=\Sigma\otimes\mathbb{C}\) and the resulting Hermitian form~\(h_{\SpinorBundlePC}\). Then the operator \(i{\pd_{\Sigma}^\mathbb{C}}\), where \({\pd_{\Sigma}^\mathbb{C}}\) is the complex linear extension of \({\pd_{\Sigma}}\), is symmetric. Consequently the Dirac action \begin{equation} \int_M h_{\SpinorBundlePC}\left( s, i{\pd_{\Sigma}^\mathbb{C}} s\right) \mathop{}\!\mathrm{d}{vol}_g, \qquad s\in\Gamma(\Sigma^{\mathbb{C}}) \end{equation} does not vanish identically and is real valued. An equivalent reformulation of this approach was introduced in~\cite{chen2006dirac}. Notice, however, that the third summand of~\eqref{A-intro} involves a scalar product of two different spinors. If this scalar product were to be implemented by \(h_{\SpinorBundlePC}\), the action functional~\eqref{A-intro} would not be guaranteed to be real-valued. 
Whence we replace the two-dimensional complex spinor representation of the approach presented in~\cite{chen2006dirac} by a four-dimensional real one. This step will be explained next. \subsection{} The typical fiber of the Clifford algebra bundle \({\operatorname{Cl}\left(M, -g\right)}\) is the Clifford algebra \(\operatorname{Cl}_{0,2}\). As a real associative algebra with unit the Clifford algebra \(\operatorname{Cl}_{0,2}\) is isomorphic to the quaternions~\(\mathbb{H}\). Consequently, the left-regular representation of \(\operatorname{Cl}_{0,2}\) on itself is irreducible. Hence, we may regard the vector bundle \(S=P_{\SpinGroup}\times_{\operatorname{Spin}(2)}\operatorname{Cl}_{0,2}\) as a spinor bundle, where \(\operatorname{Spin}(2)\subset\operatorname{Cl}_{0,2}\) acts via the left-regular representation of \(\operatorname{Cl}_{0,2}\). The spinor bundle \(S\) is a four-dimensional real vector bundle. Notice that \(\operatorname{Cl}_{0,2}\) is a \(\mathbb{Z}_2\)-graded module over the \(\mathbb{Z}_2\)-graded algebra \(\operatorname{Cl}_{0,2}\). As a consequence also the spinor bundle \(S=\SpinorBundleM^0\oplus \SpinorBundleM^1\) is a \(\mathbb{Z}_2\)-graded module over the \(\mathbb{Z}_2\)-graded algebra bundle \({\operatorname{Cl}\left(M, -g\right)}\). Here, both the even and the odd part of \(S\) are isomorphic to \(\Sigma\) as associated bundles to \(P_{\SpinGroup}\). The Clifford action \(\gamma(X)\) of a tangent vector \(X\) on \(S\) must be of the form \begin{equation} \label{eq:RepresentationM} \gamma(X) = \begin{pmatrix} 0 & -\gamma^+(X) \\ \gamma^+(X) & 0 \\ \end{pmatrix} \end{equation} because it is odd with respect to the \(\mathbb{Z}_2\)-grading. Recall that \(\gamma^+(X)\) denotes the Clifford multiplication of \(X\) on \(\Sigma\), where \(X\) is considered as an element of \({\operatorname{Cl}\left(M, g\right)}\). 
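As a quick consistency check, this block form does define a representation of \({\operatorname{Cl}\left(M, -g\right)}\): squaring the matrix in~\eqref{eq:RepresentationM} gives

```latex
\gamma(X)^2
= \begin{pmatrix} -\gamma^+(X)^2 & 0 \\ 0 & -\gamma^+(X)^2 \end{pmatrix}
= -g(X,X)\,\Id_{S},
```

since \(\gamma^+(X)^2 = g(X,X)\,\Id_{\Sigma}\) on \(\Sigma\); hence \(\gamma\) satisfies the defining relation \(X\cdot X = -g(X,X)\) of \(\operatorname{Cl}_{0,2}\), as it should.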
The induced metric and spin connection on \(S\) are denoted, respectively, by \(g_{\SpinorBundleM} = g_{\SpinorBundleP}\oplus g_{\SpinorBundleP}\) and \(\nabla^S = \nabla^\Sigma\oplus\nabla^\Sigma\). The action of \(TM\subset{\operatorname{Cl}\left(M, -g\right)}\) on \(S\) is skew-symmetric with respect to \(g_{\SpinorBundleM}\). Whence the spin Dirac operator \(\pd = e_\alpha\cdot\nabla^S_{e_\alpha}\colon \Gamma(S)\to\Gamma(S)\) is symmetric with respect to the \(L^2(S)\) scalar product \begin{equation} \left<s, t\right>_{L^2(S)} =\int_M g_{\SpinorBundleM}(s, t) \mathop{}\!\mathrm{d}{vol}_g \qquad s,t\in\Gamma(S). \end{equation} In particular, the Dirac action \(\left<s,\pd s\right>_{L^2(S)}\) is non-trivial, as opposed to its \(\operatorname{Cl}_{2,0}\) counterpart~\eqref{eq:Diracvanish}. Furthermore, \(\pd\) is essentially self-adjoint, see~\cite[Chapter II, Theorem 5.7]{lawson1989spin}\footnote{Notice that this reference uses a different sign convention and naming scheme for Clifford algebras.}. \subsection{} We now explain the different complex structures on the spinor bundles \(\Sigma\) and \(S\). This will be needed later on and will help to clarify the relation to the geometrical setup introduced in~\cite{chen2006dirac}. Recall that the Riemann surface \(M\) possesses an integrable almost complex structure \(J_M\) that is defined by \begin{equation} g(J_M X, Y) = \mathop{}\!\mathrm{d}{vol}_g(X, Y) \end{equation} for all tangent vectors \(X\) and \(Y\). Consequently, the tangent bundle \(TM\) is a holomorphic line bundle. When seen as \(TM\subset{\operatorname{Cl}\left(M, g\right)}\), the almost complex structure \(J_M\) can be realized as right-multiplication by the volume form \(\omega\). With respect to a local oriented \(g\)-orthonormal frame \(e_\alpha\) the volume form is given by \(\omega=e_1\cdot e_2\). Similarly, left-multiplication by \(\omega\) induces an almost complex structure on \(\Sigma\), which we denote by \(J_\Sigma\).
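That \(J_\Sigma\) really is an almost complex structure follows from a one-line computation: since \(e_1\cdot e_2 = -e_2\cdot e_1\) and \(e_\alpha\cdot e_\alpha = 1\) in \({\operatorname{Cl}\left(M, g\right)}\), the volume element satisfies

```latex
\omega\cdot\omega
= e_1\cdot e_2\cdot e_1\cdot e_2
= -\,e_1\cdot e_1\cdot e_2\cdot e_2
= -1,
```

so that \(J_\Sigma^2 = -\Id_{\Sigma}\), and likewise \(J_M^2 = -\Id_{TM}\).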
The bundle \(\Sigma^{\mathbb{C}}=\Sigma\otimes\mathbb{C}\) decomposes into eigenbundles of \(iJ_\Sigma^\mathbb{C}\), where \(J_\Sigma^\mathbb{C}\) denotes the complex linear extension of \(J_\Sigma\). The complex line bundles \(W = (\Sigma, J_\Sigma)\) of eigenvalue \(-1\) and \(\overline{W} = (\Sigma, -J_\Sigma)\) of eigenvalue \(+1\) are, respectively, the so-called bundles of ``left- and right-handed'' Weyl spinors. On \(W=(\Sigma, J_\Sigma)\) there is a bilinear form with values in \(T^*M\) given by \begin{equation} g_{\SpinorBundleP}(s, e_\alpha\cdot t) e^\alpha, \qquad s,t \in \Gamma(\Sigma), \end{equation} where \(e^\alpha\) is the dual basis to the \(g\)-orthonormal frame \(e_\alpha\). The compatibility of Clifford multiplication and almost complex structures, \(\left(J_M X\right)\cdot t = X\cdot J_\Sigma t = -J_\Sigma\left(X\cdot t\right)\), turns the bilinear form into a complex linear isomorphism \(W\otimes_\mathbb{C}W=T^*M\). In particular \(W\) is a holomorphic vector bundle. In other words, holomorphic tangent vector fields on a Riemann surface with fixed spin structure have a ``square root''. Conversely, on a Riemann surface $(M,J_M)$ every square root of~\(TM\) gives rise to a spin structure on $M$. Obviously, the complex vector bundle \((S, J_\Sigma\oplus J_\Sigma)\) is isomorphic to \(W\oplus W\). In addition, the spinor bundle \(S\) possesses three almost complex structures \(I_S, J_S, K_S\in \operatorname{End}(S)\) that commute with the Clifford multiplication and satisfy the quaternionic relations: \(I_S^2 = J_S^2 = K_S^2 = -\Id_S\) and \(I_S = J_S\circ K_S = - K_S\circ J_S\), etc. Explicitly, they are given by \(I_S(s,t) = (-t,s)\), \(J_S(s,t) = (J_\Sigma s,-J_\Sigma t)\) and \(K_S(s,t) = (J_\Sigma t,J_\Sigma s)\) for all spinors \((s,t)\in S=\SpinorBundleM^0\oplus\SpinorBundleM^1\). Hence, \(S\) may alternatively be viewed as a quaternionic line bundle.
This may not come as a big surprise, since \(\operatorname{Cl}_{0,2}\simeq \mathbb{H} = \mathbb{R}\oplus\mathbb{R}^3\). When viewed as complex vector bundles of rank two, the three complex spinor bundles \((S, I_S)\), \((S, J_S)\) and \((S, K_S)\) are isomorphic and may be identified with \(\Sigma^{\mathbb{C}} = W\oplus\overline{W}\), whereby $\operatorname{Cl}(M,\pm g)\otimes\mathbb{C}\simeq_\mathbb{C}\operatorname{End}(\Sigma^{\mathbb{C}})$. Let us take a closer look at the identification of \((S, I_S)\) with \(\Sigma^{\mathbb{C}}\). The spinor \((s, t)\in S=\SpinorBundleM^0\oplus\SpinorBundleM^1\) is identified with \(s\otimes1 + t\otimes i\in \Sigma^{\mathbb{C}}=\Sigma\otimes\mathbb{C}\). In particular \(I_S\) is identified with \(\Id_\Sigma\otimes i\). Hence Equation~\eqref{eq:RepresentationM} can be rewritten as \(\gamma(X) = \gamma^+(X)\otimes i\), that is, the Clifford multiplication by~\(X\) on \(S\) differs from the Clifford multiplication by \(X\) on \(\Sigma\) by a factor of \(i\). In this way any representation of \({\operatorname{Cl}\left(M, g\right)}\) on \(\Sigma\) yields a purely imaginary representation of \({\operatorname{Cl}\left(M, -g\right)}\) on~\(\Sigma^{\mathbb{C}}\). Furthermore, we obtain the following identifications of Dirac operators: \begin{equation} \pd = {\pd_{\Sigma}}\otimes i = i{\pd_{\Sigma}^\mathbb{C}}. \end{equation} We now derive a convenient local expression for the Dirac operator. Let us first assume that \((M, g)\) is the Euclidean plane with standard coordinates \(x\) and \(y\). The holomorphic tangent bundle of \(M\) is then spanned by \(\partial_z = \frac12\left(\partial_x - i\partial_y\right)\). The spinor bundle \((S, I_S)=W\oplus\overline{W}\) possesses a complex basis \(s\), \(\overline{s}\) such that \(s\in W\), \(\overline{s}\) is the complex conjugate of \(s\) and \(s\otimes s =\mathop{}\!\mathrm{d}{z}\).
With respect to this basis the Clifford multiplication of \({\operatorname{Cl}\left(M, -g\right)}\) on \((S, I_S)\) is represented by \begin{align} \gamma(\partial_x) &= \begin{pmatrix} 0 & 1 \\ -1 & 0 \\ \end{pmatrix}, & \gamma\left(\partial_y\right) &= \begin{pmatrix} 0 & -i \\ -i & 0 \\ \end{pmatrix}. \end{align} Hence the Euclidean Dirac operator is given by \begin{equation} \pd = 2 \begin{pmatrix} 0 & \partial_z \\ -\partial_{\overline{z}} & 0 \\ \end{pmatrix}, \end{equation} that is, by the standard Cauchy--Riemann operators. The general, non-Euclidean Dirac operator differs from the Euclidean one by a rescaling and zero-order terms. In particular, this means that the regularity theory developed for Cauchy--Riemann equations applies. \subsection{} In this paragraph we introduce the ``super partner'' of the metric, called the gravitino. \begin{Def} A \emph{gravitino} is a smooth section of the bundle \(S\otimes TM\). \end{Def} \begin{rmk} Sometimes in the literature, e.g.~\cite{jost2014super}, a gravitino is defined as a section of the bundle $S\otimes T^*M$, but here we use the Riemannian metric $g$ to identify $T^*M$ with $TM$, for later convenience. \end{rmk} The Clifford multiplication gives a surjective map \begin{equation} \begin{split} \gamma\colon S\otimes TM&\to S \\ s\otimes v&\mapsto v\cdot s \end{split} \end{equation} and has a canonical right-inverse that is given with respect to a local \(g\)-orthonormal frame \(\{e_\alpha\}\) of \(TM\) by \begin{equation} \begin{split} \sigma\colon S&\to S\otimes TM \\ s&\mapsto -\frac12\delta^{\alpha\beta}e_\alpha\cdot s\otimes e_\beta. \end{split} \end{equation} Consequently the bundle \(S\otimes TM\) has an orthogonal direct sum decomposition \(S\otimes TM\cong S\oplus\ker\gamma\) and the maps \(P=\sigma\circ\gamma\) and \(Q=1-P\) are projection operators onto \(S\) and \(\ker \gamma\), respectively.
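That \(\sigma\) is indeed a right-inverse of \(\gamma\), and hence that \(P\) and \(Q\) are projections, follows from a one-line computation using \(e_\alpha\cdot e_\alpha = -1\) in \({\operatorname{Cl}\left(M, -g\right)}\):

```latex
\gamma(\sigma(s))
= -\frac12\,\delta^{\alpha\beta}\, e_\beta\cdot e_\alpha\cdot s
= -\frac12 \sum_{\alpha=1}^{2} e_\alpha\cdot e_\alpha\cdot s
= -\frac12\,(-2s)
= s,
```

so \(\gamma\circ\sigma = \Id_{S}\) and consequently \(P^2 = \sigma\circ(\gamma\circ\sigma)\circ\gamma = P\).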
With respect to the \(g\)-orthonormal frame \(\{e_\alpha\}\) the gravitino $\chi$ can locally be expressed as \(\chi = \chi^\alpha\otimes e_\alpha\) with $\chi^\alpha \in \Gamma_{loc}(S)$. The projection operators \(P\) and \(Q\) are given by \begin{align} P\chi &= -\frac{1}{2} e_\beta \cdot e_\alpha \cdot \chi^\alpha\otimes e_\beta, & Q\chi &= -\frac{1}{2} e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes e_\beta. \end{align} Later we will mostly be concerned with the sections of \(\ker\gamma\), because only \(Q\chi\) appears in the action functional. Notice that \(\ker\gamma\) can be identified with \((S, J_\Sigma\oplus J_\Sigma)\otimes_{\mathbb{C}} TM\) because gravitinos of the form \(s\otimes J_M v - (J_\Sigma\oplus J_\Sigma)s \otimes v\) span \(\ker\gamma\). Using the almost complex structure \(J_\Sigma\oplus J_\Sigma\) on \(S\) and \(T^*M=W\otimes_\mathbb{C} W\) we obtain the following decomposition: \begin{equation} \begin{split} S\otimes TM &= \left(W\oplus W\right)\oplus \left(\left(W\oplus W\right) \otimes_\mathbb{C}\left(W^*\otimes_\mathbb{C} W^*\right)\right) \\ &=W\oplus W\oplus \left(W\otimes_\mathbb{C} W^*\otimes_\mathbb{C} W^*\right)\oplus \left(W\otimes_\mathbb{C} W^*\otimes_{\mathbb{C}} W^*\right). \end{split} \end{equation} This is the decomposition of \(S\otimes TM\) into irreducible representations of \(\operatorname{Spin}(2)\). Up to a metric identification, the bundle \(S\otimes TM\) decomposes into two representations of type \(\frac12\) and two of type \(\frac32\). The operator \(Q\) projects onto the \(\frac32\)-parts. \subsection{} We recall the definition of the field \(\phi\) and its super partner \(\psi\), see~\cite{chen2006dirac}. Let $(N,h)$ be a Riemannian manifold, with Levi-Civita connection $\nabla^N\equiv \nabla^{TN}$. Consider a smooth map $\phi\colon M \to N$ with tangent map $T\phi\colon TM\to TN$. It induces a pullback bundle~$\phi^*TN$ over~$M$.
Equip the tensor product bundle $S\otimes \phi^*TN$ with the induced metric and connection. More precisely, let $\{y^i\}$ be local coordinates on $N$, so that $\{\phi^*(\frac{\partial}{\partial y^i})\}$ is a local frame of $\phi^* TN$. Then the local sections, which will be referred to as ``(local) vector spinors'', can be written as $\psi=\psi^j\otimes \phi^*(\frac{\partial}{\partial y^j})$, $\varphi=\varphi^k\otimes \phi^*(\frac{\partial}{\partial y^k})$. The induced metric and connection can be expressed by \begin{gather} \langle \psi, \varphi \rangle_{S\otimes\phi^* TN} =\langle \psi^j,\varphi^k\rangle_{S} \cdot\big\langle \phi^*\frac{\partial}{\partial y^j}, \phi^*\frac{\partial}{\partial y^k}\big\rangle_{\phi^*TN}, \\ \nabla^{S\otimes\phi^*TN}_X \psi = \nabla^S_X \psi^j \otimes\phi^*(\frac{\partial}{\partial y^j}) + \psi^j \otimes \nabla^{\phi^* TN}_X \phi^*(\frac{\partial}{\partial y^j}), \end{gather} where $\nabla^{\phi^* TN}_X \phi^*(\frac{\partial}{\partial y^j})=\phi^*(\nabla^{TN}_{T\phi(X)} \frac{\partial}{\partial y^j})$, for any $X\in TM$. The twisted spin Dirac operator~$\slashed{D}$ on $S\otimes\phi^*TN$ is defined as follows: in a local \(g\)-orthonormal frame \(e_\alpha\) as above, \begin{equation} \begin{split} \slashed{D}\psi\coloneqq e_\alpha \cdot \nabla^{S\otimes \phi^*TN}_{e_\alpha} \psi &= e_\alpha \cdot \nabla^{S}_{e_\alpha} \psi^j \otimes \phi^*(\frac{\partial}{\partial y^j}) + e_\alpha \cdot \psi^j \otimes \nabla^{\phi^* TN}_{e_\alpha} \phi^*(\frac{\partial}{\partial y^j}) \\ &= \slashed{\partial} \psi^j \otimes \phi^*(\frac{\partial}{\partial y^j}) + e_\alpha \cdot \psi^j \otimes \phi^*(\nabla^{ TN}_{T\phi e_\alpha} \frac{\partial}{\partial y^j}). \end{split} \end{equation} Like the spin Dirac operator \(\slashed{\partial}\), the twisted spin Dirac operator \(\slashed{D}\) is essentially self-adjoint with respect to the scalar product in \(L^2(S\otimes\phi^*TN)\).
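For later reference we note the fully local form of \(\slashed{D}\), familiar from the Dirac-harmonic map literature~\cite{chen2006dirac}: writing \(\nabla^{TN}_{\partial_{y^i}}\partial_{y^j} = \Gamma^k_{ij}\,\partial_{y^k}\) for the Christoffel symbols of \((N,h)\), we have \(\nabla^{TN}_{T\phi(e_\alpha)}\partial_{y^j} = e_\alpha(\phi^i)\,\Gamma^k_{ij}\,\partial_{y^k}\), and therefore the defining formula above unwinds to

```latex
\slashed{D}\psi
= \left( \slashed{\partial}\psi^k
  + \Gamma^k_{ij}(\phi)\, e_\alpha(\phi^i)\; e_\alpha\cdot\psi^j \right)
  \otimes \phi^*\Big(\frac{\partial}{\partial y^k}\Big).
```

In particular, \(\slashed{D}\) differs from the untwisted operator \(\slashed{\partial}\) only by terms of zeroth order in \(\psi\) that are linear in \(\mathop{}\!\mathrm{d}\phi\).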
\section{The Action Functional}% \label{Sec:AF} We want to consider the following action functional: \begin{equation} \label{eq:AF} \begin{split} \mathbb{A}(\phi, \psi;g, \chi)&\coloneqq \int_M |\mathop{}\!\mathrm{d} \phi|_{T^*M\otimes \phi^*TN}^2 + \langle \psi, \slashed{D} \psi \rangle_{S\otimes \phi^*TN} \\ &\qquad -4\langle (\mathds{1}\otimes\phi_*)(Q\chi), \psi \rangle_{S\otimes\phi^*TN} -|Q\chi|^2_{S\otimes TM} |\psi|^2_{S\otimes \phi^*TN} -\frac{1}{6} \Rm^{\phi^*TN}(\psi) \mathop{}\!\mathrm{d} vol_g, \end{split} \end{equation} where the last curvature term is locally defined by \[-\frac{1}{6}\Rm^{\phi^*TN}(\psi) =-\frac{1}{6}\Rm^{\phi^*TN}_{ijkl}\langle \psi^i, \psi^k \rangle_S \langle \psi^j,\psi^l \rangle_S. \] Notice that we use the following conventions for the curvature tensor: \begin{equation} \Rm^{TN}_{ijkl} = \left<\Rm^{TN}\left(\frac{\partial}{\partial y^k}, \frac{\partial}{\partial y^l}\right) \frac{\partial}{\partial y^j}, \frac{\partial}{\partial y^i}\right> = \left<\nabla_{\frac{\partial}{\partial y^k}}\nabla_{\frac{\partial}{\partial y^l}}\frac{\partial}{\partial y^j} - \nabla_{\frac{\partial}{\partial y^l}}\nabla_{\frac{\partial}{\partial y^k}}\frac{\partial}{\partial y^j}, \frac{\partial}{\partial y^i}\right> \end{equation} We will abbreviate $\Rm^{\phi^*TN}$ as $\Rm^N$. 
Hence, the curvature term can be written as \begin{equation} \begin{split} \Rm^N(\psi)&=\Rm^N_{ijkl} \langle \psi^i, \psi^k \rangle_S \langle \psi^j, \psi^l \rangle_S =\Rm^N_{ijkl} \langle \psi^k, \psi^i \rangle_S \langle \psi^l, \psi^j \rangle_S \\ &=\langle \Rm^N(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l})\frac{\partial}{\partial y^j},\frac{\partial}{\partial y^i} \rangle_{TN} \langle \psi^k, \psi^i \rangle_S \langle \psi^l, \psi^j \rangle_S \\ &=\big\langle \langle \psi^l, \psi^j \rangle_S \psi^k \otimes \phi^*(\Rm^N(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l})\frac{\partial}{\partial y^j}), \psi^i\otimes \phi^*(\frac{\partial}{\partial y^i}) \big\rangle_{S\otimes \phi^*TN}. \end{split} \end{equation} So if we set \begin{equation}% \label{def-SR} SR(\psi)\coloneqq\langle \psi^l, \psi^j \rangle_S \psi^k \otimes \phi^*(\Rm^N(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l})\frac{\partial}{\partial y^j}), \end{equation} then \[ \Rm^N(\psi)=\langle SR(\psi), \psi \rangle_{S\otimes\phi^*TN}. \] Note that since $P$ and $Q$ give an orthogonal decomposition, \[ |Q\chi|^2_{S\otimes TM}= \langle \chi, Q\chi \rangle. \] This formula is convenient when expressing the terms locally. \begin{rmk} In order to obtain a real-valued action functional we work here with the real spinor bundle \(S\) and the real scalar product \(g_{\SpinorBundleM}=\left<\cdot, \cdot\right>_S\). Alternatively we might also work with the complex spinor bundle \(\Sigma^{\mathbb{C}}\) and the hermitian form \(h_{\SpinorBundlePC}\). We recall that the hermitian form \(h_{\SpinorBundleM}\) on \((S, I_S)\) induced by \(g_{\SpinorBundleM}\) can be written as \begin{equation} 2h_{\SpinorBundleM}\left(s, t\right) = g_{\SpinorBundleM}\left(s, t\right) - ig_{\SpinorBundleM}\left(I_S s, t\right) \end{equation} and coincides with \(h_{\SpinorBundlePC}\) under the complex linear isomorphism \(\Sigma^{\mathbb{C}}\simeq(S, I_S)\). 
All summands in~\eqref{eq:AF} except the third one are symmetric in the spinors and would consequently be real. For those terms the approach here and in~\cite{chen2006dirac} coincide. For the third term one could use equally the real part of \begin{equation} -8h_{\SpinorBundlePC}\otimes\phi^*h\left((\mathds{1}\otimes\phi_*)(Q\chi), \psi\right). \end{equation} We will refrain from using that expression later on. \end{rmk} The functional $\mathbb{A}(\phi,\psi;g,\chi)$ has rich symmetries. It is invariant under generalized conformal transformations of the metric in the sense that \begin{equation} \mathbb{A}(\phi,e^{-u}\psi;e^{2u}g,e^{-2u}\chi) =\mathbb{A}(\phi,\psi;g,\chi) \end{equation} where $u\in C^\infty(M)$. To verify the conformal invariance we use the rescaling of the spinor metric~\(g_{\SpinorBundleM}\) by \(e^ug_{\SpinorBundleM}\) and that \(\slashed{D}^{e^{2u}g}e^{-u}\psi = e^{-2u}\slashed{D}^g\psi\), see also~\cite[Proposition~1.3.10]{ginoux2009}. Here \(\slashed{D}^g\) denotes the Dirac operator defined with respect to the metric \(g\). Moreover, the functional stays invariant under super Weyl transformations: \begin{equation} \mathbb{A}(\phi,\psi;g,\chi+\chi')=\mathbb{A}(\phi,\psi;g,\chi) \end{equation} with $Q\chi'=0$. This follows directly from the fact that the action functional only involves \(Q\chi\) and not \(P\chi\). $\mathbb{A}$ is also \(\operatorname{Spin}(2)\)-gauge-invariant, in particular under the following $\mathbb{Z}_2$-action on the spinor bundle $S$: \begin{equation} \mathbb{A}(\phi,\psi;g,\chi)=\mathbb{A}(\phi,-\psi;g,-\chi). \end{equation} These symmetries will be naturally inherited by its critical points. They are useful when dealing with the solution space of the Euler--Lagrange equations. A detailed discussion of the symmetries of \(\mathbb{A}\) and the corresponding conservation laws can be found in~\cite{jost2017symmetries}. 
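For the first two summands the conformal invariance can be verified by direct bookkeeping of conformal weights: on the two-dimensional \(M\) one has \(\mathop{}\!\mathrm{d} vol_{e^{2u}g} = e^{2u}\mathop{}\!\mathrm{d} vol_g\) and \(|\mathop{}\!\mathrm{d}\phi|^2_{e^{2u}g} = e^{-2u}|\mathop{}\!\mathrm{d}\phi|^2_g\), so the energy term is invariant, while with the rescalings stated above
\begin{equation}
\langle e^{-u}\psi, \slashed{D}^{e^{2u}g}(e^{-u}\psi)\rangle_{e^u g_{\SpinorBundleM}} = e^{u}\, e^{-u}\, e^{-2u}\, \langle\psi,\slashed{D}^g\psi\rangle_{g_{\SpinorBundleM}} = e^{-2u}\langle\psi,\slashed{D}^g\psi\rangle_{g_{\SpinorBundleM}},
\end{equation}
which again cancels the volume factor; the remaining summands can be checked by the same weight count.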
As already mentioned in the introduction the functional~\eqref{eq:AF} is essentially the action functional of the two-dimensional nonlinear supersymmetric sigma model, see~\cite{brink1976locally, deser1976complete, jost2014super}. In contrast to what is discussed there, we deal with commuting spinors. For this reason the action functional~\eqref{eq:AF} does in general not possess supersymmetry, except in special cases, see~\cite{jost2017symmetries}. Furthermore, a term which vanishes identically at critical points is omitted here. \section{Euler--Lagrange Equations} \subsection{} Now we derive the Euler--Lagrange equations for $\mathbb{A}$. Fix $(g,\chi)$ and vary $(\phi, \psi)$ via $(\Phi, \Psi)$ with variational fields $(\xi, \eta)$. Here \begin{align} \xi &= \left.\frac{\partial}{\partial t}\Phi\right|_{t=0}, & \eta &= \left.\nabla_{\partial_t}^{S\otimes\Phi^*TN}\Psi\right|_{t=0}. \end{align} At a critical point, we have \[ 0=\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \mathbb{A}(\Phi(t),\Psi(t);g, \chi) = \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} (\textrm{I+II+III+IV+V}).\] Here we denote by the roman numerals \(\textrm{I}, \dotsc, \textrm{V}\) the summands under the integral in the action functional \(\mathbb{A}\). We calculate them term by term. \begin{enumerate} \item As for harmonic maps, \[\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \textrm{I} = \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \int_M |\mathop{}\!\mathrm{d}\Phi|^2= \int_M \langle -2\tau(\Phi), \Phi_* (\partial_t) \rangle_{\Phi^*TN}, \] where $\tau(\Phi)$ is the tension field of $\Phi$ w.r.t.\ $M$.
Hence, \[ \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \textrm{I}=\int_M \langle -2\tau(\phi), \xi \rangle_{\phi^*TN}.\] \item With \[ \nabla^{S\otimes\Phi^*TN}_{\partial_t} \slashed{D} \Psi = \slashed{D} \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi+\Rm^{\Phi^*TN} (\Phi_*(\partial_t), \Phi_* e_\alpha)e_\alpha \cdot \Psi,\] we get \begin{equation} \begin{split} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \textrm{II} &=\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \int_M \langle \Psi, \slashed{D} \Psi \rangle_{S\otimes \Phi^*TN} =\int_M \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, \slashed{D} \Psi \rangle + \langle \Psi, \nabla^{S\otimes\Phi^*TN}_{\partial_t}\slashed{D} \Psi \rangle \\ &= \int_M \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, \slashed{D} \Psi \rangle + \langle \Psi, \slashed{D} \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi +\Rm^{\Phi^*TN} (\Phi_*(\partial_t), \Phi_* e_\alpha)e_\alpha \cdot \Psi \rangle \\ &= \int_M \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, \slashed{D} \Psi \rangle + \langle \slashed{D} \Psi,\nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi\rangle +\langle \Psi,\Rm^{\Phi^*TN} (\Phi_*(\partial_t), \Phi_* e_\alpha)e_\alpha \cdot \Psi \rangle \\ &= \int_M 2 \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, \slashed{D} \Psi \rangle +\langle \Rm^{\Phi^*TN} (\Psi, e_\alpha \cdot\Psi)\Phi_* e_\alpha ,\Phi_*(\partial_t)\rangle. \end{split} \end{equation} Thus \[\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \textrm{II} =\int_M 2\langle \eta, \slashed{D} \psi \rangle + \langle \Rm^{\phi^*TN}(\psi, e_\alpha \cdot \psi)\phi_* e_\alpha, \xi \rangle. \] \item Under a local orthonormal frame $\{ e_\alpha\}$, \[ -4\langle (\mathds{1}\otimes\Phi_*)(Q\chi), \Psi \rangle_{S\otimes\Phi^*TN} =2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \Phi_* e_\beta, \Psi \rangle_{S\otimes \Phi^*TN}.
\] Then \begin{equation} \begin{split} & \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \textrm{III} =\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \int_M 2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \Phi_* e_\beta, \Psi \rangle_{S\otimes \Phi^*TN} \\ &\;=\int_M 2\langle \nabla^{S\otimes\Phi^* TN}_{\partial_t}(e_\alpha\cdot e_\beta\cdot\chi^\alpha\otimes\Phi_* e_\beta), \Psi\rangle +2\langle e_\alpha\cdot e_\beta\cdot\chi^\alpha\otimes\Phi_* e_\beta,\nabla^{S\otimes\Phi^*TN}_{\partial_t}\Psi \rangle, \end{split} \end{equation} where the first integrand can be rewritten as \begin{equation} \begin{split} 2\langle \nabla^{S\otimes \Phi^* TN}_{\partial_t}&(e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \Phi_* e_\beta), \Psi \rangle = 2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \nabla^{\Phi^* TN}_{\partial_t}\Phi_* e_\beta, \Psi \rangle \\ &= 2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \nabla^{\Phi^* TN}_{e_\beta}\Phi_* \partial_t, \Psi \rangle \\ &= 2 e_\beta \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \Phi_* \partial_t, \Psi \rangle -2\langle \nabla^S_{e_\beta}( e_\alpha \cdot e_\beta \cdot \chi^\alpha) \otimes\Phi_* \partial_t, \Psi \rangle \\ &\quad -2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes\Phi_* \partial_t, \nabla^{S\otimes \Phi^*TN}_{e_\beta} \Psi \rangle. \end{split} \end{equation} The first summand vanishes after integration on the closed manifold~$M$ since it is a divergence of some vector field. 
Therefore \begin{equation} \begin{split} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \textrm{III} =\int_M &-2\langle \nabla^S_{e_\beta}( e_\alpha \cdot e_\beta \cdot \chi^\alpha) \otimes \xi, \psi \rangle -2\langle e_\alpha \cdot e_\beta\cdot \chi^\alpha \otimes\xi,\nabla^{S\otimes \phi^*TN}_{e_\beta}\psi \rangle \\ & +2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \phi_* e_\beta, \eta \rangle \\ =\int_M &-2\big\langle(\langle \nabla^S_{e_\beta}(e_\alpha \cdot e_\beta \cdot \chi^\alpha), \psi \rangle_S +\langle e_\alpha\cdot e_\beta\cdot\chi^\alpha,\nabla^{S\otimes\phi^*TN}_{e_\beta}\psi\rangle_S), \xi \big\rangle_{\phi^*TN} \\ & +2\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \phi_* e_\beta, \eta \rangle_{S\otimes \phi^*TN}. \end{split} \end{equation} Here, by abuse of notation, we denote by \(\langle \nabla^S_{e_\beta}(e_\alpha \cdot e_\beta \cdot \chi^\alpha), \psi \rangle_S\) the section of \(\phi^*TN\) that arises by metric contraction of \(\psi\) by \(\nabla^S_{e_\beta}(e_\alpha \cdot e_\beta \cdot \chi^\alpha)\). \item Likewise we have \begin{equation} \begin{split} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \textrm{IV} =& -\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \int_M |Q\chi|^2 \langle \Psi, \Psi \rangle_{S\otimes\Phi^*TN} \\ =& -\int_M |Q\chi|^2 (\langle \nabla^{S\otimes \Phi^* TN}_{\partial_t} \Psi, \Psi \rangle + \langle \Psi,\nabla^{S\otimes \Phi^* TN}_{\partial_t} \Psi \rangle) \\ =& -\int_M 2|Q\chi|^2 \langle \Psi, \nabla^{S\otimes \Phi^* TN}_{\partial_t} \Psi \rangle. \end{split} \end{equation} Thus, \[ \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \textrm{IV} = \int_M -2|Q\chi|^2 \langle \psi, \eta \rangle.
\] \item In local coordinates, we compute \begin{equation} \begin{split} \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t} \textrm{V} &=\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\int_M-\frac{1}{6}\Phi^*\Rm^N_{ijkl}\langle\Psi^i,\Psi^k\rangle_S \langle\Psi^j,\Psi^l\rangle_S\\ &= -\frac{1}{6}\int_M \partial_t (\Phi^*\Rm^N_{ijkl}\langle \Psi^i, \Psi^k \rangle_S \langle \Psi^j, \Psi^l \rangle_S). \end{split} \end{equation} The integrand reads \begin{equation} \begin{split} \partial_t (\Phi^*\Rm^N_{ijkl}&\langle \Psi^i, \Psi^k \rangle \langle \Psi^j, \Psi^l \rangle) \\ = &(\nabla^{\Phi^*TN}_{\partial_t} \Phi^*\Rm^N_{ijkl})\langle\Psi^i,\Psi^k\rangle \langle\Psi^j,\Psi^l\rangle \\ & + \Phi^*\Rm^N_{ijkl}\langle \nabla^S_{\partial_t}\Psi^i, \Psi^k \rangle \langle \Psi^j, \Psi^l \rangle + \Phi^*\Rm^N_{ijkl}\langle \Psi^i, \nabla^S_{\partial_t} \Psi^k \rangle \langle \Psi^j, \Psi^l \rangle \\ & + \Phi^*\Rm^N_{ijkl}\langle \Psi^i, \Psi^k \rangle \langle \nabla^S_{\partial_t} \Psi^j, \Psi^l \rangle + \Phi^*\Rm^N_{ijkl}\langle \Psi^i, \Psi^k \rangle \langle \Psi^j, \nabla^S_{\partial_t} \Psi^l \rangle \\ = & (\nabla^{\Phi^*TN}_{\partial_t} \Phi^*\Rm^N_{ijkl})\langle\Psi^i,\Psi^k\rangle \langle\Psi^j,\Psi^l\rangle +4\Phi^*\Rm^N_{ijkl}\langle \nabla^S_{\partial_t}\Psi^i,\Psi^k\rangle \langle\Psi^j,\Psi^l\rangle\\ = &\Phi^*(\nabla^{TN}_{T\Phi(\partial_t)}\Rm^N)_{ijkl}\langle\Psi^i,\Psi^k\rangle \langle\Psi^j, \Psi^l \rangle +4\langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, SR(\Psi) \rangle \\ = &\big\langle \Phi^*(\nabla^{TN}\Rm^N)_{ijkl}\langle\Psi^i,\Psi^k\rangle \langle\Psi^j,\Psi^l\rangle, \Phi_*\partial_t \big\rangle +4 \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, SR(\Psi) \rangle. \\ \end{split} \end{equation} We define $S\nabla R$ analogously to $SR$, that is, \begin{equation}\label{def-SNR} S\nabla R(\Psi)\coloneqq \Phi^*(\nabla^{TN} \Rm^N)_{ijkl}\langle \Psi^i, \Psi^k \rangle \langle \Psi^j, \Psi^l \rangle. 
\end{equation} Using the metric to identify it with the corresponding vector field, we get \begin{equation} \partial_t (\Phi^*\Rm^N_{ijkl}\langle \Psi^i, \Psi^k \rangle \langle \Psi^j, \Psi^l \rangle) = \langle S\nabla R (\Psi), \Phi_* \partial_t \rangle + 4 \langle \nabla^{S\otimes\Phi^*TN}_{\partial_t} \Psi, SR(\Psi) \rangle. \end{equation} Then, \[ \frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d} t}\Big|_{t=0} \textrm{V} = -\frac{1}{6}\int_M \langle S\nabla R (\psi), \xi \rangle + 4\langle \eta, SR(\psi) \rangle. \] \end{enumerate} From the preceding computations, we obtain \begin{equation} \begin{split} 0= \int_M &\big\langle -2\tau(\phi)+\Rm^N(\psi,e_\alpha\cdot\psi)\phi_* e_\alpha-\frac{1}{6}S\nabla R(\psi), \xi \big\rangle_{\phi^*TN} \\ &+\big\langle -2(\langle \nabla^S_{e_\beta}(e_\alpha \cdot e_\beta \cdot \chi^\alpha), \psi \rangle_S + \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \nabla^{S\otimes\phi^*TN}_{e_\beta} \psi \rangle_S), \xi \big\rangle_{\phi^*TN}\\ &+2 \langle \slashed{D} \psi -|Q\chi|^2 \psi -\frac{1}{3}SR(\psi) +e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes \phi_* e_\beta, \eta \rangle_{S\otimes \phi^*TN}. \end{split} \end{equation} We can thus verify Theorem~\ref{thm:EL} which we restate here: \ELThm* \begin{Def} A pair $(\phi,\psi)\in \mathcal{X}^{1,2}_{1,4/3}(M,N)$ satisfying~\eqref{EL-eq} in the sense of distributions is a weak solution of the system. \end{Def} \subsection{} We rewrite the Euler--Lagrange equations~\eqref{EL-eq} in terms of local coordinates on $N$. Let $\{ y^i \} $ be a local coordinate system on $N$. Then $\{ \phi^*(\frac{\partial}{\partial y^i}) \}$ is a local frame for the vector bundle $\phi^*TN$. 
Then~\eqref{EL-eq} can be written as \begin{equation} \begin{split} \tau(\phi)^i \phi^*(\frac{\partial}{\partial y^i}) =& \frac{1}{2}\langle\psi^k, e_\alpha\cdot\psi^l\rangle \Rm^N\big(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l}\big)\big(e_\alpha(\phi^j)\phi^*(\frac{\partial}{\partial y^j})\big) -\frac{1}{12}(\nabla \Rm^N)_{mjkl} \langle \psi^m,\psi^k \rangle \langle \psi^j, \psi^l\rangle \\ & \quad -\big( \langle \nabla^S_{e_\beta} (e_\alpha \cdot e_\beta \cdot \chi^\alpha), \psi^i \rangle + \langle e_\alpha\cdot e_\beta \cdot\chi^\alpha,\nabla^S_{e_\beta}\psi^i\rangle \big)\phi^*(\frac{\partial}{\partial y^i}) \\ & \quad-\langle e_\alpha \cdot e_\beta\cdot \chi^\alpha,\psi^k \rangle \nabla^{\phi^*TN}_{e_\beta} \phi^*(\frac{\partial}{\partial y^k}) \\ =& \Big( \frac{1}{2}\langle \psi^k, e_\alpha \cdot \psi^l \rangle e_\alpha(\phi^j)\Rm^{i,N}_{\; jkl} -\frac{1}{12}(\nabla^i \Rm^N)_{mjkl} \langle \psi^m,\psi^k \rangle \langle \psi^j, \psi^l\rangle \\ & \quad - e_\beta( \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha,\psi^i \rangle) -\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \psi^k \rangle e_\beta(\phi^j) \Gamma^{i,N}_{jk} \Big) \phi^*(\frac{\partial}{\partial y^i}) \end{split} \end{equation} and \begin{equation} \begin{split} \slashed{\partial}\psi^i \otimes &\phi^*(\frac{\partial}{\partial y^i}) + e_\alpha \cdot \psi^k \otimes e_\alpha(\phi^j)\Gamma^{i,N}_{jk} \phi^*(\frac{\partial}{\partial y^i}) \\ &= |Q\chi|^2 \psi^i \otimes \phi^*(\frac{\partial}{\partial y^i}) +\frac{1}{3} \langle \psi^l, \psi^j \rangle \psi^k\otimes \Rm^{i,N}_{jkl} \phi^*(\frac{\partial}{\partial y^i}) -e_\alpha \cdot e_\beta \cdot \chi^\alpha \otimes e_\beta(\phi^i) \phi^*(\frac{\partial}{\partial y^i}). \end{split} \end{equation} Since the curvature of $M$ does not appear in those formulas, we may omit the upper index $N$ for the curvature terms, and we will label it again whenever needed. 
We may introduce local coordinates on $M$ such that a conformal transformation brings the metric into the following form \[ g= (\mathop{}\!\mathrm{d} x^1)^2 + (\mathop{}\!\mathrm{d} x^2)^2,\] and then $\{e_\alpha\equiv \frac{\partial}{\partial x^\alpha}\}$ is a local orthonormal frame. We define the vector fields $V^j$ on $M$, $j=1,\dotsc, n$, via \[ \langle V^j,W \rangle_{TM}= \langle e_\alpha \cdot W \cdot \chi^\alpha, \psi^j \rangle_{S} \] for any vector field $W$ on $M$. Thus, \[ V^j= V^{j,\beta}e_\beta = \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \psi^j \rangle e_\beta.\] In particular, noting that $\nabla_{e_\alpha} e_\beta=0$, we have \[ \diverg V^j= e_\beta(V^{j,\beta})= e_\beta \langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \psi^j \rangle. \] and \[\langle e_\alpha \cdot e_\beta \cdot \chi^\alpha, \psi^k \rangle e_\beta(\phi^j) \Gamma^{i,N}_{jk} =V^{k,\beta} e_{\beta}(\phi^j) \Gamma^i_{jk}= \Gamma^i_{jk}V^k(\phi^j)=\Gamma^i_{jk}\langle V^k,\nabla\phi^j\rangle_{TM}.\] Thus, in those local coordinates the Euler--Lagrange equations become \begin{equation}\label{local form of EL-eq on N} \begin{split} \tau^i(\phi)=& \frac{1}{2}\langle \psi^k, e_\alpha \cdot \psi^l \rangle e_\alpha(\phi^j)\Rm^{i}_{\; jkl} -\frac{1}{12}(\nabla^i \Rm)_{mjkl} \langle \psi^m,\psi^k \rangle \langle \psi^j, \psi^l\rangle \\ & -\diverg V^i -\Gamma^{i}_{jk}\langle V^k,\nabla\phi^j \rangle, \\ \slashed{\partial} \psi^i =& -\Gamma^i_{jk}\nabla \phi^j\cdot\psi^k+|Q\chi|^2\psi^i +\frac{1}{3}R^i_{\;jkl}\langle\psi^l,\psi^j\rangle\psi^k -e_\alpha\cdot \nabla \phi^i \cdot \chi^\alpha, \end{split} \end{equation} for $1\le i \le n$. One sees that the right hand side of the first equation lies in $L^1$ while that of the second equation lies in $L^{4/3}$. This shows that the Euler--Lagrange equations are critical for the Sobolev elliptic theory. Thus, the regularity of weak solutions is a subtle issue. 
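To make this criticality explicit, recall that \(W^{1,4/3}(M)\hookrightarrow L^4(M)\) on the two-dimensional \(M\). For \(\phi\in W^{1,2}\), \(\psi\in W^{1,4/3}\) and a fixed smooth gravitino \(\chi\), H\"older's inequality yields
\begin{align}
\|\,|\nabla\phi|\,|\psi|^2\,\|_{L^1} &\le \|\nabla\phi\|_{L^2}\|\psi\|^2_{L^4}, &
\|\,|\nabla\phi|\,|\psi|\,\|_{L^{4/3}} &\le \|\nabla\phi\|_{L^2}\|\psi\|_{L^4}, &
\|\,|\psi|^3\,\|_{L^{4/3}} &\le \|\psi\|^3_{L^4},
\end{align}
and these exponents are exactly borderline: \(\tau(\phi)\in L^1\) does not improve on \(\nabla\phi\in L^2\), while \(\slashed{\partial}\psi\in L^{4/3}\) only returns \(\psi\in W^{1,4/3}\hookrightarrow L^4\), with no gain of integrability.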
\subsection{} To get the regularity of weak solutions, we embed $N$ isometrically into some Euclidean space. In order to see what happens to the various fields involved, we start with a general consideration. Let $(N',h')$ be another Riemannian manifold and $f\colon N\to N'$ a smooth immersion. We get a composition \[\phi'\equiv f\circ\phi\colon M\to N\to N',\] and induced maps of vector bundles which fit into the following commutative diagram \begin{diag} \matrix[mat, column sep=small](m) { & TM & & & (f\circ\phi)^*TN'& & f^*TN'& & TN' \\ TM & & & \phi^*TN & & TN & & & \\ & & & & & & & & \\ & & M & & & & N & & N' \\ } ; \path[pf] (m-1-2) edge node{\( (f\circ\phi)_* \)} (m-1-5) edge [dashed](m-4-3) (m-1-2) edge [commutative diagrams/equal](m-2-1) (m-1-5) edge node[auto]{\(\Hat{\Hat{\phi}}\)} (m-1-7) edge [dashed](m-4-3) (m-1-7) edge node[auto]{\(\Hat{f} \)} (m-1-9) edge [dashed](m-4-7) (m-1-9) edge (m-4-9) (m-2-1) edge [near end]node[auto]{\(\quad\phi_* \)} (m-2-4) edge (m-4-3) (m-2-4) edge [commutative diagrams/crossing over]node[auto]{\(\Hat{\phi} \)} (m-2-6) edge (m-4-3) edge node[auto]{\(\Hat{\phi}^*(f_*)\)} (m-1-5) (m-2-6) edge node[auto]{\( f_* \)} (m-1-7) edge (m-4-7) (m-4-3) edge node[auto]{\( \phi \)} (m-4-7) (m-4-7) edge node[auto]{\( f \)} (m-4-9) ; \end{diag} Note that $T\phi=\Hat{\phi}\circ \phi_*$, etc. Let $A$ be the second fundamental form of $f$, i.e., $A(X,Y)=(\nabla_X \mathop{}\!\mathrm{d} f)(Y)$ for any $X,Y\in\Gamma(TN)$. Then the tension fields of $\phi$ and $\phi'$ are related by \begin{equation} \tau(\phi')= \Hat{\phi}^*(f_*)(\tau(\phi))+A(\phi)\big(T\phi( e_\alpha),T\phi(e_\alpha)\big). \end{equation} Now let $(N',h')=(\mathbb{R}^K,\delta)$ be a Euclidean space with standard global coordinate functions $(u^a)_{a=1,\dotsc,K}$, and let $f\colon(N,h)\to (\mathbb{R}^K,\delta)$ be an isometric embedding. 
Then the second fundamental form $A$ is perpendicular to $N$ in the sense that, for any $X,Y\in\Gamma(TN)$, extended locally to $\mathbb{R}^K$ and still denoted by $X,Y$ respectively, the following orthogonal decomposition holds: \begin{equation} \nabla^e_X Y=\nabla^N_X Y+ A(X,Y)\in TN\oplus T^\bot N=f^*T\mathbb{R}^K, \end{equation} where $\nabla^e$ denotes the flat connection on Euclidean space. See~\cite{bai2004anintroductiontoRG, jost2008riemannian}. Moreover, for any normal vector field $\xi\in \Gamma(T^\bot N)$, \begin{equation} \langle\xi, A(X,Y)\rangle=\langle\xi,\nabla^e_X Y\rangle=-\langle\nabla^e_X\xi,Y\rangle=\langle P(\xi;X),Y\rangle \end{equation} where $P(\xi;X)=-(\nabla^e_X \xi)^\top$ is the shape operator of $N$. As in~\cite{zhu2009regularity} and~\cite{chen2011boundary}, we take a local orthonormal frame $\{\nu_l|l=n+1,\dotsc,K\}$ of $T^\bot N$. (These can be smoothly extended to a tubular neighborhood of $N$, and thus be defined in an open subset of $\mathbb{R}^K$). Then \begin{equation} A(X,Y)=\sum_l \langle A(X,Y),\nu_l\rangle\nu_l=-\sum_{l} \langle Y,\nabla^e_X \nu_l\rangle \nu_l. \end{equation} In terms of the global frame $\{\frac{\partial}{\partial u^a}\}$ we write the vector fields $X,Y,Z$ tangent to the submanifold $N$ as \begin{equation} X=X^a \frac{\partial}{\partial u^a}, \quad Y=Y^b \frac{\partial}{\partial u^b}, \quad Z=Z^c \frac{\partial}{\partial u^c}. \end{equation} Then \begin{equation} \begin{split} A(X,Y)=\;&\sum_l -\langle Y^b\frac{\partial}{\partial u^b},\nabla^e_{X^a\frac{\partial}{\partial u^a}}\nu_l\rangle\nu_l =-\sum_{l,b}X^a Y^b \frac{\partial \nu_l^b}{\partial u^a}\nu_l,\\ P(A(X,Y);Z)=\;& -(\nabla^e_Z A(X,Y))^\top=\sum_{l,b} Z^c X^a Y^b \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial u^c})^\top. 
\end{split} \end{equation} Since $A$ is symmetric: $A(X,Y)=A(Y,X)$, we have \begin{equation}\label{symmetry of A} A(X,Y)=-\sum_{l,b}X^a Y^b \frac{\partial \nu_l^b}{\partial u^a}\nu_l=-\sum_{l,b}X^b Y^a \frac{\partial \nu_l^b}{\partial u^a}\nu_l, \end{equation} \begin{equation}\label{symmetry of P} P(A(X,Y);Z)=\sum_{l,b} Z^c X^a Y^b \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial u^c})^\top =\sum_{l,b} Z^c X^b Y^a \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial u^c})^\top. \end{equation} We recall here the Gauss equation for $X,Y,Z,W\in \Gamma(TN)$: \begin{equation} \begin{split} \left<\Rm\left(X, Y\right)Z, W\right> &= \left<A\left(X, W\right), A\left(Y,Z\right)\right> - \left<A\left(X, Z\right), A\left(Y, W\right)\right> \\ &= \left<P\left(A\left(Y, Z\right); X\right), W\right> - \left<P\left(A\left(X, Z\right); Y\right), W\right> \end{split} \end{equation} Since this holds for all $W\in \Gamma(TN)$, we have \begin{equation}% \label{Rm-P} \Rm\left(X, Y\right)Z = P\left(A\left(Y, Z\right); X\right) - P\left(A\left(X, Z\right); Y\right). \end{equation} We will denote the induced map on the tensor product bundles by \begin{equation} f_\#\equiv\mathds{1}\otimes \Hat{\phi}^*(f_*)\colon S\otimes\phi^*TN\to S\otimes\phi'^*TN'. \end{equation} Then $\psi'\equiv f_\#(\psi)$ is a section of the latter bundle, i.e., a spinor field along the map $\phi'$. In local coordinates, \begin{equation} \psi=\psi^i\otimes\phi^*(\frac{\partial}{\partial y^i}), \quad \psi'=\psi'^a\otimes \phi'^*(\frac{\partial}{\partial u^a}), \end{equation} where \begin{equation}\label{psi'-psi} \psi'^a(x)=\frac{\partial u^a}{\partial y^i}(\phi(x))\psi^i(x). 
\end{equation} Moreover, the Dirac terms corresponding to $\phi$ and $\phi'$ are related via (see~\cite{chen2011boundary}) \begin{equation}\label{D'-D} \slashed{D}'\psi'=f_\# \slashed{D}\psi+\mathcal{A}(\phi_* e_\alpha, e_\alpha\cdot\psi), \end{equation} where \begin{equation} \mathcal{A}(\phi_* e_\alpha, e_\alpha\cdot\psi) \equiv e_\alpha\cdot\psi^i\otimes \phi^*\big(A(T\phi (e_\alpha),\frac{\partial}{\partial y^i})\big). \end{equation} \subsection{} We are now ready to write the Euler--Lagrange equations in terms of $(\phi',\psi')$. Apply $f_\#$ to $\slashed{D} \psi$ and use~\eqref{D'-D}: \begin{equation} \begin{split} \slashed{D}'\psi'-\mathcal{A}(\phi_* e_\alpha,e_\alpha\cdot\psi)=|Q\chi|^2\psi'+\frac{1}{3}f_\# (SR(\psi)) +2(\mathds{1}\otimes\phi'_*)Q\chi. \end{split} \end{equation} We compute the following terms: \begin{itemize} \item Note that \begin{equation} \begin{split} Tf \left(\frac{\partial}{\partial y^i}\right) = \frac{\partial f^a}{\partial y^i}\frac{\partial}{\partial u^a}=\frac{\partial u^a}{\partial y^i}(\phi)\frac{\partial}{\partial u^a} \end{split} \end{equation} and \begin{equation} T\phi'(e_\alpha)=\frac{\partial \phi^i}{\partial x^\alpha}Tf\left(\frac{\partial}{\partial y^i}\right) =\frac{\partial\phi^i}{\partial x^\alpha}\frac{\partial f^a}{\partial y^i}\frac{\partial}{\partial u^a} =\frac{\partial \phi'^a}{\partial x^\alpha}\frac{\partial}{\partial u^a}. 
\end{equation} Using~\eqref{psi'-psi} and the expression for $A$, we have \begin{equation} \begin{split} \mathcal{A}(\phi_* e_\alpha, e_\alpha\cdot\psi) =\;& e_\alpha\cdot\psi^i\otimes \phi^*(A(T\phi(e_\alpha),\frac{\partial}{\partial y^i})) \\ =\;& -e_\alpha\cdot\psi^i\otimes \sum_{l,b}\frac{\partial\phi'^a}{\partial x^\alpha} \frac{\partial u^b}{\partial y^i}(\phi) \frac{\partial \nu_l^b}{\partial u^a}(\phi') \phi'^* \nu_l \\ =\;& -\sum_{l,b} \frac{\partial \phi'^a}{\partial x^\alpha}e_\alpha \cdot \frac{\partial u^b}{\partial y^i}\psi^i \otimes \frac{\partial \nu_l^b}{\partial u^a} \nu_l^c(\phi')\phi'^*(\frac{\partial}{\partial u^c}) \\ =\;& -\sum_{l,b} \nabla\phi'^a \cdot \psi'^b \otimes \frac{\partial \nu_l^b}{\partial u^a} \nu_l^c(\phi')\phi'^*(\frac{\partial}{\partial u^c}). \end{split} \end{equation} \item Recalling~\eqref{def-SR} and~\eqref{Rm-P}, \begin{equation} \begin{split} f_\# SR(\psi) =\;&f_\#\big(\langle\psi^l,\psi^j\rangle\psi^k \otimes\Rm(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l})\frac{\partial}{\partial y^j}\big) \\ =\;& f_\#\big\{\langle\psi^l,\psi^j\rangle\psi^k \otimes \big(P(A(\frac{\partial}{\partial y^j},\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k}) -P(A(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^j});\frac{\partial}{\partial y^l})\big)\big\}\\ =\;& \langle\psi^l,\psi^j\rangle\psi^k \otimes\frac{\partial u^a}{\partial y^j}\frac{\partial u^b}{\partial y^l}\frac{\partial u^c}{\partial y^k} \frac{\partial \nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial u^c})^\top(\phi') \\ &\quad -\langle\psi^l,\psi^j\rangle\psi^k \otimes\frac{\partial u^a}{\partial y^k}\frac{\partial u^b}{\partial y^j}\frac{\partial u^c}{\partial y^l} \frac{\partial \nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial u^c})^\top(\phi') \\ =\;& \big(\langle\psi'^b,\psi'^a\rangle\psi'^c -\langle\psi'^c,\psi'^b \rangle\psi'^a\big)\otimes \frac{\partial \nu_l^b}{\partial u^a}(\frac{\partial \nu_l}{\partial 
u^c})^{\top,d}\phi'^*(\frac{\partial}{\partial u^d}). \end{split} \end{equation} \item For the last term: \begin{equation} \begin{split} 2(\mathds{1}\otimes\phi'_*)Q\chi =\;& -e_\alpha\cdot e_\beta\cdot \chi^\alpha \otimes\phi'_* e_\beta =-e_\alpha\cdot e_\beta\cdot\chi^\alpha\otimes \frac{\partial \phi'^a}{\partial x^\beta}\phi'^*(\frac{\partial}{\partial u^a})\\ =\;& -e_\alpha\cdot \nabla \phi'^a\cdot\chi^\alpha \otimes \phi'^*(\frac{\partial}{\partial u^a}). \end{split} \end{equation} \end{itemize} We thus obtain the equation for $\psi'$: \begin{equation}% \label{eq-psi'} \begin{split} \slashed{\partial}\psi'^a\otimes\phi'^*(\frac{\partial}{\partial u^a}) =\;& -\sum_{l,b}\nabla\phi'^d\cdot\psi'^b\otimes\frac{\partial\nu_l^b}{\partial u^d}\nu_l^a(\phi')\phi'^*(\frac{\partial}{\partial u^a}) +|Q\chi|^2 \psi'^a\otimes \phi'^*(\frac{\partial}{\partial u^a}) \\ & +\frac{1}{3}\sum_{l,b}\big(\langle\psi'^b,\psi'^d\rangle\psi'^c -\langle\psi'^c,\psi'^b \rangle\psi'^d\big) \otimes \frac{\partial \nu_l^b}{\partial u^d}(\frac{\partial \nu_l}{\partial u^c})^{\top,a} \phi'^*(\frac{\partial}{\partial u^a})\\ & -e_\alpha\cdot \nabla \phi'^a\cdot\chi^\alpha \otimes \phi'^*(\frac{\partial}{\partial u^a}). \end{split} \end{equation} In components, for each $a$, \begin{equation}% \label{eq-psi'-Componentwisely} \begin{split} \slashed{\partial}\psi'^a =\;&-\sum_{l,b}\nabla\phi'^d\cdot\psi'^b\frac{\partial\nu_l^b}{\partial u^d}\nu_l^a(\phi')+|Q\chi|^2 \psi'^a \\ & +\frac{1}{3}\sum_{l,b}\big(\langle\psi'^b,\psi'^d\rangle\psi'^c -\langle\psi'^c,\psi'^b \rangle\psi'^d\big) \frac{\partial \nu_l^b}{\partial u^d}(\frac{\partial \nu_l}{\partial u^c})^{\top,a} -e_\alpha\cdot \nabla \phi'^a\cdot\chi^\alpha. \end{split} \end{equation} Here $\slashed{\partial}$ is the Dirac operator \(\slashed{\partial}\) on \(S\) and each $\psi'^a$ is a local pure spinor field. 
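As a consistency check of~\eqref{eq-psi'-Componentwisely}, consider the trivial case where \(N\) is an open subset of \(\mathbb{R}^K\) and \(f\) is the identity. Then the normal bundle has rank zero, all sums over \(l\) are empty, and the equation reduces to
\begin{equation}
\slashed{\partial}\psi'^a = |Q\chi|^2\psi'^a - e_\alpha\cdot\nabla\phi'^a\cdot\chi^\alpha,
\end{equation}
which is precisely the second equation of~\eqref{local form of EL-eq on N} for a flat target, where \(\Gamma^i_{jk}=0\) and \(\Rm=0\).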
Next we apply $\Hat{\phi}^*(f_*)$ to $\tau(\phi)$ to get \begin{equation} \begin{split} \tau(\phi')-\sum_{\alpha}A(\phi)\big(T\phi (e_\alpha),T\phi (e_\alpha)\big) =\;& \frac{1}{2}\Hat{\phi}^*(f_*)\Rm^{\phi^*TN}(\psi,e_\alpha\cdot\psi)\phi_* e_\alpha -\frac{1}{12}\Hat{\phi}^*(f_*)(S\nabla R(\psi)) \\ & -\Hat{\phi}^*(f_*) \big((\diverg V^j)\phi^*(\frac{\partial}{\partial y^j})+\nabla^{\phi^*TN}_{V^j} \phi^*(\frac{\partial}{\partial y^j})\big). \end{split} \end{equation} Since $\mathbb{R}^K$ is flat, \begin{equation} \mathrm{LHS}=\Delta\phi'-\sum_{\alpha} A(\phi')\big(T\phi' (e_\alpha),T\phi'(e_\alpha)\big) =\Delta \phi' +\sum_{\alpha}\frac{\partial\phi'^a}{\partial x^\alpha}\frac{\partial\phi'^b}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^a}(\phi') \phi'^*(\nu_l). \end{equation} We deal with the terms on the right hand side as follows: \begin{itemize} \item Using~\eqref{Rm-P} we get \begin{equation} \begin{split} \Hat{\phi}\big(&\Rm^{\phi^*TN}(\psi,e_\alpha\cdot\psi)\phi_* e_\alpha \big) \\ &= \langle\psi^k,e_\alpha\cdot\psi^l\rangle \Rm(\frac{\partial}{\partial y^k}, \frac{\partial}{\partial y^l})T\phi (e_\alpha)\\ &= \langle\psi^k,e_\alpha\left(\phi^j\right) e_\alpha\cdot\psi^l\rangle \Rm(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^l})\frac{\partial}{\partial y^j} \\ &= \langle\psi^k,\nabla\phi^j\cdot\psi^l\rangle \big( P(A(\frac{\partial}{\partial y^j},\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k}) -P(A(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^j});\frac{\partial}{\partial y^l})\big) \\ &= \langle\psi^k,\nabla\phi^j\cdot\psi^l\rangle P(A(\frac{\partial}{\partial y^j},\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k})+\langle\nabla\phi^j\cdot\psi^k,\psi^l\rangle P(A(\frac{\partial}{\partial y^k},\frac{\partial}{\partial y^j});\frac{\partial}{\partial y^l})\\ &=2\langle\psi^k,\nabla\phi^j\cdot\psi^l\rangle P(A(\frac{\partial}{\partial y^j},\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k})\\
&=2\langle\psi^k,e_\alpha\cdot\psi^l\rangle P\big(A(T\phi (e_\alpha),\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k}\big) \end{split} \end{equation} Hence \begin{equation} \begin{split} \frac{1}{2}\Hat{\phi}^*(f_*)\Rm^{\phi^*TN}(\psi,e_\alpha\cdot\psi)\phi_* e_\alpha =\;&\langle\psi^k,e_\alpha\cdot\psi^l\rangle \Hat{\phi}^*\left(f_*\right) P\big(A(T\phi(e_\alpha),\frac{\partial}{\partial y^l});\frac{\partial}{\partial y^k}\big)\\ =\;&\langle\psi^k,e_\alpha\cdot\psi^l\rangle \frac{\partial\phi'^a}{\partial x^\alpha}\frac{\partial u^b}{\partial y^l}\frac{\partial u^c}{\partial y^k} \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial\nu_l}{\partial u^c})^\top \\ =\;&\langle\psi'^c,\nabla\phi'^a\cdot\psi'^b\rangle \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial\nu_l}{\partial u^c})^\top \end{split} \end{equation} \item To push $S\nabla R$ forward, we note that we can extend the local coordinate functions, which are defined in an open subset of $N$, so that they are constant in normal directions. Thus~$y^i$, $i=1,\dotsc,n$, are defined in a tubular neighborhood of a domain in $N$, which is an open subset of $\mathbb{R}^K$. The derivatives of $y^i$ with respect to $u^a$ are uniquely defined on $N$. Then \begin{equation} \begin{split} -\frac{1}{12}\Hat{\phi}^*(f_*)(S\nabla R(\psi)) =\;&-\frac{1}{12}\Hat{\phi}^*(f_*)\big((\nabla\Rm)_{mjkl}\langle\psi^m,\psi^k\rangle\langle\psi^j,\psi^l\rangle\big) \\ =\;&-\frac{1}{12}\big((\nabla\Rm)_{abcd}\langle\psi'^a,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle\big), \end{split} \end{equation} where \begin{equation} (\nabla\Rm)_{abcd}(x)=\big((\nabla\Rm)_{ijkl}\frac{\partial y^i}{\partial u^a}\frac{\partial y^j}{\partial u^b} \frac{\partial y^k}{\partial u^c}\frac{\partial y^l}{\partial u^d}\big)(\phi'(x)). \end{equation} Moreover, using Gauss equation again, one has \begin{equation} (\nabla \Rm)_{ijkl}=2(\langle \nabla A_{ik}, A_{jl}\rangle-\langle \nabla A_{il}, A_{jk}\rangle). 
\end{equation} where we have written $A_{ij}\equiv A(\frac{\partial}{\partial y^i}, \frac{\partial}{\partial y^j})$. See for example~\cite{branding2015some, jost2015geometric}. Hence, \begin{equation} \begin{split} -\frac{1}{12}\Hat{\phi}^*(f_*)&(S\nabla R(\psi)) \\ &= -\frac{1}{6}\big(\langle \nabla A_{ik}, A_{jl}\rangle-\langle \nabla A_{il}, A_{jk}\rangle\big) \frac{\partial y^i}{\partial u^a}\frac{\partial y^j}{\partial u^b} \frac{\partial y^k}{\partial u^c}\frac{\partial y^l}{\partial u^d} \langle\psi'^a,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle \\ &\eqqcolon Z(A,\nabla A)_{abcd} \langle\psi'^a,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle. \end{split} \end{equation} \item In the same way as we have defined the vector fields $V^j,j=1,\dotsc,n$, we can define vector fields $V'^a, a=1\dotsc,K$, on $M$ by \begin{equation} \langle V'^a,W\rangle_{TM}=\langle e_\alpha\cdot W\cdot \chi^\alpha,\psi'^a\rangle_S,\quad\forall W\in\Gamma(TM). \end{equation} Then \begin{equation} \begin{split} \Hat{\phi}^*&(f_*) \big((\diverg V^j)\phi^*(\frac{\partial}{\partial y^j})+\nabla^{\phi^*TN}_{V^j} \phi^*(\frac{\partial}{\partial y^j})\big)\\ &= e_\beta\langle e_\alpha\cdot e_\beta\cdot\chi^\alpha,\psi^j\rangle_S\frac{\partial u^a}{\partial y^j}\phi'^*(\frac{\partial}{\partial u^a}) +V^j(\frac{\partial u^a}{\partial y^j}(\phi'))\phi'^*(\frac{\partial}{\partial u^a}) -V^{j,\beta}A(T\phi (e_\beta),\frac{\partial}{\partial y^j}) \\ &= e_\beta(\langle e_\alpha\cdot e_\beta\cdot\chi^\alpha,\psi'^a\rangle)\phi'^*(\frac{\partial}{\partial u^a}) +\frac{\partial \phi'^a}{\partial x^\beta} V'^{b,\beta}\frac{\partial \nu_l^b}{\partial u^a}\phi'^*(\nu_l)\\ &= (\diverg V'^a)\phi'^*(\frac{\partial}{\partial u^a})+\langle V'^b,\nabla\phi'^a\rangle\frac{\partial\nu_l^b}{\partial u^a}\phi'^*(\nu_l). 
\end{split} \end{equation} \end{itemize} Therefore the equation for $\phi'$ is \begin{equation}% \label{eq-phi'} \begin{split} \Delta \phi' =\;& -\sum_{\alpha,l}\frac{\partial\phi'^a}{\partial x^\alpha}\frac{\partial\phi'^b}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^a}(\phi') \phi'^*(\nu_l) +\sum_{b,l}\langle\psi'^c,\nabla\phi'^a\cdot\psi'^b\rangle \frac{\partial\nu_l^b}{\partial u^a}(\frac{\partial\nu_l}{\partial u^c})^\top(\phi')\\ & +Z(A,\nabla A)_{abcd}\langle\psi'^a,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle -(\diverg V'^a)\phi'^*(\frac{\partial}{\partial u^a})-\langle V'^b,\nabla\phi'^a\rangle\frac{\partial\nu_l^b}{\partial u^a}\phi'^*(\nu_l). \end{split} \end{equation} In components, for each $a$, \begin{equation}% \label{eq-phi'-Componentwisely} \begin{split} \Delta \phi'^a =\;&-\sum_{\alpha,b,l}\frac{\partial\phi'^c}{\partial x^\alpha}\frac{\partial\phi'^b}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^c}(\phi')\nu_l^a(\phi') +\sum_{b,l}\langle\psi'^c,\nabla\phi'^d\cdot\psi'^b\rangle \frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a} (\phi') \\ & +Z^a(A,\nabla A)_{ebcd}\langle\psi'^e,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle -\diverg V'^a-\langle V'^b,\nabla\phi'^c\rangle\frac{\partial\nu_l^b}{\partial u^c}\nu_l^a(\phi'). \end{split} \end{equation} As in~\cite{zhu2009regularity} and~\cite{chen2011boundary}, we shall transform the equation in a suitable form for later use. Since $\phi'_* e_\alpha$ is tangent to $N$ while $\nu_l$ is perpendicular to $N$, they are orthogonal: \begin{equation}\label{orthogonality} \sum_{b}\frac{\partial \phi'^b}{\partial x^\alpha} \nu_l^b=0, \quad \forall \alpha, \forall l. 
\end{equation} Hence \begin{equation} \sum_{\alpha,b,l} \frac{\partial\phi'^c}{\partial x^\alpha} \frac{\partial\phi'^b}{\partial x^\alpha}\frac{\partial\nu_l^a}{\partial u^c} \nu_l^b=0, \end{equation} and we can add it to the first summand of~\eqref{eq-phi'-Componentwisely} to get a term of the form \begin{equation} \sum_{\alpha,b,l} \frac{\partial\phi'^b}{\partial x^\alpha}\big(\frac{\partial\phi'^c}{\partial x^\alpha}\frac{\partial\nu_l^a}{\partial u^c} \nu_l^b -\frac{\partial\phi'^c}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^c}\nu_l^a \big) =\sum_{\alpha,b} \omega_\alpha^{ab} \frac{\partial\phi'^b}{\partial x^\alpha}, \end{equation} with \begin{equation} \omega_\alpha^{ab}=\sum_{l}\Big(\frac{\partial\phi'^c}{\partial x^\alpha}\frac{\partial\nu_l^a}{\partial u^c} \nu_l^b -\frac{\partial\phi'^c}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^c}\nu_l^a\Big) =-\omega_{\alpha}^{ba}. \end{equation} The second summand of~\eqref{eq-phi'-Componentwisely} can also be arranged into such a form.
Actually, using the symmetry~\eqref{symmetry of P}, we get \begin{equation} \begin{split} &\sum_{b,l}\langle\psi'^c,\nabla\phi'^d\cdot\psi'^b\rangle\frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a}\\ =\;& \sum_{b,l}\langle\psi'^c,\nabla\phi'^b\cdot\psi'^d\rangle\frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a}\\ =\;&\frac{1}{2} \sum_{b,l}\big(\langle\psi'^c,\nabla\phi'^b\cdot\psi'^d\rangle +\langle\nabla\phi'^b\cdot\psi'^d,\psi'^c\rangle\big) \frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a}\\ =\;&\frac{1}{2} \sum_{\alpha,b,l} \langle\psi'^c,e_\alpha\cdot\psi'^d\rangle \frac{\partial\phi'^b}{\partial x^\alpha} \frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a} +\langle e_\alpha\cdot\psi'^d,\psi'^c\rangle \frac{\partial\phi'^b}{\partial x^\alpha} \frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a}\\ =\;&\frac{1}{2}\sum_{\alpha,b,l}\langle\psi'^c,e_\alpha\cdot\psi'^d\rangle \frac{\partial\phi'^b}{\partial x^\alpha} \frac{\partial\nu_l^b}{\partial u^d}(\frac{\partial\nu_l}{\partial u^c})^{\top,a} -\langle\psi'^c,e_\alpha\cdot\psi'^d\rangle \frac{\partial\phi'^b}{\partial x^\alpha} \frac{\partial\nu_l^b}{\partial u^c}(\frac{\partial\nu_l}{\partial u^d})^{\top,a}. \end{split} \end{equation} Since $\phi'_* e_\alpha$ is tangent to $N$, \begin{equation} \sum_b\frac{\partial\phi'^b}{\partial x^\alpha}\frac{\partial\nu_l^b}{\partial u^d}=\sum_b\frac{\partial\phi'^b}{\partial x^\alpha}(\frac{\partial\nu_l}{\partial u^d})^b =\sum_{b} \frac{\partial\phi'^b}{\partial x^\alpha}(\frac{\partial\nu_l}{\partial u^d})^{\top,b}. 
\end{equation} Thus the above term equals \begin{equation} \begin{split} \frac{1}{2}\sum_{\alpha,b,l}\langle\psi'^c,e_\alpha\cdot\psi'^d\rangle \big((\frac{\partial\nu_l}{\partial u^d})^{\top,b}(\frac{\partial\nu_l}{\partial u^c})^{\top,a} -(\frac{\partial\nu_l}{\partial u^d})^{\top,a}(\frac{\partial\nu_l}{\partial u^c})^{\top,b}\big) \frac{\partial\phi'^b}{\partial x^\alpha} \equiv \sum_{\alpha,b} F^{ab}_\alpha \frac{\partial\phi'^b}{\partial x^\alpha}, \end{split} \end{equation} with \begin{equation} F^{ab}_\alpha=\frac{1}{2}\sum_{l} \langle\psi'^c,e_\alpha\cdot\psi'^d\rangle \big((\frac{\partial\nu_l}{\partial u^d})^{\top,b}(\frac{\partial\nu_l}{\partial u^c})^{\top,a} -(\frac{\partial\nu_l}{\partial u^d})^{\top,a}(\frac{\partial\nu_l}{\partial u^c})^{\top,b}\big) =-F^{ba}_\alpha. \end{equation} Similarly, using~\eqref{orthogonality}, the last summand of~\eqref{eq-phi'-Componentwisely} can be rearranged as \begin{equation} \begin{split} \sum_{b,c,l} \langle V'^c,\nabla\phi'^b\rangle\frac{\partial\nu_l^c}{\partial u^b}\nu_l^a(\phi') &=\sum_{b,c,l,\alpha} \frac{\partial \nu_l^c}{\partial u^b}V'^c_\alpha\nu_l^a(\phi')\frac{\partial\phi'^b}{\partial x^\alpha} \\ &=\sum_{b,c,l,\alpha}\big(\frac{\partial \nu_l^c}{\partial u^b}V'^c_\alpha\nu_l^a(\phi')-\frac{\partial\nu_l^c}{\partial u^a}V'^c_\alpha \nu^b_l(\phi')\big) \frac{\partial\phi'^b}{\partial x^\alpha} \\ &\equiv -\sum_{\alpha,b}T^{ab}_\alpha\frac{\partial\phi'^b}{\partial x^\alpha} \end{split} \end{equation} where \begin{equation} T^{ab}_\alpha=-\sum_{c,l}\left(\frac{\partial \nu_l^c}{\partial u^b}V'^c_\alpha\nu_l^a(\phi')-\frac{\partial\nu_l^c}{\partial u^a}V'^c_\alpha \nu^b_l(\phi')\right) =-T^{ba}_\alpha. \end{equation} \begin{rmk} Actually, for our proof of the local regularity of weak solutions, we do not need to write the second and the last terms in such an antisymmetric form; see~\cite{branding2015some} for a similar treatment of a simpler model.
Earlier regularity proofs, see e.g.~\cite{zhu2009regularity, wang2009regularity, chen2011boundary}, did rely on that structure, and it is in any case convenient to have it. \end{rmk} Therefore, the equation for $\phi'$ takes the elegant form: \begin{equation}\label{new-eq-phi} \Delta\phi'^a=\sum_{b,\alpha}(\omega^{ab}_\alpha+F^{ab}_\alpha+T^{ab}_\alpha)\frac{\partial\phi'^b}{\partial x^\alpha} +Z^a(A,\nabla A)_{ebcd}\langle\psi'^e,\psi'^c\rangle\langle\psi'^b,\psi'^d\rangle -\diverg V'^a, \end{equation} for $a=1,\dotsc,K$, where the coefficient matrices of the first derivatives of $\phi'$ are antisymmetric. \section{Regularity of weak solutions} We now come to the crucial contribution of our paper, the regularity of weak solutions. In order to make the action functional $\mathbb{A}$ well-defined and finite-valued, we need to assume \begin{align} \phi &\in W^{1,2}(M,N), & \psi &\in W^{1,4/3}(\Gamma(S\otimes \phi^*TN)). \end{align} The issue then is higher regularity of such weak solutions. More precisely, we shall show that~$(\phi,\psi)$ are smooth when they satisfy~\eqref{EL-eq} in the weak sense. By the Sobolev embedding theorem, $\phi \in L^p(M,N)$ for any $p\in [1,\infty)$ and $\psi\in L^4(\Gamma(S\otimes\phi^*TN))$. Since $f\colon N\to \mathbb{R}^K$ is a smooth embedding, $(\phi',\psi')$ have the same regularity as $(\phi,\psi)$, and so it suffices to show smoothness of the former. As the regularity is a local issue, we can take $\phi'\colon B_1\to\mathbb{R}^K$ defined in the Euclidean unit disc $B_1\subset \mathbb{R}^2\cong \mathbb{C}$. Over \(B_1\) the bundle \(S\otimes \phi'^*T\mathbb{R}^K\) is trivial with typical fiber \(\mathbb{C}^2\otimes\mathbb{R}^K\). Hence \(\psi'\colon B_1\to \mathbb{C}^2\otimes\mathbb{R}^K\) is a vector-valued function. \subsection{} As we have seen, $\psi'$ satisfies~\eqref{eq-psi'} or equivalently~\eqref{eq-psi'-Componentwisely}.
By the following lemma, which will be proved in Section~\ref{Sec:ProofOfLemma}, all powers of $\psi'$ are integrable. \begin{lemma}\label{reg for Dirac} Let $p\in(4,\infty)$ and $\varphi\in L^4(B_1,\mathbb{C}^2\otimes \mathbb{R}^K)$ be a weak solution of the nonlinear system \begin{equation} \slashed{\partial} \varphi^i =A^i_j \varphi^j +B^i, \quad 1\le i \le K, \end{equation} where $A\in L^2(B_1, \mathfrak{gl}(2, \mathbb{C})\otimes\mathfrak{gl}(K, \mathbb{R}))$ and $B\in L^2(B_1,\mathbb{C}^2\otimes\mathbb{R}^K)$. There exists an $\varepsilon_0=\varepsilon_0(p)>0$ such that if $\|A\|_{L^2(B_1)}\le \varepsilon_0$, then $\varphi\in L^p_{loc}(B_1)$. \end{lemma} It follows from Lemma~\ref{reg for Dirac} that $\psi'\in L^p_{loc}(B_1)$ for any $p\in[1,\infty)$. Since locally the Dirac operator is given by the classical Cauchy--Riemann operators \(\partial_{z}\) and \(\partial_{\overline{z}}\), it follows from elliptic theory that $\psi'\in W^{1,q}(B_{1/2})$ for any $q\in [1,2)$. \subsection{} We use the regularity theory of Rivi\`ere to deal with $\phi'$. More precisely, we use the following result, which is an extension of~\cite{riviere}, to improve the regularity of $\phi'$. \begin{thm}\label{Sharp-Topping} \emph{(\cite{riviere2010conformally, sharp2013decay})} Let $p\in (1,2)$. Suppose that $u\in W^{1,2}(B_1,\mathbb{R}^K)$ is a weak solution of \begin{equation} -\Delta u=\Omega \nabla u+f, \end{equation} where $\Omega \in L^2(B_1,\mathfrak{so}(K)\otimes \mathbb{R}^2)$ and $f\in L^p(B_1,\mathbb{R}^K)$. Then $u\in W^{2,p}_{loc}(B_1)$. \end{thm} In the previous section we have written the equation for $\phi'$ in such a form, see~\eqref{new-eq-phi}. Since, as we have seen, $\psi'\in L^p_{loc}(B_1)$ for $1\le p<\infty$, the hypotheses of Theorem~\ref{Sharp-Topping} are satisfied. Thus we can conclude that $\phi'\in W^{2,p}_{loc}(B_1)$ for any $p\in[1,2)$. It follows from the Sobolev embedding theorems that $\phi'\in W^{1,q}(B_{1/2})$ for any $q\in [1,\infty)$.
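To make the applications of Lemma~\ref{reg for Dirac} and Theorem~\ref{Sharp-Topping} more transparent, we record schematically how the coefficients can be read off from~\eqref{eq-psi'-Componentwisely} and~\eqref{new-eq-phi}; the following bounds are a rough summary (constants and the exact index bookkeeping are suppressed), not verbatim identities: \begin{equation} |A|\lesssim |\nabla\phi'|+|Q\chi|^2+|\psi'|^2 \in L^2, \qquad |B|\lesssim |\chi|\,|\nabla\phi'| \in L^2, \end{equation} since $\phi'\in W^{1,2}$, $\psi'\in L^4$ and $\chi$ is smooth, while for the second order system \begin{equation} |\Omega|\lesssim |\nabla\phi'|+|\psi'|^2+|\chi|\,|\psi'| \in L^2, \qquad |f|\lesssim |\psi'|^4+|\diverg V'| \in L^p \ \text{for some } p\in(1,2), \end{equation} using $\psi'\in L^q_{loc}(B_1)$ for all $q<\infty$ and $\psi'\in W^{1,q}(B_{1/2})$ for $q<2$. Moreover, the smallness assumption $\|A\|_{L^2}\le \varepsilon_0$ can always be achieved by restricting to a sufficiently small disc (and rescaling it to $B_1$), thanks to the absolute continuity of the integral.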
\subsection{} We can now apply the standard elliptic theory for a bootstrap argument, see e.g.~\cite{begehr1994complex,gilbarg2001elliptic}, and hence conclude that $(\phi',\psi')$ are smooth. The smoothness of $\phi$ then follows directly. For $\psi$, one can use~\eqref{local form of EL-eq on N} and the elliptic theory for Cauchy--Riemann operators (e.g.~\cite{begehr1994complex}) to conclude that $\psi$ is also smooth. Therefore the full regularity of weak solutions is obtained, completing the proof of Theorem~\ref{theorem 1}. \section{Proof of Lemma~\ref{reg for Dirac}}% \label{Sec:ProofOfLemma} In this section, we provide the proof of Lemma~\ref{reg for Dirac}. We shall use the Dirac-type equation to improve the integrability of the spinor. Results of this type were first obtained in~\cite{wang2010remark} and further developed in~\cite{sharp2016regularity, branding2015some}. In fact, a stronger result holds in general. Before stating the general result, we recall some basic facts about Morrey spaces; see for example~\cite{giaquinta1983multiple}. Let $U$ be a domain in $\mathbb{R}^n$. For $0\le \lambda\le n$ and $1\le p<\infty$, the Morrey space on $U$ is defined as \begin{equation} \MS{p,\lambda}(U)\coloneqq\Big\{u\in L^p(U)\big| \|u\|_{\MS{p,\lambda}(U)}<\infty \Big\}. \end{equation} Here the $(p,\lambda)$-Morrey norm of $u$ is defined by \begin{equation} \|u\|_{\MS{p,\lambda}(U)}\coloneqq\sup_{x\in U,\;r>0} \Big(\frac{r^\lambda}{r^n}\int_{B_r(x)\cap U} |u(y)|^p \mathop{}\!\mathrm{d} y \Big)^{1/p}. \end{equation} Note that on a bounded domain $U\subset \mathbb{R}^n$, for $1\le p <\infty$ and $0\le \lambda\le n$, it holds that \begin{equation} L^{\infty}(U)=\MS{p,0}(U)\subset \MS{p,\lambda}(U) \subset \MS{p,n}(U)= L^p(U).
\end{equation} In this section we consider a map $\varphi\colon B_1\to \mathbb{C}^L\otimes \mathbb{R}^K$ satisfying a first-order elliptic system, where $B_1\subset \mathbb{R}^n$ is the Euclidean unit ball and $\mathbb{C}^L\otimes \mathbb{R}^K$ serves as the typical fiber of a twisted complex spinor bundle over \(B_1\). \begin{lemma}\label{Morrey type regularity} Let $n\ge 2$ and $4<p<+\infty$. Let $\varphi\in \MS{4,2}(B_1, \mathbb{C}^L\otimes \mathbb{R}^K)$ be a weak solution of the nonlinear system \begin{equation}\label{nonlinear dirac system-2} \slashed{\partial} \varphi^i=A^i_j \varphi^j + B^i, \quad 1\le i\le K, \end{equation} where $A\in \MS{2,2}(B_1, \mathfrak{gl}(L,\mathbb{C})\otimes\mathfrak{gl}(K,\mathbb{R}))$ and $B \in \MS{2,2}(B_1,\mathbb{C}^L\otimes \mathbb{R}^K)$. There exists $\varepsilon_0=\varepsilon_0(n,p)>0$ such that if \begin{equation} \|A\|_{\MS{2,2}(B_1)}\le \varepsilon_0, \end{equation} then $\varphi\in L^p_{loc}(B_1)$. Moreover, for any $U\Subset B_1$, \begin{equation} \|\varphi\|_{L^p(U)} \le C(n,p,U)\big(\|\varphi\|_{\MS{4,2}(B_1)}+\|B\|_{\MS{2,2}(B_1)}\big). \end{equation} \end{lemma} The proof is motivated by that in~\cite{wang2010remark} and is adapted to this system with minor changes. The idea is to use the fundamental solution of the Euclidean Dirac operator and apply Riesz potential estimates. Thanks to the Bochner-Lichnerowicz-Weitzenb\"ock type formulas, see, e.g.,~\cite[Theorem II.8.17]{lawson1989spin},~\cite[Lemma 4.1]{tolksdorf2001clifford},~\cite[Theorem 4.4.2]{jost2008riemannian}, the fundamental solution of the Euclidean Dirac operator can be derived from that of the Euclidean Laplacian. We remark that the $\MS{2,2}$-assumption on $B$ here fits well with the proof.
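For the reader's convenience, let us spell out the standard fact just quoted (a well-known computation, not specific to our setting): on flat $\mathbb{R}^n$ the Weitzenb\"ock formula reduces to $\slashed{\partial}^2=-\Delta$, so applying the Dirac operator to the fundamental solution of the Laplacian produces a fundamental solution of $\slashed{\partial}$ itself; up to a dimensional constant it acts by Clifford multiplication, \begin{equation} E(x)=c_n\,\frac{x}{|x|^{n}}\,\cdot\,, \qquad |E(x)|\le \frac{C}{|x|^{n-1}}. \end{equation} It is precisely this kernel bound of order $|x|^{1-n}$ that brings the Riesz potential $I_1$ into the proof below.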
\begin{proof} Applying $\slashed{\partial}$ to~\eqref{nonlinear dirac system-2}, we have, for $1\le i\le K$, \begin{equation} -\Delta \varphi^i=\slashed{\partial}^2 \varphi^i=\slashed{\partial}(A^i_j\varphi^j+B^i) \end{equation} in the sense of distributions. Let $x_0\in B_1 $, $|x_0|<1$, and let $0<R<1-|x_0|$. Take a cutoff function $\eta\in C_0^\infty(B_{R}(x_0))$ such that $0 \le \eta \le 1$ and $\eta\equiv 1$ on $B_{ R/2}(x_0)$. For each $1\le i\le K$, define $g^i\colon \mathbb{R}^n \to \mathbb{C}^L$ by \begin{equation}\label{good part-2} g^i(x) =\int_{\mathbb{R}^n} \frac{\partial G(x,y)}{\partial y^\alpha} \frac{\partial}{\partial y^\alpha} \cdot \big(\eta^2( A^i_j\varphi^j+B^i)\big)(y) \mathop{}\!\mathrm{d} y \end{equation} where $G(x,y)$ is the fundamental solution of $\Delta$ on $\mathbb{R}^n$. Thus \begin{equation} \begin{split} -\Delta g^i &= \slashed{\partial} \big(\eta^2 (A^i_j\varphi^j+B^i)\big) \\ &= \slashed{\partial} \big( A^i_j\varphi^j+B^i\big) \quad \quad \text{in \(B_{R/2}(x_0)\)}. \end{split} \end{equation} Setting $h^i\coloneqq\varphi^i-g^i$, we see that $h^i$, $1\le i\le K$, are harmonic in $B_{R/2}(x_0)$: \begin{equation} \Delta h^i=0 \quad \text{in \(B_{R/2}(x_0)\)}. \end{equation} Note that \begin{equation} |g^i(x)|\le C\int_{\mathbb{R}^n}\frac{1}{|x-y|^{n-1}}(\eta^2 |A^i_j\varphi^j+B^i|) \mathop{}\!\mathrm{d} y=C I_1(\eta^2 (A\varphi+B)), \end{equation} where $I_1$ is the Riesz potential operator. By Adams' inequality~\cite[Theorem 3.1]{adams1975note}, for $1<q<\lambda\le n$, \begin{equation} \|I_1(\eta^2 (A\varphi+B))\|_{\MS{\frac{\lambda q}{\lambda-q},\lambda}(\mathbb{R}^n)} \le C\|\eta^2 |A\varphi+B|\|_{\MS{q,\lambda}(\mathbb{R}^n)}. 
\end{equation} \textbf{\underline{Step 1:}} By hypothesis we have \begin{equation} \begin{split} \|\eta^2(A\varphi+B)\|_{\MS{\frac{4}{3},2}(\mathbb{R}^n)} & \le \|(\eta A)(\eta\varphi)\|_{\MS{\frac{4}{3},2}(\mathbb{R}^n)}+\|\eta^2 B\|_{\MS{\frac{4}{3},2}(\mathbb{R}^n)} \\ & \le \|\eta A\|_{\MS{2,2}(\mathbb{R}^n)} \|\eta \varphi\|_{\MS{4,2}(\mathbb{R}^n)} +\|\eta^2 B\|_{\MS{\frac{4}{3},2}(\mathbb{R}^n)} \\ & \le \|A\|_{\MS{2,2}(B_R(x_0))} \|\varphi\|_{\MS{4,2}(B_R(x_0))}+\|B\|_{\MS{\frac{4}{3},2}(B_R(x_0))}\\ & \le \|A\|_{\MS{2,2}(B_R(x_0))} \|\varphi\|_{\MS{4,2}(B_R(x_0))}+C R^{\frac{1}{2}}\|B\|_{\MS{2,2}(B_R(x_0))}. \end{split} \end{equation} With $q=\frac{4}{3}$ and $\lambda=2$, so that $\frac{\lambda q}{\lambda-q}=4$, we get \begin{equation} \begin{split} \|g\|_{\MS{4,2}(\mathbb{R}^n)}&\le C\|I_1(\eta^2 (A\varphi+B))\|_{\MS{4,2}(\mathbb{R}^n)} \le C\|\eta^2(A\varphi+B)\|_{\MS{\frac{4}{3},2}(\mathbb{R}^n)} \\ &\le C\varepsilon_0 \|\varphi\|_{\MS{4,2}(B_R(x_0))}+C R^{\frac{1}{2}}|B|, \end{split} \end{equation} where we have denoted $|B|\equiv \|B\|_{\MS{2,2}(B_1)}$. Note that $|h^i|^4$ is subharmonic in $B_{R/2}(x_0)$: \begin{equation} \Delta |h^i|^4=\Delta{(h^i\overline{h^i})}^2=2|\nabla(h^i\overline{h^i})|^2+2|h^i|^2\big((\Delta h^i)\overline{h^i}+2|\nabla h^i|^2+ h^i\overline{\Delta h^i}\big)\ge0 \end{equation} since $\Delta h^i=0$. Hence $\fint_{B_r(x)}|h^i|^4\mathop{}\!\mathrm{d} y$ is a nondecreasing function in $r$, which implies that for any $1\le i\le K$ and any $\theta\in (0,1/6)$, \begin{equation} \|h^i\|_{\MS{4,2}(B_{\theta R}(x_0))} \le {(4\theta)}^{1/2}\|h^i\|_{\MS{4,2}(B_{R/2}(x_0))}.
\end{equation} Recalling $\varphi^i=g^i+h^i$, we get \begin{equation} \begin{split} \|\varphi\|_{\MS{4,2}(B_{\theta R}(x_0))} &\le \|g\|_{\MS{4,2}(B_{\theta R}(x_0))}+\|h\|_{\MS{4,2}(B_{\theta R}(x_0))} \\ &\le C\varepsilon_0 \|\varphi\|_{\MS{4,2}(B_{R}(x_0))} +C|B| R^{\frac{1}{2}} +2\theta^{1/2}\|h\|_{\MS{4,2}(B_{R/2}(x_0))} \\ &\le C\varepsilon_0 \|\varphi\|_{\MS{4,2}(B_{R}(x_0))} +C|B| R^{\frac{1}{2}} +2\theta^{1/2}\big(\|\varphi\|_{\MS{4,2}(B_{R/2}(x_0))}+\|g\|_{\MS{4,2}(B_{R/2}(x_0))} \big) \\ &\le C_0(\varepsilon_0+\theta^{1/2})\|\varphi\|_{\MS{4,2}(B_{R}(x_0))}+C|B|R^{\frac{1}{2}}. \end{split} \end{equation} Fix any $\beta\in (0,\frac{1}{2})$; we can then find $\theta\in(0,\frac{1}{2})$ such that $2C_0 \theta^{1/2} \le \theta^\beta$. Then take $\varepsilon_0$ small enough that $2C_0 \varepsilon_0\le \theta^\beta$. With such a choice we have \begin{equation}\label{iteration inequality-2} \|\varphi\|_{\MS{4,2}(B_{\theta R}(x_0))}\le \theta^\beta \|\varphi\|_{\MS{4,2}(B_{R}(x_0))}+C|B|R^{\frac{1}{2}}. \end{equation} Note that~\eqref{iteration inequality-2} holds for any $0<R<1-|x_0|$. Thus we can start the following iteration procedure. Let $R<1-|x_0|$. Then for any $0<r< R$, there exists a unique $k\in\mathbb{N}$ such that $\theta^{k+1}R <r\le \theta^k R$. (The case $k=0$ is trivial, and we may thus assume $k\ge 1$).
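For concreteness (an illustrative choice only, not needed for the argument): taking $\beta=\frac{1}{4}$, the condition $2C_0\theta^{1/2}\le \theta^{\beta}$ becomes $\theta^{1/4}\le (2C_0)^{-1}$, i.e., \begin{equation} \theta\le (2C_0)^{-4}, \qquad \varepsilon_0\le \frac{\theta^{1/4}}{2C_0}, \end{equation} where the second inequality guarantees $2C_0\varepsilon_0\le\theta^{\beta}$.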
Hence we have \begin{equation} \begin{split} \|\varphi\|_{\MS{4,2}(B_{r}(x_0))} & \le \|\varphi\|_{\MS{4,2}(B_{\theta^k R}(x_0))} \le \theta^\beta\|\varphi\|_{\MS{4,2}(B_{\theta^{k-1}R}(x_0))}+C|B|(\theta^{k-1}R)^{\frac{1}{2}} \\ & \le \theta^{2\beta} \|\varphi\|_{\MS{4,2}(B_{\theta^{k-2}R}(x_0))} +C|B|[\theta^\beta(\theta^{k-2}R)^{\frac{1}{2}}+(\theta^{k-1}R)^{\frac{1}{2}}]\\ & \le \theta^{k\beta} \|\varphi\|_{\MS{4,2}(B_{R}(x_0))} +C|B|R^{\frac{1}{2}}\theta^{(k-1)\beta}[1+\theta^{\frac{1}{2}-\beta}+\cdots+\theta^{(\frac{1}{2}-\beta)(k-1)}]\\ & \le \frac{1}{\theta^\beta} \theta^{(k+1)\beta} \|\varphi\|_{\MS{4,2}(B_{R}(x_0))} +\frac{C|B|R^{\frac{1}{2}-\beta}}{\theta^{2\beta}} \frac{1-\theta^{(\frac{1}{2}-\beta)k}}{1-\theta^{\frac{1}{2}-\beta}} (\theta^{k+1}R)^\beta \\ & \le \frac{1}{\theta^\beta} \big(\frac{r}{R}\big)^\beta \|\varphi\|_{\MS{4,2}(B_{R}(x_0))} +\frac{C|B|}{\theta^{2\beta}-\theta^{\frac{1}{2}+\beta}} r^\beta \end{split} \end{equation} where we used $R\le 1$ in the last inequality. In particular this implies that \begin{equation} \Big(\frac{1}{r^{n-2+4\beta}}\int_{B_r(x_0)}|\varphi|^4\mathop{}\!\mathrm{d} y\Big)^{\frac{1}{4}} \le \frac{1}{(\theta R)^{\beta}} \|\varphi\|_{\MS{4,2}(B_1)} +\frac{C|B|}{\theta^{2\beta}-\theta^{\frac{1}{2}+\beta}}. \end{equation} If we restrict to $|x_0|<\frac{1}{4}$ and $R=\frac{1}{2}$, we see that $\varphi \in \MS{4,2-4\beta}(B_{\frac{1}{4}})$, with \begin{equation} \|\varphi\|_{\MS{4,2-4\beta}(B_{1/4})} \le C \|\varphi\|_{\MS{4,2}(B_1)}+C\|B\|_{\MS{2,2}(B_1)} \end{equation} for some constant $C=C(n,\beta)$. \textbf{\underline{Step 2:}} We improve the integrability. Let $|x_0|<\frac{1}{4}$ and $0<R<\frac{1}{4}-|x_0|$. Take a cutoff function $\eta \in C_0^\infty(B_R(x_0))$ and define $g^i, h^i$ as before.
Note that \begin{equation} \begin{split} \|\eta^2 (A\varphi+B)\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(\mathbb{R}^n)} & \le \|\eta^2 A\varphi\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(\mathbb{R}^n)} +\|\eta^2 B\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(\mathbb{R}^n)} \\ & \le \|\eta A\|_{\MS{2,2}(\mathbb{R}^n)} \|\eta\varphi\|_{\MS{4,2-4\beta}(\mathbb{R}^n)} +\|\eta^2 B\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(\mathbb{R}^n)} \\ & \le \|A\|_{\MS{2,2}(B_R(x_0))} \|\varphi\|_{\MS{4,2-4\beta}(B_R(x_0))} +\| B\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(B_R(x_0))} \\ & \le \|A\|_{\MS{2,2}(B_R(x_0))} \|\varphi\|_{\MS{4,2-4\beta}(B_R(x_0))}+C\| B\|_{\MS{2,2}(B_R(x_0))}R^{\frac{1}{2}-\beta}\\ & \le \varepsilon_0\|\varphi\|_{\MS{4,2-4\beta}(B_R(x_0))}+C|B|R^{\frac{1}{2}-\beta}. \end{split} \end{equation} With $q=\frac{4}{3}$ and $\lambda=2-\frac{4\beta}{3}$ (note that we need $1<q<\lambda\le n$, which requires $\beta<\frac{1}{2}$), we see that $\frac{\lambda q}{\lambda-q}=\frac{4(3-2\beta)}{3-6\beta}$, and \begin{equation} \begin{split} \|g^i\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(\mathbb{R}^n)} & \le C\|I_1(\eta^2 (A\varphi+B))\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(\mathbb{R}^n)} \le C\|\eta^2 (A\varphi+B)\|_{\MS{\frac{4}{3}, 2-\frac{4\beta}{3}}(\mathbb{R}^n)} \\ & \le C\varepsilon_0\|\varphi\|_{\MS{4,2-4\beta}(B_R(x_0))}+C|B|R^{\frac{1}{2}-\beta}. \end{split} \end{equation} Again, $h^i$ is harmonic in $B_{R/2}(x_0)$ in the sense of distributions and $h^i\in L^4(B_{R/2}(x_0))$; by Weyl's lemma it is smooth in $B_{R/2}(x_0)$, see e.g.~\cite[Corollary 1.2.1]{jost2013partial}. By shrinking the radius $R$ a little, we may assume $h^i\in L^{\infty}(B_{R/2}(x_0))$.
Actually, by the mean value property together with interior estimates for harmonic functions, one has, for any $R'<R$, \begin{equation} \begin{split} \|h\|_{L^{\infty}(B_{R'/2}(x_0))} \le & C(R,R',n)\|h\|_{L^1(B_{R/2}(x_0))} \\ \le & C\big(\|\varphi\|_{\MS{4,2}(B_1)} +\|B\|_{\MS{2,2}(B_1)} \big). \end{split} \end{equation} Thus if we restrict to $|x_0|\le \frac{1}{16}$ and $R=\frac{1}{8}$, we see that $h^i\in \MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}})$. By elliptic theory, $\|h\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}})}$ can be controlled by $\|\varphi\|_{\MS{4,2}(B_1)}$ and $\|B\|_{\MS{2,2}(B_1)}$. Finally recall that \begin{equation} \varphi^i=h^i+g^i. \end{equation} It follows that \begin{equation}\label{higher Morrey norm of psi-2} \begin{split} \|\varphi\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}})} \le & \|g\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}})} + \|h\|_{\MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}})} \\ \le & C(\beta,n)\big(\|\varphi\|_{\MS{4,2}(B_1)}+\|B\|_{\MS{2,2}(B_1)}\big). \end{split} \end{equation} \textbf{\underline{Step 3:}} We note that~\eqref{higher Morrey norm of psi-2} holds for any given $0<\beta<\frac{1}{2}$. Since \begin{equation} \lim_{\beta \nearrow \frac{1}{2}} \frac{4(3-2\beta)}{3-6\beta}=+\infty, \end{equation} and \begin{equation} \MS{\frac{4(3-2\beta)}{3-6\beta},2-\frac{4\beta}{3}}(B_{\frac{1}{16}}) \hookrightarrow L^{\frac{4(3-2\beta)}{3-6\beta}}(B_{\frac{1}{16}}), \end{equation} we conclude that $\varphi\in L^p(B_{\frac{1}{16}})$ for any $4<p<+\infty$ and \begin{equation} \|\varphi\|_{L^p(B_{\frac{1}{16}})} \le C(n,p)\big(\|\varphi\|_{\MS{4,2}(B_1)}+\|B\|_{\MS{2,2}(B_1)}\big). \end{equation} This completes the proof of the lemma. \end{proof} Finally note that in the 2-dimensional case, \begin{equation} \MS{2,2}(B_1)=L^2(B_1), \quad \MS{4,2}(B_1)=L^4(B_1).
\end{equation} So Lemma~\ref{reg for Dirac} follows from Lemma~\ref{Morrey type regularity}.
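The identifications used in this last step can be checked directly from the definition of the Morrey norm: for $n=2$ and $\lambda=2$ the weight $r^{\lambda}/r^{n}$ equals $1$, so \begin{equation} \|u\|_{\MS{p,2}(B_1)}=\sup_{x\in B_1,\;r>0} \Big(\int_{B_r(x)\cap B_1} |u(y)|^p \mathop{}\!\mathrm{d} y \Big)^{1/p}=\|u\|_{L^p(B_1)} \end{equation} for every $1\le p<\infty$; taking $p=2$ and $p=4$ gives the two equalities above.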
\section{Introduction}\label{section:introduction} The discovery of binary asteroids has led to the consideration of dynamical models formed by four bodies, two of them being typically the Sun and Jupiter. Among possible four-body models (see also \cite{HOWELL1986,scheeres1998restricted,gabern2003restricted,scheeres2005restricted,Alvarez2009,Delgado_2012, burgos2013blue,Burgos_2016,Kepley_James_2017}), a relevant role is played by the models in which three bodies lie on a triangular central configuration. Given that asteroids often have a (very) irregular shape, it is useful to start by investigating the case in which one body has an oblate shape. Among the different questions that this model may raise, we concentrate on the existence of equilibrium points and the corresponding linear stability analysis. Within such a framework, we consider a simplified four-body model and we concentrate on the specific example given by the Trojan asteroid 624 Hektor, which is located close to the Lagrangian point $L_4$ of the Sun-Jupiter system, and its small moonlet. In our model, the Sun, Jupiter and Hektor form an isosceles triangle (nearly equilateral) whose shape remains unchanged over time. The small body represents Hektor's moonlet Skamandrios. Obviously, we could also replace the moonlet by a spacecraft orbiting Hektor. The system Sun-Jupiter-Hektor-Skamandrios plays a relevant role for several reasons. Indeed, Hektor is the largest Jupiter Trojan, it has one of the most elongated shapes among the bodies of its size in the Solar system, and it is the only known Trojan to possess a moonlet (see, e.g., \cite{Dvorak1} for stability regions of Trojans around the Earth and \cite{LC} for dissipative effects around the triangular Lagrangian points). The study of asteroids with satellites is of special interest for planetary dynamics, as they provide constraints on the formation and evolution of the Solar system.
Another motivation to study the dynamics of a small body near a Trojan asteroid comes from astrodynamics, as NASA prepares Lucy, the first mission to Jupiter's Trojans, which is planned to be launched in October 2021 and to visit seven different asteroids: a Main Belt asteroid and at least five Trojans. As a model for the Sun-Jupiter-Hektor dynamics, we consider a system of three bodies of masses $m_1 \geq m_2 \geq m_3$, which move in circular orbits under mutual gravity, and form a triangular central configuration. We refer to these bodies as the primary, the secondary, and the tertiary, respectively. We assume that the first two bodies of masses $m_1, m_2$ are spherical and homogeneous, so they can be treated as point masses, while the third body of mass $m_3$ is oblate. We describe the gravitational potential of $m_3$ in terms of spherical harmonics, and we only retain the most significant ones. We show the existence of a corresponding triangular central configuration, which turns out to be an isosceles triangle; if the oblateness of the mass $m_3$ is set to zero, the central configuration becomes the well-known equilateral triangle Lagrangian central configuration. We stress that when $m_3$ is oblate, the central configuration is not the same as in the non-oblate case, since the overall gravitational field of $m_3$ is no longer Newtonian; it is well known that central configurations depend on the nature of the gravitational field (see, e.g., \cite{Corbera2004,Arredondo_Perez-Chavela,diacu2016central,Martinez2017}). We note that there exist papers in the literature (e.g., \cite{Asique2016}) that consider systems of three bodies, with one of the bodies non-spherical, which are assumed to form an equilateral triangle central configuration. Such an assumption, while it may lead to very good approximations, is not physically correct.
The moonlet Skamandrios is represented by a fourth body, of infinitesimal mass, which moves in a vicinity of $m_3$ under the gravitational influence of $m_1, m_2, m_3$, but without affecting their motion. We consider the motion of the infinitesimal mass as taking place in the three-dimensional space; it is not restricted to the plane of motion of the three heavy bodies. This situation is referred to as the spatial circular restricted four-body problem, and can be described by an autonomous Hamiltonian system with $3$ degrees of freedom. We `zoom in' on the dynamics in a small neighborhood of $m_3$ by performing a Hill's approximation of the restricted four-body problem. This is done by rescaling the coordinates in terms of $m_3^{1/3}$, writing the associated Hamiltonian in the rescaled coordinates as a power series in $m_3^{1/3}$, and neglecting all the terms of order $O(m_3^{1/3})$ in the expansion, since such terms are small when $m_3$ is small. This yields an approximation of the motion of the massless particle in an $O(m_3^{1/3})$-neighborhood of $m_3$, while $m_1$ and $m_2$ are `sent to infinity' through the rescaling. This model is an extension of the classical lunar Hill problem \cite{Hill}. Since the tertiary is assumed to be oblate, and the corresponding central configuration formed by the three heavy bodies is no longer an equilateral triangle, this model also extends Hill's approximation of the restricted four-body problem developed in \cite{Burgos_Gidea}. The Hill approximation is more advantageous than the restricted four-body problem for this system, since it allows for an analytical treatment, and yields more accurate numerical implementations when realistic parameters are used. The main numerical difficulty in the restricted four-body problem is the large difference of scales among the relevant parameters, i.e.,
the mass of Hektor is much smaller than the masses of the other two heavy bodies. The rescaling of the coordinates involved in the Hill approximation reduces the difference of scales of the parameters to more manageable quantities; more precisely, in normalized units the oblateness effect in the restricted four-body problem is of the order $O(10^{-15})$, while in the Hill approximation it is of the order $O(10^{-7})$ (see Section \ref{sec:Hill_system} for details). Once we have established the model for the Hill four-body problem with oblate tertiary, we study the equilibrium points and their linear stability. We find that there are pairs of symmetric equilibrium points on the $x$-, $y$-, and $z$-coordinate axes. The equilibrium points on the $x$- and $y$-coordinate axes are a continuation of the corresponding ones for the Hill four-body problem with non-oblate tertiary \cite{Burgos_Gidea}. The equilibrium points on the $z$-coordinate axis constitute a new feature of the model. In the case of Hektor, these equilibrium points turn out to be outside the body of the asteroid but very close to the surface, so they are of potential interest for low-altitude orbit space missions, such as NASA/JPL's Dawn mission around Vesta \cite{delsate2011analytical}. This work is organized as follows. In Section \ref{sec:model} we describe in full detail the restricted four-body model in which the tertiary is oblate; in particular, we describe the isosceles triangle central configuration of three bodies in which two bodies are point masses and the third is oblate. Hill's approximation is introduced in Section \ref{sec:Hill}. The determination of the equilibria and their stability is given in Section \ref{sec:linear}.
\section{Restricted four-body problem with oblate tertiary}\label{sec:model} In this section we develop a model for a restricted four-body problem, which consists of two bigger bodies (e.g., the Sun and Jupiter), a smaller body -- called tertiary -- with oblate shape (e.g., an asteroid), and an infinitesimal mass (e.g., a moonlet) around the tertiary. As mentioned in Section \ref{section:introduction}, we consider the three masses $m_1 \geq m_2 \geq m_3$ as moving under their mutual gravitational attraction; the bodies with masses $m_1$ and $m_2$ are considered as point masses, while $m_3$ is the oblate body. We normalize the units of mass so that $m_1+m_2+m_3=1$. We assume that the bodies with masses $m_1$, $m_2$, $m_3$ move on a triangular central configuration, which will be determined in Section~\ref{sec:central_config}, once the gravitational field of the oblate body has been discussed in Section~\ref{sec:nonspheriacal}. We will concentrate on the specific example given by the asteroid Hektor and its moon Skamandrios, where Hektor moves on a central configuration with Jupiter and the Sun. Orbital and physical values are given in Section~\ref{section:data}. The positions of the three main bodies in the triangular central configuration are computed in Section~\ref{sec:location_central_config}, while the equations of motion of the moonlet -- with infinitesimal mass moving in the vicinity of $m_3$ -- are given in Section~\ref{sec:4BP}. \subsection{Data on the Sun-Jupiter-Hektor-Skamandrios system}\label{section:data} The models developed below will be applied to the case of the Sun-Jupiter-Hektor-Skamandrios system. We extract the data for this system from \cite{JPL,Marchis,DESCAMPS2015}. Hektor is approximately located at the Lagrangian point $L_4$ of the Sun-Jupiter system.
According to \cite{DESCAMPS2015}, Hektor is approximately $416 \times 131 \times 120$ km in size, and its shape can be approximated by a dumb-bell figure; the equivalent radius (i.e., the radius of a sphere with the same volume as the asteroid) is $R_H=92$ km\footnote{Note that \cite{DESCAMPS2015} claims that there are some typos in the values reported in \cite{Marchis}.}. Hektor spins very fast, with a rotation period of approximately $6.92$ hours (see the JPL Solar System Dynamics archive \cite{JPL}). The moonlet Skamandrios orbits around Hektor at a distance of approximately $957.5$ km, with an orbital period of $2.965079$ days; see \cite{DESCAMPS2015}. Its orbit is highly inclined, at approximately $50.1^\circ$ with respect to the orbit of Hektor, which justifies choosing as a model the spatial restricted four-body problem rather than the planar one; see \cite{Marchis}. We also note that the inclination of Hektor is approximately $18.17^\circ$ (see \cite{JPL}). Although a more refined model should include a non-zero inclination, we will consider that the Sun, Jupiter and Hektor move in the same plane, an assumption that is needed in order for the three bodies to form a central configuration. We will further assume that the axis of rotation of Hektor is perpendicular to the plane of motion. For the masses of the Sun, Jupiter and Hektor we use the values $m_1= 1.989\times10^{30}$ kg, $m_2=1.898\times10^{27}$ kg, and $m_3=7.91\times10^{18}$ kg, respectively. For the average Sun-Jupiter distance we use the value $778.5\times 10^6$ km. In Figure \ref{fig:Hektor_forces} we provide a comparison between the strengths of the different forces acting on the moonlet: the Newtonian gravitational attraction of Hektor, the Sun, and Jupiter, and the effect of the non-spherical shape of the asteroid, limited to the so-called $J_2$ coefficient, which will be introduced in Section \ref{sec:nonspheriacal}.
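The hierarchy of perturbations shown in Figure \ref{fig:Hektor_forces} can be reproduced by a rough order-of-magnitude estimate (a sketch, not the computation used for the figure: the tidal accelerations of the Sun and Jupiter are approximated by $2\mathcal{G}Mr/d^3$, with both heavy bodies at distance $d\approx 778.5\times 10^6$ km from Hektor since the configuration is close to equilateral, and the value $J_2\approx 0.48$ anticipates Section \ref{sec:nonspheriacal}):

```python
G = 6.674e-20  # gravitational constant, km^3 kg^-1 s^-2

m_sun, m_jup, m_hektor = 1.989e30, 1.898e27, 7.91e18   # masses, kg
d = 778.5e6    # distance of the Sun and of Jupiter from Hektor, km
R_H = 92.0     # equivalent radius of Hektor, km
J2 = 0.48      # zonal harmonic of Hektor (value anticipated, see next section)
r = 957.5      # distance of the moonlet Skamandrios from Hektor, km

# monopole attraction of Hektor on the moonlet
a_kepler = G * m_hektor / r**2
# leading oblateness correction, of relative size (3/2) J2 (R_H/r)^2
a_J2 = 1.5 * J2 * (R_H / r)**2 * a_kepler
# tidal (differential) accelerations of the Sun and of Jupiter
a_sun = 2 * G * m_sun * r / d**3
a_jup = 2 * G * m_jup * r / d**3

for name, a in [("Hektor Gm", a_kepler), ("J2", a_J2),
                ("Sun tide", a_sun), ("Jupiter tide", a_jup)]:
    print(f"{name:12s} {a:.2e} km/s^2")

# same ordering as in the figure at the distance of the moonlet
assert a_kepler > a_J2 > a_sun > a_jup
```

At the distance of Skamandrios the monopole term dominates the $J_2$ term by roughly two orders of magnitude, consistent with the figure.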
\begin{figure} \includegraphics[width=0.8\textwidth]{Hektor_perturbations_1-eps-converted-to.pdf} \caption{Order of magnitude of the different perturbations acting on the moonlet as a function of its distance from Hektor. The terms Gm, Sun and Jupiter denote, respectively, the monopole term of the gravitational influence of Hektor, the attraction of the Sun and that of Jupiter. $J_2$ represents the perturbation due to the non-spherical shape of Hektor. The actual distance of the moonlet is indicated by a vertical line.} \label{fig:Hektor_forces} \end{figure} \subsection{The gravitational field of a non-spherical body}\label{sec:nonspheriacal} We first consider that the tertiary body, representing Hektor, has a general (non-spherical) shape. The gravitational potential, relative to a reference frame centered at the barycenter of the tertiary and rotating with the body, is given in spherical coordinates $(r,\phi,\lambda)$ by (see, e.g., \cite{celletti2018dynamics}): \begin{equation}\label{potential} V(r,\phi,\lambda)={{\mathcal{G} m_H}\over r}\ \sum_{n=0}^\infty \Bigl({R_H\over r}\Bigr)^n\ \sum_{m=0}^n P_{nm}(\sin\phi)\ (C_{nm}\cos m\lambda+ S_{nm}\sin m\lambda)\ , \end{equation} where $\mathcal{G}$ is the gravitational constant, $m_H$ is the mass of Hektor, $R_H$ is its average radius, $P_{nm}$ are the associated Legendre polynomials, defined in terms of the Legendre polynomials $P_n$ by \begin{equation*}\begin{split} P_n(x)&= {1\over {2^n n!}}\ {{d^n}\over {dx^n}}(x^2-1)^n\\ P_{nm}(x)&=(1-x^2)^{m\over 2}\ {{d^m}\over {dx^m}}P_n(x)\ , \end{split} \end{equation*} and $C_{nm}$ and $S_{nm}$ are the spherical harmonics coefficients.
In the case of an ellipsoid of semi-axes $a\geq b\geq c$, we have the following explicit formulas (\cite{Boyce1997}): \begin{eqnarray*} S_{n,m}&=&0,\\ C_{2p+1,2q}&=&0,\\ C_{2p,2q+1}&=&0,\\ C_{2p,2q}&=&\displaystyle{3\over {R_H^{2p}}} {{p!(2p-2q)!}\over {2^{2q}(2p+3)(2p+1)!}} (2-\delta_{0q})\\ &&\displaystyle\sum_{i=0}^{\lfloor{{p-q}\over 2}\rfloor} {{(a^2-b^2)^{q+2i}[c^2-{1\over 2}(a^2+b^2)]^{p-q-2i}}\over {16^i(p-q-2i)!(p+q)!i!}}. \end{eqnarray*} In particular, $C_{20}$ and $C_{22}$ turn out to be given by the simple expressions \begin{eqnarray} C_{20}&=&\frac{c^2-\frac{a^2}{2}-\frac{b^2}{2}}{5R_H^2},\\ C_{22}&=&\frac{\frac{a^2}{4}-\frac{b^2}{4}}{5R_H^2}. \end{eqnarray} For the Sun-Jupiter-Hektor data we take $a=208$ km, $b=65.5$ km, $c=60$ km, $R_H=92$ km, following \cite{DESCAMPS2015} (see Section \ref{section:data}), and we calculate the following coefficients: \[\begin{tabular}{llll} $C_{20}=-0.476775$; & $C_{22}=0.230232 $; & & \\ $C_{40}=0.714275$; & $C_{42}=-0.078406 $; & $C_{44}=0.009465 $; &\\ $C_{60}=-1.54769$; & $C_{62}=0.076832 $; &$C_{64}=-0.002507 $; & $C_{66}=0.000201 $.\\ \end{tabular}\] Notice that each term $C_{2p,2q}$ is multiplied in \eqref{potential} by the factor $R_H^{2p}/r^{2p+1}$. For $r$ equal to the average distance from the moonlet to the asteroid, we have $R_H/r\approx 0.096$. Therefore, in the following we will ignore the effect of the coefficients $C_{2p,2q}$ with $p\geq 2$, which are at least of order $O(R_H^4/r^5)$. Note that the value of $C_{20}$ computed above is significantly bigger in absolute value than the one reported in \cite{Marchis}, which equals $-0.15$. The reason is that we use different estimates for the size of Hektor, following \cite{DESCAMPS2015} (see Section \ref{section:data}).
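As a quick check, the quoted values of $C_{20}$ and $C_{22}$ follow directly from the closed-form expressions above (a minimal numerical sketch):

```python
# Leading spherical-harmonics coefficients of a homogeneous ellipsoid
# with semi-axes a >= b >= c and reference radius R.
def C20(a, b, c, R):
    return (c**2 - a**2 / 2 - b**2 / 2) / (5 * R**2)

def C22(a, b, c, R):
    return (a**2 / 4 - b**2 / 4) / (5 * R**2)

# semi-axes and equivalent radius of Hektor, in km
a, b, c, R_H = 208.0, 65.5, 60.0, 92.0
print(f"C20 = {C20(a, b, c, R_H):.6f}")   # -0.476775
print(f"C22 = {C22(a, b, c, R_H):.6f}")   #  0.230232
```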
If we consider a frame centered at the barycenter of the tertiary, and rotating with the angular velocity of the tertiary about the primary, the time-dependent gravitational potential takes the form \begin{equation}\label{potential_rot2} V(r,\phi,\lambda)={{\mathcal{G} m_H}\over r}\ \sum_{n=0}^{2} \Bigl({R_H\over r}\Bigr)^n\ \sum_{m=0}^n P_{nm}(\sin\phi)\ C_{nm}\cos (m(\lambda+\Theta t)), \end{equation} where $\Theta$ represents the frequency of the spin of Hektor. For $n=2$, $m=0$ the corresponding term $C_{nm}\cos (m(\lambda+\Theta t))$ in the summation \eqref{potential_rot2} is equal to $C_{20}$ and is independent of time; for $n=2$, $m=2$ the corresponding term $C_{nm}\cos (m(\lambda+\Theta t))$ is equal to $C_{22}\cos (2(\lambda+\Theta t))$, so it is time-dependent. We do not consider the other terms in the sum \eqref{potential_rot2}. Since the ratio of the rotation period of Hektor to the orbital period of the moonlet is relatively small, approximately $0.09740991$, in this paper we will only consider the average effect of $C_{22}\cos (2(\lambda+\Theta t))$ on the moonlet, which is zero. In conclusion, in the model below we will only consider the effect of $C_{20}:=-J_2<0$, which amounts to approximating Hektor as an oblate body (i.e., an ellipsoid of revolution obtained by rotating an ellipse about its minor axis); the dimensionless quantity $J_2$ is referred to as the {\sl zonal harmonic} in the gravitational potential. The term corresponding to $C_{20}$ is the largest one, followed by that corresponding to $C_{22}$; however, since the $C_{22}$ term introduces a time dependence, thus further complicating the model, we start by disregarding it and plan to study its effect in a future work.
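The vanishing of the average of the $C_{22}$ term can be checked directly: for any fixed $\lambda$, the mean of $\cos(2(\lambda+\Theta t))$ over one period of this factor is zero (a minimal numerical sketch; the value of $\lambda$ is arbitrary):

```python
import math

Theta = 2 * math.pi / (6.92 * 3600)   # spin rate of Hektor, rad/s
lam = 0.7                              # arbitrary fixed longitude, rad
T = math.pi / Theta                    # period of cos(2(lam + Theta t))

# discrete average of the C22 angular factor over one of its periods
N = 10_000
avg = sum(math.cos(2 * (lam + Theta * (k * T / N))) for k in range(N)) / N
print(f"average of cos(2(lam + Theta t)) = {avg:.2e}")
assert abs(avg) < 1e-10
```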
\subsection{Central configurations for the three-body problem with one oblate body} \label{sec:central_config} We now consider only the three heavy bodies, of masses $m_1 \geq m_2 \geq m_3$, with the body of mass $m_3$ being oblate, in which case we only take into account the term corresponding to $C_{20}=-J_2$ in \eqref{potential}. We write the approximation of the gravitational potential of the tertiary \eqref{potential_rot2} in both Cartesian and spherical coordinates (in the frame of the tertiary and rotating with the body): \begin{equation}\begin{split} \label{eqn:C20} V(x,y,z)&=\frac{m_3}{r}- \frac{m_3}{r} \left(\frac{R_3}{r}\right)^2 \left(\frac{J_2}{2}\right) \left (3 \left (\frac{z}{r}\right)^2 -1\right)\\ &=\frac{m_3}{r}+ \frac{m_3}{r} \left(\frac{R_3}{r}\right)^2 \left(\frac{C_{20}}{2}\right)\left (3 \sin^2\phi -1\right), \end{split} \end{equation} where $m_3$ is the normalized mass of Hektor (the sum of the three masses is the unit of mass), $R_3:=R_H$ is the average radius of Hektor in normalized units (the distance between Sun and Hektor is the unit of distance), the gravitational constant is normalized to $1$, and $\sin\phi=z/r$. We want to find the triangular central configurations formed by $m_1$, $m_2$, $m_3$; we will follow the approach in \cite{Arredondo_Perez-Chavela}. Since for a central configuration the three bodies lie in the same plane, in the gravitational field \eqref{eqn:C20} of $m_3$ we set $\phi=0$, obtaining \begin{equation} \label{eqn:C20plane} V(q)=\frac{m_3}{r}+ \frac{Cm_3}{r^3}, \end{equation} where $q=(x,y)$ is the position vector of an arbitrary point in the plane, $r=\|q\|$ is the distance from $m_3$, and we denote \begin{equation} \label{eqn:C_const} C=R_3^2J_2/2>0.\end{equation} Let $q_i$ be the position vector of the mass $m_i$, for $i=1,2,3$, in an inertial frame centered at the barycenter of the three bodies.
The equations of motion of the three bodies are \begin{equation}\label{eqn:3bp} \begin{split} m_1\ddot{q}_1&={m_1m_2(q_2-q_1)}\frac{1}{\|q_2-q_1\|^3}+{m_1m_3(q_3-q_1)}\left[\frac{1}{\|q_3-q_1\|^3}+\frac{3C}{\|q_3-q_1\|^5}\right],\\ m_2\ddot{q}_2&={m_2m_1(q_1-q_2)}\frac{1}{\|q_1-q_2\|^3}+{m_2m_3(q_3-q_2)}\left[\frac{1}{\|q_3-q_2\|^3}+\frac{3C}{\|q_3-q_2\|^5}\right],\\ m_3\ddot{q}_3&={m_3m_1(q_1-q_3)}\left[\frac{1}{\|q_1-q_3\|^3}+\frac{3C}{\|q_1-q_3\|^5}\right]+{m_3m_2(q_2-q_3)}\left[\frac{1}{\|q_2-q_3\|^3}+\frac{3C}{\|q_2-q_3\|^5}\right], \end{split} \end{equation} where the terms involving $C$ are due to \eqref{eqn:C20plane}, and the gravitational constant is normalized to $\mathcal{G}=1$. Denote $r_{ij}=\|q_i-q_j\|$, for $i\neq j$, ${\bf q}=(q_1,q_2,q_3)$, and ${\bf M}=\textrm{diag}(m_1, m_1,m_2 ,m_2, m_3, m_3)$ the $6\times 6$ matrix with $2$ copies of each mass along the diagonal. Then \eqref{eqn:3bp} can be written as \begin{equation}\label{eqn:v3bp} {{\bf M}}\ddot{{\bf q}}=\nabla U({\bf q}), \end{equation} where \begin{equation}\label{eqn:U} U({\bf q})={m_1m_2}\frac{1}{r_{12}}+{m_1m_3}\left(\frac{1}{r_{13}}+\frac{C}{r_{13}^3}\right)+ {m_2m_3}\left(\frac{1}{r_{23}}+\frac{C}{r_{23}^3}\right) \end{equation} is the potential for the three-body problem with oblate $m_3$. Let us assume that the center of mass is fixed at the origin, i.e., \begin{equation}\label{eqn:cm} {\bf M}{\bf q}= \sum_{i=1}^3 m_iq_i=0. \end{equation} We are interested in \emph{relative equilibrium} solutions for the motion of the three bodies, which are characterized by the fact that they become equilibrium points in a uniformly rotating frame. Denote by $R(\theta)$ the $6\times 6$ block diagonal matrix consisting of $3$ diagonal blocks of the form \[ \left( \begin{array}{rr} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \\ \end{array} \right)\in SO(2).
\] Substituting ${\bf q}(t)=R(\omega t){\bf z}(t)$ for some $\omega\in\mathbb{R}$ in \eqref{eqn:v3bp}, where ${\bf z}=(z_1,z_2,z_3)\in\mathbb{R}^6$, we obtain \begin{equation}\label{eqn:E} {\bf M}\left(\ddot {\bf z}+2\omega J\dot{\bf z}-\omega^2{\bf z}\right)=\nabla U({\bf z}), \end{equation} where we used the rotational invariance of $U$, and where $J$ is the block diagonal matrix consisting of $3$ diagonal blocks of the form \begin{equation}\label{eqn:eq_x} \left( \begin{array}{rr} 0 & -1 \\ 1 & 0 \\ \end{array} \right). \end{equation} The condition for an equilibrium point of \eqref{eqn:E}, i.e., $\dot{\bf z}=\ddot{\bf z}=0$, yields the algebraic equation \begin{equation}\label{eqn:CC1} \nabla U({\bf z})+\omega^2 {\bf M} {\bf z}=0. \end{equation} A solution ${\bf z}$ of the three-body problem satisfying \eqref{eqn:CC1} is referred to as a \emph{central configuration}. This is equivalent to $\ddot z_i=-\omega^2 z_i$, for $i=1,2,3$, meaning that the accelerations of the masses are proportional to the corresponding position vectors, and all accelerations point towards the center of mass. Thus, the solution ${\bf q}(t)$ is a relative equilibrium solution if and only if ${\bf q}(t)=R(\omega t){\bf z}(t)$ with ${\bf z}(t)$ being a central configuration solution, and the rotation $R(\omega t)$ being a circular solution of the Kepler problem. Let $I({\bf z})={\bf z} ^T {\bf M} {\bf z}=\sum_{i} m_i \|z_i\|^2$ be the moment of inertia. It is easy to see that this is a conserved quantity for the motion, that is, $I({\bf z}(t))=\bar I$ for some $\bar I$ at all $t$.
Using Lagrange's second identity (see, e.g., \cite{Gidea-Niculescu}), the fact that ${\bf M}{\bf z}=0$, and the normalization of the masses $\sum_{i=1}^3 m_i=1$, the moment of inertia can be written as: \begin{equation}\label{eqn:I}I({\bf z})=\sum_{1\leq i<j\leq 3} m_im_j \|z_i-z_j\|^2=\sum_{1\leq i<j\leq 3} m_im_j r_{ij}^2.\end{equation} Thus, central configurations correspond to critical points of the potential $U$ on the sphere ${\bf z}^T{\bf M}{\bf z}=\bar{I}$, which can be obtained by solving the Lagrange multiplier problem \begin{equation}\label{eqn:CC_LM_x} \nabla f({\bf z})=0,\qquad I({\bf z})-\bar{I}=0, \end{equation} where $f({\bf z})=U({\bf z})+\frac{1}{2}\omega ^2(I({\bf z})-\bar{I})$. In the above, we used the fact that $\nabla I({\bf z})=2{\bf M}{\bf z}$. We solve this problem in the variables $r_{ij}=\|z_i-z_j\|$ for $1\leq i<j\leq 3$, since both $U$ and $I$ can be written in terms of these variables. This reduces the dimension of the system \eqref{eqn:CC_LM_x} from $7$ equations to $4$ equations. Denote ${\bf r}=(r_{12}, r_{13},r_{23})$, and let $\tilde{f}({\bf r})$ be the function $f$ expressed in the variable ${\bf r}$, that is, $\tilde {f}({\bf r}({\bf z}))=f({\bf z})$. By the chain rule, $\nabla_r \tilde {f} \cdot \left (\frac{\partial{\bf r}}{\partial{\bf z}} \right )=\nabla_{\bf z} f({\bf z})$. It is easy to see that the rank of the matrix $\left (\frac{\partial{\bf r}}{\partial{\bf z}} \right )$ is maximal provided that $z_1,z_2,z_3$ are not collinear (for details, see \cite{Corbera2004,Arredondo_Perez-Chavela}). As we are looking for triangular central configurations, this condition is satisfied. Thus, $\nabla_r \tilde {f}({\bf r})=0$ if and only if $\nabla_{\bf z} f({\bf z})=0$. In other words, we can now solve the system \eqref{eqn:CC_LM_x} in the variable ${\bf r}$.
We obtain \begin{equation}\label{eqn:CC3} \begin{split} -\frac{1}{r_{12}^2}+\omega^2r_{12}=0,\\ -\frac{1}{r_{13}^2}-\frac{3C}{r_{13}^4}+\omega^2r_{13}=0,\\ -\frac{1}{r_{23}^2}-\frac{3C}{r_{23}^4}+\omega^2r_{23}=0,\\ m_1m_2r_{12}^2+m_1m_3r_{13}^2+m_2m_3r_{23}^2=\bar{I}. \end{split} \end{equation} Dividing the second and third equations in \eqref{eqn:CC3} by $r_{13}$ and $r_{23}$, respectively, they can be written as $h(r_{13})=h(r_{23})=-\omega^2$, where $h(r)=-\frac{1}{r^3}-\frac{3C}{r^5}$. Since $h$ has positive derivative $h'(r)=\frac{3}{r^4}+\frac{15C}{r^6}$ for $C,r>0$, it is injective. Thus, the second and third equation in \eqref{eqn:CC3} yield $r_{13}=r_{23}:=u$. Solving for $\omega$ in the first and second equation we obtain: \begin{equation}\label{eqn:omega} \omega=\sqrt{\frac{1}{r_{12}^3}}=\sqrt{\frac{1}{r_{13}^3}+\frac{3C}{r_{13}^5}}. \end{equation} Solving for $r_{12}$ yields: \begin{equation}\label{eqn:uv}r_{12}:=v=\left (\frac{u^5}{u^2+3C} \right)^{1/3}.\end{equation} Notice that $C>0$ implies $0<v<u$. The condition $I({\bf r})=\bar{I}$ yields \[m_1m_2 \left (\frac{u^5}{u^2+3C} \right)^{2/3}+(m_1m_3+m_2m_3)u^2=\bar{I},\] which is equivalent to \[\frac{m_1^3m_2^3u^{10}}{\left(u^2+3C\right)^2} =\left(\bar{I}-(m_1m_3+m_2m_3)u^2 \right )^3.\] To simplify the notation, let $u^2=z$, $m_{1}^3m_{2}^3=a$, $m_1m_3+m_2m_3=c$, $3C=b$, obtaining \[\frac{az^5}{(z+b)^2}= (\bar{I}-cz)^3.\] The function $k(z)=\displaystyle \frac{az^5}{(z+b)^2}$ has derivative \[k'(z)= \frac{3az^6+8abz^5+5ab^2z^4}{(z+b)^4}>0\] and the function $l(z)=(\bar{I}-cz)^3$ has derivative \[l'(z)=-3c(\bar{I}-cz)^2<0.\] Since \[k(0)=0 \textrm{ and } \lim_{z\to+\infty} k(z)=+\infty,\] and \[l(0)=\bar{I}^3>0\textrm{ and }\lim _{z\to +\infty}l(z)=-\infty,\] the equation $k(z)=l(z)$ has a unique solution with $z>0$. We conclude that for each fixed $\bar{I}$, there is a unique solution of \eqref{eqn:CC3}. Thus, we have proved the following result.
\begin{prop}\label{prop:CC} In the three-body problem with one oblate body, for every fixed value $\bar{I}$ of the moment of inertia there exists a unique triangular central configuration, which is an isosceles triangle. \end{prop} We note that, while \cite{Arredondo_Perez-Chavela} studies central configurations of three oblate bodies (as well as of three bodies under the Schwarzschild metric), the isosceles central configuration found above is not explicitly shown there (see Theorem 4 in \cite{Arredondo_Perez-Chavela}). To put this in quantitative perspective, when we use the data from Section \ref{section:data} in \eqref{eqn:C_const}, we obtain $C=3.329215\times 10^{-15}$. If we set $u=r_{13}=r_{23}=1$, from \eqref{eqn:uv} we obtain $v=r_{12}=0.9999999999999967=1.0-3.3\times 10^{-15}$. In terms of the Sun-Jupiter distance $r_{12}=778.5\times 10^6$ km, the distance $r_{13}=r_{23}$ differs from the corresponding distance in the equilateral central configuration by $2.7\times 10^{-6}$ km. In practice, this isosceles triangle central configuration is almost indistinguishable from an equilateral triangle. \subsection{Location of the bodies in the triangular central configuration} \label{sec:location_central_config} We now compute the locations of the three bodies in the triangular central configuration, relative to a synodic frame that rotates together with the bodies, with the center of mass fixed at the origin, and the location of $m_1$ on the negative $x$-semi-axis. We assume that the masses lie in the $z=0$ plane. Instead of fixing the value $\bar{I}$ of the moment of inertia, we fix $u=r_{13}=r_{23}=1$ and have $v=r_{12}<1$ given by \eqref{eqn:uv}. Then, we obtain the following result.
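The quantitative estimates above can be reproduced as follows (a sketch in normalized units, where the unit of distance is the Sun-Jupiter separation; the tiny quantity $1-v$ is evaluated via log1p/expm1 to avoid catastrophic cancellation):

```python
import math

J2 = 0.476775                 # zonal harmonic of Hektor (= -C20)
R3 = 92.0 / 778.5e6           # equivalent radius of Hektor, normalized units
C = R3**2 * J2 / 2            # oblateness constant, eqn (C_const)
print(f"C = {C:.6e}")         # ~ 3.3292e-15

# v = r12 from (u^5/(u^2+3C))^(1/3) with u = 1, i.e. v = (1+3C)^(-1/3);
# evaluate 1 - v accurately for 3C ~ 1e-14
one_minus_v = -math.expm1(-math.log1p(3 * C) / 3)
print(f"1 - v = {one_minus_v:.3e}")
# deviation from the equilateral configuration, back in km
print(f"(1 - v) * 778.5e6 km = {one_minus_v * 778.5e6:.2e} km")
```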
\begin{figure} \includegraphics[width=0.65\textwidth]{central_config} \caption{Triangular central configuration.} \label{central_config} \end{figure} \begin{prop} In the synodic reference frame, the coordinates of the three bodies in the triangular central configuration, satisfying the constraints \begin{eqnarray} \label{eqn:1} (x_2-x_1)^2+(y_2-y_1)^2 &=& v^2,\\ \label{eqn:2} (x_3-x_1)^2+(y_3-y_1)^2 &=& 1,\\ \label{eqn:3} (x_3-x_2)^2+(y_3-y_2)^2 &=& 1,\\ \label{eqn:4} m_1x_1+m_2x_2+m_3x_3 &=& 0,\\ \label{eqn:5} m_1y_1+m_2y_2+m_3y_3 &=& 0,\\ \label{eqn:6} m_1+m_2+m_3&=&1,\\ \label{eqn:7} y_1&=&0 \end{eqnarray} are given by \begin{equation} \label{eqn:xy_CC} \begin{split} x_1=&-\sqrt{v^2m_2^2+v^2m_2m_3+m_3^2},\\ y_1=&0,\\ x_2=&\frac{2v^2m_2+v^2m_3-2v^2m_2^2-2v^2m_2m_3-2m_3^2}{2\sqrt{v^2m_2^2+v^2m_2m_3+m_3^2}},\\ y_2=&-\frac{v\sqrt{4-v^2} m_3}{2\sqrt{v^2m_2^2+v^2m_2m_3+m_3^2}},\\ x_3=& \frac{v^2m_2+2m_3-2v^2m_2^2-2v^2m_2m_3-2m_3^2}{2\sqrt{v^2m_2^2+v^2m_2m_3+m_3^2}},\\ y_3=&\frac{v\sqrt{4-v^2} m_2}{2\sqrt{v^2m_2^2+v^2m_2m_3+m_3^2}}. \end{split} \end{equation} \end{prop} \begin{proof} Denote by $A=x_1-x_2$ and $B=x_1-x_3$, so $x_2=x_1-A$, $x_3=x_1-B$, and $x_3-x_2=A-B$. Substituting these in \eqref{eqn:4} we obtain $(m_1+m_2+m_3)x_1=m_2A+m_3B$. From \eqref{eqn:6} it follows $x_1=m_2A+m_3B=m_3({\bar{\mu}} A+B)$, where we denoted ${\bar{\mu}}:=m_2/m_3$. From \eqref{eqn:7} and \eqref{eqn:5} we have $y_3=-(m_2/m_3)y_2=-{\bar{\mu}} y_2$, and $y_3-y_2=-(1+{\bar{\mu}})y_2$. From \eqref{eqn:1}, we can solve for $y_2$ in terms of $A$ (see \eqref{eqn:11} below). From now on, the objective is to solve for $A$ and $B$, which in turn will yield $x_1,x_2,x_3,y_2, y_3$. Equations \eqref{eqn:1}, \eqref{eqn:2}, \eqref{eqn:3} become: \begin{eqnarray} \label{eqn:11} A^2+y_2^2 &=& v^2,\\ \label{eqn:12} B^2+{\bar{\mu}}^2y_2^2 &=& 1,\\ \label{eqn:13} (B-A)^2+(1+{\bar{\mu}})^2y_2^2 &=& 1. 
\end{eqnarray} Adding \eqref{eqn:11} and \eqref{eqn:12}, and subtracting \eqref{eqn:13} yields \begin{equation} \label{eqn:14} 2BA-2{\bar{\mu}} y_2^2=v^2. \end{equation} From \eqref{eqn:11}, $y_2^2=v^2-A^2$, so \eqref{eqn:14} yields \begin{equation} \label{eqn:15} B=\frac{v^2+2{\bar{\mu}}(v^2-A^2)}{2A}. \end{equation} Substituting in \eqref{eqn:12} and solving for $A$ yields \begin{equation} \label{eqn:15A} A=\pm\frac{v^2(2{\bar{\mu}}+1)}{2\sqrt{v^2{\bar{\mu}}^2+v^2{\bar{\mu}}+1}}. \end{equation} Substituting $y_2^2=v^2-A^2$ in \eqref{eqn:14} and solving for $B$ yields \begin{equation} \label{eqn:16} B= \frac{v^2}{4A}\frac{(2{\bar{\mu}}+1)(v^2{\bar{\mu}}+2)}{v^2{\bar{\mu}}^2+v^2{\bar{\mu}}+1}=\pm\frac{v^2{\bar{\mu}}+2}{2\sqrt{v^2{\bar{\mu}}^2+v^2{\bar{\mu}}+1}}, \end{equation} with $\textrm{sign}(A)=\textrm{sign}(B)$. Substituting $A$, $B$ and ${\bar{\mu}}$ in $x_1=m_2A+m_3B$, and choosing the negative sign for $x_1$ to agree with our initial choice that $x_1<0$, after simplification, we first obtain the value of $x_1$ below. Then, substituting in $x_2=x_1-A$, $x_3=x_1-B$, $y_2^2=v^2-A^2$, $y_3=-(m_2/m_3)y_2$, we compute $x_2$, $x_3$, $y_2$, $y_3$, obtaining \eqref{eqn:xy_CC}. For future reference, we note that if we let $m_3\to 0$ in \eqref{eqn:xy_CC}, we obtain \begin{equation} \label{eqn:xy_CC_m3_zero} \begin{split} x_1=&-vm_2,\\ y_1=&0,\\ x_2=&v(1-m_2),\\ y_2=&0,\\ x_3=&\frac{v}{2}(1-2m_2),\\ y_3=&\frac{\sqrt{4-v^2}}{2}.
\end{split} \end{equation} \end{proof} \begin{rem} In the case when the oblateness coefficient $J_2$ of $m_3$ is made equal to zero, then $v=1$, and in \eqref{eqn:xy_CC} we obtain the Lagrangian equilateral triangle central configuration, with the position given by the following equivalent formulas (see, e.g., \cite{Baltagiannis2013}): \begin{equation} \label{eqn:Baltagiannis} \begin{split} x_1 & =\frac{-\lvert K \rvert \sqrt{m_2^2+m_2 m_3+m_3^2}}{K},\\ y_1 &=0,\\ \vspace{3mm} x_2 & =\frac{\lvert K \rvert [(m_2-m_3)m_3+m_1(2m_2+m_3)]}{2K\sqrt{m_2^2+m_2m_3+m_3^2}},\\ y_2 &=\frac{-\sqrt{3}m_3}{2m_2^{\frac{3}{2}}}\sqrt{\frac{m_2^3}{m_2^2+m_2m_3+m_3^2}},\\ \vspace{3mm} x_3 & =\frac{\lvert K \rvert }{2\sqrt{m_2^2+m_2m_3+m_3^2}},\\ y_3 &=\frac{\sqrt{3}}{2m_2^{\frac{1}{2}}}\sqrt{\frac{m_2^3}{m_2^2+m_2m_3+m_3^2}}, \end{split} \end{equation} where $K=m_2(m_3-m_2)+m_1(m_2+2m_3)$. Notice that the equations \eqref{eqn:Baltagiannis} are expressed in terms of $m_1,m_2,m_3$, while \eqref{eqn:xy_CC} are expressed in terms of $m_2, m_3$; we obtain corresponding expressions that are equivalent when we substitute $m_1=1-m_2-m_3$ in \eqref{eqn:Baltagiannis}. One minor difference is that in \eqref{eqn:Baltagiannis} the position of $x_1$ is not constrained to be on the negative $x$-semi-axis, as we assumed for \eqref{eqn:xy_CC}; the position of $x_1$ in \eqref{eqn:Baltagiannis} depends on the quantity $\textrm{sign}(K)$; when $\textrm{sign}(K)>0$ we have $|K|/K=1$, and the equations \eqref{eqn:xy_CC} become equivalent with the equation \eqref{eqn:Baltagiannis}. 
We remark that when $m_3\to 0$, the limiting positions of the three masses in \eqref{eqn:Baltagiannis} are given by: \begin{equation} \label{eqn:limiting} \begin{array}{lll} x_1 =-m_2, & y_1 = 0, & z_1 =0,\\ x_2 =1-m_2,& y_2 =0,& z_2=0,\\ x_3= \frac{1-2m_2}{2},&y_3=\frac{\sqrt{3}}{2},&z_3=0, \end{array} \end{equation} with $(x_1,y_1)$ and $(x_2,y_2)$ representing the positions of the masses $m_1$ and $m_2$, respectively, and $(x_3,y_3)$ representing the position of the equilibrium point $L_4$ in the planar circular restricted three-body problem. \end{rem} \subsection{Equations of motion for the restricted four-body problem with oblate tertiary}\label{sec:4BP} Now we consider the dynamics of a fourth body in a neighborhood of the tertiary. This fourth body represents the moonlet Skamandrios orbiting around Hektor. We model the dynamics of the fourth body by the spatial, circular, restricted four-body problem, meaning that the moonlet moves under the gravitational attraction of Hektor, Jupiter and the Sun, without affecting their motion; the three heavy bodies remain on circular orbits, forming a triangular central configuration as in Section \ref{sec:central_config}. As before, we assume that Hektor has an oblate shape, with the gravitational potential given by \eqref{eqn:C20}.
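As a sanity check, the closed-form positions \eqref{eqn:xy_CC} can be verified numerically against the constraints \eqref{eqn:1}--\eqref{eqn:7} (a sketch with arbitrary illustrative masses, not the physical Hektor values):

```python
import math

# illustrative masses with m1 + m2 + m3 = 1, and a value of v = r12 < 1
# corresponding to an (exaggerated) oblateness constant C
m2, m3 = 0.3, 0.05
m1 = 1.0 - m2 - m3
C = 0.01
v = (1.0 / (1.0 + 3.0 * C))**(1.0 / 3.0)   # eqn (uv) with u = 1

# positions from eqn (xy_CC)
D = math.sqrt(v**2 * m2**2 + v**2 * m2 * m3 + m3**2)
x1, y1 = -D, 0.0
x2 = (2*v**2*m2 + v**2*m3 - 2*v**2*m2**2 - 2*v**2*m2*m3 - 2*m3**2) / (2*D)
y2 = -v * math.sqrt(4 - v**2) * m3 / (2*D)
x3 = (v**2*m2 + 2*m3 - 2*v**2*m2**2 - 2*v**2*m2*m3 - 2*m3**2) / (2*D)
y3 = v * math.sqrt(4 - v**2) * m2 / (2*D)

tol = 1e-12
assert abs((x2-x1)**2 + (y2-y1)**2 - v**2) < tol   # r12 = v
assert abs((x3-x1)**2 + (y3-y1)**2 - 1.0) < tol    # r13 = 1
assert abs((x3-x2)**2 + (y3-y2)**2 - 1.0) < tol    # r23 = 1
assert abs(m1*x1 + m2*x2 + m3*x3) < tol            # center of mass at origin
assert abs(m1*y1 + m2*y2 + m3*y3) < tol
print("positions (xy_CC) satisfy all the constraints")
```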
The equations of motion of the infinitesimal body relative to a synodic frame of reference that rotates together with the three bodies are given by \begin{equation}\label{eqn:PCR4BP}\begin{split}\ddot{x}-2\omega\dot{y}&=\frac{\partial {\Tilde{\Omega}}}{\partial x}={\Tilde{\Omega}}_x\\ \ddot{y}+2\omega\dot{x}&=\frac{\partial {\Tilde{\Omega}}}{\partial y}={\Tilde{\Omega}}_y\\ \ddot{z}&=\frac{\partial {\Tilde{\Omega}}}{\partial z}={\Tilde{\Omega}}_z, \end{split}\end{equation} where the effective potential ${\Tilde{\Omega}}$ is given by \[{\Tilde{\Omega}}(x,y,z) = \frac{1}{2}\omega^2(x^2+y^2)+\left(\sum_{i=1}^{3}\frac{m_i}{r_i}+ \frac{m_3}{r_3} \left(\frac{R_3}{r_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\] with $(x_i, y_i,z_i)$ representing the $(x,y,z)$-coordinates in the synodic reference frame of the body of mass $m_i$, $r_{i} = \left((x-x_{i})^2+(y-y_{i})^2+z^2\right)^{\frac{1}{2}}$ the distance from the moonlet to the mass $m_i$, for $i = 1,2,3$, and $\omega$ the angular velocity of the system of three bodies around the center of mass, given by \eqref{eqn:omega}. The perturbed mean motion $\omega$ of the primaries in the above equation depends on the oblateness parameter. The coordinates $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$ of the bodies $m_1$, $m_2$, $m_3$, respectively, are given by \eqref{eqn:xy_CC}, while $z_i=0$, for $i=1,2,3$. For the choice that we have made, $r_{13}=r_{23}=u=1$ and $r_{12}=v$ satisfying \eqref{eqn:uv}, we have \begin{equation}\label{omeganew} \omega=\frac{1}{v^{3/2}}=\sqrt{1+\frac{3R_3^2J_2}{2}}=\sqrt{1-\frac{3R_3^2C_{20}}{2}}. \end{equation} We remark that if we set $m_2=0$ we obtain the restricted three-body problem with one oblate body, and $\omega=\sqrt{1+3R_H^2J_2/2}$ agrees with the formulas in \cite{McCuskey1963,Sharma_Rao_1976,Stoica_Arredondo_2012}. If $m_3$ has no oblateness, i.e., $J_2=C_{20}=0$, then $\omega=1$.
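The consistency of the two expressions for $\omega$ in \eqref{omeganew} with the definition \eqref{eqn:uv} of $v$ can be verified numerically (a sketch with an exaggerated oblateness, so that the effect is visible in double precision):

```python
import math

R3, J2 = 0.1, 0.2              # exaggerated, illustrative values
C = R3**2 * J2 / 2             # oblateness constant, eqn (C_const)
v = (1.0 / (1.0 + 3.0 * C))**(1.0 / 3.0)   # r12 from eqn (uv) with u = 1

omega_kepler = math.sqrt(1.0 / v**3)              # omega^2 = 1/r12^3
omega_oblate = math.sqrt(1.0 + 1.5 * R3**2 * J2)  # omega^2 = 1/r13^3 + 3C/r13^5
assert abs(omega_kepler - omega_oblate) < 1e-14
print(f"omega = {omega_kepler:.12f}")
```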
We rescale the time so that in the new units the angular velocity is normalized to $1$, obtaining \begin{equation}\label{eqn:eqnmotion}\begin{split} \ddot{x}-2\dot{y}&=\frac{\partial{\Omega}}{\partial{x}}=\Omega_{x},\\ \ddot{y}+2\dot{x}&=\frac{\partial{\Omega}}{\partial{y}}=\Omega_{y},\\ \ddot{z}&=\frac{\partial{\Omega}}{\partial{z}}=\Omega_{z}, \end{split}\end{equation} with $$\Omega(x,y,z)=\frac{1}{2}(x^2+y^2)+\frac{1}{\omega^2}\left(\sum_{i=1}^{3}{ \frac{m_i}{r_i}}+ \frac{m_3}{r_3} \left(\frac{R_3}{r_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right).$$ The equations of motion \eqref{eqn:eqnmotion} have the total energy $H$ defined below as a conserved quantity: \begin{equation*}\begin{split} H=&\frac{1}{2}(\dot{x}^2+\dot{y}^2+\dot{z}^2)-\Omega, \\=&\frac{1}{2}(\dot{x}^2+\dot{y}^2+\dot{z}^2)\\&-\left[\frac{1}{2}(x^2+y^2)+ \frac{1}{\omega^2} \left(\sum_{i=1}^{3}{\frac{m_i}{r_i}}+\frac{m_3}{r_3}\left(\frac{R_3}{r_3}\right)^2 \left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\right].
\end{split} \end{equation*} We switch to the Hamiltonian setting by considering the system of symplectic coordinates $(x,y,z,p_x,p_y,p_z)$ with respect to the symplectic form $\varpi=dx\wedge dp_x+dy\wedge dp_y+dz\wedge dp_z$, and making the transformation $\dot{x}=p_x+y$, $\dot{y}=p_y-x$ and $\dot{z}=p_z$. We obtain: \begin{equation} \label{eq2} \begin{split} H= & \frac{1}{2} ((p_{x}+y)^2+(p_{y}-x)^2+p_{z}^2)-\frac{1}{2}(x^2+y^2)\\ & -\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{r_i}+ \frac{m_3}{r_3}\left(\frac{R_3}{r_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\\ =& \frac{1}{2}(p_{x}^2+p_{y}^2+p_{z}^2)+yp_{x}-xp_{y}\\&-\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{r_i}+ \frac{m_3{R_3}^2}{r_3^3} \left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\\ =& \frac{1}{2}(p_{x}^2+p_{y}^2+p_{z}^2)+yp_{x}-xp_{y}\\&-\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{r_i}+ \frac{m_3}{r_3^3} {C'}(3\sin^2{\phi}-1)\right), \end{split} \end{equation} where we denote $C'={R_3}^2C_{20}/2$. Thus, the equations of motion \eqref{eqn:PCR4BP} are equivalent to the Hamilton equations for the Hamiltonian given by \eqref{eq2}. \section{Hill four-body problem with oblate tertiary}\label{sec:Hill} In this section we describe a model which is obtained by taking a Hill's approximation of the spatial, circular, restricted four-body problem with oblate tertiary. The procedure goes as follows: we shift the origin of the coordinate system to $m_3$, perform a rescaling of the coordinates depending on $m_3^{1/3}$, write the associated Hamiltonian in the rescaled coordinates as a power series in $m_3^{1/3}$, and neglect all the terms of order $O(m_3^{1/3})$ in the expansion, since such terms are small when $m_3$ is small. Through this procedure the masses $m_1$ and $m_2$ are `sent to infinite distance'.
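The gain in numerical scale produced by the rescaling can be quantified with the Hektor data (a rough sketch; $C$ is the oblateness constant \eqref{eqn:C_const}, and the rescaled coefficient $m_3^{-2/3}R_3^2C_{20}/2$ is the one that appears in the Hill Hamiltonian derived below):

```python
m1, m2, m3 = 1.989e30, 1.898e27, 7.91e18        # masses, kg
mu3 = m3 / (m1 + m2 + m3)                        # normalized mass of Hektor
R3 = 92.0 / 778.5e6                              # normalized radius of Hektor
C20 = -0.476775

# oblateness coefficient before the rescaling ...
C = R3**2 * (-C20) / 2
# ... and after the Hill rescaling R3 = m3^(1/3) * rho3
c = mu3**(-2.0 / 3.0) * R3**2 * C20 / 2

print(f"C = {C:.3e}")    # O(1e-15) in the restricted four-body problem
print(f"c = {c:.3e}")    # O(1e-7)  in the Hill approximation
assert 1e-15 < C < 1e-14 and 1e-8 < abs(c) < 1e-6
```

The rescaling thus raises the oblateness effect from the level of machine round-off to a comfortably representable magnitude.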
This model is an extension of the classical Hill's approximation of the restricted three-body problem, with the major differences that our model is a four-body problem, and takes into account the effect of the oblateness parameter $C_{20}=-J_2$ of the tertiary; compare with \cite{Hill,MEYER1982,Burgos_Gidea}. \subsection{Hill's approximation of the restricted four-body problem with oblate tertiary in shifted coordinates} \label{sec:Hill_shifted} The main result is the following: \begin{thm} \label{main theorem} Let us consider the Hamiltonian \eqref{eq2}, let us shift the origin of the reference frame so that it coincides with $m_3$ and let us perform the conformal symplectic scaling given by $$(x,y,z,p_{x},p_{y},p_{z})\rightarrow m_{3}^{1/3}(x,y,z,p_{x},p_{y},p_{z}).$$ Accordingly, we rescale the average radius of the tertiary as $R_3=m_3^{1/3}\rho_3$. Expanding the resulting Hamiltonian as a power series in $m_3^{1/3}$ and neglecting all the terms of order $O(m_3^{1/3})$ in the expansion, we obtain the following Hamiltonian describing the Hill's four-body problem with oblate tertiary: \begin{equation}\begin{split}\label{eqn:hill_hamiltonian} H&=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y \\ &\quad +\frac{4-3v^2}{8}x^2+ \frac{3v^2-8}{8}y^2+\frac{1}{2}z^2-\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy \\&\quad -\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}-\left(\frac{\rho_3^2C_{20}}{2}\right)\frac{1}{ (x^2+y^2+z^2)^{\frac{3}{2}}} \left( \frac{3z^2}{x^2+y^2+z^2} -1\right),\\ &=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y\\ &\quad+\frac{4-3v^2}{8}x^2+ \frac{3v^2-8}{8}y^2+\frac{1}{2}z^2-\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy \\&\quad -\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}-\frac{{c } }{ (x^2+y^2+z^2)^{\frac{3}{2}}} \left( \frac{3z^2}{x^2+y^2+z^2} -1\right), \end{split} \end{equation} where $v=\left(\frac{1}{1-\frac{3}{2}R_3^2C_{20}} \right)^{1/3}$, $\mu=\frac{m_2}{m_1+m_2}$, and $c :=\rho_3^2C_{20}/2=m_3^{-\frac{2}{3}}R^2_3C_{20}/2$.
\end{thm} \begin{proof} We start by shifting the origin of the coordinate system $(x,y,z)$ to the location of the mass $m_3$ (representing Hektor), via the change of coordinates \[ \begin{array}{lll} \xi=x-x_3, & \eta=y-y_3, & \zeta=z,\\ p_\xi=p_x+y_3, & p_\eta=p_y-x_3, & p_\zeta=p_z . \end{array} \] The Hamiltonian corresponding to \eqref{eq2} becomes \begin{equation} \label{eq3} \begin{split} H =& \frac{1}{2}[(p_{\xi}-y_3)^2+(p_{\eta}+x_3)^2+p_{\zeta}^2]\\&+(\eta+y_3)(p_{\xi}-y_3)-(\xi+x_3)(p_{\eta}+x_3)\\ & -\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{\bar{r}_i}+\frac{m_3}{\bar{r}_3} \left(\frac{R_3}{\bar{r}_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\\ =& \frac{1}{2} [(p_{\xi}^2-2p_{\xi}y_3+y_3^2)+(p_{\eta}^2+2p_{\eta}x_3+x_3^2)+p_{\zeta}^2]\\ &+\eta p_{\xi}-\eta y_3+y_3p_\xi-y_3^2-\xi p_\eta-\xi x_3-x_3p_\eta-x_3^2\\ &-\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{\bar{r}_i}+\frac{m_3}{\bar{r}_3} \left(\frac{R_3}{\bar{r}_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right) \\ =&\frac{1}{2} (p_{\xi}^2+p_{\eta}^2+p_{\zeta}^2)+\eta p_{\xi}-\xi p_\eta-(\xi x_3+\eta y_3)\\ & -\frac{1}{\omega^2}\left(\sum_{i=1}^{3}\frac{m_i}{\bar{r}_i}+ \frac{m_3}{\bar{r}_3}\left(\frac{R_3}{\bar{r}_3}\right)^2\left(\frac{C_{20}}{2}\right)(3\sin^2{\phi}-1)\right)\\&-\frac{1}{2}(x_3^2+y_3^2), \end{split} \end{equation} where $\bar{r}_i^2=(\xi-\bar{x}_i)^2+(\eta-\bar{y}_i)^2+\zeta^2=(\xi+x_3-x_i)^2+(\eta+y_3-y_i)^2+\zeta^2$, with $\bar{x}_i=x_i-x_3$, $\bar{y}_i=y_i-y_3$. Note that $\bar{r}_3=r_3$. Since $-\frac{1}{2}(x_3^2+y_3^2)$ is a constant term, it plays no role in the Hamiltonian equations and it will be dropped in the following calculation.
Since $\sin{\phi}=\frac{\zeta}{\bar{r}_3}$, we have \begin{equation} \label{*} \begin{split} H =&\frac{1}{2} (p_{\xi}^2+p_{\eta}^2+p_{\zeta}^2)+\eta p_{\xi}-\xi p_{\eta}-(\xi x_3+\eta y_3)\\ &-\frac{1}{\omega^2}\left[\sum_{i=1}^{3}\frac{m_i}{\bar{r}_i}+\frac{m_3}{\bar{r}_3}\left(\frac{R_3}{\bar{r}_3}\right)^2 \left(\frac{C_{20}}{2}\right)\left(3\left(\frac{\zeta}{\bar{r}_3}\right)^2-1\right)\right]. \end{split} \end{equation} We now perform the following conformal symplectic scaling with multiplier $m_3^{-2/3}$, given by (with a little abuse of notation, we call again the new variables $x$, $y$, $z$, $p_x$, $p_y$, $p_z$): \begin{equation} \begin{split} \xi&=m_3^{\frac{1}{3}} x\ ,\qquad \eta=m_3^{\frac{1}{3}} y\ ,\qquad \zeta=m_3^{\frac{1}{3}}z,\\ p_\xi&=m_3^{\frac{1}{3}} p_x\ ,\qquad p_\eta=m_3^{\frac{1}{3}} p_y\ ,\qquad p_\zeta=m_3^{\frac{1}{3}} p_z\ . \end{split} \end{equation} Consistently with this scale change, we also introduce the scaling transformation of the average radius of the smallest body \begin{equation}\label{eqn:J2rescaling} R_3^2 =(m_3^{1/3}\rho_3)^2 =m_3^{2/3}\rho_3^2. \end{equation} The choice of the power of $m_3$ is motivated by the fact that in this way the gravitational force becomes of the same order as the centrifugal and Coriolis forces (see, e.g., \cite{MEYER1982}).
In the subsequent calculation, after the above substitutions, we expand the resulting Hamiltonian as a power series in $m_3^{1/3}$ and neglect all the terms of order $O(m_3^{1/3})$ in the expansion. We expand the terms $\displaystyle\frac{1}{\bar{r}_1}$ and $\displaystyle\frac{1}{\bar{r}_2}$ in Taylor series around the new origin of coordinates, obtaining \begin{equation*}\begin{split}f^1:=\frac{1}{\bar{r}_1}=\sum_{k\geq 0}{P_k^1(x,y,z)},\\ f^2:=\frac{1}{\bar{r}_2}=\sum_{k\geq 0}{P_k^2(x,y,z)},\end{split}\end{equation*} where $P_k^j(x,y,z)$ is a homogeneous polynomial of degree $k$, for $j=1,2$. The resulting Hamiltonian takes the form: \begin{equation} \label{eq4} \begin{split} H =& m_3^{-\frac{2}{3}}\left[ \frac{1}{2} (m_3^{\frac{2}{3}}p_x^2+m_3^{\frac{2}{3}}p_y^2+m_3^{\frac{2}{3}}p_z^2) \right. \\&\qquad+m_3^{\frac{2}{3}}yp_x- m_3^{\frac{2}{3}}xp_y-m_3^{\frac{1}{3}}xx_3-m_3^{\frac{1}{3}}yy_3\\ &\qquad -\frac{1}{\omega^2}\left(\sum_{k\geq 1}{m_1m_3^{\frac{k}{3}}P_k^1(x,y,z)}+\sum_{k \geq 1}{m_2m_3^{\frac{k}{3}}P_k^2(x,y,z)}\right.\\ &\left.\left.\qquad +\frac{m_3^{\frac{2}{3}}}{\bar{r}_3}+\frac{m_3^{\frac{2}{3}}}{\bar{r}_3}\left(\frac{\rho_3}{\bar{r}_3}\right)^2 \left(\frac{C_{20}}{2}\right)\left(3\left(\frac{z}{\bar{r}_3}\right)^2-1\right)\right)\right] \\ =& \frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y-m_3^{-\frac{1}{3}}xx_3-m_3^{-\frac{1}{3}}yy_3\\ &\quad -\frac{1}{\omega^2}\left (m_3^{-\frac{1}{3}} m_1 P_1^1+m_3^{-\frac{1}{3}} m_2 P_1^2\right. \\ &\qquad\qquad +\sum_{k\geq2}m_3^{\frac{k-2}{3}}m_1P_k^1(x,y,z)+\sum_{k\geq2}m_3^{\frac{k-2}{3}}m_2P_k^2(x,y,z) \\ &\qquad\qquad\left.
+\frac{1}{\bar{r}_3}+\frac{1}{\bar{r}_3}\left(\frac{\rho_3^2}{\bar{r}_3^2}\right)\left(\frac{C_{20}}{2}\right)\left(3\left(\frac{z}{\bar{r}_3}\right)^2-1\right)\right)\\ =& \frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y\\ &\quad -m_3^{-\frac{1}{3}}\left(xx_3+yy_3+\frac{m_1 P_1^1}{\omega^2} +\frac{m_2 P_1^2 }{\omega^2}\right)\\ &\quad -\frac{1}{\omega^2}\left(\sum_{k\geq2}{m_3^{\frac{k-2}{3}}m_1P_k^1(x,y,z)}+ \sum_{k\geq2}{m_3^{\frac{k-2}{3}}m_2P_k^2(x,y,z)} \right.\\ & \qquad\qquad\left . +\frac{1}{\bar{r}_3} +\frac{\rho_3^2}{\bar{r}_3^3}\left(\frac{C_{20}}{2}\right) \left(3\left(\frac{z}{\bar{r}_3}\right)^2-1\right)\right). \end{split} \end{equation} In the following, we will disregard in \eqref{eq4} all terms which are of order $m_3^{1/3}$, as in classical Hill's theory of lunar motion (\cite{MEYER1982}). We compute the first-degree polynomials $P^i_1$, for $i=1,2$, \begin{equation}\label{eqn:hill-0} \begin{split}P^i_1&=\frac{\partial f^i}{\partial x}(0,0,0)x +\frac{\partial f^i}{\partial y}(0,0,0)y+\frac{\partial f^i}{\partial z}(0,0,0)z=\frac{\bar{x}_i}{r_{i3}^3}x+\frac{\bar{y}_i}{r_{i3}^3}y\\&=(x_i-x_3)x+(y_i-y_3)y, \end{split} \end{equation} where $r_{i3}=\sqrt{(x_i-x_3)^2+(y_i-y_3)^2}=u=1$ represents the distance between the masses $m_i$ and $m_3$. We now compute the contribution of the different terms in \eqref{eq4}.
Using that $m_1+m_2+m_3=1$, and that $m_1x_1+m_2x_2+m_3x_3=m_1y_1+m_2y_2+m_3y_3=0$, from \eqref{eqn:4}, \eqref{eqn:5}, \eqref{eqn:6}, we obtain that \begin{equation} \label{eqn:poly1}\begin{split} m_3^{-\frac{1}{3}}&(xx_3+yy_3+ \frac{m_1 P_1^1}{\omega^2} +\frac{m_2 P_1^2}{\omega^2})\\ &=m_3^{-\frac{1}{3}}\left.[x(x_3+\frac{m_1}{\omega^2}(x_1-x_3) + \frac{m_2}{\omega^2}(x_2-x_3))\right.\\ & \left.\qquad\quad+y(y_3+\frac{m_1}{\omega^2}(y_1-y_3) + \frac{m_2}{\omega^2}(y_2-y_3))\right]\\ &=m_3^{-\frac{1}{3}}\left[x(x_3+\frac{1}{\omega^2}(m_1x_1+m_2x_2+m_3x_3- x_3))\right.\\ &\,\left.\qquad\quad+y(y_3+\frac{1}{\omega^2}(m_1y_1+m_2y_2+m_3y_3- y_3))\right]\\ &=m_3^{-\frac{1}{3}}\left(1-\frac{1}{\omega^2}\right)(xx_3+yy_3)\\ &=-m_3^{-\frac{1}{3}}\frac{\frac{3}{2}m_3^{\frac{2}{3}}\rho_3^2C_{20}}{1-\frac{3}{2}m_3^{\frac{2}{3}}\rho_3^2C_{20}} (xx_3+yy_3)\\ &= -m_3^{\frac{1}{3}}\frac{\frac{3}{2}\rho_3^2C_{20}}{1-\frac{3}{2}m_3^{\frac{2}{3}}\rho_3^2C_{20}}(xx_3+yy_3), \end{split} \end{equation} where we used \eqref{omeganew} and \eqref{eqn:J2rescaling}. 
The expression in \eqref{eqn:poly1} is $O(m_3^{1/3})$, so it will be omitted in the Hill approximation. We compute the second-degree polynomials $P^i_2$, for $i=1,2$, \begin{equation} \label{eqn:hill-3}\begin{split} P^i_2&=\frac{1}{2}\left(\frac{\partial^2 f^i}{\partial x^2}(0,0,0)x^2+\frac{\partial^2 f^i}{\partial y^2}(0,0,0)y^2 +\frac{\partial^2 f^i}{\partial z^2}(0,0,0)z^2\right)\\ &\quad+\left(\frac{\partial^2 f^i}{\partial x\partial y}(0,0,0)xy+\frac{\partial^2 f^i}{\partial y\partial z}(0,0,0)yz +\frac{\partial^2 f^i}{\partial z\partial x}(0,0,0)zx\right)\\&=\frac{1}{2}\left (\frac{3\bar{x}^2_i}{r_{i3}^5}-\frac{1}{r_{i3}^3}\right)x^2+\frac{1}{2}\left (\frac{3\bar{y}^2_i}{r_{i3}^5}-\frac{1}{r_{i3}^3}\right)y^2+\frac{1}{2}\left (-\frac{1}{r_{i3}^3}\right)z^2\\&\,+\left(\frac{3\bar{x}_i\bar{y}_i}{r_{i3}^5}\right)xy\\ &=\frac{1}{2}\left ( {3\bar{x}^2_i} -1\right)x^2+\frac{1}{2}\left ( {3\bar{y}^2_i} -1\right)y^2-\frac{1}{2}z^2+\left({3\bar{x}_i\bar{y}_i}\right) xy, \end{split} \end{equation} since $r_{13}=r_{23}=u=1$. The corresponding terms in \eqref{eq4} yield \begin{equation}\label{eqn:poly2}\begin{split} \frac{1}{\omega^2}&(m_1P^1_2+m_2P^2_2)\\ &=\frac{1}{\omega^2} \Big[\frac{1}{2}\left((1-m_2-m_3)(3(x_1-x_3)^2-1) +m_2(3(x_2-x_3)^2-1)\right)x^2\\ &\qquad +\frac{1}{2}\left((1-m_2-m_3)(3(y_1-y_3)^2-1) +m_2(3(y_2-y_3)^2-1)\right)y^2\\ &\qquad + \frac{1}{2}\left(-1+m_3\right)z^2\\ &\qquad + 3\left((1-m_2-m_3)(x_1-x_3)(y_1-y_3)+m_2(x_2-x_3)(y_2-y_3)\right)xy\Big].
\end{split} \end{equation} Using \eqref{eqn:xy_CC_m3_zero} and that \[\frac{1}{\omega^2}=\frac{1}{1+3C}=\frac{1}{1-\frac{3}{2}m_3^{\frac{2}{3}}\rho_3^2C_{20}}=1+O(m_3^{1/3}),\] omitting the terms of order $O(m_3^{1/3})$, the expression \eqref{eqn:poly2} becomes \[ \frac{3v^2-4}{8}x^2+ \frac{8-3v^2}{8}y^2-\frac{1}{2}z^2+\frac{3v\sqrt{4-v^2}}{4}(1-2m_2)xy.\] The expression in \eqref{eq4} contains terms of the form \[\sum_{k\geq 3}m_3^{\frac{k-2}{3}}m_1P_k^1(x,y,z)+\sum_{k\geq 3}m_3^{\frac{k-2}{3}}m_2P_k^2(x,y,z),\] which involve only positive powers of $m_3^{1/3}$ and are therefore omitted in the Hill approximation. The remaining terms in \eqref{eq4} are \[\frac{1}{\bar{r}_3} +\left(\frac{\rho_3^2}{\bar{r}_3^3}\right)\left(\frac{C_{20}}{2}\right) \left(3\left(\frac{z}{\bar{r}_3}\right)^2-1\right) \] and they do not depend on $m_3$. Therefore, omitting all terms of order $O(m_3^{1/3})$ in \eqref{eq4} and taking into account that $\frac{1}{\omega^2}=1+O(m_3^{1/3})$, we obtain the following Hamiltonian: \begin{equation}\label{eqn:Hill-Ham}\begin{split} H&=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y \\ &\quad -\frac{3v^2-4}{8}x^2- \frac{8-3v^2}{8}y^2+\frac{1}{2}z^2-\frac{3v\sqrt{4-v^2}}{4}(1-2m_2)xy \\&\quad -\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}-\frac{\rho_3^2}{ (x^2+y^2+z^2)^{\frac{3}{2}}}\left(\frac{C_{20}}{2}\right) \left( \frac{3z^2}{x^2+y^2+z^2} -1\right)\\ &=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+yp_x-xp_y\\ &\quad -\frac{3v^2-4}{8}x^2- \frac{8-3v^2}{8}y^2+\frac{1}{2}z^2-\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy \\&\quad -\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}-\frac{{c } }{ (x^2+y^2+z^2)^{\frac{3}{2}}} \left( \frac{3z^2}{x^2+y^2+z^2} -1\right), \end{split} \end{equation} where we used $\mu=m_2/(m_1+m_2)$ and $c :=\rho_3^2C_{20}/2=m_3^{-\frac{2}{3}}R^2_3C_{20}/2$.
\end{proof} We remark that a similar strategy was adopted in \cite{Markellos}, where a Hill's three-body problem with oblate primaries has been considered. We refer to the Hamiltonian in \eqref{eqn:Hill-Ham} as the \emph{Hill's approximation}. It can be thought of as the limiting Hamiltonian, when the primary and the secondary are sent to an infinite distance, and their total mass becomes infinite. It provides an approximation of the motion of the infinitesimal particle in an $O(m_3^{1/3})$ neighborhood of $m_3$. Remarkably, the angular velocity $\omega$ does not appear in the limiting Hamiltonian. We introduce the gravitational potential as \begin{equation} \begin{split} \widehat U(x,y,z)&= \frac{3v^2-4}{8}x^2+ \frac{8-3v^2}{8}y^2-\frac{1}{2}z^2+\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy \\&\quad +\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}+\frac{{c }}{ (x^2+y^2+z^2)^{\frac{3}{2}}} \left( \frac{3z^2}{x^2+y^2+z^2} -1\right), \end{split} \end{equation} and the effective potential as \begin{equation} \begin{split} \widehat\Omega(x,y,z) &=\frac{1}{2}(x^2+y^2)+\widehat U(x,y,z) \\ &= \frac{3v^2}{8}x^2+ \frac{3(4-v^2)}{8}y^2-\frac{1}{2}z^2+\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy\\&\quad +\frac{1}{ (x^2+y^2+z^2)^{\frac{1}{2}}}+\frac{c }{ (x^2+y^2+z^2)^{\frac{3}{2}}} \left( \frac{3z^2}{x^2+y^2+z^2} -1\right). \end{split} \end{equation} The equations of motion associated to \eqref{eqn:Hill-Ham} can thus be written as: \begin{equation*}\begin{split}\ddot{x}-2\dot{y}&=\widehat\Omega_x,\\ \ddot{y}+2\dot{x}&=\widehat\Omega_y,\\ \ddot{z}&=\widehat\Omega_z.\end{split}\end{equation*} \begin{rem}\label{rem:L4L5} In the case when $C_{20}=0$, we have that $v=1$ and the Hamiltonian in \eqref{eqn:Hill-Ham} is the same as the one obtained in \cite{Burgos_Gidea}. Also, its quadratic part coincides with the quadratic part of the expansion of the Hamiltonian of the restricted three-body problem centered at the Lagrange libration point $L_{4}$.
Moreover, in the special case $\mu=0$ we obtain the classical lunar Hill's problem after some coordinate transformation (see Section~\ref{sec:Hill_system}). \end{rem} \subsection{Hill's four-body model applied to the Sun-Jupiter-Hektor system}\label{sec:Hill_system} In the case of the Sun-Jupiter-Hektor system, using the data from Section \ref{section:data} in the above equations we obtain $\mu=m_2/(m_1+m_2)=1.898\times10^{27}/(1.989\times10^{30}+1.898\times10^{27})=0.0009533386$. Also, for Hektor we have $C_{20}=-0.476775$, the average radius of Hektor is $R_3=92$ km, and the mass of Hektor is $m_H=7.91\times10^{18}$ kg. In the normalized units, where we use the average distance Sun-Jupiter $778.5\times 10^6$ km as the unit of distance, and the mass of Sun-Jupiter-Hektor $7.91\times10^{18}+1.989\times10^{30}+1.898\times10^{27}=1.990898\times 10^{30}$ kg as the unit of mass, we have that the normalized average radius of Hektor is $R_3=92/(778.5\times{10}^{6})=1.18176\times 10^{-7}$ and the normalized mass of Hektor is $m_3=7.91\times10^{18}/(1.990898\times 10^{30})=3.97308\times10^{-12}$. Hence, we obtain \begin{equation}\label{eqn:normalized_c } c =m_3^{-\frac{2}{3}}R^2_3C_{20}/2=-1.32716\times 10^{-7}. \end{equation} Also, $\rho_3= m_3^{-\frac{1}{3}}R_3= 0.000746$. We note that if we consider the restricted four-body problem (without the Hill's approximation) described by the Hamiltonian \eqref{eq2}, the oblateness effect is given by the coefficient $C'=R_3^2C_{20}/2=-3.32921544\times 10^{-15}$, which is much smaller than $c $ in \eqref{eqn:normalized_c }. As expected, the Hill's approximation is acting like a `magnifying glass' of the dynamics in a neighborhood of Hektor.
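As an independent cross-check of the normalized quantities above, the following Python sketch (ours, not part of the derivation; all variable names are our own) recomputes $\mu$, $m_3$, $C'$, $c$ and $\rho_3$ directly from the raw data:

```python
# Recompute the normalized Sun-Jupiter-Hektor constants from the raw data.
m_sun, m_jup, m_hek = 1.989e30, 1.898e27, 7.91e18   # masses in kg
d_sj = 778.5e6                                      # Sun-Jupiter distance in km
R3_km, C20 = 92.0, -0.476775                        # Hektor: average radius (km), oblateness

mu = m_jup / (m_sun + m_jup)                        # mass ratio of the primaries
m3 = m_hek / (m_sun + m_jup + m_hek)                # normalized mass of Hektor
R3 = R3_km / d_sj                                   # normalized average radius
Cprime = R3 ** 2 * C20 / 2.0                        # coefficient C' (no Hill rescaling)
c = Cprime / m3 ** (2.0 / 3.0)                      # rescaled coefficient c = m3^(-2/3) R3^2 C20 / 2
rho3 = R3 / m3 ** (1.0 / 3.0)                       # rescaled radius rho_3 = m3^(-1/3) R3
```

The rescaling amplifies the oblateness coefficient by the factor $m_3^{-2/3}\approx 4\times 10^{7}$, which quantifies the `magnifying glass' effect mentioned above.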
\subsection{Hill's approximation of the restricted four-body problem with oblate tertiary in rotated coordinates} In this section we write the Hamiltonian of Hill's approximation of the restricted four-body problem with oblate tertiary in a rotating reference frame in which the primary and the secondary will be located on the horizontal axis. \begin{cor}\label{cor:Hill} The Hamiltonian \eqref{eqn:hill_hamiltonian} is equivalent, via a rotation of the coordinate axes that places the primary and the secondary on the $x$-axis, to the Hamiltonian \begin{equation}\label{eqn:hill_rotated}\begin{split} H&=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+y p_x-xp_y\\ &+\left(\frac{1-\lambda_2}{2}\right)x^2+\left(\frac{1-\lambda_1}{2}\right)y^2+\frac{1}{2}z^2\\ &-\frac{1}{\sqrt{x^2+y^2+z^2}}-\left(\frac{\rho_3^2C_{20}}{2}\right) \frac{1}{(x^2+y^2+z^2)^{\frac{3}{2}}}\left (\frac{3z^2}{x^2+y^2+z^2} -1\right), \end{split}\end{equation} where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $M$ of the quadratic part of the planar effective potential, computed in the proof below, and $\rho_3=m_3^{-1/3}R_3$. \end{cor} \begin{proof} We perform a rotation on the $xy$-plane and rewrite the Hamiltonian in \eqref{eqn:Hill-Ham} in the framework of the rotating coordinates, which are more suitable for the subsequent analysis. Since the rotation will be performed on the plane, we restrict the computations to the planar case. The planar effective potential restricted to the $xy$-plane is given by \begin{equation*}\begin{split}\widehat\Omega (x,y)=&\frac{3v^2}{8}x^2+\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)xy+\frac{3(4-v^2)}{8}y^2\\ &+\frac{1}{(x^2+y^2)^{1/2}}- \frac{c }{(x^2+y^2)^{3/2}},\end{split}\end{equation*} which can be written in matrix notation as $$\widehat\Omega = \frac{1}{2}w^TMw+\frac{1}{\lVert w \rVert}- \frac{c }{ \lVert w \rVert^3},$$ where $w=(x,y)^T$ and \[ M= \left[ {\begin{array}{ll} \frac{3v^2}{4} & \frac{3v\sqrt{4-v^2}}{4}(1-2\mu) \\[0.5em] \frac{3v\sqrt{4-v^2}}{4}(1-2\mu) & \frac{3(4-v^2)}{4} \end{array} } \right].
\] Notice that the matrix $M$ is symmetric, so its eigenvalues are real, the eigenvectors $v_1$ and $v_2$ are orthogonal, and the corresponding orthogonal matrix $C=\textrm{col}(v_2,v_1)$ defines a rotation in the $xy$-plane. We find the eigenvalues of $M$ by solving the characteristic equation: \begin{equation} \label{eq18} \begin{split} \det( M-\lambda I )=0 \Rightarrow& \lambda^2-3\lambda +\frac{9v^2(4-v^2)}{4}(\mu-\mu^2)=0,\\ \Rightarrow& \lambda_{1}=\frac{3- 3\sqrt{1-v^2(4-v^2)(\mu-\mu^2)}}{2},\\& \lambda_{2}=\frac{3+ 3\sqrt{1-v^2(4-v^2)(\mu-\mu^2)}}{2}. \end{split} \end{equation} Since $0<\mu\leq \frac{1}{2}$, $\mu-\mu^2\leq 1/4$, and we have $1>1-v^2(4-v^2)(\mu-\mu^2)\geq 1-\frac{1}{4}v^2(4-v^2)=\left(1-\frac{v^2}{2}\right)^2> 0$. Thus $\lambda_1,\lambda_2>0$ and $\lambda_1\neq\lambda_2$. We notice that when $\mu=\frac{1}{2}$, the matrix $M$ is already a diagonal matrix, and the corresponding eigenvalues are $\lambda_1=\frac{3v^2}{4} $ and $\lambda_2=\frac{3(4-v^2)}{4}$. Therefore, below we consider the case $\mu\neq\frac{1}{2}$ for which we proceed to compute the eigenvectors associated to $\lambda_1$ and $\lambda_2$. 
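As a numerical sanity check of \eqref{eq18}, the following sketch (ours) evaluates the closed-form eigenvalues and verifies that they satisfy the characteristic polynomial; the value of $\mu$ is the Sun-Jupiter-Hektor one used in Section~\ref{sec:Hill_system}, and $v$ is set to $1$, which holds up to an error of order $10^{-14}$ for Hektor:

```python
import math

def eigenvalues(mu, v):
    # Roots of lambda^2 - 3*lambda + (9/4) v^2 (4 - v^2) (mu - mu^2) = 0.
    disc = math.sqrt(1.0 - v * v * (4.0 - v * v) * (mu - mu * mu))
    return 1.5 * (1.0 - disc), 1.5 * (1.0 + disc)   # (lam1, lam2)

mu, v = 0.0009533386, 1.0
lam1, lam2 = eigenvalues(mu, v)

# Both roots satisfy the characteristic polynomial, and their sum is tr M = 3.
det_M = 2.25 * v * v * (4.0 - v * v) * (mu - mu * mu)
for lam in (lam1, lam2):
    assert abs(lam * lam - 3.0 * lam + det_M) < 1e-13
```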
The eigenvector $v_1$ such that $Mv_1=\lambda_1v_1$ and $\lVert v_1 \rVert=1$ is given by \[ v_1= \left[ {\begin{array}{c} \displaystyle\frac{\frac{3(4-v^2)}{4} -\lambda_1}{\Delta_1} \\[0.75em] \displaystyle-\frac{\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)}{\Delta_1} \end{array} } \right], \] where \[\Delta_1 = \left[\frac{9v^2(4-v^2)}{8}(1-2\mu)^2+\frac{3(v^2-2)}{2}\left(\lambda_1-\frac{3(4-v^2)}{4}\right)\right]^{1/2}.\] Similarly, the eigenvector $v_2$ such that $Mv_2=\lambda_2v_2$ and $\lVert v_2 \rVert=1$ is given by \[ v_2= \left[ {\begin{array}{c} \displaystyle\frac{\frac{3(4-v^2)}{4} -\lambda_2}{\Delta_2} \\[0.75em] \displaystyle-\frac{\frac{3v\sqrt{4-v^2}}{4}(1-2\mu)}{\Delta_2} \end{array} } \right], \] where \[\Delta_2 = \left[\frac{9v^2(4-v^2)}{8}(1-2\mu)^2+\frac{3(v^2-2)}{2}\left(\lambda_2-\frac{3(4-v^2)}{4}\right)\right]^{1/2}.\] The equations of motion for the planar case can be written as $$\ddot{w}-2\mathcal{J}\dot{w}=Mw-\frac{w}{\lVert w \rVert^3}+\frac{{3c }w}{\lVert w \rVert^5},$$ where \[ \mathcal{J}= \left[ {\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} } \right]. \] Consider the linear change of variables $w=C\bar{w}$ with $\bar{w} = (\bar{x},\bar{y})^T$. By substituting the new variable and multiplying by $C^{-1}$ from the left, we obtain $$C^{-1}C\ddot{\bar{w}}-2C^{-1}\mathcal{J} C\dot{\bar{w}}=C^{-1}MC\bar{w}-\frac{C^{-1}C\bar{w}}{\lVert \bar{w} \rVert^3}+3c \frac{C^{-1}C\bar{w}}{\lVert \bar{w} \rVert^5}.$$ Notice that $D=C^{-1}MC$ is the diagonal matrix $D=\textrm{diag}(\lambda_2,\lambda_1)$, and that, since $C$ is orthogonal, $\lVert C \bar{w} \rVert=\lVert \bar{w} \rVert$. Therefore the equation becomes $$\ddot{\bar{w}}-2C^{-1}\mathcal{J}C\dot{\bar{w}}=D\bar{w} -\frac{\bar{w}}{\lVert \bar{w} \rVert^3}+\frac{3c \bar{w}} {\lVert \bar{w} \rVert^5}.$$ Recall that $v_1=(v_{11},v_{12})^T$, $v_2=(v_{21},v_{22})^T$ and $C=\textrm{col}(v_2,v_1)$.
Since $C$ is orthogonal, we have $C^{-1}=C^{T}$ and moreover \[ C^{-1}\mathcal{J} C= \left[ {\begin{array}{cc} 0 & v_{12}v_{21}-v_{11}v_{22} \\ -(v_{12}v_{21}-v_{11}v_{22}) & 0 \end{array} } \right]. \] A direct computation shows that $v_{12}v_{21}-v_{11}v_{22}=1$, which implies $C^{-1}\mathcal{J} C=\mathcal{J}$. Since $C^{-1}\mathcal{J}C=C^T\mathcal{J}C=\mathcal{J}$, the matrix $C$ is symplectic by definition. Therefore, the change of coordinates is symplectic. Thus, the equations of motion can be written as $$\ddot{\bar{w}}-2\mathcal{J}\dot{\bar{w}}=D\bar{w}-\frac{\bar{w}}{\lVert \bar{w} \rVert^3}+ \frac{3c \bar{w}}{\lVert \bar{w} \rVert^5}.$$ For $\mu \in [0,\frac{1}{2})$, we obtain the equations \begin{equation} \label{eq19} \begin{split} &\ddot{\bar{x}}-2\dot{\bar{y}}=\bar{\Omega}_{\bar{x}}\\ &\ddot{\bar{y}}+2\dot{\bar{x}}=\bar{\Omega}_{\bar{y}} \end{split} \end{equation} with $$\bar{\Omega}(\bar{x},\bar{y}) = \frac{1}{2}(\lambda_2\bar{x}^2+\lambda_1\bar{y}^2)+\frac{1}{\lVert \bar{w} \rVert}- \frac{c }{\lVert \bar{w} \rVert^3}.$$ From the expressions for $\bar{\Omega}_{\bar{x}}$ and $\bar{\Omega}_{\bar{y}}$, we notice the symmetry properties: $$\bar{\Omega}_{\bar{x}}(\bar{x},-\bar{y})=\bar{\Omega}_{\bar{x}}(\bar{x},\bar{y})\ ,\qquad \bar{\Omega}_{\bar{y}}(\bar{x},-\bar{y})=-\bar{\Omega}_{\bar{y}}(\bar{x},\bar{y}).$$ Using these properties, we see that the equations (\ref{eq19}) are invariant under the transformations $\bar{x}\rightarrow \bar{x}$, $\bar{y} \rightarrow -\bar{y}$, $\dot{\bar x}\rightarrow -\dot{\bar x}$, $\dot{\bar y} \rightarrow \dot{\bar y}$, $\ddot{\bar x} \rightarrow \ddot{\bar x}$ and $\ddot{\bar y}\rightarrow -\ddot{\bar y}$.
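The properties of $C$ established above (orthonormal columns, $v_{12}v_{21}-v_{11}v_{22}=1$, diagonalization of $M$) can be verified numerically; the following sketch (ours, with the Sun-Jupiter-Hektor value of $\mu$ and $v=1$) builds $C=\textrm{col}(v_2,v_1)$ from the explicit eigenvectors:

```python
import math

mu, v = 0.0009533386, 1.0        # Hektor values; v = 1 up to O(1e-14)
a = 0.75 * v * v                                          # M_{11} = 3v^2/4
d = 0.75 * (4.0 - v * v)                                  # M_{22} = 3(4-v^2)/4
b = 0.75 * v * math.sqrt(4.0 - v * v) * (1.0 - 2.0 * mu)  # off-diagonal entry
M = [[a, b], [b, d]]

disc = math.sqrt(1.0 - v * v * (4.0 - v * v) * (mu - mu * mu))
lam1, lam2 = 1.5 * (1.0 - disc), 1.5 * (1.0 + disc)

def unit_eigvec(lam):
    # v_i = ((d - lam_i)/Delta_i, -b/Delta_i); the Euclidean norm Delta_i
    # agrees with the closed-form Delta_i given in the text.
    delta = math.hypot(d - lam, b)
    return ((d - lam) / delta, -b / delta)

v1, v2 = unit_eigvec(lam1), unit_eigvec(lam2)
C = [[v2[0], v1[0]], [v2[1], v1[1]]]                      # C = col(v2, v1)

# For 2x2 matrices, C^T J C = (det C) J, so det C = 1 is the symplecticity condition.
detC = C[0][0] * C[1][1] - C[0][1] * C[1][0]

# Entries of D = C^T M C: must be diag(lam2, lam1) with vanishing off-diagonal.
d00 = sum(C[i][0] * M[i][j] * C[j][0] for i in range(2) for j in range(2))
d11 = sum(C[i][1] * M[i][j] * C[j][1] for i in range(2) for j in range(2))
off = sum(C[i][0] * M[i][j] * C[j][1] for i in range(2) for j in range(2))
```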
If we now go back to the spatial problem, we need to replace $\bar\Omega$ by \begin{equation}\label{eqn:eff_poten_rot}\begin{split}\bar{\Omega}(\bar{x},\bar{y},\bar{z})= &\frac{1}{2}(\lambda_2\bar{x}^2+\lambda_1\bar{y}^2-\bar{z}^2)\\ &+\frac{1}{(\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{1}{2}}}- \frac{c }{(\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{3}{2}}} +\frac{3c \bar{z}^2}{(\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{5}{2}}}; \end{split}\end{equation} writing $\bar{\Omega}(\bar{x},\bar{y},\bar{z})=\frac{1}{2}\bar{x}^2+\frac{1}{2}\bar{y}^2+\bar U(\bar{x},\bar{y},\bar{z})$, we can define $\bar U$ as \begin{equation}\label{eqn:grav_poten_rot} \begin{split} \bar{U}(\bar{x},\bar{y},\bar{z})=&\bar{\Omega}(\bar{x},\bar{y},\bar{z})-\frac{1}{2}\bar{x}^2-\frac{1}{2}\bar{y}^2\\ =&\left(\frac{\lambda_2-1}{2}\right)\bar{x}^2+\left(\frac{\lambda_1-1}{2}\right)\bar{y}^2-\frac{1}{2}\bar{z}^2\\ &+\frac{1}{(\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{1}{2}}} - \frac{c }{ (\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{3}{2}}} +\frac{3c \bar{z}^2}{(\bar{x}^2+\bar{y}^2+\bar{z}^2)^{\frac{5}{2}}}. \end{split} \end{equation} In conclusion, the Hamiltonian in these new coordinates is given by the following expression (we omit the bars to simplify the notation): \begin{equation*} \begin{split} H(x,y,z,p_x,p_y,p_z) &=\frac{1}{2}(p_x^2+p_y^2+p_z^2)+y p_x-xp_y\\ &+\left(\frac{1-\lambda_2}{2}\right)x^2+\left(\frac{1-\lambda_1}{2}\right)y^2+\frac{1}{2}z^2\\ &-\frac{1}{(x^2+y^2+z^2)^{\frac{1}{2}}}+\frac{c }{(x^2+y^2+z^2)^{\frac{3}{2}}} -\frac{3c z^2}{(x^2+y^2+z^2)^{\frac{5}{2}}}, \end{split} \end{equation*} which coincides with \eqref{eqn:hill_rotated}. \end{proof} \begin{rem}\label{rem:lunar_Hill} If we let $C_{20}=0$ and $\mu=0$ in \eqref{eqn:hill_rotated}, we obtain the Hamiltonian for the classical lunar Hill problem, see, e.g., \cite{MEYER1982}.
\end{rem} \section{Linear stability analysis of the Hill four-body problem with oblate tertiary}\label{sec:linear} In this section we determine the equilibrium points associated to the potential in \eqref{eqn:eff_poten_rot} and we analyze their linear stability. \subsection{The equilibrium points of the system} Considering the potential \eqref{eqn:eff_poten_rot} (again we omit the bars for a simplified notation), \begin{equation*}\begin{split}\Omega=&\frac{1}{2}(\lambda_2x^2+\lambda_1y^2-z^2) \\&+\frac{1}{(x^2+y^2+z^2)^{\frac{1}{2}}} -\frac{c }{(x^2+y^2+z^2)^{\frac{3}{2}}} +\frac{3c z^2}{(x^2+y^2+z^2)^{\frac{5}{2}}}, \end{split}\end{equation*} we compute its derivatives as \begin{equation} \begin{split} \Omega_x &=\lambda_2x-\frac{x}{(x^2+y^2+z^2)^{\frac{3}{2}}}+\frac{3c x}{(x^2+y^2+z^2)^{\frac{5}{2}}} -\frac{15c z^2x}{(x^2+y^2+z^2)^{\frac{7}{2}}}\\ & =\lambda_2x-\frac{x}{r^3}+\frac{3c x}{r^5}-\frac{15c z^2x}{r^7}\\ \Omega_y &=\lambda_1y-\frac{y}{(x^2+y^2+z^2)^{\frac{3}{2}}}+\frac{3c y}{(x^2+y^2+z^2)^{\frac{5}{2}}} -\frac{15c z^2y}{(x^2+y^2+z^2)^{\frac{7}{2}}}\\ & =\lambda_1y-\frac{y}{r^3}+\frac{3c y}{r^5}-\frac{15c z^2y}{r^7}\\ \Omega_z &=-z-\frac{z}{(x^2+y^2+z^2)^{\frac{3}{2}}}+\frac{9c z}{(x^2+y^2+z^2)^{\frac{5}{2}}} -\frac{15c z^3}{(x^2+y^2+z^2)^{\frac{7}{2}}}\\ & =-z-\frac{z}{r^3}+\frac{9c z}{r^5}-\frac{15c z^3}{r^7}, \end{split} \end{equation} where $r=(x^2+y^2+z^2)^{\frac{1}{2}}$. To find the equilibrium points we have to solve the system \[ \left. \begin{array}{ll} \Omega_{x} = 0\\ \Omega_{y} = 0\\ \Omega_{z} = 0 \end{array} \right\}\Rightarrow \left. 
\begin{array}{ll} \displaystyle \left(\lambda_2-\frac{1}{r^3}+\frac{3c }{r^5}-\frac{15c z^2}{r^7}\right)x :=Ax =0\\[0.5em] \displaystyle\left(\lambda_1-\frac{1}{r^3}+\frac{3c }{r^5}-\frac{15c z^2}{r^7}\right)y :=B y= 0\\[0.5em] \displaystyle \left(-1-\frac{1}{r^3}+\frac{9c }{r^5}-\frac{15c z^2}{r^7}\right)z:=C z = 0 \end{array} \right\} \] In the above expressions $A$ and $B$ cannot simultaneously be equal to $0$ since $\lambda_1\neq\lambda_2$. Also, $A$ and $C$, or $B$ and $C$, cannot simultaneously be equal to $0$, since $A-C=\lambda_2+1-\frac{6c }{r^5}>0$ because $c <0$; a similar argument holds for $B$ and $C$. This implies that, for example, if $A= 0$, then $B\neq 0$ and $C\neq 0$, so $y=z=0$ and $x$ is given by the equation $A=0$; the same reasoning applies for the other combinations of variables. Thus, all equilibrium points must lie on the $x$-, $y$-, $z$-coordinate axes. Precisely, we have the following results. \begin{description} \item[$i)$ Equilibrium points on the $x$-axis] In the case $A= 0$, $B\neq0$, $C\neq0$, we must have $y=z=0$. From $A=0$ and $z=0$ we infer $\displaystyle h_A(r):=\lambda_2-\frac{1}{r^3}+\frac{3c }{r^5}=0$. We have $\displaystyle h'_A(r)= \frac{3}{r^4}-\frac{15c }{r^6}>0$, since $c <0$; also, $\lim_{r\to 0}h_A(r)=-\infty$ and $\lim_{r\to \infty}h_A(r)=\lambda_2>0$. Hence, the equation $h_A(r)=0$ has a unique solution $r^*_x>0$, yielding the equilibrium points $(\pm r^*_x,0,0)$. \item[$ii)$ Equilibrium points on the $y$-axis] In the case $B= 0$, $A\neq0$, $C\neq0$, we must have $x=z=0$. From $B=0$ and $z=0$ we infer $\displaystyle h_B(r):=\lambda_1-\frac{1}{r^3}+\frac{3c }{r^5}=0$. We have $\displaystyle h'_B(r)= \frac{3}{r^4}-\frac{15c }{r^6}>0$, since $c <0$; also, $\lim_{r\to 0}h_B(r)=-\infty$ and $\lim_{r\to \infty}h_B(r)=\lambda_1>0$. Hence, the equation $h_B(r)=0$ has a unique solution $r^*_y>0$, yielding the equilibrium points $(0,\pm r^*_y,0)$.
\item[$iii)$ Equilibrium points on the $z$-axis] In the case $C= 0$, $A\neq0$, $B\neq0$, we must have $x=y=0$, so $z=\pm r$. Hence $C=0$ implies $-1-\frac{1}{r^3}+\frac{9c }{r^5}-\frac{15c r^2}{r^7}=-1-\frac{1}{r^3}-\frac{6c }{r^5} =(-r^5-r^2-6c )/r^5=0$. Let $h_C(r)=-r^5-r^2-6c $. We have $h'_C(r)=-5r^4-2r<0$; also, $\lim_{r\to 0} h_C(r)=-6c >0$ and $\lim_{r\to +\infty} h_C(r)=-\infty$. Hence, the equation $h_C(r)=0$ has a unique solution $r^*_z>0$, yielding the equilibrium points $(0,0,\pm r^*_z)$. \end{description} In the case of the Sun-Jupiter-Hektor system, in normalized units, we obtain $\lambda_1= 0.0021444999866622183$, $\lambda_2=2.997855500013338$, and the locations of the equilibrium points are given in the following table. \[ \begin{tabular}{|l|l|l|l|} \hline & $x$ & $y$ & $z$ \\ \hline $x$-axis equilibria & $\pm 0.6935267570$ & 0 & 0 \\ $y$-axis equilibria & 0 & $\pm 7.7545747196$ & 0 \\ $z$-axis equilibria & 0 & 0 & $\pm 0.0008923544$ \\ \hline \end{tabular} \] We remark that in the case of the Hill's four-body problem with a non-oblate tertiary, the $x$-axis equilibria and the $y$-axis equilibria also exist, see \cite{Burgos_Gidea}; their locations, in the case of Hektor, are very close to the ones in the case of an oblate tertiary. Precisely, we have the following values. \[ \begin{tabular}{|l|l|l|l|} \hline & $x$ & $y$ & $z$ \\ \hline $x$-axis equilibria & $\pm 0.6935265657$ & 0 & 0 \\ $y$-axis equilibria & 0 & $\pm 7.7545747024$ & 0 \\ \hline \end{tabular} \] This result leads us to conclude that the $x$-axis equilibria and the $y$-axis equilibria for the Hill's problem with oblate tertiary are continuations of the ones for the Hill's problem with non-oblate tertiary. On the other hand, the $z$-axis equilibria do not exist for the Hill's problem with non-oblate tertiary, so they are new features of the Hill's problem with oblate tertiary.
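Since $h_A$ and $h_B$ are strictly increasing and $h_C$ is strictly decreasing, the tabulated values can be reproduced with plain bisection; the following sketch (ours) uses the values of $\lambda_1$, $\lambda_2$ and $c$ quoted above:

```python
def bisect(f, lo, hi, iters=200):
    """Bisection for a root of f on [lo, hi], assuming a sign change."""
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam1, lam2 = 0.0021444999866622183, 2.997855500013338
c = -1.32716e-7

h_A = lambda r: lam2 - r**-3 + 3.0 * c * r**-5   # x-axis equilibria (increasing)
h_B = lambda r: lam1 - r**-3 + 3.0 * c * r**-5   # y-axis equilibria (increasing)
h_C = lambda r: -r**5 - r**2 - 6.0 * c           # z-axis equilibria (decreasing)

rx = bisect(h_A, 0.5, 1.0)
ry = bisect(h_B, 5.0, 10.0)
rz = bisect(h_C, 1e-4, 1e-2)
```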
To summarize, the Hill's three-body problem has 2 equilibrium points, the Hill's four-body problem has 4 equilibrium points, and the Hill's four-body problem with oblate tertiary has 6 equilibrium points. \vskip.1in In Fig. \ref{fig:Hektor_z_dependence_on_c} we plot the dependence on $c $ of the distance (in km) from the $z$-axis equilibrium point to the origin, when we let the parameter $C_{20}$ range between $-0.001$ and $-0.95$. The estimates on the axes of Hektor are $a=208$ km, $b=65.5$ km, $c=60$ km (\cite{DESCAMPS2015}). Using the ellipsoid model, we find $C_{20}=-0.476775$ and hence $z\simeq 100$ km, which lies outside the asteroid. Using $C_{20}=-0.15$ as provided by \cite{Marchis}, we obtain $z\simeq 62$ km, which basically coincides with the surface of the asteroid. We notice that the vertical equilibrium is not outside the Brillouin sphere, whose exterior is the region where the spherical harmonic series expansion is guaranteed to be convergent. Inside the Brillouin sphere the series is divergent, if the shape is an ellipsoid. Given that the true shape of the asteroid is unknown, we cannot determine precisely the region where the spherical harmonic series is convergent or divergent and, hence, we cannot decide whether the $z$-axis equilibrium is indeed real; we can only postulate its existence. \begin{figure} \includegraphics[width=0.85\textwidth]{Hektor_z_dependence_on_C20.pdf} \caption{The dependence of the distance of the $z$-axis equilibrium point on the rescaled $C_{20}$, see \eqref{eqn:normalized_c }.} \label{fig:Hektor_z_dependence_on_c} \end{figure} To convert to real units, the distances from the equilibrium points to the center need to be multiplied by $m_3^{1/3}$ (due to the rescaling involved in the Hill procedure) and by the unit of distance, which in this case is the Sun-Jupiter distance.
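As an illustration of this conversion (a sketch of ours; the normalized distances are those tabulated above), the following recovers the physical distances and compares the vertical one with the estimate obtained from $h_C(r)=0$ by dropping the $r^5$ term, i.e. $r\approx(-6c)^{1/2}$, which in physical units reads $R_3(-3C_{20})^{1/2}$:

```python
m3 = 7.91e18 / 1.990898e30            # normalized mass of Hektor
unit_km = 778.5e6                     # unit of distance: Sun-Jupiter distance in km
scale = m3 ** (1.0 / 3.0) * unit_km   # km per normalized Hill unit

rx_km = 0.6935267570 * scale          # x-axis equilibria
ry_km = 7.7545747196 * scale          # y-axis equilibria
rz_km = 0.0008923544 * scale          # z-axis equilibria

# Closed-form estimate for the vertical equilibrium distance:
# dropping r^5 in h_C(r) = 0 gives r = (-6c)^(1/2), i.e. R3 * (-3*C20)^(1/2) in km.
R3_km, C20 = 92.0, -0.476775
rz_est_km = R3_km * (-3.0 * C20) ** 0.5
```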
It follows that the $x$-axis equilibrium points are at a distance of $85,512.774$ km from Hektor, the $y$-axis equilibrium points are at a distance of $956,149.406$ km, and the $z$-axis equilibrium points are at a distance of $110.028$ km. As the smallest semi-axis of Hektor is $60$ km, the $z$-axis equilibrium points are outside the body of the asteroid. It seems though that for many other asteroids the $z$-axis equilibrium points are located inside their body. \begin{rem} These $z$-axis equilibria also appear in the case of the motion of a particle in a geopotential field (see, e.g., \cite{celletti2014dynamics}). From that model it can be derived that the distance from the $z$-axis equilibrium points to the center is given by $\hat r_z= R_3(-3C_{20})^{1/2}$. When we apply this formula in the case of Hektor, the numerical result is very close to the one found above. This formula can be derived from the equation $h_C(r)=0$ if we drop the term $r^5$. \end{rem} \subsection{Linear stability of the equilibrium points} We study the linear stability of the equilibrium points in the case of Hektor. The Hamiltonian \eqref{eqn:hill_rotated} yields the following system of equations \begin{eqnarray*} &\dot x=v_x, &\dot {v}_x=2v_y+ \Omega_x,\\ &\dot y=v_y, &\dot {v}_y=-2v_x+ \Omega_y,\\ &\dot z=v_z, &\dot{v}_z= \Omega_z, \end{eqnarray*} where $\Omega$ is the effective potential given by \eqref{eqn:eff_poten_rot} (again, we omit the overline on the variables).
The second order derivatives of $\Omega$ are given by \begin{equation}\begin{split} \Omega_{xx}=&\lambda_2-\frac{1}{r^3}+\frac{3x^2}{r^5}+\frac{3c}{r^5} -\frac{15c x^2}{r^7}-\frac{15c z^2}{r^7}+\frac{105c z^2x^2}{r^9},\\ \Omega_{yy}=&\lambda_1-\frac{1}{r^3}+\frac{3y^2}{r^5}+\frac{3c}{r^5} -\frac{15c y^2}{r^7}-\frac{15c z^2}{r^7}+\frac{105c z^2y^2}{r^9},\\ \Omega_{zz}=&-1-\frac{1}{r^3}+\frac{3z^2}{r^5}+\frac{9c}{r^5}-\frac{90c z^2}{r^7}+\frac{105c z^4}{r^9},\\ \Omega_{xy}=&\frac{3xy}{r^5}-\frac{15c xy}{r^7}+\frac{105c z^2xy}{r^9},\\ \Omega_{xz}=&\frac{3xz}{r^5}-\frac{45c xz}{r^7}+\frac{105c z^3x}{r^9},\\ \Omega_{yz}=&\frac{3yz}{r^5}-\frac{45c yz}{r^7}+\frac{105c z^3y}{r^9}. \end{split}\end{equation} The Jacobian matrix describing the linearized system is \begin{equation}\label{eqn:jacobi} \mathscr{J}=\left( \begin{array}{rrrrrr} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \Omega_{xx} & \Omega_{xy} & \Omega_{xz} & 0 & 2 & 0 \\ \Omega_{yx} & \Omega_{yy} & \Omega_{yz} &-2 & 0 & 0 \\ \Omega_{zx} & \Omega_{zy} & \Omega_{zz} & 0 & 0 & 0 \\ \end{array} \right). \end{equation} Since the equilibria are of the form $(\pm r^*_x,0,0)$, $(0,\pm r^*_y,0)$, $(0,0,\pm r^*_z)$, the mixed second order partial derivatives $\Omega_{xy}$, $\Omega_{xz}$, $\Omega_{yz}$ vanish at each of the equilibrium points. Hence Jacobian matrix \eqref{eqn:jacobi} evaluated at the equilibria is of the form: \begin{equation}\label{eqn:jacobi_at_z} \mathscr{J}=\left( \begin{array}{rrrrrr} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \Omega_{xx} &0 & 0 & 0 & 2 & 0 \\ 0 & \Omega_{yy} & 0 &-2 & 0 & 0 \\ 0 & 0 & \Omega_{zz} & 0 & 0 & 0 \\ \end{array} \right), \end{equation} The characteristic equation of \eqref{eqn:jacobi_at_z} is \begin{equation}\label{eqn:charcteristic} (\rho^2-\Omega_{zz})(\rho^4+(4-\Omega_{xx}-\Omega_{yy})\rho^2+\Omega_{xx}\Omega_{yy})=0. 
\end{equation} The stability of the equilibria depends on the signs of $\Omega_{zz}$ and of $A$, $B$, and $D$, where $A:=4-\Omega_{xx}-\Omega_{yy}$, $B:=\Omega_{xx}\Omega_{yy}$, and $D:=A^2-4B$ are the coefficients and the discriminant of the quartic factor in \eqref{eqn:charcteristic}. We find the following stability character of the equilibrium positions in the case of the Sun-Jupiter-Hektor system: \begin{description} \item[$i)$ Eigenvalues of $x$-axis equilibria at $(\pm 0.6935267570, 0 ,0)$] \begin{eqnarray*} &2.50694248, &-2.50694248,\\ &2.07048307i, &-2.07048307i,\\ &1.99946504i, &-1.99946504i. \end{eqnarray*} \textbf{Stability type:} center $\times $ center $\times $ saddle. \item[$ii)$ Eigenvalues of $y$-axis equilibria at $(0,\pm 7.7545747196,0)$] \begin{eqnarray*} & 0.98901573i, &-0.98901573i,\\ & 0.14036874i, &-0.14036874i,\\ & 1.00107168i, &-1.00107168i. \end{eqnarray*} \textbf{Stability type:} center $\times $ center $\times $ center. \item[$iii)$ Eigenvalues of $z$-axis equilibria at $(0,0,\pm 0.0008923544)$] \begin{eqnarray*} &37514.0432165187+0.9999999998i, &-37514.0432165187+0.9999999998i,\\ &37514.0432165187-0.9999999998i, &-37514.0432165187-0.9999999998i,\\ &53052.8687i, &-53052.8687i. \end{eqnarray*} \textbf{Stability type:} center $\times $ complex saddle. \end{description} Note that for the $z$-axis equilibria the imaginary part of the `Krein quartet' of eigenvalues is approximately $\pm 1$. This means that the infinitesimal motion around the equilibrium point is close to the $1:1$ resonance with the rotation of the primary and the secondary. In Fig.~\ref{fig:Hektor_eigenvalues} we show, for a range of $r^*_z$ values between $r^*_z=0.000892354498497342$ (corresponding to the $c$ value for Hektor) and $r^*_z=0.01$ (corresponding to $c=-1.666668333\times 10^{-5}$), the real part and the imaginary part of the `Krein quartet' of eigenvalues; the imaginary part stays close to $\pm 1$.
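The block structure of \eqref{eqn:jacobi_at_z} and the factorization of its characteristic polynomial in \eqref{eqn:charcteristic} are easy to check numerically: every eigenvalue of the matrix must be a root of the factored polynomial. A short NumPy sketch with illustrative values of the second derivatives (not Hektor's):

```python
import numpy as np

def jacobian(oxx, oyy, ozz):
    # linearization at an axis equilibrium: mixed second derivatives vanish
    J = np.zeros((6, 6))
    J[0, 3] = J[1, 4] = J[2, 5] = 1.0
    J[3, 0], J[4, 1], J[5, 2] = oxx, oyy, ozz
    J[3, 4], J[4, 3] = 2.0, -2.0
    return J

oxx, oyy, ozz = 2.7, 0.5, -5.1   # illustrative values, not Hektor's
eig = np.linalg.eigvals(jacobian(oxx, oyy, ozz))

def char_poly(rho):
    # (rho^2 - Ozz) * (rho^4 + (4 - Oxx - Oyy) rho^2 + Oxx*Oyy)
    return (rho**2 - ozz) * (rho**4 + (4.0 - oxx - oyy) * rho**2 + oxx * oyy)
```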
\begin{figure}$\begin{array}{cc} \includegraphics[width=0.5\textwidth]{real_part_eigenvalues} & \includegraphics[width=0.5\textwidth]{imaginary_part_eigenvalues} \end{array}$ \caption{The dependence of the real part (left) and imaginary part (right) of the Krein quartet of eigenvalues on the $z$-axis equilibrium point. The horizontal axis represents the distance $r^*_z$ from the equilibrium point to the origin, the vertical axis the real part (left), and the absolute value of the imaginary part (right) of the eigenvalues. The former never changes sign, and the latter stays within $4\times 10^{-7}$ from $1$.} \label{fig:Hektor_eigenvalues} \end{figure} In Section \ref{sec:z_stability} we will provide an analytic argument that the real part of the `Krein quartet' of eigenvalues is always non-zero, and the imaginary part is close to $\pm 1$ for $r^*_z$ sufficiently small; this result will help us to explain the behavior observed in Fig.~\ref{fig:Hektor_eigenvalues}. In the sequel we give a more detailed analysis of the linear stability of all equilibria, for a wide range of parameters $\mu$ and $c$. \subsubsection{Linear stability of the equilibria on the $z$-axis} \label{sec:z_stability} The $z$-axis equilibrium points are of the form $(0,0,\pm r^*_z)$, with \begin{equation}\label{eqn:z_eq} -(r^*_z)^5-(r^*_z)^2-6c =0, \end{equation} which yields\begin{equation}\label{eqn:z_c20} c=\frac{-(r^*_z)^2- (r^*_z)^5}{6}. \end{equation} Evaluating $\Omega_{xx}$, $\Omega_{yy}$, $\Omega_{zz}$ at the equilibrium point yields: \begin{equation*}\begin{split} \Omega_{xx}=& \lambda_2 - (r^*_z)^{-3}-12c (r^*_z)^{-5},\\ \Omega_{yy}=& \lambda_1 - (r^*_z)^{-3}-12c (r^*_z)^{-5},\\ \Omega_{zz}=& -1+2(r^*_z)^{-3}+24c (r^*_z)^{-5}. \end{split}\end{equation*} Substituting in \eqref{eqn:z_c20} we have \begin{equation}\label{eqn:Omega_e} \begin{split} \Omega_{xx}&=2+\lambda_2+(r^*_z)^{-3},\\ \Omega_{yy}&=2+\lambda_1+(r^*_z)^{-3},\\ \Omega_{zz}&=-5-2(r^*_z)^{-3}. 
\end{split}\end{equation} Using \eqref{eq18} and denoting $d:=\sqrt{1-v^2(4-v^2)(\mu-\mu^2)}$ we can write \begin{equation}\label{eqn:lambda} \begin{split} \lambda_1=\frac{3}{2}(1-d),\\ \lambda_2=\frac{3}{2}(1+d). \end{split} \end{equation} Also for $c=0$ we have $d_0=\sqrt{1-3(\mu-\mu^2)}$ and \begin{equation}\label{eqn:lambda0} \begin{split} \lambda_{10}=&\frac{3}{2}(1-d_0),\\ \lambda_{20}=&\frac{3}{2}(1+d_0). \end{split} \end{equation} This is in agreement with the results in \cite{Burgos_Gidea}. For future reference, we expand $d$ as a power series in the parameter $c$ as \begin{equation}\label{eqn:d_power} d=d_0+d_1 c+ O(c^2), \end{equation} where the coefficient $d_1$ can be obtained from Taylor's theorem around $c = 0$ as \begin{equation}\label{eqn:d_coeff} \begin{split} d_1=&-\frac{2(\mu-\mu^2)}{d_0} m_3^{2/3}. \end{split} \end{equation} From the characteristic equation \eqref{eqn:charcteristic}, we obtain that the pair of eigenvalues $\rho_{1,2}=\pm (\Omega_{zz})^{1/2}$ is purely imaginary, since by \eqref{eqn:Omega_e}, $\Omega_{zz}<0$. The `Krein quartet' eigenvalues are given by \begin{equation}\label{eqn:quadratic} \rho_{3,4,5,6}= \pm\sqrt{\frac{-A\pm\sqrt{A^2-4B}}{2}}, \end{equation} where \begin{equation*}\begin{array}{lllll}A&=&\displaystyle 4-\Omega_{xx}-\Omega_{yy}&=&-3-\frac{2}{(r^*_z)^3},\\ B&=&\Omega_{xx}\Omega_{yy}&=&\displaystyle 10+{9\over 4} v^2(4-v^2)(\mu-\mu^2) +\frac{7}{(r^*_z)^3}+\frac{1}{(r^*_z)^6}.\end{array}\end{equation*} Then we have \begin{equation*}\begin{split} D:=A^2-4B=&9d^2-40-\frac{16}{(r^*_z)^3}=-31-9v^2(4-v^2)(\mu-\mu^2)-\frac{16}{(r^*_z)^3}<0. \end{split}\end{equation*} Since $-A>0$ and $D<0$, we obtain that the eigenvalues $\rho_{3,4,5,6}$ are complex numbers, non-real, non-purely-imaginary, for all parameter values. Let $\rho=a+ib$ be such that $\rho^2=-\frac{A}{2}\pm\frac{\sqrt{4B-A^2}}{2}i:=\alpha+i\beta$.
We have \[a+ib=\left(\frac{(\alpha^2+\beta^2)^{\frac{1}{2}}+\alpha}{2}\right)^{\frac{1}{2}}+ \textrm{sign}(\beta)\left(\frac{(\alpha^2+\beta^2)^{\frac{1}{2}}-\alpha}{2}\right)^{\frac{1}{2}}\ i. \] To show that $b$ is approximately $\pm 1$, or $b^2\approx 1$, for $r^*_z\approx 0$, note that \begin{equation*}\begin{split} b^2=&\frac{(\alpha^2+\beta^2)^{\frac{1}{2}}-\alpha}{2}=\frac{A}{4}+\frac{\sqrt{B}}{2}\\ =&-\frac{3}{4}+\frac{1}{2}\left[\left(10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}+ \frac{1}{(r^*_z)^6} \right)^{\frac{1}{2}}-\frac{1}{(r^*_z)^3}\right]\\ =&-\frac{3}{4}+\frac{1}{2}\frac{10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}+ \frac{1}{(r^*_z)^6} -\frac{1}{(r^*_z)^6}}{\left(10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}+ \frac{1}{(r^*_z)^6} \right)^{\frac{1}{2}}+\frac{1}{(r^*_z)^3}}\\ =&-\frac{3}{4}+\frac{1}{2}\frac{10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}}{\left(10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}+ \frac{1}{(r^*_z)^6} \right)^{\frac{1}{2}}+\frac{1}{(r^*_z)^3}}, \end{split} \end{equation*} where $\Upsilon:=v^2(4-v^2)(\mu-\mu^2)$. Since \[\lim_{r^*_z\to 0} \frac{10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}}{\left(10+\frac{9}{4}\Upsilon+\frac{7}{(r^*_z)^3}+ \frac{1}{(r^*_z)^6} \right)^{\frac{1}{2}}+\frac{1}{(r^*_z)^3}} = \frac{7}{2},\] we have that $\lim _{r^*_z\to 0} b^2=-\frac{3}{4}+\frac{7}{4}=1$, so $b^2\approx 1$ for $r^*_z\approx 0$, as in the case of Hektor. We have obtained the following result: \begin{prop}\label{prop:_lin_stab} Consider the equilibria on the $z$-axis. For $\mu\in(0,1/2]$, $\Omega_{zz}$, $A$ and $D$ are negative. Consequently, one pair of eigenvalues is purely imaginary, and the two other pairs of eigenvalues are complex conjugate, with the imaginary part close to $\pm 1$ for $c$ negative and sufficiently small. The linear stability is of center $\times$ complex-saddle type.
\end{prop} \subsubsection{Linear stability of the equilibria on the $y$-axis} \label{sec:y_stability} The $y$-axis equilibrium points are of the form $(0,\pm r^*_y,0)$, with \begin{equation}\label{eqn:y_eq}\lambda_1 (r^*_y)^5-(r^*_y)^2+3c=0,\end{equation} which yields \begin{equation}\label{eqn:y_c20} c=\frac{(r^*_y)^2-\lambda_1 (r^*_y)^5}{3}. \end{equation} Evaluating $\Omega_{xx}$, $\Omega_{yy}$, $\Omega_{zz}$ at the equilibrium point yields: \begin{equation}\label{eqn:y_Omega_second_der_0} \begin{split} \Omega_{xx}=&\lambda_2-\frac{1}{(r^*_y)^3}+\frac{3c}{(r^*_y)^5},\\ \Omega_{yy}=&\lambda_1+\frac{2}{(r^*_y)^3}-\frac{12c}{(r^*_y)^5}, \\ \Omega_{zz}=& -1-\frac{1}{(r^*_y)^3}+\frac{9c}{(r^*_y)^5}. \end{split}\end{equation} Substituting $c$ from \eqref{eqn:y_c20} we obtain \begin{equation}\label{eqn:y_Omega_second_der}\begin{split} \Omega_{xx}=&\lambda_2-\lambda_1 ,\\ \Omega_{yy}=& 5\lambda_1-\frac{2}{(r^*_y)^3}, \\ \Omega_{zz}=& -1-3\lambda_1+\frac{2}{(r^*_y)^3}. \end{split}\end{equation} \begin{comment} \begin{equation}\label{eqn:y_AB}\begin{array}{lll} A&=1-3\lambda_1 +\frac{2}{(r^*_y)^3}&=\frac{9d}{2}-\frac{7}{2}+\frac{2}{(r^*_y)^3},\\ B&= (\lambda_2-\lambda_1)\left (5\lambda_1 -\frac{2}{(r^*_y)^3}\right)&=(3d)\left (\frac{15}{2} -\frac{15d}{2} -\frac{2}{(r^*_y)^3}\right). \\ \end{array}\end{equation} \end{comment} We also expand $r^*_y$ as a power series in the parameter $c$ as \begin{equation}\label{eqn:y_r_power} r^*_y=r_{y0}+r_{y1} c+O(c^2), \end{equation} where $\pm r_{y0}$ is the position of the $y$-equilibrium in the case when $c=0$, which is given by $r_{y0}^3=1/\lambda_{10}$; this is in agreement with \cite{Burgos_Gidea}. The computation of $r_{y1}$ yields \begin{equation}\label{eqn:y_r_coeff} \begin{split} r_{y1}=& \frac{-1 +(1/2)d_1 r_{y0}^5}{r_{y0}}, \end{split} \end{equation} with $d_1$ as in \eqref{eqn:d_coeff}.
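The coefficient $r_{y1}$ can be checked by a finite-difference experiment: fix $\mu$, pick an illustrative value for $d_1$ (which in the paper depends on $m_3$), let $\lambda_1(c)=\lambda_{10}-\tfrac{3}{2}d_1 c$, solve \eqref{eqn:y_eq} for a small $c<0$, and compare the slope with the formula. A Python sketch (the chosen $\mu$ and $d_1$ are arbitrary illustrations):

```python
import math

mu, d1 = 0.3, -0.05                        # illustrative parameter values
d0 = math.sqrt(1.0 - 3.0 * (mu - mu**2))
lam10 = 1.5 * (1.0 - d0)
ry0 = (1.0 / lam10) ** (1.0 / 3.0)         # r_y0^3 = 1/lambda_10

def r_y(c, iters=200):
    # solve lambda_1(c) r^5 - r^2 + 3c = 0 by bisection near ry0
    lam1 = lam10 - 1.5 * d1 * c            # lambda_1 = (3/2)(1 - d0 - d1 c)
    lo, hi = 0.5 * ry0, 2.0 * ry0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lam1 * mid**5 - mid**2 + 3.0 * c < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = -1e-6
slope_fd = (r_y(c) - r_y(0.0)) / c         # finite-difference estimate of r_y1
ry1 = (-1.0 + 0.5 * d1 * ry0**5) / ry0     # closed form from the text
```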
We will also need $\frac{1}{(r^*_y)^3}$ as a power series in the parameter $c$ \begin{equation}\label{eqn:y_r_cube_inv_power} \frac{1}{(r^*_y)^3}=\alpha+\beta c+O(c^2), \end{equation} and a simple calculation yields \begin{equation}\label{eqn:y_r_cube_inv} \begin{split} \alpha=&\frac{1}{r_{y0}^3},\\ \beta=& -\frac{3r_{y1}}{r_{y0}^4}. \end{split} \end{equation} For $\mu=1/2$, we have $d_0=\frac{1}{2}$, $\lambda_{10}=\frac{3}{4}$, $d_1=-m_3^{2/3}$, $r_{y0}=\left(\frac{4}{3}\right)^{1/3}$. It is easy to see that the dominant part $d_0$ of $d$ is a strictly decreasing function with respect to $\mu\in(0,1/2]$ and takes values in $[1/2,1)$. The dominant part $\lambda_{10}$ of $\lambda_1$ is increasing with respect to $\mu\in(0,1/2]$ and takes values in $(0,3/4]$. Also, the dominant part $r_{y0}$ of $r^*_y$ is a strictly decreasing function in $\mu\in(0,1/2]$, where $r_{y0}(1/2)=\sqrt[3]{4/3}$ and $r_{y0}\rightarrow\infty$ when $\mu\rightarrow 0$; as a consequence the values of $r_{y0}$ are in the interval $[\sqrt[3]{4/3},\infty)$. From \eqref{eqn:y_Omega_second_der_0} we have \begin{equation*}\begin{split} \Omega_{zz} & =-1-\frac{1}{(r^*_y)^3}+\frac{9c}{(r^*_y)^5}\\ & = -\frac{1}{(r^*_y)^5}((r^*_y)^5+(r^*_y)^2-9c)\\ & < 0 \end{split}\end{equation*} since $r^*_y>0$ and $c$ is negative. Therefore, $\Omega_{zz}<0$ for all admissible values of $\mu$. For $A=4-\Omega_{xx}-\Omega_{yy}$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:y_r_cube_inv_power} we obtain \begin{equation*}\begin{split} A & = 1-3\lambda_1 +\frac{2}{(r^*_y)^3}\\ & = 1-3\lambda_{10}+\frac{2}{(r_{y0})^3}+O(c)\\ & = 1-3\lambda_{10}+2\lambda_{10}+O(c)\\ & = 1-\lambda_{10}+O(c)\\ & > 0 \end{split}\end{equation*} for $c$ small, since $\lambda_{10}\in(0,3/4]$.
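Anticipating the discriminant $D$ computed next: at leading order in $c$ its value is $D_0(\mu)=(1-\lambda_{10})^2-12(3-2\lambda_{10})\lambda_{10}$, which changes sign exactly once on $(0,1/2]$ (it is a downward-sloping quadratic in the increasing quantity $\lambda_{10}$). The crossing can be located numerically; a Python sketch (variable names are ours):

```python
import math

def lam10(mu):
    # lambda_10 = (3/2)(1 - d0), with d0 = sqrt(1 - 3(mu - mu^2))
    return 1.5 * (1.0 - math.sqrt(1.0 - 3.0 * (mu - mu**2)))

def D0(mu):
    # leading-order (c = 0) discriminant for the y-axis equilibria
    lam = lam10(mu)
    return (1.0 - lam) ** 2 - 12.0 * (3.0 - 2.0 * lam) * lam

lo, hi = 1e-9, 0.5   # D0 > 0 near mu = 0, and D0 = -215/16 < 0 at mu = 1/2
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if D0(mid) > 0.0:
        lo = mid
    else:
        hi = mid
mu_star = 0.5 * (lo + hi)   # ~0.0119: where the y-axis stability type changes
```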
For $B=\Omega_{xx}\Omega_{yy}$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:y_r_cube_inv_power} we obtain \begin{equation*}\begin{split} B & = (\lambda_2-\lambda_1)\left (5\lambda_1 -\frac{2}{(r^*_y)^3}\right)\\ & = (3d)\left (\frac{15}{2} -\frac{15d}{2} -\frac{2}{(r^*_y)^3}\right)\\ & = (3d_0)\left(5\lambda_{10}-\frac{2}{r_{y0}^3}\right)+O(c)\\ & = (3d_0)\left(5\lambda_{10}-2\lambda_{10}\right)+O(c)\\ & > 0 \end{split}\end{equation*} for $c$ small. For $D=A^2-4B$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:y_r_cube_inv_power} we have \begin{equation*}\begin{split} D & =\left(1-3\lambda_{10}+\frac{2}{r_{y0}^3}\right)^2-4(3d_0)\left(5\lambda_{10}-\frac{2}{r_{y0}^3}\right)+O(c)\\ & =\left(1-\lambda_{10}\right)^2-12(3-2\lambda_{10})\lambda_{10}+O(c). \end{split}\end{equation*} For $\mu\approx 0$ we have $D\approx 1+O(c)$ and for $\mu=1/2$ we have $D=-\frac{215}{16}+O(c)$. The intermediate value theorem implies that $D$ changes its sign from positive to negative as $\mu$ ranges over $(0,1/2]$, provided $c$ is small. We have thus proved the following result: \begin{prop}\label{prop:y_lin_stab} Consider the equilibria on the $y$-axis. For $\mu\in(0,1/2]$ and for the parameter $c$ negative and small enough, $\Omega_{zz}$ is always negative, the coefficients $A$ and $B$ are always positive, and the value of the discriminant $D$ changes from positive to negative values. Consequently, one pair of eigenvalues is always purely imaginary, and there exists $\mu_*$, depending on $c$, where the other two pairs of eigenvalues change from being purely imaginary to being complex conjugate. The linear stability changes from center $\times$ center $\times$ center type to center $\times$ complex-saddle type.
\end{prop} \subsubsection{Linear stability of the equilibria on the $x$-axis} The $x$-axis equilibrium points are of the form $(\pm r^*_x, 0,0)$, with \begin{equation}\label{eqn:x_eq}\lambda_2(r^*_x)^5-(r^*_x)^2+3c=0,\end{equation} which yields \begin{equation}\label{eqn:x_c20} c=\frac{(r^*_x)^2-\lambda_2 (r^*_x)^5}{3}. \end{equation} Evaluating $\Omega_{xx}$, $\Omega_{yy}$, $\Omega_{zz}$ at the equilibrium point yields: \begin{equation}\label{eqn:x_Omega_second_der_0} \begin{split} \Omega_{xx}=&\lambda_2+\frac{2}{(r^*_x)^3}-\frac{12c}{(r^*_x)^5},\\ \Omega_{yy}=&\lambda_1-\frac{1}{(r^*_x)^3}+\frac{3c}{(r^*_x)^5}, \\ \Omega_{zz}=& -1-\frac{1}{(r^*_x)^3}+\frac{9c}{(r^*_x)^5}. \end{split}\end{equation} Substituting $c$ from \eqref{eqn:x_c20} we obtain \begin{equation}\label{eqn:x_Omega_second_der}\begin{split} \Omega_{xx}=&5\lambda_2-\frac{2}{(r^*_x)^3} ,\\ \Omega_{yy}=& \lambda_1-\lambda_2, \\ \Omega_{zz}=& -1-3\lambda_2+\frac{2}{(r^*_x)^3}. \end{split}\end{equation} We expand $r^*_x$ as a power series in the parameter $c$ as \begin{equation}\label{eqn:r_power} r^*_x=r_{x0}+r_{x1} c+ O(c^2), \end{equation} where $\pm r_{x0}$ is the position of the $x$-equilibrium in the case when $c=0$, which is given by $r_{x0}^3=1/\lambda_{20}$; see \cite{Burgos_Gidea}. The computation of $r_{x1}$ yields \begin{equation}\label{eqn:x_r_coeff} \begin{split} r_{x1}=& \frac{-1-(1/2)d_1 r_{x0}^5}{r_{x0}}. \end{split} \end{equation} We will also need $\frac{1}{(r^*_x)^3}$ as a power series in the parameter $c$ \begin{equation}\label{eqn:x_r_cube_inv_power} \frac{1}{(r^*_x)^3}=\alpha'+\beta' c+O(c^2), \end{equation} and a simple calculation yields \begin{equation}\label{eqn:x_r_cube_inv} \begin{split} \alpha' = & \frac{1}{r_{x0}^3},\\ \beta' = & -\frac{3r_{x1}}{r_{x0}^4}.
\end{split} \end{equation} From \eqref{eqn:x_Omega_second_der_0} we have \begin{equation*}\begin{split} \Omega_{zz} & = -1-\frac{1}{(r^*_x)^3}+\frac{9c}{(r^*_x)^5}\\ &=-\frac{1}{(r^*_x)^5}((r^*_x)^5+(r^*_x)^2-9c)\\ &<0 \end{split}\end{equation*} since $r^*_x>0$ and $c<0$. Therefore, $\Omega_{zz}<0$ for all admissible values of $\mu$. For $A=4-\Omega_{xx}-\Omega_{yy}$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:x_r_cube_inv_power}, we obtain \begin{equation*}\begin{split} A & = 1-3\lambda_{20}+\frac{2}{(r_{x0})^3}+O(c)\\ & = 1-\lambda_{20}+O(c)\\ &=-\frac{1}{2}-\frac{3}{2}d_0+O(c)\\ & < 0 \end{split}\end{equation*} for $c$ small. For $B=\Omega_{xx}\Omega_{yy}$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:x_r_cube_inv_power}, we obtain \begin{equation*}\begin{split} B&= -(3d_0)\left(5\lambda_{20}-\frac{2}{r_{x0}^3}\right)+O(c)\\ &= -(3d_0)\left(5\lambda_{20}-2\lambda_{20}\right)+O(c)\\ &=-9d_0\left(\frac{3}{2}+\frac{3}{2}d_0\right)+O(c)\\ &< 0 \end{split}\end{equation*} for $c$ small. For $D=A^2-4B$, using \eqref{eqn:lambda0} and the expansions \eqref{eqn:d_power} and \eqref{eqn:x_r_cube_inv_power} we have \begin{equation*}\begin{split} D &=\left(1-\lambda_{20}\right)^2+36d_0\lambda_{20}+O(c)\\ & >0 \end{split}\end{equation*} for $c$ small. We have proved the following result: \begin{prop}\label{prop:x_lin_stab} Consider the equilibria on the $x$-axis. For $\mu\in(0,1/2]$ and for parameter $c$ negative and small enough, $\Omega_{zz}$ is negative, $A$ and $B$ are negative, and the value of the discriminant $D$ is always positive. Consequently, two pairs of eigenvalues are purely imaginary, and one pair of eigenvalues is real (one positive and one negative). The linear stability is of center $\times$ center $\times$ saddle type.
\end{prop} \section*{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No. DMS-1440140 while A.C. and M.G. were in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2018 semester. This research was carried out (in part) at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration and funded through the Internal Strategic University Research Partnerships (SURP) program. A.C. was partially supported by GNFM-INdAM and acknowledges the MIUR Excellence Department Project awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C18000100006. M.G. and W-T.L. were partially supported by NSF grant DMS-0635607 and DMS-1814543. We are grateful to Rodney Anderson, Edward Belbruno, Ernesto Perez-Chavela, and Pablo Rold\'an for discussions and comments. \bibliographystyle{alpha}
\section{Introduction} \label{sec:intro} \begin{figure}[h] \setlength{\belowcaptionskip}{-0.4cm} \centering \includegraphics[width=1.0\linewidth]{teaser_TTVSR.pdf} \vspace{-0.6cm} \caption{A comparison between TTVSR and other SOTA methods: MuCAN~\cite{li2020mucan} and IconVSR~\cite{chan2021basicvsr}. We introduce finer textures for recovering the target frame from the boxed areas (indicated in yellow) tracked by the trajectory (indicated in green).} \label{fig:teaser} \end{figure} Video super-resolution (VSR) aims to recover a high-resolution (HR) video from a low-resolution (LR) counterpart~\cite{wang2019edvr}. As a fundamental task in computer vision, VSR is usually adopted to enhance visual quality, which has great value in many practical applications, such as video surveillance~\cite{zhang2010super}, high-definition television~\cite{goto2014super}, and satellite imagery~\cite{luo2017video,deudon2020highres}. From a methodology perspective, unlike image super-resolution, which usually learns on spatial dimensions, VSR tasks pay more attention to exploiting temporal information. As shown in Fig.~\ref{fig:teaser}, if detailed textures for recovering the target frame can be discovered and leveraged from relatively distant frames, video quality can be greatly enhanced. To address this challenge, recent years have witnessed an increasing number of VSR approaches, which can be categorized into two paradigms. The former attempts to utilize adjacent frames as inputs (e.g., 5 or 7 frames), and align temporal features in an implicit~\cite{kim20183dsrnet,li2019fast} or explicit~\cite{wang2019edvr,tian2020tdan} manner. One of the classic works is EDVR, which adopts deformable convolutions to capture features within a sliding window~\cite{wang2019edvr}. However, larger window sizes dramatically increase computational costs, which makes this paradigm infeasible for capturing distant frames.
The latter investigates temporal utilization by recurrent mechanisms~\cite{sajjadi2018frame,yi2021omniscient,chan2021basicvsr}. One of the representative works is IconVSR, which uses a hidden state to convey relevant features from all video frames~\cite{chan2021basicvsr}. Nonetheless, recurrent networks usually lack long-term modeling capability due to the vanishing gradient~\cite{hochreiter1998vanishing}, which inevitably leads to unsatisfactory results, as shown in Fig.~\ref{fig:teaser}. Inspired by the recent progress of the Transformer in natural language processing~\cite{vaswani2017attention}, significant progress has been made in both visual recognition~\cite{carion2020end, dosovitskiy2020image} and generation tasks~\cite{yang2020learning, zeng2020learning}. For example, MuCAN proposes to use attention mechanisms to aggregate inter-frame features for VSR tasks~\cite{li2020mucan}. However, due to the high computational complexity in a video, it only learns from a narrow temporal window, which results in sub-optimal performance, as shown in Fig.~\ref{fig:teaser}. Therefore, exploring proper ways of utilizing Transformers in videos remains a big challenge. In this paper, we propose a novel Trajectory-aware Transformer to enable effective video representation learning for Video Super-Resolution (TTVSR). The key insight of TTVSR is to formulate video frames into pre-aligned trajectories of visual tokens, and to calculate $\mathcal{Q}$, $\mathcal{K}$, and $\mathcal{V}$ within the same trajectory. In particular, we learn to link relevant visual tokens together along the temporal dimension, which forms multiple trajectories to depict object motions in a video (e.g., the green trajectory in Fig.~\ref{fig:teaser}). We update token trajectories by a proposed location map that aggregates, in an online fashion, the pixel motions around a token by average pooling.
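The location-map update can be pictured as follows: per-pixel motion vectors are average-pooled down to token resolution and accumulated onto the stored token coordinates. A minimal NumPy sketch; the function name, tensor layout, and pooling granularity are our assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def update_location_map(loc_map, flow, token_size=4):
    """loc_map: (Ht, Wt, 2) token coordinates at frame t.
    flow: (H, W, 2) per-pixel motion from frame t to t+1, with H = Ht*token_size.
    Average-pool the pixel motions inside each token, then shift the coordinates."""
    H, W, _ = flow.shape
    Ht, Wt = H // token_size, W // token_size
    pooled = flow.reshape(Ht, token_size, Wt, token_size, 2).mean(axis=(1, 3))
    return loc_map + pooled   # new end points of the trajectories

# a 2x2 grid of tokens over an 8x8 frame, all pixels moving one unit right/down
loc0 = np.stack(np.meshgrid(np.arange(2), np.arange(2), indexing="ij"),
                axis=-1).astype(float)
flow = np.ones((8, 8, 2))
loc1 = update_location_map(loc0, flow)
```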
Once video trajectories have been learned, TTVSR calculates self-attention only on the most relevant visual tokens that are located in the same trajectory. Compared with MuCAN, which calculates attention across visual tokens in space and time~\cite{li2020mucan}, the proposed TTVSR significantly reduces the computational cost and thus makes long-range video modeling practicable. To further deal with the scale-changing problem that often occurs in long-range videos (e.g., the yellow boxes in Fig.~\ref{fig:teaser}), we devise a cross-scale feature tokenization module and enhance feature representations from multiple scales. Our contributions are summarized as follows: \begin{itemize}[nosep] \item We propose a novel trajectory-aware Transformer, which is one of the first works to introduce the Transformer into video super-resolution tasks. Our method significantly reduces computational costs and enables long-range modeling in videos. \item Extensive experiments demonstrate that the proposed TTVSR can significantly outperform existing SOTA methods on four widely-used VSR benchmarks. On the most challenging REDS4 dataset, TTVSR gains 0.70dB and 0.45dB PSNR improvements over BasicVSR and IconVSR, respectively. \end{itemize} \section{Related Work} \label{sec:related} \subsection{Video Super-Resolution} \label{sec:approach:video} In VSR tasks, it is crucial to assist frame recovery with other frames in the sequence. Therefore, according to the number of input frames, VSR tasks can be mainly divided into two paradigms: those based on a sliding-window structure~\cite{caballero2017real,kim2018spatio,kim2019video,wang2019edvr,yi2019progressive,isobe2020video2,tian2020tdan,li2020mucan,xu2021temporal,cao2021video} and those based on a recurrent structure~\cite{huang2017video,sajjadi2018frame,fuoli2019efficient,haris2019recurrent,isobe2020revisiting,isobe2020video,yi2021omniscient,chan2021basicvsr}.
\noindent\textbf{Sliding-window structure.} The methods based on a sliding-window structure use adjacent frames within a sliding window as inputs to recover the HR frame (e.g., 5 or 7 frames). They mainly focus on using 2D or 3D CNNs~\cite{jo2018deep,isobe2020video2,li2019fast,kim20183dsrnet}, optical flow estimation~\cite{caballero2017real,tao2017detail,kim2018spatio}, or deformable convolutions~\cite{wang2019edvr,tian2020tdan,dai2017deformable} to design advanced alignment modules and fuse detailed textures from adjacent frames. Typically, to fully utilize the complementary information across frames, FSTRN~\cite{li2019fast} presented a fast spatio-temporal residual network for VSR by adopting 3D convolutions~\cite{tran2015learning}. To better align adjacent frames, VESPCN~\cite{caballero2017real} introduced a spatio-temporal sub-pixel convolution network and was the first to combine motion compensation and VSR. EDVR~\cite{wang2019edvr} and TDAN~\cite{tian2020tdan} used deformable convolutions~\cite{dai2017deformable} to align adjacent frames. However, they cannot utilize textures from other moments, especially from relatively distant frames. \noindent\textbf{Recurrent structure.} Rather than aggregating information from adjacent frames, methods based on a recurrent structure use a hidden state to convey relevant information from previous frames. FRVSR~\cite{sajjadi2018frame} used the previous SR frame to recover the subsequent frame. Inspired by back-projection, RBPN~\cite{haris2019recurrent} treated each frame as a separate source, which is combined in an iterative refinement framework. RSDN~\cite{isobe2020video} divided the input into structure and detail components and proposed a two-stream structure-detail block to learn textures. Representatively, OVSR~\cite{yi2021omniscient}, BasicVSR~\cite{chan2021basicvsr}, and IconVSR~\cite{chan2021basicvsr} fused the bidirectional hidden states from the past and future for reconstruction and obtained significant improvements.
They try to fully utilize the information of the whole sequence and synchronously update the hidden state by the weights of the reconstruction network. However, due to the vanishing gradient~\cite{hochreiter1998vanishing}, this mechanism makes the updated hidden state lose its long-term modeling capability to some extent. \begin{figure*}[t] \setlength{\belowcaptionskip}{-0.1cm} \centering \includegraphics[width=1.0\linewidth]{framework.pdf} \vspace{-0.55cm} \caption{The overview of TTVSR based on location maps. $\mathcal{Q}$, $\mathcal{K}$ and $\mathcal{V}$ are tokens from video frames extracted by the embedding networks $\phi(\cdot)$ and $\varphi(\cdot)$, respectively. $\tau_{i}$ indicates a trajectory in $\mathcal{T}$. $\mathcal{L}$ is the set of location maps generated by the motion estimation network $\text{H}$. The dotted lines indicate the indexing operation from $\mathcal{K}$ and $\mathcal{V}$ by the location maps $\mathcal{L}$ and the hard index $h$. $\text{R}(\cdot)$ represents the reconstruction network followed by a pixel-shuffle layer to resize feature maps to the desired size. $\text{U}(\cdot)$ represents the bicubic upsampling operation. $\odot$ and $\oplus$ indicate multiplication and element-wise addition, respectively.} \label{fig:overview} \vspace{-0.2cm} \end{figure*} \subsection{Vision Transformer} \label{sec:approach:ViT} Recently, the Transformer~\cite{vaswani2017attention} has been proposed to improve the long-term modeling capabilities of sequences in various fields~\cite{devlin2018bert,dosovitskiy2020image}. In the field of computer vision~\cite{dosovitskiy2020image}, the Transformer is used as a new attention-based module to model relationships between tokens in many image-based tasks, such as classification~\cite{dosovitskiy2020image}, inpainting~\cite{zeng2020learning}, super-resolution~\cite{yang2020learning}, and generation~\cite{zeng2021improving}.
Typically, ViT~\cite{dosovitskiy2020image} unfolded an image into patches as tokens for attention to capture long-range relationships in high-level vision. TTSR~\cite{yang2020learning} proposed a texture Transformer in low-level vision to search for relevant texture patches from the Ref image for the LR image. In VSR tasks, VSR-Transformer~\cite{cao2021video} and MuCAN~\cite{li2020mucan} tried to use attention mechanisms for aligning different frames with great success. However, due to the heavy computational cost of attention calculation on videos, these methods only aggregate information within a narrow temporal window. Therefore, in this paper, we introduce a trajectory-aware Transformer to improve the long-term modeling capabilities for VSR tasks while keeping the computational cost of attention within an acceptable range. \section{Our Approach} \label{sec:approach} In this section, we first introduce the proposed \textbf{T}rajectory-aware \textbf{T}ransformer for \textbf{V}ideo \textbf{S}uper-\textbf{R}esolution (TTVSR) in Sec.~\ref{sec:approach:TT}, and then discuss the proposed location map for trajectory generation in Sec.~\ref{sec:approach:TG}. Finally, we return to our Transformer design based on the location maps and discuss its advantages in Sec.~\ref{sec:approach:LT}. \subsection{Trajectory-Aware Transformer} \label{sec:approach:TT} We first introduce the formulation of TTVSR, followed by trajectory-aware attention and cross-scale feature tokenization. More illustrations can be found in Fig.~\ref{fig:overview}. \noindent\textbf{Formulation.} Given an LR sequence, the goal of VSR tasks is to recover an HR version. Specifically, for our task, when restoring the $T^{\text{th}}$ frame $I_{SR}^{T}$, we denote the current LR frame as $I_{LR}^{T}$ and other LR frames as $\mathbf{I}_{LR}=\{I_{LR}^{t}, t \in [1,T-1]\}$. We use two embedding networks $\phi(\cdot)$ and $\varphi(\cdot)$ to extract features from video frames and obtain tokens by sliding windows.
The queries $\mathcal{Q}$ and keys $\mathcal{K}$ are extracted by $\phi(\cdot)$ and denoted as $\mathcal{Q}=\phi(I_{LR})=\{q_i^T,i\in [1,N]\}$ and $\mathcal{K}=\phi(\mathbf{I}_{LR})=\{k_i^t,i\in [1,N],t\in [1, T-1]\}$, respectively. The values are extracted by $\varphi(\cdot)$ and denoted as $\mathcal{V}=\varphi(\mathbf{I}_{LR})=\{v_i^t,i\in [1,N],t\in [1, T-1]\}$. The trajectories $\mathcal{T}$ in our approach are formulated as a set of trajectories, in which each trajectory $\tau_i$ is a sequence of coordinates over time and the end point of trajectory $\tau_i$ is associated with the coordinate of token $q_i$: \begin{equation} \begin{aligned} \mathcal{T} &=\{\tau_i, \:i\in [1,N]\},\\ \tau_i &=\langle \tau_i^t=(x_i^t, y_i^t),\: t\in [1,T] \rangle,\\ \end{aligned} \label{equ:deftrj} \end{equation} where $x_i^t\in [1,H]$, $y_i^t\in [1,W]$, and $(x_i^t, y_i^t)$ represents the coordinate of trajectory $\tau_i$ at time $t$. $H$ and $W$ represent the height and width of the feature maps, respectively. From the perspective of trajectories, the inputs of the proposed trajectory-aware Transformer can be further represented as visual tokens aligned by the trajectories $\mathcal{T}$: \begin{align} \begin{aligned} &\mathcal{T}=\{\tau_i, \:i\in [1,N]\},\\ &\mathcal{Q}=\{q_{\tau_i^T},\:i\in [1,N]\},\\ &\mathcal{K}=\{k_{\tau_i^t},\:i\in [1,N],\:t\in [1, T-1]\},\\ &\mathcal{V}=\{v_{\tau_i^t},\:i\in [1,N],\:t\in [1, T-1]\}. \end{aligned} \end{align} The process of recovering the $T^{\text{th}}$ HR frame $I_{SR}^T$ can be further expressed as: \begin{equation} \begin{aligned} I_{SR}^T&=\text{T}_{traj}(\mathcal{Q},\mathcal{K},\mathcal{V},\mathcal{T})\\ &=\text{R}(\mathop{\text{A}_{traj}}_{\tau_i\in \mathcal{T}}(q_{\tau_i^T},k_{\tau_i^t},v_{\tau_i^t}))+\text{U}(I_{LR}^T), \end{aligned} \label{equ:ttvsr} \end{equation} where $\text{T}_{traj}(\cdot)$ denotes the trajectory-aware Transformer. $\text{A}_{traj}(\cdot)$ denotes the trajectory-aware attention.
$\text{R}(\cdot)$ represents the reconstruction network followed by a pixel-shuffle layer to resize feature maps to the desired size. $\text{U}(\cdot)$ represents the bicubic upsampling operation. By introducing trajectories into the Transformer, the attention calculation on $\mathcal{K}$ and $\mathcal{V}$ can be significantly reduced, because it avoids computation over the spatial dimension compared with vanilla vision Transformers. \noindent\textbf{Trajectory-aware attention.} Thanks to its powerful long-range modeling ability, the attention mechanism in vanilla vision Transformers is used to model dependencies of tokens within an image~\cite{dosovitskiy2020image,carion2020end}. However, extending attention mechanisms to videos remains a challenge. Thus, we propose a trajectory-aware attention module, which integrates relevant visual tokens located on the same spatio-temporal trajectories with less computational cost. Different from traditional attention mechanisms that take a weighted sum of keys over time, we use hard attention to select the most relevant token along each trajectory, which reduces the blur introduced by a weighted sum. We use soft attention to generate the confidence of relevant patches, which reduces the impact of irrelevant tokens when hard attention yields inaccurate results. We use $h_{\tau_i}$ and $s_{\tau_i}$ to represent the results of hard and soft attention, respectively. The calculation process can be formulated as: \begin{equation} \begin{split} h_{\tau_i} & = \mathop{\arg\max}\limits_{t}{\langle \frac{q_{\tau_i^T}}{{\parallel q_{\tau_i^T} \parallel}_{2}^{2}}, \frac{k_{\tau_i^t}}{{\parallel k_{\tau_i^t} \parallel}_{2}^{2}} \rangle}, \\ s_{\tau_i} & = \mathop{\max}\limits_{t}{\langle \frac{q_{\tau_i^T}}{{\parallel q_{\tau_i^T} \parallel}_{2}^{2}}, \frac{k_{\tau_i^t}}{{\parallel k_{\tau_i^t} \parallel}_{2}^{2}} \rangle}.
\end{split} \end{equation} Based on this formulation, the attention calculation in Equ.~\ref{equ:ttvsr} can be written as: \begin{equation} \text{A}_{traj}(q_{\tau_i^T},k_{\tau_i},v_{\tau_i}) = \text{C}(q_{\tau_i^T}\:,\:{s_{\tau_i} \odot v_{\tau_i^{h_{\tau_i}}}}), \end{equation} where the operator $\odot$ denotes multiplication and $\text{C}(\cdot)$ denotes the concatenation operation. We then fold all the tokens and output a feature map. In general, the proposed trajectory-aware attention integrates features from the whole sequence, while restricting the attention calculation to each spatio-temporal trajectory, which mitigates the computational cost. \noindent\textbf{Cross-scale feature tokenization.} \label{sec:approach:CFT} The premise of utilizing multi-scale textures from sequences is that the model can adapt to the multi-scale variations in content that often occur. Therefore, we propose a cross-scale feature tokenization module before the trajectory-aware attention to extract tokens from multiple scales. It unifies multi-scale features into tokens of uniform length and allows rich textures from larger scales to be utilized for the recovery of smaller ones in the attention mechanism. Specifically, we follow three steps to extract tokens. First, successive unfold and fold operations are used to expand the receptive field of the features. Second, features from different scales are reduced to the same scale by a pooling operation. Third, the features are split by an unfolding operation to obtain the output tokens. It is noteworthy that this process can extract features from a larger scale while keeping the output tokens the same size, which is convenient for attention calculation and token integration. More analyses can be found in the supplementary.
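The hard and soft attention above can be sketched for a single trajectory as follows (a minimal NumPy sketch; the function name and toy shapes are our own, and the squared-norm normalization follows the equation above):

```python
import numpy as np

def trajectory_attention(q, keys, values):
    """Hard/soft attention along one trajectory (sketch).

    q:      query token at time T, shape (d,)
    keys:   key tokens on the trajectory, shape (T-1, d)
    values: value tokens on the trajectory, shape (T-1, d)
    """
    # Normalize query and keys by their squared L2 norms, as in the equation.
    qn = q / np.sum(q * q)
    kn = keys / np.sum(keys * keys, axis=1, keepdims=True)
    sims = kn @ qn                 # relevance of each time step to the query
    h = int(np.argmax(sims))       # hard attention: most relevant time step
    s = sims[h]                    # soft attention: its confidence
    # C(q, s * v_h): concatenate the query with the weighted best value.
    return np.concatenate([q, s * values[h]])
```

Folding the outputs of all trajectories back into a feature map then yields the input of the reconstruction network $\text{R}(\cdot)$.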
\begin{figure}[t] \setlength{\belowcaptionskip}{-0.1cm} \centering \includegraphics[width=1.0\linewidth]{locmap.pdf} \vspace{-0.6cm} \caption{An illustration of the relationship between trajectory $\tau$ and the location maps $\mathcal{L}$ at time $t$.} \label{fig:locmap} \vspace{-0.2cm} \end{figure} \subsection{Location Maps for Trajectory Generation} \label{sec:approach:TG} Existing approaches use feature alignment and global optimization to calculate trajectories in videos, which is time-consuming and inefficient~\cite{wang2013dense,wang2013action,patrick2021keeping}. Especially in our task, where trajectories are updated over time, the computational cost would grow even further. To solve this problem, we propose a location map for trajectory generation, in which the location maps are represented as a group of matrices over time. With such a design, trajectory generation can be expressed as matrix operations, which are both efficient to compute and friendly to model implementation. Since the trajectories are updated over time, our location maps also need to be updated accordingly. In the following formulation, we fix the time to $T$ for better illustration. The proposed location maps can be formulated as: \begin{equation} \mathcal{L}^{t} = \begin{bmatrix} (x_1,y_1) & \dots & (x_1,y_W) \\ \dots & \dots & \dots \\ (x_H,y_1) & \dots & (x_H,y_W)\\ \end{bmatrix},\:t\in [1,T], \end{equation} where $\mathcal{L}_{m,n}^{t}$ represents the coordinate at time $t$ of the trajectory that ends at $(m, n)$ at time $T$. The relationship between the location map $\mathcal{L}^{t}_{m,n}$ and the trajectory $\tau_i^t$ defined in Equ.~\ref{equ:deftrj} can be further expressed as: \begin{equation} \mathcal{L}_{m,n}^{t}=\tau_i^t,\: \text{where}\ \tau_i^T=(m,n),\: i\in [1,N], \label{equ:lmdef} \end{equation} where $m\in [1,H]$ and $n\in [1,W]$. In Fig.~\ref{fig:locmap}, we use a simple case to further illustrate the relationship between location maps and trajectories.
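Concretely, a location map can be held as an $H \times W \times 2$ coordinate tensor. The sketch below (NumPy; the function names and the nearest-neighbor sampling are our simplifications, whereas the updating step described next interpolates with PyTorch's \texttt{grid\_sample}) initializes an identity map and tracks it with a backward flow:

```python
import numpy as np

def init_location_map(H, W):
    """Identity location map for the newest frame: L[m, n] = (m, n)."""
    xs, ys = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    return np.stack([xs, ys], axis=-1).astype(float)  # shape (H, W, 2)

def track_location_map(loc, flow):
    """Track a location map one frame with the backward flow.

    loc:  (H, W, 2) location map at some earlier time t
    flow: (H, W, 2) backward flow from frame T+1 to frame T
    Nearest-neighbor sampling is used here for clarity; the actual
    update interpolates between adjacent coordinates.
    """
    H, W, _ = loc.shape
    out = np.empty_like(loc)
    for m in range(H):
        for n in range(W):
            # Position in frame T that pixel (m, n) of frame T+1 maps to.
            sm = min(max(int(np.rint(m + flow[m, n, 0])), 0), H - 1)
            sn = min(max(int(np.rint(n + flow[m, n, 1])), 0), W - 1)
            out[m, n] = loc[sm, sn]
    return out
```

With a zero flow, tracking leaves the map unchanged; a nonzero flow re-indexes every trajectory end point in one pass over the matrix.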
\noindent\textbf{Location map updating.} As discussed in the formulation part, the location maps change over time. We denote the updated location maps as ${}^*\!\mathcal{L}^{t}$. When moving from time $T$ to time $T+1$, a new location map ${}^*\!\mathcal{L}^{T+1}$ at time $T+1$ is initialized first. Based on Equ.~\ref{equ:lmdef}, the element values of ${}^*\!\mathcal{L}^{T+1}$ are exactly the coordinates of frame $T+1$~\footnote{That is, the element values of the matrix are equal to its own indices.}. Then the remaining updated location maps $\{{}^*\!\mathcal{L}^{1},\cdots,{}^*\!\mathcal{L}^{T}\}$ can be obtained by tracking the location maps $\{\mathcal{L}^{1},\cdots,\mathcal{L}^{T}\}$ from time $T+1$ to time $T$ using the backward flow $O^{T+1}$. Specifically, $O^{T+1}$ builds the connection between trajectories at time $T$ and time $T+1$ and is obtained from a lightweight motion estimation network. Since the flow values are usually floating-point numbers, we obtain the updated coordinates in the location map $\mathcal{L}^{t}$ by interpolating between adjacent coordinates: \begin{equation} {}^*\!\mathcal{L}^{t} = \text{S}(\mathcal{L}^{t}, O^{T+1}), \label{motion} \end{equation} where $\text{S}(\cdot)$ represents the spatial sampling operation guided by $O^{T+1}$ (i.e., $grid\_sample$ in PyTorch). Thus far, we have all the updated location maps for time $T+1$. With the careful design of the location maps, the trajectories in our proposed trajectory-aware Transformer can be efficiently calculated and maintained through one parallel matrix operation (i.e., the operation $\text{S}(\cdot)$). More analyses can be found in the supplementary. \subsection{TTVSR based on Location Maps} \label{sec:approach:LT} In this section, we recap the formulation of our proposed TTVSR in Sec.~\ref{sec:approach:TT} and show the relation between TTVSR and location maps in a more intrinsic way. More details can be found in Fig.~\ref{fig:overview}.
Since the location map $\mathcal{L}^{t}$ in Equ.~\ref{equ:lmdef} is an interchangeable formulation of the trajectory $\tau_i$ in Equ.~\ref{equ:ttvsr}, the proposed TTVSR can be further expressed as: \begin{align} \begin{aligned} I_{SR}^T&=\text{T}_{traj}(\mathcal{Q},\mathcal{K},\mathcal{V},\mathcal{L})\\ &=\text{R}(\mathop{\text{A}_{traj}}_{t,m,n}(q_{\mathcal{L}_{m,n}^{T}},k_{\mathcal{L}_{m,n}^{t}},v_{\mathcal{L}_{m,n}^{t}}))+\text{U}(I_{LR}^T), \end{aligned} \end{align} where $m\in [1, H]$, $n\in [1, W]$, and $t\in [1,T-1]$. In this formulation, we transform the coordinate system of our Transformer from the one defined by trajectories to a group of aligned matrices (i.e., the location maps). Such a design has two advantages. First, the location maps provide a more efficient way for our TTVSR to directly leverage information from distant video frames. Second, as the trajectory is a widely used concept in videos, our design may motivate more efficient and powerful implementations in other video tasks. \subsection{Training Details} \label{sec:approach:patch} \par For fair comparisons, we follow IconVSR~\cite{chan2021basicvsr} and VSR-Transformer~\cite{cao2021video} to use the same feature extraction network, reconstruction network, and pre-trained SPyNet~\cite{ranjan2017optical} for motion estimation. To leverage the information of the whole sequence, we follow previous works~\cite{huang2017video,chan2021basicvsr} to adopt a bidirectional propagation scheme, in which features are propagated backward and forward across frames. To reduce time and memory consumption, we generate visual tokens at different scales from different frames. Features from adjacent frames are finer, so we generate tokens of size $1 \times1$. Features from distant frames are coarser, so we select these frames at a certain temporal interval and generate tokens of size $4 \times4$.
Besides, in Sec.~\ref{sec:approach:CFT}, we use kernels of size $4 \times4$, $6 \times6$, and $8 \times8$ for cross-scale feature tokenization. During training, we use the Cosine Annealing scheme~\cite{loshchilov2016sgdr} and the Adam~\cite{kingma2014adam} optimizer with $\beta_{1}=0.9$ and $\beta_{2}=0.99$. The learning rates of the motion estimation module and the other parts are set to $1.25\times 10^{-5}$ and $2\times 10^{-4}$, respectively. We set the batch size to $8$ and the input patch size to $64\times 64$. For a fair comparison, we augment the training data with random horizontal flips, vertical flips, and $90^{\circ}$ rotations. Besides, to enable long-range sequence modeling, we use sequences with a length of 50 as inputs. The Charbonnier penalty loss~\cite{lai2017deep} is applied on whole frames between the ground truth $I_{HR}$ and the restored SR frame $I_{SR}$, defined as $\ell=\sqrt{\|I_{HR}-I_{SR}\|^{2}+\varepsilon^2}$. To stabilize the training of TTVSR, we fix the weights of the motion estimation module for the first 5K iterations and make them trainable afterwards. The total number of iterations is 400K. \section{Experiments} \label{sec:experiments} \setlength{\tabcolsep}{1.0mm}{ \begin{table*}\small \caption{Quantitative comparison (PSNR$\uparrow$ and SSIM$\uparrow$) on the REDS4~\cite{nah2019ntire} dataset for $4\times$ video super-resolution. The results are tested on RGB channels. \textcolor{red}{Red} indicates the best and \textcolor{blue}{blue} indicates the second best performance (best viewed in color). \#Frame indicates the number of input frames required to perform one inference, and ``r'' indicates a recurrent structure.
} \vspace{-0.2cm} \centering \begin{tabular}{ l || c || c | c | c | c || c } \hline Method & \#Frame & Clip\_000 & Clip\_011 & Clip\_015 & Clip\_020 & Average \\ \hline Bicubic & 1 & 24.55/0.6489 & 26.06/0.7261 & 28.52/0.8034 & 25.41/0.7386 & 26.14/0.7292 \\ RCAN~\cite{zhang2018image} & 1 & 26.17/0.7371 & 29.34/0.8255 & 31.85/0.8881 & 27.74/0.8293 & 28.78/0.8200 \\ CSNLN~\cite{mei2020image} & 1 & 26.17/0.7379 & 29.46/0.8260 & 32.00/0.8890 & 27.69/0.8253 & 28.83/0.8196 \\ \hline TOFlow~\cite{xue2019video} & 7 & 26.52/0.7540 & 27.80/0.7858 & 30.67/0.8609 & 26.92/0.7953 & 27.98/0.7990 \\ DUF~\cite{jo2018deep} & 7 & 27.30/0.7937 & 28.38/0.8056 & 31.55/0.8846 & 27.30/0.8164 & 28.63/0.8251 \\ EDVR~\cite{wang2019edvr} & 7 & 28.01/0.8250 & 32.17/0.8864 & 34.06/0.9206 & 30.09/0.8881 & 31.09/0.8800 \\ MuCAN~\cite{li2020mucan} & 5 & 27.99/0.8219 & 31.84/0.8801 & 33.90/0.9170 & 29.78/0.8811 & 30.88/0.8750 \\ VSR-T~\cite{cao2021video} & 5 & 28.06/0.8267 & 32.28/0.8883 & 34.15/0.9199 & 30.26/0.8912 & 31.19/0.8815 \\ \hline BasicVSR~\cite{chan2021basicvsr}& r & 28.39/0.8429 & 32.46/0.8975 & 34.22/0.9237 & 30.60/0.8996 & 31.42/0.8909 \\ IconVSR~\cite{chan2021basicvsr}& r &\textcolor{blue}{28.55}/\textcolor{blue}{0.8478}& \textcolor{blue}{32.89}/\textcolor{blue}{0.9024} & \textcolor{blue}{34.54}/\textcolor{blue}{0.9270} & \textcolor{blue}{30.80}/\textcolor{blue}{0.9033} & \textcolor{blue}{31.67}/\textcolor{blue}{0.8948} \\ \hline \textbf{TTVSR} & r & \textcolor{red}{28.82}/\textcolor{red}{0.8566}& \textcolor{red}{33.47}/\textcolor{red}{0.9100} & \textcolor{red}{35.01}/\textcolor{red}{0.9325} & \textcolor{red}{31.17}/\textcolor{red}{0.9094} & \textcolor{red}{32.12}/\textcolor{red}{0.9021} \\ \hline \end{tabular} \label{tab:BI} \vspace{-0.25cm} \end{table*} } \setlength{\tabcolsep}{1.0mm}{ \begin{table}\small \caption{Quantitative comparison (PSNR$\uparrow$ and SSIM$\uparrow$) on the Vid4~\cite{liu2013bayesian}, UDM10~\cite{yi2019progressive} and Vimeo-90K-T~\cite{xue2019video} datasets
for $4\times$ video super-resolution. All the results are calculated on the Y-channel. \textcolor{red}{Red} indicates the best and \textcolor{blue}{blue} indicates the second best performance (best viewed in color).} \vspace{-0.2cm} \centering \begin{tabular}{ l || c | c | c } \hline Method & Vid4~\cite{liu2013bayesian} & UDM10~\cite{yi2019progressive} & Vimeo-90K-T~\cite{xue2019video} \\ \hline Bicubic & 21.80/0.5246 & 28.47/0.8253 & 31.30/0.8687 \\ TOFlow~\cite{xue2019video} & 25.85/0.7659 & 36.26/0.9438 & 34.62/0.9212 \\ FRVSR~\cite{sajjadi2018frame} & 26.69/0.8103 & 37.09/0.9522 & 35.64/0.9319 \\ DUF~\cite{jo2018deep} & 27.38/0.8329 & 38.48/0.9605 & 36.87/0.9447 \\ RBPN~\cite{haris2019recurrent} & 27.17/0.8205 & 38.66/0.9596 & 37.20/0.9458 \\ RLSP~\cite{fuoli2019efficient} & 27.48/0.8388 & 38.48/0.9606 & 36.49/0.9403 \\ EDVR~\cite{wang2019edvr} & 27.85/0.8503 & 39.89/0.9686 & 37.81/0.9523 \\ TDAN~\cite{tian2020tdan} & 26.86/0.8140 & 38.19/0.9586 & 36.31/0.9376 \\ TGA~\cite{isobe2020video2} & 27.59/0.8419 & 39.05/0.9634 & 37.59/0.9516 \\ RSDN~\cite{isobe2020video} & 27.92/0.8505 & 39.35/0.9653 & 37.23/0.9471 \\ BasicVSR~\cite{chan2021basicvsr} & 27.96/0.8553 & 39.96/0.9694 & 37.53/0.9498 \\ IconVSR~\cite{chan2021basicvsr} & \textcolor{blue}{28.04}/\textcolor{blue}{0.8570} & \textcolor{blue}{40.03}/\textcolor{blue}{0.9694} & \textcolor{blue}{37.84}/\textcolor{blue}{0.9524} \\ \hline \textbf{TTVSR} & \textcolor{red}{28.40}/\textcolor{red}{0.8643} & \textcolor{red}{40.41}/\textcolor{red}{0.9712} & \textcolor{red}{37.92}/\textcolor{red}{0.9526} \\ \hline \end{tabular} \label{tab:BD} \vspace{-0.3cm} \end{table} } \subsection{Datasets and Metrics} We evaluate the proposed TTVSR and compare its performance with other SOTA approaches on two widely-used datasets: \textbf{REDS}~\cite{nah2019ntire} and \textbf{Vimeo-90K}~\cite{xue2019video}. \textbf{REDS}~\cite{nah2019ntire} was published in the NTIRE19 challenge.
It contains 300 video sequences in total: 240 for training, 30 for validation, and 30 for testing. Each sequence contains 100 frames with a resolution of $720 \times 1280$. To create training and testing sets, we follow previous works~\cite{wang2019edvr,li2020mucan,chan2021basicvsr} to select four sequences\footnote{Clips 000,011,015,020 of the REDS training set.} as the testing set, which is called \textbf{REDS4}~\cite{nah2019ntire}. We use the remaining 266 sequences from the training and validation sets as the training set. \textbf{Vimeo-90K}~\cite{xue2019video} contains 64,612 sequences for training and 7,824 for testing. Each sequence contains seven frames with a resolution of $448 \times 256$. For a fair comparison, we follow previous works~\cite{chan2021basicvsr} to evaluate TTVSR with $4 \times$ downsampling using two degradations: 1) MATLAB bicubic downsampling (BI), and 2) Gaussian filtering with a standard deviation of $\sigma=1.6$ followed by downsampling (BD). Following previous works~\cite{isobe2020video,isobe2020video2,tian2020tdan,li2020mucan}, we apply the BI degradation on \textbf{REDS4}~\cite{nah2019ntire} and the BD degradation on \textbf{Vimeo-90K-T}~\cite{xue2019video}, \textbf{Vid4}~\cite{liu2013bayesian} and \textbf{UDM10}~\cite{yi2019progressive}. We adopt the same evaluation metrics as previous works~\cite{li2020mucan,chan2021basicvsr}: 1) peak signal-to-noise ratio (PSNR) and 2) structural similarity index (SSIM)~\cite{wang2004image}. \subsection{Comparisons with State-of-the-art Methods} \par We compare TTVSR with 15 state-of-the-art methods. These methods can be divided into three categories: single image super-resolution (SISR)~\cite{zhang2018image,mei2020image}, sliding window-based~\cite{xue2019video,jo2018deep,wang2019edvr,tian2020tdan,isobe2020video2,li2020mucan,cao2021video}, and recurrent structure-based~\cite{sajjadi2018frame,fuoli2019efficient,haris2019recurrent,isobe2020video,chan2021basicvsr} methods.
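For reference, the PSNR metric reported throughout can be computed as follows (a minimal NumPy sketch; RGB-to-Y conversion and any border cropping, where applicable, are omitted):

```python
import numpy as np

def psnr(hr, sr, peak=255.0):
    """Peak signal-to-noise ratio between a ground-truth and a restored frame."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```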
For fair comparisons, we take the performance numbers from the original papers or reproduce the results using the authors' officially released models. \par \noindent\textbf{Quantitative comparison.} We compare TTVSR with other SOTA methods on the most widely-used REDS dataset~\cite{nah2019ntire}. As shown in Tab.~\ref{tab:BI}, we categorize these approaches according to the number of frames used in each inference. Among them, since only one LR frame is used, the performance of SISR methods~\cite{zhang2018image,mei2020image} is very limited. MuCAN~\cite{li2020mucan} and VSR-T~\cite{cao2021video} use attention mechanisms within a sliding window, which yields a significant improvement over the SISR methods. However, they do not fully utilize the information of the sequence. BasicVSR~\cite{chan2021basicvsr} and IconVSR~\cite{chan2021basicvsr} try to model the whole sequence through hidden states. Nonetheless, the well-known vanishing gradient issue limits their long-term modeling capability, so information from distant frames is lost. Different from them, our TTVSR links relevant visual tokens together along the same trajectory in an efficient way and uses the information of the whole sequence to recover the lost textures. Owing to these merits, TTVSR achieves a result of 32.12dB PSNR and significantly outperforms IconVSR~\cite{chan2021basicvsr} by \textbf{0.45dB} on REDS4~\cite{nah2019ntire}. This large margin demonstrates the power of TTVSR in long-range modeling. \setlength{\tabcolsep}{1.0mm}{ \begin{table}\small \caption{Comparison of parameters, FLOPs and performance.
FLOPs are computed on one LR frame of size $180 \times320$ with $\times 4$ upsampling on the REDS4~\cite{nah2019ntire} dataset.} \vspace{-0.2cm} \centering \begin{tabular}{ l || c || c || c} \hline Method & \#Params(M) & FLOPs(T) & PSNR/SSIM \\ \hline DUF\cite{jo2018deep} & 5.8 & 2.34 & 28.63/0.8251 \\ RBPN\cite{haris2019recurrent} & 12.2 & 8.51 & 30.09/0.8590 \\ EDVR\cite{wang2019edvr} & 20.6 & 2.95 & 31.09/0.8800 \\ MuCAN\cite{li2020mucan} & 13.6 & $>$1.07 & 30.88/0.8750 \\ BasicVSR\cite{chan2021basicvsr} & 6.3 & 0.33 & 31.42/0.8909 \\ IconVSR\cite{chan2021basicvsr} & 8.7 & 0.51 & 31.67/0.8948 \\ \textbf{TTVSR} & 6.8 & 0.61 & 32.12/0.9021 \\ \hline \end{tabular} \label{tab:MS} \vspace{-0.3cm} \end{table} } \begin{figure*}[t!] \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.2cm} \centering \includegraphics[width=1.0\linewidth]{case_sum_TTVSR.pdf} \vspace{-0.55cm} \caption{Visual results on REDS4~\cite{nah2019ntire}, Vid4~\cite{liu2013bayesian}, UDM10~\cite{yi2019progressive} and Vimeo-90K-T~\cite{xue2019video} for $4 \times$ scaling factor. The frame number is shown at the bottom of each case. Zoom in for better visualization.} \label{fig:case} \vspace{-0.25cm} \end{figure*} To further verify the generalization capability of TTVSR, we train it on the Vimeo-90K dataset~\cite{xue2019video} and evaluate the results on the Vid4~\cite{liu2013bayesian}, UDM10~\cite{yi2019progressive}, and Vimeo-90K-T~\cite{xue2019video} datasets, respectively. As shown in Tab.~\ref{tab:BD}, on the Vid4~\cite{liu2013bayesian}, UDM10~\cite{yi2019progressive}, and Vimeo-90K-T~\cite{xue2019video} test sets, TTVSR achieves results of 28.40dB, 40.41dB, and 37.92dB in PSNR, respectively, which is superior to other SOTA methods. Specifically, on the Vid4~\cite{liu2013bayesian} and UDM10~\cite{yi2019progressive} datasets, TTVSR outperforms IconVSR~\cite{chan2021basicvsr} by \textbf{0.36dB} and \textbf{0.38dB}, respectively.
At the same time, we notice that, compared with the evaluation on the Vimeo-90K-T~\cite{xue2019video} dataset, which has only seven frames per testing sequence, TTVSR achieves a larger improvement on the other datasets, which have at least 30 frames per video. The results verify that TTVSR has strong generalization capability and is good at modeling the information in long-range sequences. \noindent\textbf{Qualitative comparison.} To further compare the visual quality of different approaches, we show visual results generated by TTVSR and other SOTA methods on four different test sets in Fig.~\ref{fig:case}. For fair comparisons, we either directly use the SR images released by the authors or generate results with their officially released models. It can be observed that TTVSR yields a clear improvement in visual quality, especially in areas with detailed textures. For example, in the fourth row of Fig.~\ref{fig:case}, TTVSR recovers more striped details from the stonework in the oil painting. The results verify that TTVSR can utilize textures from relevant tokens to produce finer results. More visual results can be found in the supplementary materials. \par \noindent\textbf{Model sizes and computational costs.} In real applications, model size and computational cost are usually important. To avoid the gap between different hardware devices, we use two hardware-independent metrics: the number of parameters (\#Params) and FLOPs. As shown in Tab.~\ref{tab:MS}, the FLOPs are computed with an LR input of size $180 \times320$ under $\times 4$ upsampling settings. Compared with IconVSR~\cite{chan2021basicvsr}, TTVSR achieves higher performance while keeping comparable \#Params and FLOPs. Besides, it is worth emphasizing that our method is much lighter than MuCAN~\cite{li2020mucan}, the SOTA attention-based method. This superior performance mainly benefits from the use of trajectories in the attention calculation, which significantly reduces computational costs.
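As an aside, the \#Params metric above is straightforward to reproduce; a toy sketch for a model stored as a dict of weight arrays is shown below (for a PyTorch model the equivalent would be summing \texttt{p.numel()} over \texttt{model.parameters()} and dividing by $10^6$):

```python
import numpy as np

def count_params_m(weights):
    """Number of parameters, in millions, of a model given as name -> array."""
    return sum(w.size for w in weights.values()) / 1e6
```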
\setlength{\tabcolsep}{1.0mm}{ \begin{table}\small \caption{Ablation study results of the trajectory-aware attention module on the REDS4~\cite{nah2019ntire} dataset. TG: trajectory generation. TA: trajectory-aware attention.} \centering \vspace{-0.2cm} \begin{tabular}{ l || c | c || c} \hline Method & TG & TA & PSNR/SSIM \\ \hline Base & ~ & ~ & 30.46/0.8661 \\ Base+TG & $\checkmark$ & ~ & 31.91/0.8985 \\ Base+TG+TA & $\checkmark$ & $\checkmark$ & \textbf{31.99}/\textbf{0.9007} \\ \hline \end{tabular} \label{tab:TA} \vspace{-0.3cm} \end{table}} \subsection{Ablation Study} In this section, we conduct ablation studies on the proposed trajectory-aware attention and examine the influence of the number of frames used in this module. In addition, we further analyze the effect of the cross-scale feature tokenization. \noindent\textbf{Trajectory-aware attention.} Trajectory generation (TG) is a prerequisite for trajectory-aware attention (TA), so we study them together in this part. We directly use convolution layers to integrate the unaligned previous tokens and the current token as our ``Base'' model. We denote the model that aggregates the most relevant tokens along the trajectories as our ``Base+TG'' model, and the model that further adds trajectory-aware attention as our ``Base+TG+TA'' model. The results are shown in Tab.~\ref{tab:TA}. With the addition of TG, PSNR improves from 30.46 to 31.91, which verifies that trajectories can link relevant visual tokens together precisely. When TA is added, we integrate tokens along the trajectories, and the performance improves to 31.99. This demonstrates the superiority of TA for modeling long-range information. We further explore the visual differences in Fig.~\ref{fig:ab_ta}: TG can capture the relevant tokens, while TA integrates tokens into the current frame to produce clearer textures.
\noindent\textbf{Influence of frame number during inference.} To explore the influence of the number of frames used during inference on the ability to model long-range sequences, we use different temporal intervals to sample frames from the entire sequence (100 frames), as shown in Tab.~\ref{tab:ab_lf}. The performance is positively correlated with the number of sampled frames, which demonstrates the effectiveness of the trajectory-aware attention module for long-range modeling. However, the performance gain gradually diminishes as the frame number grows toward 45. This indicates that choosing three as the temporal interval (i.e., 33 frames) is sufficient to model the entire sequence; smaller intervals may not provide more information since adjacent frames are too similar. \setlength{\tabcolsep}{1.0mm}{ \begin{table}\small \caption{Ablation study results of the frame number used on the REDS4~\cite{nah2019ntire} dataset.} \centering \vspace{-0.2cm} \begin{tabular}{ l || c | c | c | c | c} \hline \#Frame & 5 & 10 & 20 & 33 & 45\\ \hline PSNR &31.89 & 31.93 & 31.97 & 31.99 & 32.01 \\ \hline SSIM &0.8984 & 0.8994 & 0.9005 & 0.9007 & 0.9004 \\ \hline \end{tabular} \label{tab:ab_lf} \vspace{-0.1cm} \end{table}} \begin{figure} \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm} \centering \includegraphics[width=1.0\linewidth]{ablation_LTAM.pdf} \vspace{-0.5cm} \caption{Ablation study on the trajectory generation (TG) and trajectory-aware attention (TA) on the REDS4~\cite{nah2019ntire} dataset.} \label{fig:ab_ta} \vspace{-0.3cm} \end{figure} \noindent\textbf{Cross-scale feature tokenization.} To alleviate the scale-changing problem in sequences, we study the impact of token size in the cross-scale feature tokenization (CFT). As shown in Tab.~\ref{tab:CFT}, the first three rows of results show that CFT can extract richer textures as the token scale increases.
CFT improves PSNR from 31.99 to 32.12, indicating that it can adapt to scale changes in sequences. In addition, as shown in the visualizations in Fig.~\ref{fig:ab_CEF}, cross-scale feature tokenization can introduce finer textures from a larger scale, avoiding the loss of textures caused by scale changes in long-range sequences. It is also observed that using larger scales (e.g., $12$) leads to undesirable results, because oversized tokens are not conducive to texture learning. In our model, we choose $4$, $6$, and $8$ as the token sizes in CFT. \setlength{\tabcolsep}{1.0mm}{ \begin{table}\small \caption{Ablation study results of the cross-scale feature tokenization (CFT) module on the REDS4~\cite{nah2019ntire} dataset. ``S2'' and ``S3'' represent extracting features from two and three scales, respectively. TTVSR can be interpreted as ``Base+TG+TA+CFT(S3)''.} \centering \vspace{-0.2cm} \begin{tabular}{ l || c || c } \hline Method & Token sizes in CFT & PSNR/SSIM \\ \hline Base+TG+TA & 4 & 31.99/0.9007 \\ Base+TG+TA+CFT(S2) & 4, 6 & 32.08/0.9011 \\ Base+TG+TA+CFT(S3) & 4, 6, 8 & \textbf{32.12}/\textbf{0.9021} \\ Base+TG+TA+CFT(S3.1) & 6, 9, 12 & 31.95/0.9004 \\ Base+TG+TA+CFT(S3.2) & 8, 12, 16 & 31.91/0.8991 \\ \hline \end{tabular} \label{tab:CFT} \vspace{-0.1cm} \end{table} } \begin{figure} \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm} \centering \includegraphics[width=1.0\linewidth]{ablation_CEFM.pdf} \vspace{-0.5cm} \caption{Example without and with the cross-scale feature tokenization (CFT) on the REDS4~\cite{nah2019ntire} dataset.
CFT transfers clearer textures from larger scales to restore the detailed textures.} \label{fig:ab_CEF} \end{figure} \begin{figure} \setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm} \centering \includegraphics[width=1.0\linewidth]{R_failure_TTVSR.pdf} \caption{A failure case when rotation occurs.} \label{fig:FC} \vspace{-0.3cm} \end{figure} \section{Limitations} In this section, we visualize failure cases of TTVSR in Fig.~\ref{fig:FC}. The motion trajectories become inaccurate when rotation occurs, and useful information cannot be transferred along them, which limits the performance of our method. However, due to the high difficulty of modeling rotation, other SOTA methods also fail to obtain better performance. It is notable that TTVSR still achieves larger gains than other methods through its powerful long-range modeling ability. More analyses can be found in the supplementary. \section{Conclusion} \label{sec:conclusion} In this paper, we study video super-resolution by leveraging long-range frame dependencies. In particular, we propose a novel trajectory-aware Transformer (TTVSR), which is one of the first works to introduce Transformer architectures into video super-resolution tasks. Specifically, we formulate video frames into pre-aligned trajectories of visual tokens and calculate attention along these trajectories. To implement such a formulation, we propose a novel location map to record trajectories, which can be updated online efficiently by design. TTVSR significantly mitigates computational costs and enables Transformers to model long-range information in videos in an effective way. Experimental results show clear visual margins between the proposed TTVSR and existing SOTA models. In the future, we will focus on 1) evaluating our method on more low-level vision tasks, and 2) extending the trajectory-aware Transformer to high-level vision tasks through further exploration.
\textbf{Acknowledgement.} This work was supported by the NSFC under grant No.~61772407. We would also like to thank Tiankai Hang for helpful discussions. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} One of the most important goals of investigating strong interaction physics is to understand the properties of hadrons and hadronic interactions on the basis of quantum chromodynamics (QCD). Shifman, Vainshtein and Zakharov proposed the method of the QCD sum rule, which provides us with a framework to investigate the properties of hadrons in a model-independent way.~\cite{rf:SVZ} This method has been successfully applied to the study of the masses, decay constants, magnetic moments and other properties of various hadrons.~\cite{rf:RRY} Recently, the present authors extended the QCD sum rule to the investigation of hadronic interactions.~\cite{rf:KM} In Ref.~\cite{rf:KM} a nucleon-nucleon system was studied as a typical case. The correlation function of the nucleon interpolating field, whose matrix element is taken with respect to the one-nucleon state and averaged over the nucleon spin, was considered. It was noted that the correlation function has a second-order pole at the nucleon on-shell energy as a function of the energy associated with the interpolating field and that its coefficient is the T-matrix for nucleon-nucleon ($NN$) scattering. Assuming that the dispersion integral is dominated by the pole term, sum rules were derived which relate the spin-averaged $NN$ scattering lengths with the spin-averaged matrix elements of the quark-gluon operators with respect to the one-nucleon state. The obtained $NN$ scattering lengths are of the order of several fm, which is rather large on the strong-interaction scale, but still smaller than the experimental values. The formalism was further applied to other hadron-nucleon systems.~\cite{rf:KMN,rf:Koike} The following point, however, remained unclarified in those works. In the analysis of the QCD sum rule for the hadron in the vacuum, the correlation function of the hadron interpolating field, whose matrix element is taken with respect to the vacuum, is considered. 
The imaginary part of the correlation function consists of a pole term corresponding to the ground state and a continuum term corresponding to the excited states. Under the Borel transformed dispersion integral, the continuum contribution is exponentially suppressed compared to the ground state contribution due to the energy difference between the ground state and the continuum threshold. For this reason the sum rule analysis is expected to be insensitive to the detailed form of the continuum, so that the continuum is usually parametrized in a very simple form. When one deals with the hadron correlation function, whose matrix element is taken with respect to the one-nucleon state, the situation is different. The energy of the continuum threshold is not higher than the pole energy. Therefore, it is not clear if the Borel transformed dispersion integral is really dominated by the pole term or not. A related question is the following. It is known that there is a loosely bound state in the spin-triplet nucleon-nucleon channel and an almost bound state in the spin-singlet nucleon-nucleon channel. If there is a zero-energy bound state, the scattering length diverges. Therefore, the $NN$ scattering lengths are expected to be very sensitive to the $NN$ interaction strength. On the other hand, it is hard to believe that the nucleon matrix elements of the quark-gluon operators would be very sensitive to the $NN$ interaction strength. It seems strange that the sum rule relates these two quantities of very different natures. Another point is that only the spin-averaged sum rules are obtained from the spin-averaged correlation function. The $NN$ channel is special in the sense that selecting the isospin channel automatically selects the spin state. In Ref.~\cite{rf:KM}, the sum rules for the spin-triplet and singlet scattering lengths are obtained by combining isospin states. 
As far as the pole term is concerned, the above selection rule is correct, but it does not hold for the continuum. The sum rule in Ref.~\cite{rf:KM} is valid only if the pole term is dominant. Therefore, it is more desirable to construct the spin-dependent sum rules. In this paper we consider the spin-dependent correlation function of the nucleon interpolating field, where the matrix element is taken with respect to the spin-nonaveraged one-nucleon state. The purpose of this paper is two-fold. First, we extend the procedure of the sum rule to the case of the spin-nonaveraged correlation function. Second, we show that the dispersion integral of the correlation function around the nucleon threshold can be regarded as a measure of the nucleon-nucleon interaction strength. As a result, we derive sum rules which relate the spin-dependent $NN$ interaction strengths with the spin-dependent nucleon matrix elements of the quark-gluon operators. \section{Formulation} \subsection{Physical content of the correlation function and Borel sum rules} Consider the spin-dependent correlation function, $\Pi(q \hat p s)$, \begin{eqnarray*} \Pi(q \hat p s)=-i\int d^4x e^{iqx}\langle\hat p s|T(\psi(x)\bar\psi(0))|\hat p s\rangle , \end{eqnarray*} where $|\hat p s\rangle $ is the one-nucleon state with momentum $\hat p$ and spin $s$ ($\hat p^2=M^2$, $s^2=-1$ and $\hat ps=0$, where $M$ is the nucleon mass) normalized as $\langle\hat p s|\hat p' s' \rangle =(2\pi)^3\delta^3(\vec p-\vec p')\delta_{ss'}$ and $\psi$ is the normalized nucleon field operator, $\langle 0|\psi(0)|\hat p s\rangle=u(ps)$, where $u(ps)$ is a positive energy solution of the free Dirac equation for the nucleon. In this paper, a momentum with $\;\hat{ }\;$ denotes an on-shell nucleon momentum. Later, the normalized nucleon field, $\psi$, is replaced by the unnormalized nucleon interpolating field (quark-gluon composite field), $\eta$. 
The following discussion, however, holds as it is for the interpolating field, except for the normalization. Naively, the dispersion relation for the correlation function, $\Pi(q \hat p s)$, is written as \begin{eqnarray}\label{eq:dr} \Pi(q \hat p s)=-{1 \over \pi}\int^\infty_{-\infty}dq'_0{1 \over q_0-q'_0+i\eta}{\rm Im}\Pi(q' \hat p s) , \end{eqnarray} where $q'=(q'_0, \vec q)$. Throughout this paper, whenever we take the imaginary part of a quantity, we approach the real energy axis from above in the complex energy plane. Therefore, strictly speaking, ${\rm Im}\Pi$ is the imaginary part of the retarded correlation function. The QCD sum rules are obtained by evaluating the left-hand side of Eq.~(\ref{eq:dr}) by the operator product expansion (OPE) and expressing the right-hand side in terms of physical quantities. Let us consider the singularities of $\Pi(q \hat p s)$ as functions of $q_0$. In the complex $q_0$ plane, $\Pi(q \hat p s)$ has a branch cut from the lowest $B=2$ continuum threshold to the right and another branch cut starting from the lowest $B=0$ continuum threshold to the left. In addition, $\Pi(q \hat p s)$ has second-order poles at $q_0=\pm\sqrt{\vec q^2+M^2}\equiv\pm E_{\vec q}$ whose coefficients are the $NN$ and $N\bar N$ T-matrices $T_{+}$ and $T_{-}$, respectively: \begin{eqnarray*} &&T_{+}(\hat q r\hat p s;\hat q r\hat p s)\cr &=&-i\int d^4x e^{iqx}\sqrt{M \over E_{\vec q}}\bar u(q r)(\gsl{q}-M)\langle\hat p s|T(\psi(x)\bar\psi(0))|\hat p s\rangle(\gsl{q}-M)\sqrt{M \over E_{\vec q}}u(q r)\cr &=&(q_0-E_{\vec q})^2{M \over E_{\vec q}}\bar u(q r)\Pi(q \hat p s)u(q r),\cr &&T_{-}(\hat q r\hat p s;\hat q r\hat p s)\cr &=&-i\int d^4x e^{iqx}\sqrt{M \over E_{\vec q}}\bar v(\bar q \bar r)(\gsl{q}-M)\langle\hat p s|T(\psi(x)\bar\psi(0))|\hat p s\rangle(\gsl{q}-M)\sqrt{M \over E_{\vec q}}v(\bar q\bar r)\cr &=&(q_0+E_{\vec q})^2{M \over E_{\vec q}}\bar v(\bar q \bar r)\Pi(q \hat p s)v(\bar q \bar r). 
\end{eqnarray*} Here $\bar q=(q_0,-\vec q)$, $\bar r=(r_0,-\vec r)$ and $v(\bar q \bar r)$ is the negative energy solution of the free Dirac equation for the nucleon. In order to take out the pole contribution from ${\rm Im}\Pi(q \hat p s)$ it is convenient to define off-shell $NN$ and $N\bar N$ T-matrices by \begin{eqnarray}\label{eq:offT} &&T_{+}(q' r'\hat p' s';q r\hat p s)\cr &=&-i\int d^4x e^{iq'x}\sqrt{M \over E_{\vec q'}}\bar u(q' r')(\gsl{q'}-M)\langle\hat p' s'|T(\psi(x)\bar\psi(0))|\hat p s\rangle(\gsl{q}-M)\sqrt{M \over E_{\vec q}}u(q r),\cr &&T_{-}(q' r'\hat p' s';q r\hat p s)\cr &=&-i\int d^4x e^{iq'x}\sqrt{M \over E_{\vec q}}\bar v(\bar q \bar r)(\gsl{q}-M)\langle\hat p' s'|T(\psi(x)\bar\psi(0))|\hat p s\rangle(\gsl{q'}-M)\sqrt{M \over E_{\vec q'}}v(\bar q' \bar r').\cr && \end{eqnarray} Note that Eq.~(\ref{eq:offT}) is just a definition of the T-matrix off the mass shell, but the LSZ reduction formula shows rigorously that it is the T-matrix on the mass shell. In order to separate the contribution from the poles at $q_0=E_{\vec q}$ and $q_0=-E_{\vec q}$, we introduce the projection operators $\Lambda_+$ and $\Lambda_-$ by \begin{eqnarray*} &&\Lambda_+(q s)=u(q s)\bar u(q s)={\gsl{\hat q}+M \over 2M}{1+\gamma_5\gsl{s} \over 2},\cr &&\Lambda_-(q s)=v(q s)\bar v(q s)={\gsl{\hat q}-M \over 2M}{1+\gamma_5\gsl{s} \over 2}, \end{eqnarray*} which have the properties \begin{eqnarray*} &&\Lambda_+^2(q s)=\Lambda_+(q s),\cr &&\Lambda_-^2(q s)=\Lambda_-(q s),\cr &&\Lambda_+(q s)\Lambda_+(q \bar s)=\Lambda_-(q s)\Lambda_-(q \bar s)=\Lambda_+(q s)\Lambda_-(q s')=0. \end{eqnarray*} Then we define the projected correlation functions by \begin{eqnarray*} \Pi_+(q r \hat p s) &=&{M\over E_{\vec q}}{\rm tr}\left\{\Lambda_+(\bar q \bar r)\Pi(q \hat p s)\right\},\cr \Pi_-(q r \hat p s) &=&{M\over E_{\vec q}}{\rm tr}\left\{\Lambda_-(q r)\Pi(q \hat p s)\right\}. 
\end{eqnarray*} The projected correlation functions are related to the off-shell T-matrices as \begin{eqnarray*} \Pi_\pm(q r \hat p s) ={T_\pm(q r \hat p s) \over (q_0\mp E_{\vec q})^2} , \end{eqnarray*} where $T_\pm(q r \hat p s) \equiv T_\pm(q r \hat p s;q r \hat p s)$. Clearly, $\Pi_\pm(q r \hat p s)$ has a second-order pole at $q_0=\pm E_{\vec q}$ but not at $q_0=\mp E_{\vec q}$. Naively, the dispersion relation for the projected correlation function, $\Pi_+$, is given by \begin{eqnarray}\label{eq:PiI} \Pi_+(q r \hat p s) =-{1 \over \pi}\int^\infty_{-\infty}dq'_0{1 \over q_0-q'_0+i\eta}{\rm Im}\Pi_+(q' r \hat p s) . \end{eqnarray} Formally, the imaginary part of the correlation function is written as \begin{eqnarray*} {\rm Im}\Pi_+(q r \hat p s)&=& {\rm Im}{1 \over \left(q_0 - E_{\vec q} + i\eta \right)^2}{\rm Re}T_+(q r \hat p s)+ {\rm Re}{1 \over \left(q_0 - E_{\vec q} + i\eta \right)^2}{\rm Im}T_+(q r \hat p s)\cr &=& \pi\delta'(q_0 - E_{\vec q}) {\rm Re}T_+(q r \hat p s)+ {{\rm Pf} \over \left(q_0 - E_{\vec q} \right)^2}{\rm Im}T_+(q r \hat p s)\cr &=& \pi\delta'(q_0 - E_{\vec q}) t - \pi\delta(q_0 - E_{\vec q}) u+ {{\rm Pf} \over \left(q_0 - E_{\vec q} \right)^2}{\rm Im}T_+(q r \hat p s) , \end{eqnarray*} where \begin{eqnarray*} &&t = \left.{\rm Re}T_+(q r \hat p s)\right|_{q_0=E_{\vec q}} ,\cr &&u = \left.{\partial \over \partial q_0}{\rm Re}T_+(q r \hat p s)\right|_{q_0=E_{\vec q}} . \end{eqnarray*} However, as it will turn out, when $\vec q = 0$ and $\vec p = 0$, the integral of the second term is divergent because $u$ is divergent, and the integral of the third term is also divergent because it behaves as $(q_0-M)^{-{3 \over 2}}$ in the vicinity of $q_0 = M$. Therefore, Eq.~(\ref{eq:PiI}) is ill-defined. 
Instead of Eq.~(\ref{eq:PiI}), we consider the dispersion relation for ${\gsl{q}-M \over q_0}\Pi(qr\hat ps)$, \begin{eqnarray*} {\gsl{q}-M \over q_0}\Pi(qr \hat ps)=-{1 \over \pi}\int^\infty_{-\infty}dq'_0{1 \over q_0-q_0'+i\eta}{\rm Im}\left\{{\gsl{q'}-M \over q'_0}\Pi(q'r \hat ps)\right\} , \end{eqnarray*} or for ${q_0-E_{\vec q} \over q_0}\Pi_+(qr \hat ps)$ in terms of the projected correlation function, \begin{eqnarray}\label{eq:PiII} &&{q_0-E_{\vec q} \over q_0}\Pi_+(qr \hat ps)=-{1 \over \pi}\int^\infty_{-\infty}dq'_0{1 \over q_0-q_0'+i\eta}{\rm Im}\left\{{q'_0-E_{\vec q} \over q'_0}\Pi_+(q'r \hat ps)\right\}.\qquad \end{eqnarray} Symmetrizing Eq.~(\ref{eq:PiII}) we obtain \begin{eqnarray}\label{eq:QSR} {q_0-E_{\vec q} \over 2q_0}\Pi_+(qr \hat ps) +(q_0\rightarrow-q_0) ={1 \over \pi}\int^\infty_{-\infty}dq'_0{1 \over (q_0+i\eta)^2-q'^2_0} (q'_0-E_{\vec q}){\rm Im}\Pi_+(q'r \hat ps),\cr \end{eqnarray} where \begin{eqnarray}\label{eq:ImPi} (q_0-E_{\vec q}){\rm Im}\Pi_+(q r \hat p s) = -\pi\delta(q_0 - E_{\vec q}) t + {{\rm P} \over q_0 - E_{\vec q}}{\rm Im}T_+(q r \hat p s) . \end{eqnarray} Now Eq.~(\ref{eq:QSR}) is well-defined when $\vec q = 0$ and $\vec p = 0$ because $u$ does not appear and the second term behaves as $(q_0-M)^{-{1 \over 2}}$ in the vicinity of $q_0 = M$. Applying the Borel transformation, \begin{eqnarray*} L_B\equiv \lim_{{n\rightarrow\infty \atop -q_0^2\rightarrow\infty} \atop -q_0^2/n = M_B^2} {(q_0^2)^n\over(n-1)!}\left(-{d\over dq_0^2}\right)^n , \end{eqnarray*} to both sides of Eq.~(\ref{eq:QSR}), we obtain \begin{eqnarray}\label{eq:BSR} & &L_B\Big[{q_0-E_{\vec q} \over 2q_0}\Pi_+(qr \hat ps) +(q_0\rightarrow-q_0) \Big]\cr &=&-{1 \over \pi}\int^\infty_{-\infty}dq'_0 {1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) (q'_0-E_{\vec q}){\rm Im}\Pi_+(q' r \hat p s) , \end{eqnarray} where $M_B$ is the Borel mass. 
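The limit prescription defining $L_B$ can be checked numerically on a simple pole term, for which the Borel transform is known in closed form. The following sketch uses our own illustrative ansatz $\Pi=1/(m^2+Q^2)$ with $Q^2=-q_0^2$ (not a quantity from the text): repeated differentiation gives $(Q^2)^n/(n-1)!\,(-d/dQ^2)^n\,\Pi = n\,(Q^2)^n/(m^2+Q^2)^{n+1}$, which tends to $e^{-m^2/M_B^2}/M_B^2$ when $Q^2=nM_B^2$ and $n\to\infty$.

```python
import math

# Numerical check of the Borel transformation L_B on a simple pole term.
# For Pi = 1/(m^2 + Q^2) with Q^2 = -q0^2, repeated differentiation gives
#   (Q^2)^n/(n-1)! * (-d/dQ^2)^n Pi = n*(Q^2)^n/(m^2 + Q^2)^(n+1),
# and with the prescription Q^2 = n*M_B^2, n -> infinity, this tends to
#   exp(-m^2/M_B^2)/M_B^2.

def borel_pole(m2, mb2, n):
    """Finite-n approximant of L_B applied to 1/(m^2 + Q^2)."""
    q2 = n * mb2
    # evaluate n * q2^n / (m2 + q2)^(n+1) in logarithms to avoid overflow
    return math.exp(math.log(n) + n * math.log(q2) - (n + 1) * math.log(m2 + q2))

m2, mb2 = 0.88, 1.0            # illustrative values (GeV^2), not from the text
exact = math.exp(-m2 / mb2) / mb2
approx = borel_pole(m2, mb2, n=100_000)
print(approx, exact)           # the finite-n approximant converges to the exact result
```

This makes explicit why a single pole in the dispersion integral produces the factor $\exp(-M^2/M_B^2)/M_B^2$ appearing repeatedly below.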
In order to derive the Borel sum rules we must evaluate the left-hand side by the OPE and parametrize the right-hand side in terms of physical quantities. Now, the question is how to parametrize the integrand of the right-hand side of Eq.~(\ref{eq:BSR}). Let us recall the QCD sum rule for the nucleon in the vacuum, where the imaginary part of the correlation function has the form \begin{eqnarray*} {\rm Im}\Pi_{+}(q) \propto -\pi\delta(q_0 -E_{\vec q})+\sigma(q) . \end{eqnarray*} The first term is the contribution from the nucleon pole term, and the second term is due to the excited states. The Borel transformation of the dispersion integral of ${\rm Im}\Pi_{+}(q)$ gives \begin{eqnarray*} & &L_B\Big[{1\over2q_0}\Pi_{+}(q)+(q_0\rightarrow-q_0)\Big]\cr &=&-{1 \over \pi}\int dq_0'{1\over M_B^2}e^{-q_0'^2/M_B^2}{\rm Im}\Pi_{+}(q')\cr &\propto&{1\over M_B^2}e^{-E_{\vec q}^2/M_B^2} -{1\over\pi}\int^\infty_{\omega} dq_0'{1\over M_B^2}e^{-q_0'^2/M_B^2}\sigma(q') , \end{eqnarray*} where the second term, the contribution from the excited states, starts at the continuum threshold, $\omega$ ($\omega > E_{\vec q}$), and is exponentially suppressed compared to the first term. For this reason, it is possible to use a rough model of the hadron continuum, \begin{eqnarray}\label{eq:piv} {\rm Im}\Pi_{+}(q) =-\lambda^2\pi\delta(q_0 -E_{\vec q}) +\left\{\theta(q_0-\omega_0)+\theta(-\omega_0-q_0)\right\}{\rm Im}\Pi^{OPE}_{+}(q) ,\qquad \end{eqnarray} where $\Pi^{OPE}_{+}$ is the asymptotic form of the correlation function in the OPE, $\omega_0$ is the effective continuum threshold, and the normalization constant $\lambda$ is explicitly included ($\langle 0|\eta(0)|\hat p s\rangle=\lambda u(ps)$). Let us turn to the problem at hand. 
As an extension of Eq.~(\ref{eq:ImPi}) one might parametrize $(q_0-E_{\vec q}){\rm Im}\Pi_+$ as \begin{eqnarray}\label{eq:pin} &&(q_0-E_{\vec q}){\rm Im}\Pi_+\cr &=&-\lambda^2\pi\delta(q_0-E_{\vec q})t +\left\{\theta(-q_0-\omega_-)+\theta(q_0-\omega_+)\right\}(q_0-E_{\vec q}){\rm Im}\Pi_+^{OPE},\qquad \end{eqnarray} i.e., by approximating the second term on the right-hand side by its asymptotic form. However, the second term starts at $q_0=\omega=\sqrt{4M^2+{\vec q}^2}-M$ ($\omega \leq \sqrt{M^2+\vec q^2}$), and it is not exponentially suppressed compared to the first term. Therefore, one cannot justify Eq.~(\ref{eq:pin}). One has to know the behavior of the second term around the threshold. For this purpose it is important to note that the off-shell optical theorem holds for $T$. When the center-of-mass energy is above the threshold of the $NN$ channel and below the threshold of the next channel, only the $NN$ states contribute in the intermediate states, and the off-shell optical theorem is simplified as \begin{eqnarray}\label{eq:OT} {\rm Im}T_+(q \hat p;q \hat p) =-\pi\int {d^3p_n\over(2\pi)^3}{d^3q_n\over(2\pi)^3}(2\pi)^3 \delta^4(\hat p+q-\hat p_n-\hat q_n) T_+(q \hat p;\hat q_n\hat p_n) T_+(\hat q_n\hat p_n;q \hat p).\cr \end{eqnarray} In order to simplify the notation we introduce the scattering amplitude $f$ by \begin{eqnarray*} f(q'p';qp) = -{\mu'^{1/2}\mu^{1/2} \over 2\pi}T_+(q'p';qp) , \end{eqnarray*} where $\mu={q_0p_0\over q_0+p_0}$ and $\mu'={q'_0p'_0\over q'_0+p'_0}$. Moreover, we go to the center-of-mass frame ($\vec q + \vec p = \vec q' + \vec p' =0$) and restrict ourselves to the $s$-wave. 
We define three scattering amplitudes, $f_0$, $f_1$ and $f_2$ as \begin{eqnarray*} f_0(k) = f(\hat q'\hat p';\hat q\hat p) , \end{eqnarray*} where $|\vec p|=|\vec q|=|\vec p'|=|\vec q'|=k$, $p_0=q_0=p'_0=q'_0=\sqrt{M^2+k^2}$, \begin{eqnarray*} f_1(k) = f(q'\hat p';\hat q\hat p) , \end{eqnarray*} where $|\vec p|=|\vec q|=k$, $|\vec p'|=|\vec q'|=0$, $p_0=q_0=\sqrt{M^2+k^2}$, $p'_0=M$, $q'_0=2\sqrt{M^2+k^2}-M$, and \begin{eqnarray*} f_2(k) = f(q'\hat p';q \hat p) , \end{eqnarray*} where $|\vec p|=|\vec q|=|\vec p'|=|\vec q'|=0$, $p_0=p'_0=M$, $q_0=q'_0=2\sqrt{M^2+k^2}-M$. It is well known that the on-shell scattering amplitude $f_0$ has the form \begin{eqnarray*} f_0(k) = {1 \over -ik + k\cot\delta} = {1 \over -ik + {1 \over a}+{1 \over 2}rk^2+O(k^4)}, \end{eqnarray*} where $a$ is the scattering length and $r$ the effective range. Similarly, the off-shell scattering amplitude $f_2$ has the form \begin{eqnarray}\label{eq:fii} f_2(k) = {1 \over i\left\{-k+bk^3+O(k^5)\right\}+ \left\{{1 \over a} + {1 \over 2}\tilde r k^2 + O(k^4)\right\}}, \end{eqnarray} which can be shown as follows. First, since the discontinuity of the T-matrix along the real energy axis is proportional to the imaginary part of the T-matrix, the Taylor expansion of the real and imaginary parts of the scattering amplitude includes only even and odd powers of $k$, respectively. Second, $f_0$, $f_1$ and $f_2$ coincide on the mass-shell ($k=0$), $f_0(0)=f_1(0)=f_2(0)=a$. Therefore, we have \begin{eqnarray}\label{eq:Ref} &&{\rm Re}{1 \over f_0(k)}= {1 \over a} + {1 \over 2} r k^2 + O(k^4) ,\cr &&{\rm Re}{1 \over f_2(k)}= {1 \over a} + {1 \over 2} \tilde r k^2 + O(k^4) . \end{eqnarray} Third, from Eq.~(\ref{eq:OT}) the following relations hold, \begin{eqnarray*} &&{\rm Im}f_0(k)=k|f_0(k)|^2 ,\cr &&{\rm Im}f_2(k)=k|f_1(k)|^2 . 
\end{eqnarray*} Therefore, we have \begin{eqnarray}\label{eq:Imf} &&{\rm Im}{1 \over f_0(k)}=-{{\rm Im}f_0(k) \over |f_0(k)|^2} = -k ,\cr &&{\rm Im}{1 \over f_2(k)}=-{{\rm Im}f_2(k) \over |f_2(k)|^2} = -k{|f_1(k)|^2 \over |f_2(k)|^2} = -k + bk^3 + O(k^5) . \end{eqnarray} Equation~(\ref{eq:fii}) follows from Eqs.~(\ref{eq:Ref}) and (\ref{eq:Imf}). It should be noted that $\tilde r$ is different from the effective range $r$, but $\tilde r$ coincides with $r$ in the limit $a \rightarrow \infty$: \begin{eqnarray*} \tilde r = r + O\left({1 \over a}\right). \end{eqnarray*} This is shown as follows. Equations~(\ref{eq:Ref}) and (\ref{eq:Imf}) indicate that both $O(1)$ and $O(k)$ terms of ${1/f_0(k)}$ and ${1/f_2(k)}$, which are real and imaginary respectively, coincide with each other. Therefore, we have \begin{eqnarray}\label{eq:ratio} {f_2(k) \over f_0(k)}=1+O(k^2) . \end{eqnarray} Equation~(\ref{eq:ratio}) is independent of $a$ and therefore holds also in the limit $a \rightarrow \infty$ due to the continuity. Since ${1/f_0(k)}$ and ${1/f_2(k)}$ do not have the $O(1)$ terms in this limit, they must coincide with each other up to the $O(k^2)$ terms, i.e. $\tilde r = r + O\left({1 \over a}\right)$. From Eq.~(\ref{eq:fii}) we have \begin{eqnarray*} {\rm Re}f_2(k)&=&\left\{ \begin{array}{ll}a + a^2\kappa + O(\kappa^2) &q_0 < M \\ a-\left(1+{\tilde r \over 2a}\right)a^3k^2 + O(k^4) &q_0 > M \end{array} \right. ,\\ {\rm Im}f_2(k)&=&\left\{ \begin{array}{ll}0 &q_0 < M \\ a^2 k + O(k^3) &q_0 > M \end{array} \right. . \end{eqnarray*} where $\kappa=-ik$. One sees that \begin{eqnarray*} \left.{\partial \over \partial q_0}{\rm Re}T_+\right|_{q_0=M} = -{1 \over 2\pi}\left.{\partial \over \partial q_0}\left\{{q_0+M \over q_0M}{\rm Re}f_2\right\}\right|_{q_0=M}=\infty, \end{eqnarray*} and \begin{eqnarray*} {\rm Im}T_+=-{1 \over 2\pi}{q_0+M \over q_0M}{\rm Im}f_2\propto (q_0-M)^{1 \over 2}, \end{eqnarray*} which make the naive dispersion relation, Eq.~(\ref{eq:PiI}), ill-defined. 
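The algebraic relations used above, ${\rm Im}f_0(k)=k|f_0(k)|^2$ and hence ${\rm Im}(1/f_0)=-k$, can be illustrated numerically with the effective-range form of $f_0$. A minimal sketch; the values of $a$ and $r$ below are illustrative stand-ins, not fitted $NN$ parameters:

```python
import math

# Illustration of s-wave unitarity for the effective-range form of f_0:
#   f_0(k) = 1 / (-i*k + 1/a + (r/2)*k^2),
# for which Im f_0(k) = k*|f_0(k)|^2 and Im(1/f_0) = -k hold identically,
# so only even powers of k enter Re(1/f_0), as stated in the text.
# The values of a and r are illustrative only, not fitted NN parameters.

a, r = -23.7, 2.7               # fm (illustrative)
for k in (0.01, 0.1, 0.5):      # fm^-1
    f0 = 1.0 / (-1j * k + 1.0 / a + 0.5 * r * k * k)
    # optical theorem: Im f_0 = k |f_0|^2
    assert abs(f0.imag - k * abs(f0) ** 2) < 1e-12
    # equivalently Im(1/f_0) = -k
    assert abs((1.0 / f0).imag + k) < 1e-12
print("s-wave unitarity verified")
```

The same structure, with $k|f_1|^2/|f_2|^2$ replacing $k$, underlies the odd-power expansion of ${\rm Im}(1/f_2)$ used for Eq.~(\ref{eq:fii}).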
Having understood the structure of ${\rm Im}T_+$, we proceed to the integral of the right-hand side of Eq.~(\ref{eq:BSR}), $I$, in the vicinity of $q_0 = M$, \begin{eqnarray*} I&=&-{1 \over \pi}\int_{{\rm vic}\,M}dq'_0 {1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) (q'_0-M){\rm Im}\Pi_+. \end{eqnarray*} The integral, $I$, can be decomposed as \begin{eqnarray}\label{eq:III} I=I_{t}+I_{c}\ (+I_{b}). \end{eqnarray} In Eq.~(\ref{eq:III}), the first term, $I_{t}$, is the threshold contribution, given by \begin{eqnarray*} I_{t}=-{1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right){4\pi a \over M}. \end{eqnarray*} The second term, $I_{c}$, is the continuum contribution, given by \begin{eqnarray*} I_{c}=-{1 \over \pi}\int_{{\rm vic}\,M} dq'_0{1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) {{\rm P} \over q'_0-M}\left\{-{2\pi}{q'_0+M\over q'_0M}{\rm Im}f^{cut}_2\right\}, \end{eqnarray*} where \begin{eqnarray*} {\rm Im}f^{cut}_2 = {k\over {1 \over a^2} + \left(1+{\tilde r\over a}+{b\over a^2}\right)k^2+O(k^4)}\theta(q_0-M). \end{eqnarray*} The last term, $I_{b}$, is the bound-state contribution, which has to be taken into account if there is a bound state, given by \begin{eqnarray*} I_{b}=-{1 \over \pi}\int_{{\rm vic}\,M} dq'_0{1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) {{\rm P} \over q'_0-M}\left\{-{2\pi}{q'_0+M\over q'_0M}{\rm Im}f^{pole}_2\right\}, \end{eqnarray*} where \begin{eqnarray*} {\rm Im}f^{pole}_2 &=& -i\pi\left\{\left.{\partial\over\partial k}\left({1 \over f_2}\right)\right|_{k=i\kappa_0}\right\}^{-1}\delta(\kappa-\kappa_0)\cr &\equiv& \pi c \delta(\kappa-\kappa_0) , \end{eqnarray*} and $i\kappa_0$ is the pole momentum, $1/f_2(i\kappa_0)=0$. 
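The pole condition $1/f_2(i\kappa_0)=0$ can be made concrete with the low-energy form of Eq.~(\ref{eq:fii}): at $k=i\kappa$ it reduces to $\kappa+b\kappa^3+1/a-(\tilde r/2)\kappa^2=0$, whose solution satisfies $\kappa_0=-1/a+O(1/a^2)$ for large $|a|$. A minimal numerical sketch with parameter values of our own choosing (taking $a<0$ in the convention of the text, so that a shallow bound state with $\kappa_0>0$ exists):

```python
# Locate the bound-state pole k = i*kappa0 of the low-energy form of f_2:
# at k = i*kappa the condition 1/f_2 = 0 becomes
#   g(kappa) = kappa + b*kappa^3 + 1/a - (rt/2)*kappa^2 = 0.
# Parameter values are our own illustrative choice, not fitted NN values.

a, rt, b = -100.0, 2.7, 0.5

def g(kappa):
    return kappa + b * kappa**3 + 1.0 / a - 0.5 * rt * kappa**2

# simple bisection on [0, 0.1]; g(0) = 1/a < 0 and g(0.1) > 0
lo, hi = 0.0, 0.1
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
kappa0 = 0.5 * (lo + hi)
print("kappa0 =", kappa0)      # close to the leading estimate -1/a = 0.01
```

The residue factor $c$ entering ${\rm Im}f^{pole}_2$ is $-i\,[\partial_k(1/f_2)]^{-1}$ evaluated at this $\kappa_0$.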
By performing the integral, the continuum contribution becomes \begin{eqnarray*} I_c &\approx& -{1 \over \pi}\int dq'_0 {1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) \left\{{{\rm P} \over q'_0-M}\left(-2\pi{q'_0+M \over q'_0M}\right) {k \over {1 \over a^2}+\left(1+{\tilde r \over a}+{b\over a^2}\right)k^2}\right\}\cr &=& {1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right) {4\pi|a|\over\sqrt{{1\over a^2}+M^2\left(1+{\tilde r \over a}+{b\over a^2}\right)}}, \end{eqnarray*} which is simplified in two limits of $a$ as \begin{eqnarray*} I_c\rightarrow\left\{ \begin{array}{ll} {1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right) {4\pi\over\sqrt{1+M^2 b}} a^2, &(a \rightarrow 0)\cr {1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right) {4\pi \over M}|a|\left(1-{r\over 2a}\right)+O\left({1 \over a}\right). &(a \rightarrow \infty) \end{array} \right. \end{eqnarray*} Similarly, the bound-state contribution becomes \begin{eqnarray*} I_b&=&-{1 \over \pi}\int_{\sim M} dq'_0{1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right)\left\{{{\rm P} \over q'_0-M}\left(-2\pi{q'_0+M \over q'_0M}\right) \pi c\delta(\kappa-\kappa_0) \right\}\cr &=&-{1 \over \pi}{2\kappa_0\over\sqrt{M^2-\kappa_0^2}} {1\over M_B^2}\exp\left(-{\omega'^2\over M_B^2}\right) {1 \over \omega' - M}2\pi^2{M+\omega'\over M\omega'} \pi c, \end{eqnarray*} where $\omega'=2\sqrt{M^2-\kappa_0^2}-M$. In the limit $a\rightarrow\infty$, \begin{eqnarray*} &&\kappa_0 \rightarrow -{1 \over a}+O\left({1 \over a^2}\right),\cr &&c \rightarrow \left(1-{r \over a}\right)+O\left({1 \over a^2}\right), \end{eqnarray*} and the bound-state contribution is simplified as \begin{eqnarray*} I_b\rightarrow {1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right){8\pi \over M}a\left(1-{r \over 2a}\right)+O\left({1 \over a}\right). \qquad (a \rightarrow \infty) \end{eqnarray*} Let us suppose that one can freely change the interaction strength of two nucleons and examine how the integral $I$ should change as a function of the interaction strength. 
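Before examining this, the two quoted limits of $I_c$ can be checked numerically against its closed form, divided by the common prefactor $C=\exp(-M^2/M_B^2)/M_B^2$. A small sketch with illustrative parameter values of our own choosing (using $\tilde r$ in place of $r$ in the large-$a$ limit, which agrees at this order):

```python
import math

# Check the quoted small-a and large-a limits of the continuum contribution,
# using the closed form from the text (divided by the common prefactor
# C = exp(-M^2/M_B^2)/M_B^2):
#   I_c/C = 4*pi*|a| / sqrt(1/a^2 + M^2*(1 + rt/a + b/a^2)).
# Numerical values are illustrative only (M in fm^-1; a, rt in fm).

M, rt, b = 4.76, 2.7, 0.0

def ic_over_c(a):
    return 4 * math.pi * abs(a) / math.sqrt(1 / a**2 + M**2 * (1 + rt / a + b / a**2))

# a -> 0:     I_c/C -> 4*pi*a^2 / sqrt(1 + M^2*b)
a_small = 1e-4
limit_small = 4 * math.pi * a_small**2 / math.sqrt(1 + M**2 * b)
assert abs(ic_over_c(a_small) / limit_small - 1) < 1e-2

# a -> +inf:  I_c/C -> (4*pi*|a|/M) * (1 - rt/(2*a))
a_large = 1e4
limit_large = (4 * math.pi * abs(a_large) / M) * (1 - rt / (2 * a_large))
assert abs(ic_over_c(a_large) / limit_large - 1) < 1e-4
print("I_c limits verified")
```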
When the interaction is weak, the scattering length is also small, and the integral $I$ is dominated by $I_{t}$: \begin{eqnarray*} I&=&I_{t}+I_{c}\cr &=&-{1 \over M_B^2}\exp\left(-{M^2\over M_B^2}\right){4\pi \over M}a+O(a^2) . \end{eqnarray*} As the interaction becomes stronger, the scattering length increases and the integral, $I$, also increases. As the interaction strength increases further, the scattering length eventually diverges when the bound state is just formed. Just before the bound state is formed, the integral $I$ becomes \begin{eqnarray*} I&=&I_{t}+I_{c}\cr &=&-{1 \over M_B^2}\exp\left(-{M^2\over M_B^2}\right)\left\{{4\pi a \over M}- {4\pi a \over M}\left(1-{r \over 2a}\right)+O\left({1 \over a}\right)\right\}\cr &=&-{1 \over M_B^2}\exp\left(-{M^2\over M_B^2}\right){2\pi r \over M}+O\left({1 \over a}\right), \end{eqnarray*} and just after the bound state is formed, it becomes \begin{eqnarray*} I&=&I_{t}+I_{c}+I_{b}\cr &=&-{1 \over M_B^2}\exp\left(-{M^2\over M_B^2}\right)\left\{{4\pi a \over M}- {-4\pi a \over M}\left(1-{r \over 2a}\right)+ {-8\pi a \over M}\left(1-{r \over 2a}\right)+O\left({1 \over a}\right)\right\}\cr &=&-{1 \over M_B^2}\exp\left(-{M^2\over M_B^2}\right){2\pi r \over M}+O\left({1 \over a}\right) . \end{eqnarray*} This shows that before and after the bound state is formed the integral is continuous, though the scattering length diverges with opposite signs. This observation leads us to conjecture that the integral around the threshold is a measure of the $NN$ interaction strength. Based on this conjecture we define the $NN$ interaction strength, $\alpha$, by \begin{eqnarray}\label{eq:I} I&=&-{1 \over \pi}\int_{{\rm vic}\,M}dq'_0 {1\over M_B^2}\exp\left(-{q_0'^2\over M_B^2}\right) (q'_0-M){\rm Im}\Pi_+\cr &\equiv&-{1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right){4\pi \alpha \over M}. 
\end{eqnarray} In the dispersion integral, the imaginary part of the correlation function, ${\rm Im}\Pi_+$, contains the contribution from all possible intermediate states such as those of the $NN$, $NN\pi$ channels and so on. However, only the $NN$ channel contributes around the threshold. We assume that the contribution from the $NN$ state is taken into account by the form of the right-hand side of Eq.~(\ref{eq:I}) and that the rest is approximated by the asymptotic form of the correlation function starting from an (effective) threshold, $\omega_+$, for the $B=2$ channels other than the $NN$ channel and $\omega_-$ for the $B=0$ channels: \begin{eqnarray}\label{eq:ImPi} & &(q_0-M){\rm Im}\Pi_+ \cr &=&\lambda^2\pi\delta(q_0-M){2\pi\over\mu}\alpha +\left\{\theta(-q_0-\omega_-)+\theta(q_0-\omega_+)\right\}(q_0-M){\rm Im}\Pi_+^{OPE},\qquad\quad \end{eqnarray} where the normalization constant $\lambda$ is explicitly included. This is possible now because the contribution from states other than those of the $NN$ channel is exponentially suppressed compared to the $NN$ contribution. \subsection{OPE of the correlation function and results} Let us turn to the OPE. We take the interpolating field of the neutron as~\cite{rf:Ioffe} \begin{eqnarray*} \eta(x) = \epsilon_{abc} \left (d^{Ta}(x) C\gamma_\mu d^b(x)\right ) \gamma_5\gamma^\mu u^c(x) , \end{eqnarray*} where $C$ denotes the charge conjugation operator and $a$, $b$ and $c$ are color indices. We take into account all the operators of dimension less than or equal to four. We also include four-quark operators, which are of dimension six. The operators which involve the quark mass are ignored. 
In the OPE, the neutron correlation function, where the matrix elements are taken with respect to the one-nucleon state, is given as \begin{eqnarray}\label{eq:OPE} & & \Pi^{OPE}(q\hat ps)\cr\cr &=&{1\over4\pi^4}\gamma^\mu\Big[ q^2\ln(-q^2)\pi^2\Big\{-{7\over3}\langle\bar d\gamma_\mu d\rangle_N -{1\over3}\langle\bar u\gamma_\mu u\rangle_N\Big\}\cr & &\qquad +q_\mu q^\nu\ln(-q^2)\pi^2\Big\{-{2\over3}\langle\bar d\gamma_\nu d\rangle_N -{2\over3}\langle\bar u\gamma_\nu u\rangle_N\Big\}\cr & &\qquad +q_\mu\ln(-q^2)\pi^2\Big\{ {1\over8}\left\langle{\alpha_s\over\pi}G^{\alpha\beta}G_{\alpha\beta} \right\rangle_N\Big\}\cr & &\qquad +q^\nu\ln(-q^2)\pi^2\Big\{ -{1\over6}\left\langle{\alpha_s\over\pi}{\cal S} [G^{\rho}_{\mu}G_{\rho\nu}]\right\rangle_N \cr & &\qquad +{16\over3}i\langle{\cal S}[\bar d\gamma_\mu D_\nu d]\rangle_N +{4\over3}i\langle{\cal S}[\bar u\gamma_\mu D_\nu u]\rangle_N\Big\}\cr & &\qquad +q_\mu{1\over q^2}\pi^4 \Big\{{8\over3}\langle\bar d d \bar d d\rangle_N \Big\} \Big] \cr &+&{1\over4\pi^4}\Big[ q^2\ln(-q^2)\pi^2\{-\langle\bar uu\rangle_N\} +q^\mu{1\over q^2}\pi^4\Big\{{16\over3} \langle\bar uu \bar d\gamma_\mu d\rangle_N\Big\} \Big] \cr &+&{1\over4\pi^4}\gamma^\mu\gamma_5\Big[ q^2\ln(-q^2)\pi^2\Big\{{5\over3}\langle\bar d\gamma_\mu\gamma_5 d\rangle_N -{1\over3}\langle\bar u\gamma_\mu\gamma_5 u\rangle_N\Big\}\cr & &\qquad +q_\mu q^\nu\ln(-q^2)\pi^2\Big\{ -{2\over3}\langle\bar d\gamma_\nu\gamma_5 d\rangle_N -{2\over3}\langle\bar u\gamma_\nu\gamma_5 u\rangle_N\Big\}\cr & &\qquad +q^\nu\ln(-q^2)\pi^2\Big\{ -{8\over3}\langle{\cal S}[\bar d\gamma_\mu\gamma_5 iD_\nu d]\rangle_N +{4\over3}\langle{\cal S}[\bar u\gamma_\mu\gamma_5 iD_\nu u]\rangle_N\Big\}\cr & &\qquad +q^\nu{1\over q^2}\pi^4\Big\{ -{16\over3}\langle\bar d d \bar d\gamma_5i\sigma_{\mu\nu} d\rangle_N\Big\} \Big] \cr &+&{1\over4\pi^4}\gamma_5\sigma^{\mu\nu}\Big[ q^2\ln(-q^2)\pi^2\Big\{ -{1\over6}\langle\bar u\gamma_5\sigma_{\mu\nu}u\rangle_N \Big\} +q_\mu q^\rho\ln(-q^2)\pi^2\Big\{ 
-{2\over3}\langle\bar u\gamma_5\sigma_{\nu\rho}u\rangle_N \Big\}\cr & &\qquad +q_\mu{1\over q^2}\pi^4\Big\{{16\over3}i \langle\bar uu \bar d\gamma_\nu\gamma_5 d\rangle_N \Big\} \Big] . \end{eqnarray} In Eq.~(\ref{eq:OPE}), $D$ is the covariant derivative, ${\cal S}[A_\mu B_\nu]\equiv A_{\{\mu} B_{\nu\}}-({\rm traces})$, where $\{\ \}$ represents symmetrization over the Lorentz indices and $-({\rm traces})$ represents the subtraction of the trace terms. Here, $\langle{\cal O}\rangle_N$ is the connected part of the nucleon matrix element of ${\cal O}$, $\langle{\cal O}\rangle_N\equiv\langle N|{\cal O}|N \rangle-\langle N|N \rangle\langle{\cal O}\rangle_0$, where $\langle{\cal O}\rangle_0$ represents the vacuum expectation value of ${\cal O}$, and $|N \rangle\equiv |\hat ps\rangle$. Substituting the projection of Eq.~(\ref{eq:OPE}) into the left-hand side and Eq.~(\ref{eq:ImPi}) into the right-hand side of Eq.~(\ref{eq:BSR}), respectively, and splitting $\alpha$ into spin-independent and spin-dependent parts, $\alpha=\alpha^{indep}+\alpha^{dep}$, we obtain sum rules for the spin-independent and spin-dependent interaction strengths, respectively, as \begin{eqnarray}\label{eq:SRindep} & &-\lambda^2 {4\pi\over M}\alpha^{indep}{1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right)\cr &=&{1\over4\pi^4}\Big\{ \left(C_2MM_B -C_3M_B^2\right) \Big[\pi^2\Big\{-3\langle d^\dagger d\rangle_N -\langle u^\dagger u\rangle_N \Big\} -\pi^2\langle\bar uu\rangle_N \Big]\cr & &\qquad +\left(C_1M-C_2M_B\right) \Big[\pi^2\Big\{ {1\over8}\left\langle{\alpha_s\over\pi}G^{\alpha\beta}G_{\alpha\beta} \right\rangle_N -{1\over6}\left\langle{\alpha_s\over\pi}{\cal S} [G^{\rho}_{0}G_{\rho0}]\right\rangle_N\cr & &\qquad +{16\over3}i\langle{\cal S}[\bar d\gamma_0 D_0 d]\rangle_N +{4\over3}i\langle{\cal S}[\bar u\gamma_0 D_0 u]\rangle_N\Big\}\Big]\cr & &\qquad +{M\over M_B^2}\Big[{8\over3}\pi^4\langle\bar dd \bar dd\rangle_N +{16\over3}\pi^4\langle\bar uu d^\dagger d\rangle_N\Big] \Big\} , \end{eqnarray} 
and \begin{eqnarray}\label{eq:SRdep} & &-\lambda^2 {4\pi\over M}\alpha^{dep}{1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right)\cr &=&{1\over4\pi^4}r_k\Big\{ \left(C_2MM_B-C_3M_B^2\right)\cr & &\qquad\times \Big[\pi^2\Big\{{5\over3}\langle\bar d\gamma_k\gamma_5 d\rangle_N -{1\over3}\langle\bar u\gamma_k\gamma_5 u\rangle_N\Big\} -i{1\over3}\pi^2\langle\bar u\gamma_5\sigma_{k0} u\rangle_N \Big]\cr & &\qquad +\left(C_1M-C_2M_B\right)\cr & &\qquad\times \Big[\pi^2\Big\{-{8\over3}\langle\bar d\gamma_k\gamma_5 iD_0 d\rangle_N +{4\over3}\langle\bar u\gamma_k\gamma_5 iD_0 u\rangle_N\Big\} \Big]\cr & &\qquad +{M\over M_B^2}\Big[ -{16\over3}\pi^4\langle\bar dd \bar d\gamma_5i\sigma_{k0} d\rangle_N -{16\over3}\pi^4\langle\bar uu \bar d\gamma_k\gamma_5 d\rangle_N \Big] \Big\} , \end{eqnarray} where \begin{eqnarray*} C_1 &=&1-{1\over2}\left[\exp\left(-{\omega_+^2\over M_B^2}\right) +\exp\left(-{\omega_-^2\over M_B^2}\right)\right],\cr C_2 &=&-{1\over2}\left[{\omega_+\over M_B}\exp\left(-{\omega_+^2\over M_B^2}\right) -{\omega_-\over M_B}\exp\left(-{\omega_-^2\over M_B^2}\right)\right] +{\sqrt{\pi}\over4}\left[\Phi\left({\omega_+\over M_B}\right) -\Phi\left({\omega_-\over M_B}\right)\right],\cr C_3 &=&1-{1\over2}\left[ \left(1+{\omega_+^2\over M_B^2}\right)\exp\left(-{\omega_+^2\over M_B^2}\right) +\left(1+{\omega_-^2\over M_B^2}\right)\exp\left(-{\omega_-^2\over M_B^2}\right)\right] , \end{eqnarray*} and $\Phi$ is the error function. 
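The closed forms of $C_1$, $C_2$ and $C_3$ are built from single-threshold factors of the type $1-e^{-\omega^2/M_B^2}$, $\frac{\sqrt{\pi}}{4}\Phi(\omega/M_B)-\frac{\omega}{2M_B}e^{-\omega^2/M_B^2}$ and $1-(1+\omega^2/M_B^2)e^{-\omega^2/M_B^2}$, which are standard Gaussian moment integrals. The identities below are our own consistency check of these closed forms (the integral representations are not taken from the text):

```python
import math

# Consistency check of the single-threshold building blocks that appear in
# C_1, C_2 and C_3, against Gaussian moment integrals evaluated numerically:
#   (2/MB^2) * int_0^w q   exp(-q^2/MB^2) dq = 1 - exp(-w^2/MB^2)
#   (1/MB^3) * int_0^w q^2 exp(-q^2/MB^2) dq
#                = (sqrt(pi)/4)*erf(w/MB) - (w/(2*MB))*exp(-w^2/MB^2)
#   (2/MB^4) * int_0^w q^3 exp(-q^2/MB^2) dq = 1 - (1 + w^2/MB^2)*exp(-w^2/MB^2)

def simpson(f, a, b, n=2000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

MB, w = 1.1, 1.4                # illustrative values, not from the text
x = w / MB
moment = lambda p: simpson(lambda q: q**p * math.exp(-q * q / MB**2), 0.0, w)

assert abs(2 / MB**2 * moment(1) - (1 - math.exp(-x * x))) < 1e-9
assert abs(1 / MB**3 * moment(2)
           - (math.sqrt(math.pi) / 4 * math.erf(x) - x / 2 * math.exp(-x * x))) < 1e-9
assert abs(2 / MB**4 * moment(3) - (1 - (1 + x * x) * math.exp(-x * x))) < 1e-9
print("C_i building blocks verified")
```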
From the QCD sum rule for the nucleon in the vacuum in Ref.~\cite{rf:Ioffe}, the normalization constant $\lambda^2$ is related to the vacuum expectation values of the operators according to \begin{eqnarray*} & &\lambda^2 {1\over M_B^2}\exp\left(-{M^2\over M_B^2}\right)\cr &=&{1\over4\pi^4}\Big[ - D_2M_B^4{1\over8} - D_1\Big\{ {\pi^2\over8} \left\langle{\alpha_s\over\pi}G^{\alpha\beta}G_{\alpha\beta}\right\rangle_0 \Big\} - {1\over M_B^2}\Big\{\pi^4{8\over3}\langle\bar dd\bar dd\rangle_0\Big\}\Big] , \end{eqnarray*} where \begin{eqnarray*} D_1&=&1-\exp\left(-{\omega_0^2\over M_B^2}\right),\cr D_2&=&1-\left(1+{\omega_0^2\over M_B^2}+{1\over2}{\omega_0^4\over M_B^4}\right) \exp\left(-{\omega_0^2\over M_B^2}\right). \end{eqnarray*} The sum rule for the spin-independent part of the interaction strength, $\alpha^{indep}$, is nothing but the sum rule for the $NN$ scattering length in Ref.~\cite{rf:KM}, which was obtained starting from the spin-averaged correlation function. The difference, however, is that the scattering length in Ref.~\cite{rf:KM} is replaced by the interaction strength in Eq.~(\ref{eq:SRindep}). Equation~(\ref{eq:SRdep}) is the new sum rule for the spin-dependent part of the interaction strength, $\alpha^{dep}$. The new sum rule provides us with a relation between the spin-dependent $NN$ interaction strength and the nucleon matrix elements of the spin-dependent quark-gluon operators. Therefore, if one knows the nucleon matrix elements of the spin-dependent quark-gluon operators, one can predict the spin-dependent $NN$ interaction strength, and vice versa. In this paper we investigate the first possibility. We now discuss the nucleon matrix elements of the quark-gluon operators. Dimension-three operators are quark operators, $\bar q q$, $\bar q\gamma_\mu q$, $\bar q\gamma_\mu\gamma_5 q$ and $\bar q\gamma_5\sigma_{\mu\nu} q$.
The nucleon matrix elements, $\langle\bar q q\rangle_N$ and $\langle\bar q\gamma_\mu q\rangle_N$, are spin-independent and have already been discussed in Ref.~7), while $\langle\bar q\gamma_\mu\gamma_5 q\rangle_N$ and $\langle\bar q\gamma_5\sigma_{\mu\nu} q\rangle_N$ are spin-dependent and are written in terms of the axial charge, $\Delta q$, and the tensor charge, $\delta q$, as \begin{eqnarray*} &&\langle\bar q\gamma_\mu\gamma_5 q\rangle_N=\Delta q s_\mu ,\cr &&\langle\bar qi\gamma_5\sigma_{\mu\nu}q\rangle_N =\delta q(s_\mu \hat p_\nu-s_\nu \hat p_\mu)/\hat p_0 . \end{eqnarray*} The axial charge and the tensor charge are related to the structure functions~\cite{rf:JJ} as \begin{eqnarray*} &&\Delta q=\int_0^1dx[g_1(x)+\bar g_1(x)],\cr &&\delta q=\int_0^1dx[h_1(x)-\bar h_1(x)], \end{eqnarray*} where $g_1$ is the longitudinal quark-spin distribution in the nucleon, $\bar g_1$ is the antiquark-spin distribution, $h_1$ is the quark-transversity distribution, and $\bar h_1$ is the antiquark-transversity distribution. $h_1$, together with the unpolarized quark distribution $f_1$ and the longitudinal spin distribution $g_1$, forms a complete set of twist-2 structure functions. Recently $\Delta q$ has received much attention in connection with the spin content of the nucleon. While a naive quark model picture suggests that the nucleon spin is carried by quarks, i.e. $\Delta u+\Delta d=1$, it was experimentally found that $\Delta u+\Delta d\ll1$.~\cite{rf:EMC} In Ref.~10), $\Delta q$ for the proton was determined from recent EMC/SMC and SLAC data with the SU(3) symmetry and the hyperon $\beta$ decay as \begin{eqnarray*} &&\Delta u = 0.83 \pm 0.03,\cr &&\Delta d = -0.43 \pm 0.03, \end{eqnarray*} at the renormalization scale $Q^2=10\ {\rm GeV}^2$. There is also a lattice QCD calculation~\cite{rf:FKOU} with the result \begin{eqnarray*} &&\Delta u = 0.638 \pm 0.054,\cr &&\Delta d = -0.347 \pm 0.046 \end{eqnarray*} at $Q^2=2\ {\rm GeV}^2$.
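Since $\Delta q$ and $\delta q$ are first moments of measured distributions, they can be estimated from tabulated data by simple quadrature. The following sketch is illustrative only: the helper is ours, and any grid and integrand values fed to it would be hypothetical, not actual EMC/SMC or lattice numbers.

```python
def first_moment(xs, fs):
    """Trapezoidal estimate of int_0^1 f(x) dx from tabulated values.

    xs: increasing Bjorken-x grid points; fs: integrand values f(xs[i]),
    e.g. f = g1 + g1bar for Delta q, or f = h1 - h1bar for delta q.
    """
    return sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
```

The trapezoidal rule is exact for integrands that are linear between grid points, which is adequate at the level of accuracy considered here.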
On the other hand, up to now no experimental information is available for $\delta q$, since $h_1$ and $\bar h_1$ cannot be measured by deep inelastic scattering. These quantities can, however, be measured by Drell-Yan processes, which are planned in a future RHIC experiment. However, $\delta q$ for the proton has been calculated on the lattice~\cite{rf:ADHK} with the result \begin{eqnarray*} &&\delta u=0.839 \pm 0.060,\cr &&\delta d=-0.231 \pm 0.055 \end{eqnarray*} at $Q^2=2\ {\rm GeV}^2$. In this paper we use the lattice results both for $\Delta q$ and $\delta q$ and ignore the $Q^2$ evolution of these matrix elements between $Q^2=1\ {\rm GeV}^2$ and $2\ {\rm GeV}^2$. Dimension-four operators are gluon operators, ${\alpha_s\over\pi}G^{\mu\nu}G_{\mu\nu}$, ${\alpha_s\over\pi}{\cal S}[G^{\rho}_{\mu}G_{\rho\nu}]$, and quark operators, ${\cal S}[\bar q\gamma_\mu iD_\nu q]$, $\bar q{\cal S}(\gamma_\mu iD_\nu)\gamma_5 q$. The nucleon matrix elements of the gluon operators are spin-independent and have already been discussed in Refs.~7) and~13). The matrix element of the quark operator, $\langle{\cal S}[\bar q\gamma_\mu iD_\nu q]\rangle_N$, is also spin-independent and is related to the unpolarized quark distribution $f_1$,~\cite{rf:HL} while $\langle\bar q{\cal S}(\gamma_\mu iD_\nu)\gamma_5 q\rangle_N$ is spin-dependent and \begin{eqnarray*} \langle\bar q{\cal S}(\gamma_\mu iD_\nu)\gamma_5 q\rangle_N &=& a_1s_{\{\mu} \hat p_{\nu\}}. \end{eqnarray*} By neglecting the operator including the quark mass, $a_1$ can be related to the $x$-weighted moment of the structure function $g_2$, and it is given at tree level by~\cite{rf:KYU} \begin{eqnarray*} a_1=-2\int_0^1dxxg_2(x).
\end{eqnarray*} Very recently, measurements of $g_2$ for the proton and the neutron have begun.~\cite{rf:g2p}\cite{rf:g2n} We have calculated $a_1$ for the proton using the data for $g_2$ in Ref.~16) over the range $0.075<x<0.8$ and $1.3<Q^2<10\,({\rm GeV}/c)^2$ for the proton and those in Ref.~17) over the range $0.06<x<0.70$ and $1.0<Q^2<17.0\,({\rm GeV}/c)^2$ for the neutron, with the results \begin{eqnarray*} a^u_1 = 0.05 \pm 0.04,\qquad a^d_1 =-0.08 \pm 0.13. \end{eqnarray*} In addition to the above dimension-three and dimension-four operators, we take into account the dimension-six four-quark operators, $\bar qq\bar qq$, $\bar qq\bar q\gamma_\mu q$, $\bar qq\bar q\gamma_\mu\gamma_5 q$ and $\bar qq\bar q\gamma_5\sigma_{\mu\nu} q$, since four-quark operators are known to give the largest contribution among higher-order operators in the QCD sum rule for the nucleon in the vacuum.~\cite{rf:RRY}\cite{rf:HL} In the vacuum, the matrix elements of the four-quark operators are estimated by the factorization hypothesis,~\cite{rf:SVZ}\cite{rf:RRY} i.e. it is assumed that the vacuum contribution dominates in the intermediate states: $\langle{\cal O}_1{\cal O}_2\rangle_0\approx\langle{\cal O}_1\rangle_0\langle{\cal O}_2\rangle_0$. Similarly, for the nucleon matrix element, we assume that the contribution from the one-nucleon state dominates in the intermediate states: \begin{eqnarray*} \langle{\cal O}_1{\cal O}_2\rangle_N &\equiv&\langle N|{\cal O}_1{\cal O}_2|N \rangle-\langle N|N \rangle\langle{\cal O}_1{\cal O}_2\rangle_0\cr &\approx&{\langle N|{\cal O}_1|N \rangle\langle N|{\cal O}_2|N \rangle \over\langle N|N \rangle} -\langle{\cal O}_1\rangle_0\langle{\cal O}_2\rangle_0\langle N|N \rangle\cr &=&\langle{\cal O}_1\rangle_N\langle{\cal O}_2\rangle_0 +\langle{\cal O}_1\rangle_0\langle{\cal O}_2\rangle_N .
\end{eqnarray*} Thus we assume $\langle\bar qq\bar qq\rangle_N=2\langle\bar qq\rangle_0\langle\bar qq\rangle_N$, $\langle\bar qq\bar q\gamma_\mu q\rangle_N=\langle\bar qq\rangle_0\langle\bar q\gamma_\mu q\rangle_N$, $\langle\bar qq\bar q\gamma_\mu\gamma_5 q\rangle_N=\langle\bar qq\rangle_0\langle\bar q\gamma_\mu\gamma_5 q\rangle_N$ and $\langle\bar qq\bar q\gamma_5\sigma_{\mu\nu} q\rangle_N=\langle\bar qq\rangle_0\langle\bar q\gamma_5\sigma_{\mu\nu} q\rangle_N$. For completeness we list here the values which were used in the calculation. The spin-independent proton matrix elements of the operators are \begin{eqnarray*} &&\langle u^\dagger u\rangle_p=2,\qquad\langle d^\dagger d\rangle_p=1,\qquad \langle\bar u u\rangle_p=3.46,\qquad \langle\bar d d\rangle_p=2.96, \cr &&i\langle{\cal S}[\bar u\gamma_\mu D_\nu u]\rangle_p=222\;{\rm MeV},\qquad i\langle{\cal S}[\bar d\gamma_\mu D_\nu d]\rangle_p=95\;{\rm MeV},\cr &&\langle{\alpha_s\over\pi}G_{\mu\nu}G^{\mu\nu}\rangle_p =-738 \;{\rm MeV},\qquad \langle{\alpha_s\over\pi}{\cal S}[G_{\mu 0}G^{\mu 0}]\rangle_p =-50 \;{\rm MeV}. \end{eqnarray*} The condensates of the operators in the vacuum are $\langle\bar uu\rangle_0 =\langle\bar dd\rangle_0 = -(250\;{\rm MeV})^3$ and $\langle{\alpha_s\over\pi}G^2\rangle_0 = (330\;{\rm MeV})^4$.~\cite{rf:RRY} In the QCD sum rule for the nucleon in the vacuum, $\omega_0$ is determined to be $2.2\ {\rm GeV}$ by Borel stability analysis. $\omega_+$ and $\omega_-$ are determined in the same way: we search for the values of $\omega_+$ and $\omega_-$ for which the calculated strength $\alpha$ has the most stable plateau as a function of the Borel mass $M_B$ in both the triplet and singlet channels. We find that the optimum choice is $\omega_+\approx 1.3\ {\rm GeV}$ and $\omega_-\approx 1.3\ {\rm GeV}$. Figures~1 and 2 display how sensitive the Borel stability is to $\omega_+$ and $\omega_-$. Figure~1 displays $\alpha$ vs.
the Borel mass for $\omega_+ = 1.3\ {\rm GeV}$ and $\omega_- =1.2\ {\rm GeV}, 1.3\ {\rm GeV}$ and $1.4\ {\rm GeV}$. Figure~2 displays the same for $\omega_- = 1.3\ {\rm GeV}$ and $\omega_+ =1.1\ {\rm GeV}, 1.3\ {\rm GeV}$ and $1.5\ {\rm GeV}$. One sees that the Borel stability is much more sensitive to $\omega_-$ than to $\omega_+$. This is because the $\omega_-$ dependence of the coefficients of the dimension-three and dimension-four operators is much stronger than the $\omega_+$ dependence. \begin{figure} \begin{center} \leavevmode \psfig{figure=fig1.ps,angle=90,width=5.2in} \end{center} \caption{The calculated strength $\alpha$ as a function of the Borel mass squared, $M_B^2$. The solid lines represent the spin-triplet channel and the dotted lines represent the spin-singlet channel, with $\omega_+$ fixed to $1.3\ {\rm GeV}$. The top, middle and bottom lines are for $\omega_-=1.2\ {\rm GeV}$, $\omega_-=1.3\ {\rm GeV}$ and $\omega_-=1.4\ {\rm GeV}$, respectively.} \label{fig:1} \end{figure} \begin{figure} \begin{center} \leavevmode \psfig{figure=fig2.ps,angle=90,width=5.2in} \end{center} \caption{The calculated strength $\alpha$ as a function of the Borel mass squared, $M_B^2$. The solid lines correspond to the spin-triplet channel and the dotted lines correspond to the spin-singlet channel, with $\omega_-$ fixed to $1.3\ {\rm GeV}$. The top, middle and bottom lines are for $\omega_+=1.1\ {\rm GeV}$, $\omega_+=1.3\ {\rm GeV}$ and $\omega_+=1.5\ {\rm GeV}$, respectively.} \label{fig:2} \end{figure} The strength $\alpha$ is determined by taking the values at the maximum for $\omega_+= 1.3\ {\rm GeV}$ and $\omega_-= 1.3\ {\rm GeV}$ as \begin{eqnarray*} \alpha_{pn}^{3}&=&1.7\;{\rm fm},\cr \alpha_{pn}^{1}&=&1.0\;{\rm fm}.\cr \end{eqnarray*} We see that the spin-dependent part is considerably smaller than the spin-independent part. This is due to the small matrix elements of the spin-dependent operators.
From Figs.~1 and 2 we should expect errors of about 30\% due to the choice of $\omega_+$ and $\omega_-$. The errors due to the uncertainties of the nucleon matrix elements of the dimension-three and dimension-four operators are about 10\% and 20\%, respectively. Combining all the errors together, the above results have errors of approximately 40\%. Let us compare the above results with experimental facts. Experimentally, the scattering length and the effective range have been found to be \begin{eqnarray*} a_{pn}^{3}&=&-5.39\;{\rm fm},\qquad r_{pn}^{3}=1.75\;{\rm fm},\cr a_{pn}^{1}&=&23.7\;{\rm fm},\qquad r_{pn}^{1}=2.73\;{\rm fm}.\cr \end{eqnarray*} In the spin-singlet channel, the scattering length is so large that the strength, $\alpha_{pn}^{1}$, is expected to be approximated well by ${r_{pn}^{1}}/2$, \begin{eqnarray*} \alpha_{pn}^{1} \approx {r_{pn}^{1}}/2 = 1.37\;{\rm fm} , \end{eqnarray*} which is in rather good agreement with the calculated result. In the spin-triplet channel, the scattering length is not small. However, it is not as large as in the spin-singlet channel. Therefore, it is not easy to estimate the strength $\alpha_{pn}^{3}$ from experimental observables. However, there is a loosely bound state, the deuteron, in the spin-triplet channel, while there is an almost-bound state in the spin-singlet channel. This implies that the interaction in the spin-triplet channel is stronger (though not much stronger) than that in the spin-singlet channel. This tendency is consistent with the calculated results. \section{Summary and outlook} In this paper we have studied spin-dependent nucleon-nucleon ($NN$) interactions in the QCD sum rule approach. The basic object of our study is the spin-dependent nucleon correlation function, whose matrix element is taken with respect to the one-nucleon state. The dispersion integral of the correlation function around the nucleon threshold has been investigated in detail.
The integral is given by the sum of the threshold contribution, which is due to the second-order pole term proportional to the scattering length, the continuum contribution, and the bound state contribution. When the interaction is weak, the integral is dominated by the threshold contribution and is proportional to the scattering length. As the interaction becomes stronger, the continuum contribution also becomes important. When the interaction is just strong enough to form a bound state, both the threshold contribution and the continuum contribution diverge, but their sum is finite. Thus the sum of these two contributions is given by the same form as in the case of the weak interaction. However, in this case the scattering length is replaced by one half of the effective range. Based on this observation we have defined the $NN$ interaction strength through the dispersion integral around the nucleon threshold. In the OPE of the correlation function, new operators, such as $\bar q\gamma_\mu\gamma_5q$ and $\bar q\gamma_5\sigma_{\mu\nu}q$, have to be taken into account. These operators do not vanish when the matrix element is taken with respect to the spin-nonaveraged one-nucleon state. We have calculated the Wilson coefficients of such operators. The obtained sum rules relate the spin-dependent $NN$ interaction strengths to the spin-dependent nucleon matrix elements of the quark-gluon composite operators. The spin-dependent nucleon matrix elements such as $\langle\bar q\gamma_\mu\gamma_5 q\rangle_N$ and $\langle\bar q\gamma_5\sigma_{\mu\nu} q\rangle_N$ are related to the spin-dependent structure functions of the nucleon, $g_1$, $g_2$ and $h_1$. We found that the interaction strength in the spin-singlet channel is weaker than in the spin-triplet channel, but that the spin-dependent part of the interaction strength is considerably smaller than the spin-independent part.
Experimentally, it has been found that there is a loosely bound state, the deuteron, in the spin-triplet channel, while there is an almost-bound state in the spin-singlet channel, which implies that the interaction is slightly stronger in the spin-triplet channel than in the spin-singlet channel. This seems to be consistent with the sum rule result. In the spin-singlet channel, the scattering length is so large that the interaction strength can be estimated by using the observed effective range, while in the spin-triplet channel the absolute value of the interaction strength is difficult to obtain from observables. The empirical interaction strength thus obtained in the spin-singlet channel agrees rather well with the sum rule calculation. The method used in the present paper can be extended to other hadron-nucleon channels. In particular, it is straightforward to apply it to the hyperon-nucleon channels. The obtained sum rules would relate the hyperon-nucleon interaction strengths to the nucleon matrix elements of the quark-gluon operators which include strange quark operators in addition to up and down quark operators. Also, the present sum rules can be used in the opposite way. Namely, if one knows detailed information on various spin-dependent hadron-nucleon interaction strengths, one can obtain spin-dependent matrix elements of quark-gluon operators with respect to the one-nucleon state. This provides us with information such as the spin content of the nucleon. \vskip 14pt \begin{center} {\bf Acknowledgements} \end{center} \vskip 14pt We would like to thank Professor K.~Yazaki for valuable discussions and Professor M.~Oka for bringing our attention to the spin-dependent correlation function. We are also grateful to Professor H. Terazawa for a careful reading of the manuscript.
\section{Introduction} We have witnessed tremendous progress in natural language generation (NLG) with large-scale Transformer~\cite{transformer} based language models, such as GPT-2~\cite{gpt-2}. A natural question to raise beyond language modeling is, how can we have more fine-grained control over these powerful models? Specifically, we would like a language model to generate text that centers on some user-defined conditions\footnote{Interchangeable with \textit{attributes}}. We call this kind of language model a \textit{conditioned} NLG. One could easily imagine the commercial benefits of employing a conditioned NLG in everyday products, such as search. To obtain a conditioned NLG, a naive approach would be to reformulate the original language modeling objective and train the whole model from scratch~\cite{CTRL}. However, doing so requires large amounts of condition-labeled text, which are not available in most applications. Recently, PPLM~\cite{pplm} proposed to take advantage of the pretrained GPT-2 without any further retraining of the language model itself. This is done by having another pretrained `guidance model', such as a Bag of Words (BoW) model or an attribute classifier (AC), guide the latent state of GPT-2 toward generating more relevant tokens. Witnessing the upside of this approach, we further explored \textit{simple}, \textit{flexible}, and \textit{effective} approaches to conditioned NLG that encompass the following desired qualities: \begin{itemize} \item Simple: \textit{only} an unconditioned language model is needed. No additional training data~\cite{CTRL} or additional pretrained `guidance models'~\cite{pplm} are required. \item Flexible: the model should be able to handle any combination of conditions with any weighting. This is not the case in previous work. \item Effective: the generated text is fluent and highly relevant to the given attribute, by both objective and human evaluation. \end{itemize} We propose four methods in total.
Three of them model $p(x|c)$ directly by modifying token embeddings or hidden states. The fourth models $p(x) p(c|x)$ by considering the next-token distribution. We combine these four methods into one and compare our results with PPLM~\citep{pplm}. Some examples of the text generated by our proposed method are shown in Table~\ref{tab:teaser}. \begin{table}[!ht] \begin{center} \footnotesize \begin{tabularx}{\linewidth}{L} \hline \knob{} \prefix{To conclude}, To conclude, we have a very strong case for the hypothesis that the human body was originally made from an alien substance. There is nothing in biology or physics that would have predicted this. It's an obvious hypothesis, but if one is to believe the \ldots \\ \hline \knob{Negative} \prefix{To conclude} this article with something about the role of the media in spreading the narrative of the \rw{mass shooting}, the writer has a few words about the media's role in spreading a \rw{false} news article that was originally published on Fox \ldots \\ \hline \knob{Positive} \prefix{To conclude} To conclude this, let the following be the statement of his \rw{majesty}, as to the power of his \rw{Majesty} in his own name, his own body and his own blood, in all the places where the said act was made \ldots \\ \hline \knob{Positive, Science} \prefix{To conclude}, the following article is a \rw{positive} review of the book by a \rw{respected scientific} and medical reviewer in the same journal entitled, The American Medical Association: A Review of Recent Medical Research on the Prevention of and \ldots \\ \hline \knob{Negative, Science} \prefix{To conclude}: If the majority of \rw{scientific} papers are either \rw{negative} and \rw{false} or have a \rw{weak statistical evidence} for their own status, then these papers are not of sufficient quality for a high quality journal in the \rw{scientific} literature \ldots \\ \hline \knob{Positive, Science, Military} \prefix{To conclude} this article, the US \rw{Navy}
needs some of the \rw{best military research} in space and \rw{aerospace}; its \rw{first class} of space flight \rw{research ship} has a high capability for launching large cargo spacecraft to the outer \ldots \\ \hline \knob{Negative, Science, Military} \prefix{To conclude}, the above is the main reason why the US \rw{military} has \rw{no} credibility for using the \rw{nuclear weapon} and the only alternative is \rw{nuclear} deterrence and \rw{nuclear} destruction in its own name – not for the other \ldots \\ \end{tabularx} \end{center} \caption{Our methods employ a pre-trained language model to generate text conditioned on any number of attributes without fine-tuning or human knowledge. In this table, we demonstrate results using our methods. The underlined prefix is what the language model is given to generate a passage of text (e.g. \prefix{To conclude}). The controlled attributes are colored and bracketed (e.g. \knob{Science}) and we highlight words in the passage (e.g. \rw{false}) that are related to the attributes. }\label{tab:teaser} \end{table} \section{Related Work} All previous methods require knowledge about the attributes. In~\citep{human-pref}, human knowledge about the attribute is provided to train a reward model, which is used to train an NLG model by reinforcement learning. On the other hand, \citep{CTRL} and \citep{control-style} fine-tuned NLG models on additional datasets with attribute labels. These methods usually create highly relevant sentences, but have the limitation of fixing the available attributes in advance. In addition, PPLM-Discrim in \citep{pplm} also requires attribute labels, but it only has to train a much smaller discriminator model. Finally, PPLM-BoW in \citep{pplm} has to first construct a curated word list, i.e. a list of words that are highly related to a given attribute. Although no fine-tuning is needed, obtaining a word list for an arbitrary attribute is definitely not a trivial task.
Although some previous methods~\citep{CTRL, pplm} do generate high-quality results, they still have major limitations compared to our methods. A comparison of the different methods is shown in Table.~\ref{model_comparison}. \begin{table*}[!t] \footnotesize \begin{tabular}{r|c|c|l} \textbf{Model type} & \textbf{Form of model} & \textbf{Samples} & \textbf{Example models and number of trainable params} \\ \hline Language model & \multirow{2}{2em}{$p(x)$} & \multirow{2}{3.5em}{Uncond.} & \multirow{2}{12em}{GPT-2 medium: 345M} \\ \citep{gpt-2} & & & \\ \hline Fine-tuned language model & \multirow{2}{2em}{$p(x)$} & \multirow{2}{3.5em}{Uncond.} & \multirow{2}{15em}{Fine-tuned GPT-2 medium: 345M} \\ \citep{human-pref} & & & \\ \hline Conditional language model & \multirow{2}{2.8em}{$p(x|c)$} & \multirow{2}{2.5em}{Cond.} & \multirow{2}{8em}{CTRL: 1.6B} \\ \citep{CTRL} & & & \\ \hline Plug and play language model & \multirow{2}{8.8em}{$p(x|c) \propto p(x)p(c|x)$} & \multirow{2}{2.5em}{Cond.} & PPLM-BoW: 0 (needs curated word list) \\ \citep{pplm} & & & PPLM-Discrim: $\sim$ 1K/attribute \\ \hline \multirow{2}{10em}{Our approaches} & $p(x|c)$ & \multirow{2}{2.5em}{Cond.} & Our-prefix: 0, Our-embedding: 0, Our-attention: 0\\ \cline{2-2} \cline{4-4} & $p(x|c) \propto p(x)p(c|x)$ & & Our-next-token: 0 \\ \end{tabular} \caption{Comparison of different methods for NLG, both unconditioned and conditioned. All conditioned methods are based on unconditioned models, but our methods require neither fine-tuning nor a curated word list.} \label{model_comparison} \end{table*} \section{Our Approaches} We want to find approaches to model conditioned generation $p_{cg}(x_{t+1}|c, x_1, \dots, x_t)$ by using only a pre-trained language model $p_{lm}(x_{t+1}|x_1, \dots, x_t)$, where $x_i$ are the words and $c$ is the condition. If there are $n$ conditions, then $c = \{c_1, \dots, c_n\}$. Here, we describe our four methods to solve this problem.
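All four methods plug into an ordinary autoregressive top-$K$ sampling loop and differ only in how the next-token distribution is produced. The following scaffolding is a sketch with hypothetical names (in particular, \texttt{step\_fn} stands in for a forward pass of the language model), not our actual implementation:

```python
import random

def generate(step_fn, prefix_ids, length, k=12, seed=0):
    """Generic autoregressive top-K sampling loop.

    step_fn(ids) -> dict mapping token id -> probability of the next token;
    each conditioning method changes only how this distribution is built.
    """
    rng = random.Random(seed)
    ids = list(prefix_ids)
    for _ in range(length):
        probs = step_fn(ids)
        # keep the K most probable candidates and renormalize
        top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
        z = sum(p for _, p in top)
        r, acc = rng.random() * z, 0.0
        for tok, p in top:
            acc += p
            if r <= acc:
                ids.append(tok)
                break
    return ids
```

With $K=1$ this reduces to greedy decoding; larger $K$ trades determinism for diversity.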
\subsection{Our-prefix: Conditional prefix} The first approach is the simplest one. We feed a conditional sentence into GPT-2 before it generates the conditioned text. For example, if we want a positive sentence about politics, then the conditional sentence given to GPT-2 will be ``The following is a positive article about politics.'' Although very naive, we found that this method actually works. Empirically, we found that adding the word ``following'' greatly improves the coherence. In \citep{recover-any}, the authors show that a pre-trained language model can be steered to recover arbitrary sentences. Here, although $p_{cg}$ is definitely not close to $p_{lm}$, we think that prepending a well-designed prefix can make them closer. The prefix alters the hidden states of the unconditioned language model in order to steer it closer to a conditioned one. Formally, we assume that $p_{lm}(x_{t+1} | \text{``The following''} + c, x_1, \dots, x_t) \approx p_{cg}(x_{t+1} | c, x_1, \dots, x_t)$. However, the problem with this method is that the model will be influenced by the added sentence. For example, it will increase the probability of generating the word ``following'' or ``article'' subsequently. We tried two ways to fix this. First, we tried to disconnect the order relation of $x$ and $c$. We first feed $c$ into the language model and keep the key-value pairs of the self-attention. Then, during the conditioned generation, the language model starts the generation without the input of $c$ but self-attends on those key-value pairs. The counting of position indices is also restarted from 0. Unfortunately, this does not work out. The model is greatly disturbed by those redundant key-value pairs and thus generates unrecognizable language. Another straightforward way is to cut off the special prefix after a fixed number of generation steps. This fixes the issue in a brute-force manner. In \citep{pplm}, this method is also employed to avoid degeneration (i.e. the model keeps producing the same token). In this paper, we refer to this approach as `early stopping'. \subsection{Our-embedding: changing token embeddings} Given $n$ conditions, we can use the tokenizer to obtain the token index $t_i, \forall i \in [1, n]$, corresponding to the conditions $c = \{c_1, \dots, c_n\}$. Then, we add the token embedding of $t_i$ with weight $w_i$ to all token embeddings, for all $i \in [1, n]$. Finally, we re-normalize all embeddings by dividing by $1 + \sum_{i=1}^n w_i$. In the original GPT-2~\citep{gpt-2}, the input and output embeddings are tied. Here, we untie them and only apply this change to the input embedding. By doing so, every input embedding contains information about the conditions, and the transformation from token space to embedding space is guided toward the conditions. For example, from the viewpoint of the language model, the token ``military'' will gain positivity if the condition is ``positive'' and gain negativity if the condition is ``negative''. We do not change the output embedding because we conjecture that doing so would de-transform the conditioned embeddings. Notice that in this method, the user can decide the weight for each condition. This is the flexibility that we desire. \subsection{Our-attention: changing self-attention key-value pairs} Similar to the previous method, we change the self-attention key-value pairs in the language model by adding the key-value pairs of the condition token indices. To obtain the key-value pairs corresponding to a token index $t_i$ at time step $t$, we feed the single token $t_i$ with position index $t$ into the model. All key-value pairs are also re-normalized by dividing by $1 + \sum_{i=1}^n w_i$. To avoid degeneration, the weights are decreased in inverse proportion to the number of time steps. The idea of this method is similar to the previous one. The main difference is that this method considers different position indices.
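The embedding modification of Our-embedding (and, analogously, the key-value mixing of Our-attention) can be written down in a few lines. The sketch below uses plain Python lists for clarity; the helper is our own, and a real implementation would apply the same update to the rows of the GPT-2 input embedding matrix:

```python
def condition_embeddings(tok_embs, cond_embs, weights):
    """Mix condition-token embeddings into every input embedding.

    Each embedding e becomes (e + sum_i w_i * e_i) / (1 + sum_i w_i),
    where e_i are the embeddings of the condition tokens t_i and the
    weights w_i are chosen by the user.
    """
    z = 1.0 + sum(weights)
    dim = len(tok_embs[0])
    shift = [sum(w * c[j] for w, c in zip(weights, cond_embs))
             for j in range(dim)]
    return [[(e[j] + shift[j]) / z for j in range(dim)] for e in tok_embs]
```

An embedding that already equals the (single) condition embedding is left unchanged by this update, which is the intended fixed point.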
\subsection{Our-next-token: changing the output distribution by the next-token distribution} We have \begin{align} & p_{cg}(x_{t+1} | c, x_1, \dots, x_t) \\ = & \frac{p(x_{t+1}, c | x_1, \dots, x_t)}{p(c | x_1, \dots, x_t)} \\ = & \frac{p(c | x_1, \dots, x_{t+1}) p(x_{t+1} | x_1, \dots, x_t)}{p(c | x_1, \dots, x_t)}. \end{align} Notice that $c, x_1, \dots, x_t$ all have known assignments, so $p(c | x_1, \dots, x_t)$ is a constant. Also, $p(x_{t+1} | x_1, \dots, x_t)$ is essentially a language model. Thus, we have \begin{align} \begin{split} &p_{cg}(x_{t+1}|c, x_1, \dots, x_t) \\ &\propto p(c|x_1, \dots, x_{t+1}) p_{lm}(x_{t+1}|x_1, \dots, x_t). \end{split} \end{align} In PPLM, $p(c|x_1, \dots, x_{t+1})$ is approximated by either a separate BoW model or a linear classifier. In our approach, we use $p_{lm}(x_{t+2} | x_1, \dots, x_{t+1})$ to approximate $p(c|x_1, \dots, x_{t+1})$, that is: \begin{align} \begin{split} &p_{cg}(x_{t+1}|c, x_1, \dots, x_t) \\ &\appropto p_{lm}(x_{t+2}=c|x_1, \dots, x_{t+1}) p_{lm}(x_{t+1}|x_1, \dots, x_t). \label{eq:approx} \end{split} \end{align} In practice, we can add a weight $w$ to control the influence of the condition: \begin{align} \begin{split} &p_{cg}(x_{t+1}|c, x_1, \dots, x_t) \\ &\appropto p^w_{lm}(x_{t+2}=c|x_1, \dots, x_{t+1}) p_{lm}(x_{t+1}|x_1, \dots, x_t). \label{eq:weighted} \end{split} \end{align} If there are $n$ conditions, we take the weighted geometric mean: \begin{align} \begin{split} &p_{cg}(x_{t+1}|c, x_1, \dots, x_t) \\ &\appropto (\prod_{i=1}^n p^{w_i}_{lm}(x_{t+2}=c_i|x_1, \dots, x_{t+1}))^{\frac{1}{n}} \cdot \\ &p_{lm}(x_{t+1}|x_1, \dots, x_t). \label{eq:mulitple} \end{split} \end{align} In our implementation, we first use top-$K$ sampling to obtain $K$ next tokens $\{x^1_{t+1}, \dots, x^K_{t+1}\}$ and the next-token distribution. Then, we feed the $K$ new sequences $x_1, \dots, x_t + \{x^1_{t+1}, \dots, x^K_{t+1}\}$ into the model to obtain $p_{lm}(x_{t+2}|x_1, \dots, x_t, x^k_{t+1}), \forall k \in \{1, \dots, K\}$.
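Concretely, Eq.~(\ref{eq:weighted}) amounts to reweighting the top-$K$ candidate probabilities by the model's own estimate of how likely the condition token is to follow each candidate. A minimal sketch (dictionary-based, with names of our own choosing) is:

```python
def next_token_scores(p_next, p_cond_given, weight):
    """Combine p_lm(x_{t+1}) with p_lm(x_{t+2}=c | x_{t+1})^w.

    p_next: token -> p_lm(x_{t+1}=token | history), over top-K candidates;
    p_cond_given: token -> p_lm(x_{t+2}=c | history + token);
    weight: the exponent w controlling the condition's influence.
    Returns a renormalized distribution to sample the next token from.
    """
    scores = {tok: (p_cond_given[tok] ** weight) * p
              for tok, p in p_next.items()}
    z = sum(scores.values())
    return {tok: s / z for tok, s in scores.items()}
```

With several conditions, the first factor would be replaced by the weighted geometric mean of Eq.~(\ref{eq:mulitple}); with $w=0$ the language model's original distribution is recovered.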
Next, we single out the probabilities corresponding to $x_{t+2} = c_i$. Finally, we multiply the two probabilities together with weight $w_i$ as in Equation~\ref{eq:weighted} to multinomially sample the next token. \section{Experiments} \subsection{Experimental Setup} \paragraph{GPT-2 Language Model.} Our language model is based on GPT-2, similar to that in~\cite{pplm}. We borrowed pretrained GPT-2 and PPLM models and their implementations from HuggingFace~\footnote{\href{https://huggingface.co/}{https://huggingface.co/}}. In their implementation, the GPT-2 model is GPT-2 medium. \paragraph{Hyperparameters.} The hyperparameters used in this work are detailed in Table~\ref{tab:hyperparams}. \begin{table} \begin{center} \begin{tabularx}{\linewidth}{@{}l|L@{}} \hline Method & Hyperparameters \\ \hline Ours & K=12, embed-weights=0.04, attention-weights=0.02, condition-weights=0.20, early-stopping=3 \\ \hline PPLM-BoW & gamma=1.5, num-iterations=3, stepsize=0.03, window-length=5, kl-scale 0.01, gm-scale 0.99\\ \hline PPLM-Discrim & gamma=1.0 num-iterations=10 stepsize=0.04 kl-scale=0.01 gm-scale=0.95\\ \hline \end{tabularx} \end{center} \caption{The full set of hyperparameters used in our work. Note that we did not perform any hyperparameter tuning.}\label{tab:hyperparams} \end{table} Due to time limits, we did not perform a hyperparameter sweep for our model. As described in the Appendix of~\cite{pplm}, careful hyperparameter search is vital for its generation quality, and we would expect our approach to work much better with hyperparameter tuning. We directly used the hyperparameters for PPLM specified in their GitHub repo~\footnote{\href{https://github.com/uber-research/PPLM}{https://github.com/uber-research/PPLM}}. \paragraph{Automated Evaluation.} We evaluated the generated text by its fluency (perplexity) and diversity (Dist-1, Dist-2, Dist-3), as in~\cite{pplm}.
In our implementation, perplexity is measured by a separately pre-trained language model (GPT-2 small); diversity is measured by the percentage of unique $n$-grams (1-, 2-, and 3-grams)~\cite{li2015diversity}. \paragraph{External Sentiment Classifier.} For our sentiment modeling experiments, we adopted a tokenizer, word2vec embeddings, and a sentiment classifier pre-trained on Twitter data~\footnote{\href{https://www.kaggle.com/paoloripamonti/twitter-sentiment-analysis}{https://www.kaggle.com/paoloripamonti/twitter-sentiment-analysis}} to gauge the effectiveness of our model. The sentiment classifier is a single-layer LSTM. \paragraph{Human Evaluation.} We conducted a small-scale human evaluation by asking the annotators to rate the text by its fluency and topic relevance, both on a scale of 1-5, with 1 being `not readable' and 5 being `very fluent'. At the time of submission, we had received a total of 12 annotations. For our human evaluation, we consider the following conditions and prefix: \begin{itemize} \item Condition: \{Military, Religion, Politics, Science, Legal, Space, Technology, Negative, Positive\} \item Prefix: \{`To conclude'\} \end{itemize} For each condition-prefix pair, we randomly generated 10 sentences (each of 60 tokens) and picked out 3 reasonable sentences (i.e., without the \textit{degeneration} issue). We did this for every pair and for both our method and PPLM, ending up with 54 sentences. Unlike in~\cite{pplm}, where A/B testing is conducted as part of its ablation study, we did not have enough time and resources to generate statistically significant text pairs. \paragraph{Dataset.} Given that our proposed approach \textit{does not} require any further fine-tuning, we do not need any additional corpus for conditioned NLG. \subsection{Single Condition Modeling} We generated and evaluated samples based on a single condition.
Similar to~\cite{pplm}, we consider the following conditions and prefixes: \begin{itemize} \item Condition: \{Military, Religion, Politics, Science, Legal, Space, Technology\} \item Prefix: \{`the chicken', `the house', `the potato', `the lake', `the pizza'\} \end{itemize} Table~\ref{tab:odd-combination} contains a few cherry-picked samples generated by our approach. The results of human evaluation are shown in Table~\ref{tab:single_condition}. From the table, we observe the classic perplexity--diversity trade-off seen in dialog research: although our perplexity is higher than PPLM's, we achieve higher diversity scores. Focusing on the human evaluation columns, we can see that our approach lags only slightly behind PPLM in both attribute relevance and fluency, and this is without any hyperparameter search. This suggests not only that our approach is effective, but also that it is possible to generate conditioned natural language using only an unconditioned language model. \begin{table*}[] \resizebox{\textwidth}{!}{ \centering \begin{tabular}{@{}l | c | c | c c c c | c c c@{}} \toprule Topic & Method & Attribute relevance \% ($\uparrow$ better) & Perplexity & Dist-1 & Dist-2 & Dist-3 & Fluency ($\uparrow$ better) \\ & & (human) &($\downarrow$ better) & ($\uparrow$ better) & ($\uparrow$ better) & ($\uparrow$ better) & (human) \\ \midrule \multirow{3}{*}{Military} & Ours & - & 22.954 & 0.610 & 0.896 & 0.968 & - \\ & Ours (w/ the following) & \textbf{4.167} & 22.797 & 0.597 & 0.883 & 0.964 & \textbf{3.81} \\ & PPLM-BOW & 2.694 & 12.302 & 0.65 & 0.876 & 0.9192 & 3.472 \\ \hline \multirow{3}{*}{Religion} & Ours & - & 21.227 & 0.573 & 0.869 & 0.957 & - \\ & Ours (w/ the following) & 1.472 & 20.184 & 0.552 & 0.845 & 0.941 & 3.111 \\ & PPLM-BOW & \textbf{1.611} & 12.204 & 0.533 & 0.725 & 0.780 & \textbf{3.583} \\ \hline \multirow{3}{*}{Politics} & Ours & - & 21.679 & 0.581 & 0.866 & 0.949 & - \\ & Ours (w/ the following) & 3.250 & 20.055 & 0.555 & 0.844 & 0.940 & 3.139 \\ &
PPLM-BOW & \textbf{3.278} & 12.524 & 0.660 & 0.891 & 0.935 & \textbf{3.611} \\ \hline \multirow{3}{*}{Science} & Ours & - & 22.645 & 0.596 & 0.887 & 0.967 & -\\ & Ours (w/ the following) & 2.806 & 21.643 & 0.582 & 0.874 & 0.958 & 3.472 \\ & PPLM-BOW & \textbf{4.028} & 13.508 & 0.640 & 0.873 & 0.92 & \textbf{3.778} \\ \hline \multirow{3}{*}{Legal} & Ours & - & 22.457 & 0.598 & 0.891 & 0.968 & - \\ & Ours (w/ the following) & 3.278 & 21.397 & 0.579 & 0.868 & 0.956 & 3.694 \\ & PPLM-BOW & \textbf{3.528} & 12.401 & 0.662 & 0.888 & 0.930 & \textbf{4.028} \\ \hline \multirow{3}{*}{Space} & Ours & - & 22.529 & 0.582 & 0.881 & 0.965 & - \\ & Ours (w/ the following) & 2.333 & 21.053 & 0.571 & 0.859 & 0.952 & 3.583 \\ & PPLM-BOW & \textbf{3.167} & 12.101 & 0.540 & 0.728 & 0.770 & \textbf{3.639} \\ \hline \multirow{3}{*}{Technology} & Ours & - & 23.303 & 0.596 & 0.887 & 0.967 & - \\ & Ours (w/ the following) & 2.861 & 23.507 & 0.578 & 0.871 & 0.957 & 3.250 \\ & PPLM-BOW & \textbf{3.194} & 12.489 & 0.61 & 0.820 & 0.860 & \textbf{3.750} \\ \hhline{========} \multirow{3}{*}{Average} & Ours & - & 22.399 & 0.591 & 0.882 & 0.963 & - \\ & Ours (w/ the following) & 2.881 & 21.520 & 0.573 & 0.863 & 0.953 & 3.437 \\ & PPLM-BOW & \textbf{3.071} & 12.504 & 0.614 & 0.829 & 0.874 & \textbf{3.694} \\ \hline \end{tabular} } \caption{Single Condition Modeling: automated and human evaluation results of our approach and PPLM-BOW. The conditional prefix we used here is `The following is an article about <Topic>'. In addition, we evaluated our method \textit{with} and \textit{without} the conditional prefix. Results here correspond to the average over all samples in each topic: <Military>, <Religion>, <Politics>, <Science>, <Legal>, <Space>, <Technology>. 20 samples are generated for each topic. Attribute relevance and fluency are rated on a scale of 1-5. Perplexity reflects fluency and is computed with an external LM \citep{gpt} different from the base LM.
Dist-1, Dist-2, and Dist-3 reflect diversity, i.e.\ the percentage of unique $n$-grams in the samples. }\label{tab:single_condition} \end{table*} \begin{table*}[] \begin{center} \footnotesize \resizebox{\textwidth}{!}{ \begin{tabular}{@{}l | c | c c | c c c c c@{}} \toprule Topic & Method & Sentiment Acc. (\%) & Sentiment Acc. (\%) & Perplexity & Dist-1 & Dist-2 & Dist-3 & {Human Evaluation}\\ && (human) & (external classifier) & ($\downarrow$ better) & ($\uparrow$ better) & ($\uparrow$ better) & ($\uparrow$ better) & Fluency ($\uparrow$ better) \\ \midrule \multirow{3}{*}{Positive} & Ours & - & 60 & 26.506 & 0.590 & 0.895 & 0.972 & - \\ & Ours (w/ the following) & 3.417 & 72 & 26.055 & 0.570 & 0.879 & 0.964 & \textbf{3.222} \\ & PPLM-Discrim & \textbf{3.778} & 82.5 & 18.960 & 0.678 & 0.910 & 0.955 & 3.083 \\ \hline \multirow{3}{*}{Negative} & Ours & - & 52 & 26.427 & 0.592 & 0.901 & 0.976 & - \\ & Ours (w/ the following) & 2.639 & 37 & 24.820 & 0.582 & 0.886 & 0.966 & 2.944 \\ & PPLM-Discrim & \textbf{3.361} & 70 & 12.781 & 0.638 & 0.889 & 0.940 & \textbf{3.306} \\ \hhline{=========} \multirow{3}{*}{Average} & Ours & - & 56 & 26.467 & 0.591 & 0.898 & 0.974 & - \\ & Ours (w/ the following) & 3.028 & 54.5 & 25.438 & 0.576 & 0.883 & 0.965 & 3.083 \\ & PPLM-Discrim & \textbf{3.569} & 76.25 & 15.871 & 0.658 & 0.898 & 0.948 & \textbf{3.194} \\ \hline \end{tabular} } \end{center} \caption{ Sentiment Modeling: an analysis similar to that in Table~\ref{tab:single_condition} is presented here. Here, the conditions are <Positive> and <Negative>. In addition to the metrics described in Table~\ref{tab:single_condition}, the samples are evaluated by a pretrained sentiment classifier. }\label{tab:sentiment} \end{table*} \subsection{Sentiment Modeling} We generated and evaluated samples based on sentiment.
We consider the following conditions and prefixes: \begin{itemize} \item Condition: \{Positive, Negative\} \item Prefix: \{`the chicken', `the house', `the potato', `the lake', `the pizza'\} \end{itemize} Table~\ref{tab:sentiment} contains the results of sentiment modeling. We can see that PPLM performs better at sentiment modeling in terms of both human and automated evaluations. We suspect the reason is that the PPLM-Discrim model is fine-tuned and its latent representation is updated 10 times (num-iterations) per generated sample, so its quality is much better. This suggests that a future extension of our approach is to also incorporate iterative updates. \subsection{Multiple Conditions Modeling} The flexibility of our approach also means that we can impose more than two conditions at the same time; see Table~\ref{tab:teaser}. Compare this to PPLM, where the conditions are pre-determined and cannot be modified after the `guidance models' are trained~\cite{pplm}. \begin{table*}[!ht] \begin{center} \footnotesize \begin{tabularx}{\linewidth}{L} \hline \knob{Military} \prefix{The chicken} wing is the most famous \rw{weapon} of the Korean \rw{military} as one of its main \rw{war-fighting} aids, so the name is usually translated into Korean as ``the chicken wing.'' \ldots \\ \hline \knob{Religion} \prefix{The chicken}, which is not necessarily the most \rw{religious bird}, doesn't really enjoy eating it. It seems to like eating eggs. It actually enjoys some sort of cheese that is part of the shell. It will eat \ldots \\ \hline \knob{Politics} \prefix{The chicken} of \rw{politics} is not the \rw{politician} but the \rw{political} process as embodied by the electorate. It is a \rw{political} process that is at odds with the \rw{democratic} principles on which the country was founded. \\ \hline \hline \hline \knob{Military} \prefix{The horse} is an example of a \rw{military} vehicle.
\rw{Military} vehicles were built to perform certain functions in particular ways and to have particular characteristics. There are lots of examples of that." And, he continues, "We have \ldots \\ \hline \knob{Religion} \prefix{The horse}, and the \rw{religious} person, has a lot to go through: They have to show that their \rw{faith} and \rw{morals} are as strong as the horse's. So, what's my question to the \rw{atheists}? If \ldots \\ \hline \knob{Politics} \prefix{The horse} racing industry's lobbying groups and its candidates for \rw{Congress} received \$1.6 million in total from the race promoters and their lobbyists, and the money has not gone to a single anti-slavery advocate \ldots \\ \hline \hline \hline \knob{Military} \prefix{The pizza} in question is from a recent \rw{military} pizza that I ate out and didn't really like. It tasted like something that had never been served before. It had no char. It didn't have the sweetness that \ldots \\ \hline \knob{Religion} \prefix{The pizza} box with the \rw{Bible's} signature in the center has the box itself on a shelf. A large cardboard cutout of \rw{Jesus Christ} is painted on the box. "I was really inspired by a movie \ldots \\ \hline \knob{Politics} \prefix{The pizza} is in the form of a picture book about \rw{politics} in the USA. The man was a former student. Now he's running to be one of the next \rw{president}. As a part of this, \ldots \\ \hline \hline \hline \knob{Military} \prefix{The potato} salad is not something served at \rw{military} academies. It is not something that most people eat everyday. It is a kind of food that is consumed mostly by members of the \rw{military}. In order to be sure \ldots \\ \hline \knob{Religion} \prefix{The potato} of \rw{religion} is that belief in a \rw{god} — and when you don't believe in something, or at least can't find evidence for it, you take it for granted. 
But what if you' ve learned you \ldots \\ \hline \knob{Politics} \prefix{The potato} has never been banned in the \rw{US}. It's not all bad news for the potato. The Food and Drug \rw{administration} announced Tuesday that it is ending its approval process for a cancer drug that was made \ldots \\ \hline \hline \hline \knob{Military} \prefix{The lake} is named after the \rw{military} officer named John Taylor, who led the charge into the Battle of Lake Tanganyika on March 1, 1854. The area that today is known as Zamboanga \ldots \\ \hline \knob{Religion} \prefix{The lake} of the same name on the west side of the mountain, which is named after the Greek \rw{goddess} of the spring, is now home to a number of \rw{historic} buildings and \rw{cultural} treasures. The city was \ldots \\ \hline \knob{Politics} \prefix{The lake} may be a \rw{political} issue but there is another issue with this case, as the judge said, to take care of." A hearing to determine whether the water is protected from contamination is set for Oct \ldots \\ \hline \hline \hline \end{tabularx} \end{center} \caption{ Examples generated from a designed odd combination of condition and prefix pairs. Each example is cherry-picked from 10 samples. The underlined prefix is what the language model is given to generate a passage of text (e.g. \prefix{To conclude}). The controlled attributes are colored and bracketed (e.g. \knob{Science}) and we highlight words in the passage (e.g. \rw{false}) that are related to the attributes. Even with the odd combinations, our method is still able to generate fluent samples respecting both the attribute and the prefix, though some samples are not entirely sensible. }\label{tab:odd-combination} \end{table*} \section{Known Issues} We think that without iterative update steps (as in PPLM), it is difficult to generate high-quality results; a single-step update often falls short.
Also, some attributes are much more difficult to obey and thus call for more update steps. Additionally, directly adding token embeddings and self-attention key-value pairs greatly increases the perplexity. Finally, degeneration is still observed at times. This may stem from the addition of token embeddings and key-value pairs, and from the modification of the output distribution by the next-token distribution. \section{Conclusion} Past approaches to conditioned NLG still fall short in several ways. With that in mind, we took inspiration from recent work~\cite{pplm} and proposed four methods for conditioned NLG that are simple, flexible, and effective, requiring only the original base LM. We presented a few samples for single- and multiple-condition NLG. Experiments were conducted for single condition modeling and sentiment modeling, and the samples were evaluated for fluency and diversity. A note on the limitations of our approaches is appended at the end.
\section{Introduction} In 3 dimensions, the universality class of the Ising model includes $\phi^4$ theory, which entails at the 3-loop level a tetrahedral Feynman diagram, corresponding to the symmetrical 9-dimensional integral~\cite{GKM} \begin{equation} C^{Tet}:=\frac{1}{\pi^6}\int d^3\mbox{\bf k}_1d^3\mbox{\bf k}_2d^3\mbox{\bf k}_3\Delta(\mbox{\bf k}_1)\Delta( \mbox{\bf k}_2)\Delta(\mbox{\bf k}_3)\Delta(\mbox{\bf k}_1-\mbox{\bf k}_2)\Delta(\mbox{\bf k}_2-\mbox{\bf k}_3)\Delta(\mbox{\bf k}_3-\mbox{\bf k}_1) \end{equation} with $\Delta(\mbox{\bf k}):=1/(|\mbox{\bf k}|^2+1)$ as the unit-mass propagator. A numerical value, $C^{Tet}\approx0.1739006$, was obtained in~\cite{NMB} and checked in~\cite{GKM,AKR}. We shall show that the dispersive methods of~\cite{mas,sixth} enable a reduction of $C^{Tet}$, as for any assignment of masses, to single integrals of logarithms. Then we shall describe how the lattice algorithm PSLQ~\cite{PSLQ} achieved a very simple reduction of $C^{Tet}$ to a Clausen integral, which gives an exponentially convergent sum that reveals a new feature of the distinctive mapping~\cite{DK} of diagrams~\cite{sixth,BKP,BDK,BGK,BK15,BK4} to numbers~\cite{Eul,BBB,poly,BBBL} provided by quantum field theory. \section{Dispersive integral} Let $C(a,b)$ be the tetrahedron with non-adjacent lines carrying masses $a$ and $b$, while the other 4 lines retain unit mass. 
Then a long dispersive calculation produces a short result: \begin{equation} C(a,b)=-\frac{16}{b}\int_{2}^{\infty}{dw\over(w+a)D(w,b)}~{\rm arctanh} \left({N(w,b)\over D(w,b)}\right)\label{cab} \end{equation} where the denominator function \begin{equation} D(w,b):=w\sqrt{w^2+b^2-4}\label{Dwb} \end{equation} is regular at the 2-particle threshold, $w=2$, provided that $b>0$, and \begin{eqnarray} N(w,b)&=&w^2-2(2+b)\mbox{ ~for~ }w\in[2,2+b]\label{N2}\\ N(w,b)&=&\qquad~w~b\qquad\mbox{ ~for~ }w\in[2+b,\infty]\label{N3} \end{eqnarray} specify a numerator that is continuous in value, though not in derivative, at the 3-particle threshold, $w=2+b$. The origins of~(\ref{cab}--\ref{N3}) will be outlined, neglecting factors of 2 and $\pi$. \begin{enumerate} \item Let $I(\mbox{\bf k},b)$ be the 2-point function obtained by cutting the tetrahedron at the line with mass $a$, so that \begin{equation} C(a,b)\sim\int\frac{d^3\mbox{\bf k}}{|\mbox{\bf k}|^2+a^2}\,I(\mbox{\bf k},b)\label{int1} \end{equation} with the 2-point function given by a dispersion relation of the form \begin{equation} I(\mbox{\bf k},b)\sim\int_2^\infty\frac{w\,dw}{w^2+|\mbox{\bf k}|^2}\,\sigma(w,b)\label{int2} \end{equation} where $\sigma$ is the spectral density of $I$, considered in 2+1 spacetime dimensions. We perform this anti-Wick rotation, away from the 3 spatial dimensions of condensed matter, in order to exploit the Cutkosky rules of Minkowski-space quantum field theory, as in~\cite{mas}. An interchange of order of integration in~(\ref{int1},\ref{int2}) gives \begin{equation} C(a,b)\sim\int_2^\infty\frac{w\,dw}{w+a}\,\sigma(w,b) \end{equation} which explains the simple dependence on $a$ of the integrand in~(\ref{cab}). \item The spectral density \begin{equation} \sigma(w,b)=\theta(w-2)\sigma_2(w,b)+\theta(w-2-b)\sigma_3(w,b)\label{s23} \end{equation} receives contributions from intermediate states with 2 and 3 particles. 
In the first case, $\sigma_2(w,b)\sim\Re F(w+i0,b)/w$ entails a 1-loop form factor, $F$. This may also be calculated dispersively, from its imaginary part \begin{equation} \Im F(w+i0,b)\sim \frac{1}{w}\int_0^\pi\frac{d\phi}{2k^2(1-\cos\phi)+b^2} =\frac{\pi}{w\,b\sqrt{w^2+b^2-4}} \end{equation} where $k:=\sqrt{(w/2)^2-1}$ and $\phi$ are the centre-of-mass 2-momentum and scattering angle, in the elastic scattering of unit-mass particles, by exchange of a particle of mass $b$, in 2+1 spacetime dimensions. This is the origin of the square root in~(\ref{Dwb}). \item It is now straightforward to calculate \begin{equation} w\,b\,\sigma_2(w,b)\sim\Re\int_2^\infty\frac {x\,dx}{x^2-w^2+i0}\,\frac{1}{D(x,b)} \end{equation} and obtain logarithms from the real part of the form factor. Maple produced 3 arctanh functions, which were combined, by hand, to give the numerator~(\ref{N2}). \item The 3-particle intermediate state yields the Dalitz-plot integral \begin{equation} \sigma_3(w,b)\sim\Re \int_{b(2+b)}^{w(w-2)}\frac{ds}{s} \int\frac{dt}{t}\,\frac{1}{\sqrt{J(s,t,w^2,b^2)}} =\int_{b(2+b)}^{w(w-2)}\frac{ds}{s}\,\frac{\pi}{\sqrt{-J(s,0,w^2,b^2)}} \end{equation} where $s$ and $t$ are the denominators of the propagators of the two particles that are still off-shell and the $t$ integration is over the range in which the Jacobian \begin{equation} J(s,t,u,v):=-(s~t-u~v)(s+t+4-u-v)-(s-t)^2\,\label{J} \end{equation} is positive. Maple produced 2 arctanh functions, to be added to the 3 from $\sigma_2$. Manual combination of these 5 logs produced the amazingly simple numerator~(\ref{N3}). \end{enumerate} This method is clearly generalizable to give a single integral of logs in any mass case. \section{Superconvergence and KLN cancellations} The factor $-16/b$ in~(\ref{cab}) looks alarming, at first sight. The integral is manifestly finite as $a\to0$. Field theory proves that $C(a,b)=C(b,a)$, notwithstanding the very different ways that the masses $a$ and $b$ enter the integral. 
Hence $C(a,b)$ is finite as $b\to0$, despite the factor of $1/b$. Already we see that potentially linear infra-red divergences have been cancelled, by combining 2-particle and 3-particle intermediate states in~(\ref{N3}). This parallels the 4-dimensional cancellation of logarithmic divergences, from virtual and real soft photons, by the Kinoshita--Lee--Nauenberg mechanism~\cite{KLN}. However, it is still not safe to take the limit $b\to0$, blithely, since the contributions from $w>2+b$ are manifestly negative, and have a $1/(w-2)$ singularity as $b\to0$. The key to handling this tricky limit is the superconvergence relation \begin{equation} 0=\int_2^\infty{dw\over D(w,b)}~{\rm arctanh} \left({N(w,b)\over D(w,b)}\right)\label{super} \end{equation} which ensures that $\lim_{a\to\infty}a\,C(a,b)=0$. Thus one may make the replacement \begin{equation} {1\over w+a}\,\longrightarrow\,{1\over w+a}-{1\over 2+a} \,=\,-{w-2\over(w+a)(2+a)} \end{equation} in~(\ref{cab}). Then the factor $w-2$ suppresses the singularity at threshold in the limit $b\to0$, giving the elementary integral \begin{equation} C(a,0)=\frac{16}{2+a}\int_2^\infty\frac{dw}{w(w+a)(w+2)} ={16\log(1+a/2)-8a\log2\over4a-a^3} \end{equation} in agreement with a more general case, given in~\cite{AKR}. The values \begin{eqnarray} C(0,0)&=&2-\log4\\ C(1,0)&=&\df83\log\df98\\ C(2,0)&=&\log2-\df12\label{c20}\\ C(4,0)&=&\df13\log\df43\\ C(6,6)&=&\df{1}{12}\log2 \end{eqnarray} entail only $\log2$ and $\log3$. This observation prompted the next step. 
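The closed form for $C(a,0)$ and the special values above are easy to check numerically; the following sketch (ours, for illustration only) verifies the listed values and the two removable singularities:

```python
import math

def C_a0(a):
    """C(a,0) = (16 log(1 + a/2) - 8 a log 2) / (4a - a^3), valid for a != 0, 2."""
    return (16 * math.log(1 + a / 2) - 8 * a * math.log(2)) / (4 * a - a ** 3)
```

The apparent poles at $a=0$ and $a=2$ cancel against the vanishing numerator, as the limits below confirm.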
\section{Dilogarithms at $b=2$} By giving numerical evaluations to the lattice algorithm PSLQ, it was discovered that $C(a,2)$ evaluates to dilogs with simple rational arguments, for $a\in\{1,2,4,6\}$, namely \begin{eqnarray} C(1,2)&=&\pi^2+4{\rm Li}_2(\df{1}{16})-8{\rm Li}_2(\df16)-16{\rm Li}_2(\df14) -2\log^23-4\log^22\\ C(2,2)&=&\df{\pi^2}{12}-{\rm Li}_2(\df14)-\log^22\\ C(4,2)&=&\df38{\rm Li}_2(\df14)+\df18\log^23-\df34\log2\log\df32\\ C(6,2)&=&\df29{\rm Li}_2(\df14)-\df19{\rm Li}_2(\df{1}{16})-\df{1}{18}\log^22 \end{eqnarray} which indicated a dilogarithmic dependence of $C(a,2)$ on $a$. Combining the superconvergence relation with the simplicity of $D(w,2)=w^2$, a lengthy expression was proven by computer algebra, and then simplified by hand to give \begin{eqnarray}\df14a^2C(a,2)&=& 3{\rm Li}_2(a/(a+2))-2{\rm Li}_2(a/(2a+4)) +{\rm Li}_2(2a/(a-2))-{\rm Li}_2(a/(a-2))\nonumber\\&&{} +2{\rm Li}_2(-a/4)+\log^2(1+a/2)-\log(1-a^2/4)\log2 \end{eqnarray} which shows that $C(0,2)=\log2-\df12$, in agreement with~(\ref{c20}). Thanks to advice from Arttu Rajantie, it became clear that the 5 dilogs could be simplified to give 2, using transformations of ${\rm Li}_2(x):=-\int_0^x(dy/y)\log(1-y)$. The most compact formula is \begin{equation} \df14a^2C(a,2)={\rm Li}_2((a-2)/(a+2))-2{\rm Li}_2(-2/(a+2)) -\df{1}{12}\pi^2\,. \end{equation} \section{PSLQ and the symmetric tetrahedron} The previous results suggested the hypothesis that the totally symmetric tetrahedron, $C^{Tet}:=C(1,1)$, is a dilogarithm. With the help of PSLQ, it was eventually reduced to a Clausen integral of startling simplicity: \begin{equation} \frac{C(1,1)}{2^{5/2}}=-\int_{2\alpha}^{4\alpha} d\theta\log(2\sin\df12\theta)\label{cl2} \end{equation} with $\alpha:=\arcsin\df13$. A proof appears to be rather difficult, though~(\ref{cl2}) has been confirmed numerically, at 1,000-digit precision. The discovery route was typical of work with PSLQ.
Splitting $C(1,1)$ into contributions below and above the 3-particle threshold, one finds that the latter involve terms of the form $\sqrt2\,{\rm Cl}_2(j\alpha+k\pi/6)$, with \begin{equation} {\rm Cl}_2(\theta):=\Im{\rm Li}_2(\exp(i\theta)) =\sum_{n>0}\frac{\sin(n\theta)}{n^2} \end{equation} and integer values of $j$ and $k$. There appeared to be little prospect of reducing all terms to this set of constants, by analytical methods alone. Yet PSLQ found that the total is so reducible and also found many relations between such Clausen values and the constants $\{\pi\log2,\pi\log3,\alpha\log2,\alpha\log3\}$. As so often remarked in field theory, the whole: \begin{equation} {C(1,1)\over2^{5/2}}={\rm Cl}_2(4\alpha)-{\rm Cl}_2(2\alpha)\label{ans} \end{equation} turned out to be far simpler than its parts. As a final bonus, this was transformed, again with the aid of PSLQ, to the exponentially convergent sum \begin{equation} C(1,1)=\sum_{n=0}^\infty\frac{(-1/2)^{3n}}{n+\frac12} \left(\frac{1}{n+\frac12}-3\log2-\sum_{m=1}^n\frac{3}{m}\right)\label{exp} \end{equation} formed from terms found in integer relations with $\sqrt2{\rm Cl}_2(j\alpha+k\pi/6)$. This last result enables rapid computation in a single do-loop. The first 50 digits of \begin{equation}C^{Tet}:=C(1,1)={\tt 0.17390061066200274272650601711566596761380833829869 }\end{equation} result in a trice, with 50,000 digits taking only 40 minutes on a 233 MHz Pentium. The first 1,000 digits agree with numerical quadrature of dispersive integrals, generously undertaken by Greg Fee, at CECM. After this work was completed, Arttu Rajantie drew attention to an alternative representation of massive 3-dimensional tetrahedra~\cite{AKR}, obtained by the method of differential equations~\cite{AVK}. 
In the totally symmetric case this gives~\cite{AKR} \begin{equation} {C(1,1)\over2^{5/2}}= \int_0^1\frac{dx}{\sqrt{3-x^2}}\,\left(\log\frac34+\log\frac{3+x}{2+x} -\frac{x^2}{4-x^2}\log\frac{4}{2+x}+\frac{x}{2+x}\log\frac{3+x}{3}\right) \end{equation} which appears to be no easier to reduce to~(\ref{ans}) than the dispersive integral~(\ref{cab}). \section{Conclusions} Thus PSLQ has shown that I was off target when suggesting at the recent Rheinsberg workshop that a super-renormalizable theory~\cite{GKM,AKR} might be less interesting, mathematically, than QCD~\cite{sixth}. In fact, the Ising tetrahedron is as intriguing as those in QCD. One now sees that the symmetric 3-dimensional tetrahedron is given by~(\ref{exp}) as an exponentially convergent sum that sits close to the classical formula~\cite{BBP} \begin{equation} {\pi\over\sqrt2}=\sum_{n\ge0}\frac{(-1/2)^n+(-1/2)^{3n+2}}{n+\df12}\,. \label{b2} \end{equation} This association resonates strongly with the recent reduction~\cite{sixth} of a 4-dimensional tetrahedron, in the 3-loop QCD corrections to the electro-weak rho-parameter~\cite{rho,rhop}, to a sum of squares of two distinguished dilogarithms, namely $\zeta(2)$ and ${\rm Cl}_2(\pi/3)$. The latter was first encountered in 1-loop massless 3-point functions~\cite{CG} and then in the pioneering work of van der Bij and Veltman~\cite{BV} on 2-loop massive diagrams. In the massive case it appears in association with \begin{equation} {\pi\over\sqrt3}=\sum_{n\ge0}\frac{(-1/3)^n}{n+\df12}\,.\label{b3} \end{equation} It remains to be seen whether the `magic' connection proven in~\cite{DT}, between massless and massive instances of ${\rm Cl}_2(\pi/3)$, is generalizable to the quadrilogarithms found in~\cite{sixth} or to the dilogarithm~(\ref{ans}) found here. 
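Both (\ref{ans}) and (\ref{exp}) can be checked in double precision with a few lines (an illustrative sketch, not the 1,000-digit verification mentioned above):

```python
import math

def c_exp(terms=60):
    """C(1,1) via the exponentially convergent sum (exp); terms shrink like 8^-n."""
    s, harmonic = 0.0, 0.0
    for n in range(terms):
        if n > 0:
            harmonic += 3.0 / n                 # sum_{m=1}^{n} 3/m
        s += (-0.125) ** n / (n + 0.5) * (1.0 / (n + 0.5)
                                          - 3.0 * math.log(2) - harmonic)
    return s

def cl2(theta, terms=200000):
    """Clausen function Cl_2 by its (slowly convergent) sine series."""
    return sum(math.sin(n * theta) / n ** 2 for n in range(1, terms))

def c_clausen():
    """C(1,1) via (ans): 2^{5/2} (Cl_2(4 alpha) - Cl_2(2 alpha))."""
    alpha = math.asin(1.0 / 3.0)
    return 2 ** 2.5 * (cl2(4 * alpha) - cl2(2 * alpha))
```

The sine series converges only like a power of $1/N$, so the Clausen check is necessarily looser than the sum (\ref{exp}), which reaches machine precision after a few dozen terms.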
In conclusion: 3-loop single-scale vacuum diagrams in 4 dimensions~\cite{sixth} evaluate to quadrilogarithms of the sixth root of unity, $\exp(i\pi/3)=(1+i\sqrt3)/2$, while in 3 dimensions we have now encountered dilogarithms of $\exp(i\alpha)=(\sqrt8+i)/3$. In both cases, there are remarkable transformations to exponentially convergent sums. In the 4-dimensional case, these entail polylogarithmic ladders, akin to those in~\cite{poly}, beginning with~(\ref{b3}); in 3 dimensions~(\ref{b2}) appears to provide the lowest rung. In both cases, the results are of a simplicity, scarcely to be expected from the method, that was revealed by PSLQ~\cite{PSLQ}. \newpage\noindent{\bf Acknowledgements} I thank David Bailey and Greg Fee, for computational assistance, Jochum van der Bij, Andrei Davydychev, Dirk Kreimer, Gernot M\"unster, Willi van Neerven, Arttu Rajantie and Bas Tausk, for advice, and Johannes Bl\"umlein, Fred Jegerlehner and Tord Riemann, for hospitality at Zeuthen. As so often, Dirk Kreimer provided the vital stimulus. \raggedright
\section{Introduction} The generalization of various mathematical notions such as functions or even operators has importance that goes beyond mathematical curiosity. The Gamma function is a generalization of the factorial to a real argument that is unique under certain constraints \cite{davis1959historical}. It can even be generalized to complex arguments \cite{abramovitz1964handbook}. It has been used in complex analysis, statistics, number theory, and even string theory in physics. Similarly, the Riemann zeta function was introduced by Euler for real arguments and was later extended to complex arguments \cite{borwein2008riemann}. It has turned out to be an extremely important function in physics and mathematics \cite{sakhr2003zeta}. The notion of derivatives has even been extended to functional derivatives \cite{bartolotti1982functional}. Another example is the q-deformation of numbers and functions \cite{sahoo1993q}. It has found applications in quantum groups and statistical physics \cite{plastino2004liouville}. Even q-derivatives and q-integrals have been defined. Of course, these generalizations need not be unique, and different generalizations can be used in different contexts. One of the important generalizations of the concept of derivatives has been fractional calculus \cite{kleinz2000child}. Several definitions have been proposed for extending the concept of derivatives to fractional real orders. Some of them have been extended to complex orders, and even derivatives of fractional complex order have been introduced. Differential equations of complex fractional order (which should be distinguished from complex differential equations) have been studied in the context of viscoelasticity, control systems, etc. Time domain, frequency domain, and stability analyses of linear systems represented by differential equations with complex order derivatives have been carried out \cite{jacob2016review}.
It has found applications such as the design of a controller for a fractional-order DC motor system \cite{shah2021complex}, and in PID controllers and low-pass filters \cite{bingi2021design}. Such systems have been found to have large stability regions for certain parameters. The dynamic response of elastic foundations was modeled using complex order differential equations, which proved useful in predicting the response for various vibration modes over the entire frequency range of interest \cite{makris1994complex}. Particle swarm optimization is a well-studied optimization technique that has been used in many contexts, and complex order derivatives have found application there as well \cite{pahnehkolaei2021particle}. They have also been studied in discrete-time control of linear and nonlinear systems \cite{machado2013optimal}. In biophysics, atrial fibrillation is an important research topic, and a mathematical model based on fractional-order complex derivatives has recently been proposed \cite{ugarte2018atrial}. Fractional-order circuit theory has been popular in recent times and has been extended to complex-order derivatives in circuit elements \cite{si2017attempt}. Thus, complex order differential equations have found applications in several situations as mentioned above, and difference equations of complex order can be useful in those contexts. As Oono and Puri pointed out, ``Nature gives physicists phenomena, not equations'' \cite{oono1988study}; new mathematical tools have always found applications in a variety of fields, and these difference equations can be useful for modeling numerous phenomena in nature. Difference equations can be viewed as an attempt to solve differential equations by the finite difference method, and the notion of a fractional order difference equation has been introduced in this context \cite{atici2010modeling}. We note that in fields such as economics and biology, difference equations appear naturally in modeling.
Several dynamical phenomena observed in differential equations are seen in difference equations as well \cite{strogatz}. Many schemes for the control of chaos are applicable to both differential equations and maps. The notion of a fractional order differential equation has been extended to fractional order difference equations, and a few definitions have been proposed \cite{deshpande2016chaos}. The dynamics of linear and nonlinear systems have been investigated for fractional-order difference equations \cite{gade2021fractional}, and even spatially extended dynamical systems have been defined and investigated \cite{pakhare2020emergence}. We are not aware of any attempt to define and study difference equations of complex fractional order. In this work, we define the difference equation of complex fractional order in a Caputo-like definition. We study the stability of linear systems, which is an important and useful starting point for understanding the dynamics. We give stability conditions for linear systems, and the results can be extended to higher dimensions without loss of generality. Finally, we study nonlinear difference equations of complex order and investigate their dynamics. \section{Preliminaries} \begin{Def} (see \cite{mozyrska2015transform}). The Z-transform of a sequence $ \{y(n)\}_{n=0}^\infty $ is a complex function given by \begin{equation*} Y(z)=Z[y](z)=\sum_{k=0}^{\infty} y(k) z^{-k} \end{equation*} where $z \in \mathbb{C}$ is a complex number for which the series converges absolutely. \end{Def} \begin{Def}(see \cite{ferreira2011fractional, bastos2011discrete}). Let $ h > 0 ,\; a \in \mathbb{R}$ and $ (h\mathbb{N})_a = \{ a, a+h, a+2h, \ldots\} $. For a function $x : (h\mathbb{N})_a \rightarrow \mathbb{C}$, the forward h-difference operator is defined as $$ (\Delta_h x)(t)=\frac{x(t+h)- x(t)}{h},$$ where $t \in (h\mathbb{N})_a $. \end{Def} Throughout this article, we take $a = 0$ and $h = 1$. We write $\Delta$ for $\Delta_1 $.
Now, we generalize the fractional order operators defined in \cite{mozyrska2015transform,ferreira2011fractional, bastos2011discrete} to include the complex order $\alpha$. \begin{Def} For a function $x : (h\mathbb{N})_a \rightarrow \mathbb{C}$, the fractional h-sum of order $\alpha = u +\iota v \in \mathbb{C}, u>0, $ is given by \begin{equation*} (_{a}\Delta_h^{-\alpha}x)(t) = \frac{h^\alpha}{\Gamma(\alpha)}\sum_{s=0}^{n}\frac{\Gamma(\alpha+n-s)}{\Gamma(n-s+1)} x(a+sh),\\ \end{equation*} where $t=a+(\alpha+n)h, \; n \in \mathbb{N_\circ}$. \end{Def} For $h=1$ and $a=0$, we have \begin{eqnarray*} (\Delta^{-\alpha}x)(t) &=&\frac{1}{\Gamma(\alpha)}\sum_{s=0}^{n}\frac{\Gamma(\alpha+n-s)}{\Gamma(n-s+1)}x(s)\\ &=&\sum_{s=0}^{n} \left( \begin{array}{c} n-s+\alpha-1\\ n-s\\ \end{array} \right) x(s). \end{eqnarray*} Here, we used the generalized binomial coefficient \begin{equation*} \left( \begin{array}{c} \mu \\ \eta\\ \end{array} \right) =\frac{\Gamma(\mu+1)}{\Gamma(\eta+1)\Gamma(\mu-\eta+1)},\\ \; \mu ,\eta \in \mathbb{C}, \; \text{Re}(\mu)>0,\;\text{and Re}(\eta)>0. \end{equation*} If $n \in \mathbb{N_\circ}$ then \begin{eqnarray*} \left( \begin{array}{c} \mu \\ n \end{array} \right) =\frac{\Gamma(\mu + 1)}{n!\,\Gamma(\mu-n+1)} =\frac{\mu(\mu-1)\ldots(\mu-n+1)}{n!}. \end{eqnarray*} \begin{Def} For $n \in \mathbb{N_\circ}$ and $\alpha=u+\iota v \in \mathbb{C}, u>0,$ we define \begin{eqnarray*} \tilde{\phi}_{\alpha}(n)= \left( \begin{array}{c} n+\alpha-1\\ n\\ \end{array} \right) =(-1)^n \left( \begin{array}{c} -\alpha\\ n \end{array} \right). \end{eqnarray*} \end{Def} \textbf{Note}: The convolution $\tilde{\phi}_{\alpha}*x$ of the sequences $\tilde{\phi}_{\alpha}$ and $x$ is defined as \begin{equation*} \left(\tilde{\phi}_{\alpha}*x\right)(n)=\sum_{s=0}^{n}\tilde{\phi}_{\alpha}(n-s)x(s) \end{equation*} \begin{equation*} \therefore (\Delta^{-\alpha}x)(n)=(\tilde{\phi}_{\alpha}*x)(n).
\end{equation*} \begin{equation*} \therefore Z(\Delta^{-\alpha}x)(n)=Z\left(\tilde{\phi}_{\alpha}(n)\right)Z(x(n))\\ =(1-z^{-1})^{-\alpha}X(z), \end{equation*} where $X$ is the $Z$-transform of $x$. \begin{Lem} For $\alpha \in \mathbb{C},\; \text{Re}(\alpha)>0$, \begin{equation*} Z(\tilde{\phi}_{\alpha}(t))=\frac{1}{(1-z^{-1})^{\alpha}}. \end{equation*} \end{Lem} Proof: We have \begin{eqnarray*} Z(\tilde{\phi}_{\alpha}(t))&=&\sum_{j=0}^{\infty}\tilde{\phi}_{\alpha}(j)z^{-j}\\ &=&\sum_{j=0}^{\infty}\left( \begin{array}{c} j+\alpha-1\\ j \end{array} \right)z^{-j}\\ &=&\sum_{j=0}^{\infty}(-1)^{j}\left( \begin{array}{c} -\alpha\\ j \end{array} \right)z^{-j}\\ &=&(1-z^{-1})^{-\alpha}, \end{eqnarray*} by using Newton's generalization of the binomial theorem \cite{niven1969formal,link1}. \section{Stability Analysis} We consider the linear fractional order difference equation \begin{equation} (\Delta^{\alpha}x)(t)=(A-I)x(t+\alpha-1), \label{aaa}\\ \end{equation} where $ t \in \mathbb{N}_{1-\alpha}=\{1-\alpha,2-\alpha,3-\alpha,\ldots\}, \; \alpha \in \mathbb{C},\; \text{Re}(\alpha) \in (0,1), $ $x(t) \in \mathbb{C}^n,\; A$ is an $n\times n$ complex matrix, $I$ is the $n\times n$ identity matrix and $x(0)=x_0$.\\ The initial value problem (\ref{aaa}) is equivalent to \cite{fulai2011existence} \begin{equation*} x(t)=x_0+\frac{1}{\Gamma(\alpha)}\sum_{s=1-\alpha}^{t-\alpha}\frac{\Gamma(t-s)(A-I)x(s+\alpha-1)}{\Gamma(t-s-\alpha+1)}. \end{equation*} Putting $s+\alpha-1=j$, we get \begin{eqnarray*} x(t)&=&x_0+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha)\Gamma(t-j)}(A-I)x(j)\\ &=&x_0+(A-I)(\tilde{\phi}_{\alpha}*x)(t-1). \end{eqnarray*} \begin{equation} \therefore \; x(t+1)=x_0+(A-I)(\tilde{\phi}_{\alpha}*x)(t), \; t=0,1,2,\ldots. \label{bbb} \end{equation} If $Z(x(t))=X(z),$ then $Z(x(t+1))=zX(z)-zx_0$.\\ Taking the Z-transform of (\ref{bbb}), we get \begin{equation*} zX(z)-zx_0 =\frac{x_0}{1-z^{-1}}+(A-I)\frac{1}{(1-z^{-1})^{\alpha}}X(z),
\end{equation*} provided $|z|>1$ \cite{mozyrska2015transform}. \begin{equation*} \therefore[z(1-z^{-1})^{\alpha}I-(A-I)]X(z)\\ =z(1-z^{-1})^{\alpha-1}x_0, \end{equation*} where $|z|>1$. \\ We can solve this equation for $X(z)$ if the matrix $(z(1-z^{-1})^{\alpha}I-(A-I))$ is invertible, i.e., if $\det(z(1-z^{-1})^{\alpha}I-(A-I)) \ne 0$ $\forall$ $z$ with $|z|>1$ \cite{elaydi1993stability, desoer2009feedback}. Therefore, we have the following theorem. \begin{The} The zero solution of (\ref{aaa}) or (\ref{bbb}) is asymptotically stable if and only if all the roots of $\det(z(1-z^{-1})^{\alpha}I-(A-I))=0$ satisfy $|z|<1$. \end{The} \subsection{Sketching the boundary of the stable region} Without loss of generality, we can assume that the matrix $(A-I)$ is diagonal. Suppose that ($\lambda -1$) is an arbitrary entry on the diagonal. For stability, all the roots of the characteristic equation \begin{equation} z(1-z^{-1})^{\alpha}-(\lambda - 1)=0 \label{iii} \end{equation} should satisfy $|z|<1$. On the boundary of the stable region, we must have $z=e^{\iota t}$, $0\leq t \leq 2\pi$. Therefore, the characteristic equation (\ref{iii}) becomes $ e^{\iota t}(1-e^{-\iota t})^{\alpha}=(\lambda-1)$. Therefore, we have \begin{equation*} \lambda=2^{\alpha}\left(\sin\frac{t}{2}\right)^{\alpha}e^{\iota\left[\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right]}+1. \end{equation*} Therefore, the parametric representation of the boundary curve is \begin{eqnarray} \gamma(t)=\left(Re[2^{\alpha}\left(\sin\frac{t}{2}\right)^{\alpha}e^{\iota\left[\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right]}]+1,Im[2^{\alpha}\left(\sin\frac{t}{2}\right)^{\alpha}e^{\iota\left[\frac{\alpha\pi}{2}+t\left(1-\frac{\alpha}{2}\right)\right]}]\right),\;t \in [0,2\pi]. \label{eee} \end{eqnarray} If all the eigenvalues of the matrix $A$ lie inside this simple closed curve $\gamma(t)$ then the system is asymptotically stable.
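As a numerical cross-check (not part of the original derivation; the value of $\alpha$ is an arbitrary choice), the closed-form expression for $\lambda$ can be compared against the defining relation $\lambda-1=e^{\iota t}(1-e^{-\iota t})^{\alpha}$ on the unit circle:

```python
import cmath
import math

alpha = 0.8 + 0.7j   # arbitrary complex order with Re(alpha) in (0, 1)

def lam_defining(t):
    """lambda from z(1 - z^{-1})^alpha = lambda - 1 evaluated on z = e^{it}."""
    return cmath.exp(1j * t) * (1 - cmath.exp(-1j * t)) ** alpha + 1

def lam_closed_form(t):
    """Closed-form boundary parametrization derived in the text."""
    modulus = (2 * math.sin(t / 2)) ** alpha
    phase = cmath.exp(1j * (alpha * math.pi / 2 + t * (1 - alpha / 2)))
    return modulus * phase + 1

# Both expressions agree on (0, 2*pi), confirming the parametrization.
for t in (0.5, 1.5, 3.0, 5.0):
    assert abs(lam_defining(t) - lam_closed_form(t)) < 1e-12
```

The agreement holds because $1-e^{-\iota t}=2\sin\frac{t}{2}\,e^{\iota(\pi-t)/2}$ and, for $t\in(0,2\pi)$, the argument $(\pi-t)/2$ lies in $(-\pi/2,\pi/2)$, so the principal branch of the complex power applies throughout.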
\subsection{Condition for a simple curve} \begin{The} The curve $\gamma(t)$ defined by (\ref{eee}) is a simple curve for $\alpha=u+\iota v$, $u \in (0,1)$ if and only if $0<v<\sqrt{2u-u^2}$. \end{The} \textbf{Proof:} We have $$ \beta(t)=e^{\iota t}(1-e^{-\iota t})^{\alpha},$$ where $\alpha=u+\iota v$. There is a self-intersection or cusp in this parametric curve if and only if there exist $t_1\ne t_2$, $t_1,t_2 \in [0,2\pi]$, such that $\beta(t_1)=\beta(t_2)$. Comparing the moduli and the arguments, $\beta(t_1)=\beta(t_2)$ \begin{eqnarray*} \iff e^{\iota t_1}(1-e^{-\iota t_1})^{u+\iota v}&=&e^{\iota t_2}(1-e^{-\iota t_2})^{u+\iota v}\\ \iff \left(\sin\frac{t_1}{2}\right)^ue^{\frac{vt_1}{2}}&=&\left(\sin\frac{t_2}{2}\right)^ue^{\frac{vt_2}{2}}\\ \text{and} \quad v\log\sin\frac{t_1}{2}+t_1\left(1-\frac{u}{2}\right)&=&v\log\sin\frac{t_2}{2}+t_2\left(1-\frac{u}{2}\right) +2k\pi,\;k \in \mathbb{Z} \\ \iff \frac{v^2}{2}&=&\left(1-\frac{u}{2}\right)u-2k\pi u\\ \therefore v^2&=&(2-u)u-4k\pi u\\ \iff v&=&\sqrt{(2-4k\pi)u-u^2}. \end{eqnarray*} This is non-real if $k=1,2,3,\ldots$. Further, $v$ is minimum for $k=0$, and we have $v=\sqrt{2u-u^2}$, i.e., $(u-1)^2+v^2=1$. Therefore, if $0<v<\sqrt{2u-u^2}\;$ then $\gamma(t)$ is a simple curve. This completes the proof of the theorem.\\ \begin{figure}[h] \centering \includegraphics[scale=1]{simpleregion.pdf} \caption{Region for simple curve} \label{fig12} \end{figure} \textbf{Observation}: If there exist multiple points, i.e. if $v>\sqrt{2u-u^2}$, then the system is unstable for all eigenvalues. \section{Illustrative Examples} In this section, we verify the stability results described in the previous section.
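Before turning to the examples, the Z-transform identity $Z(\tilde{\phi}_{\alpha})=(1-z^{-1})^{-\alpha}$, on which the stability condition rests, can also be checked numerically. The sketch below (with arbitrary $\alpha$ and $z$) generates $\tilde{\phi}_{\alpha}(n)$ by the recurrence $\tilde{\phi}_{\alpha}(n)=\tilde{\phi}_{\alpha}(n-1)\,(n+\alpha-1)/n$, which follows from the gamma-function ratio and avoids complex gamma evaluations:

```python
alpha = 0.8 + 0.7j        # arbitrary complex order with Re(alpha) > 0
z = 2.0 + 0.0j            # any z with |z| > 1

# phi~_alpha(n) = Gamma(n + alpha) / (Gamma(alpha) Gamma(n + 1)) via the
# recurrence phi(0) = 1, phi(n) = phi(n - 1) * (n + alpha - 1) / n.
N = 200
phi = [1.0 + 0.0j]
for n in range(1, N):
    phi.append(phi[-1] * (n + alpha - 1) / n)

series = sum(phi[n] * z ** (-n) for n in range(N))
closed = (1 - 1 / z) ** (-alpha)
print(abs(series - closed))     # agrees to near machine precision
```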
\begin{Ex} \begin{figure} \centering \includegraphics[scale=0.6]{stableRegn1.pdf} \caption{Stable region for $\alpha=e^{\frac{\iota\pi}{4}}$} \label{fig17} \end{figure} \begin{figure} \subfloat{\includegraphics[scale=1]{ex1stab.pdf}} \caption{Example 1 stable solution for $f(x)=(0.2+0.5\iota) x$} \label{fig9} \subfloat{\includegraphics[scale=1]{ex1unstab.pdf}} \caption{Example 1 unstable solution for $f(x)=(0.1-2\iota) x$} \label{fig10} \subfloat{\includegraphics[scale=1]{ex1unstab2.pdf}} \caption{Example 1 unstable solution for $f(x)=(0.1-2\iota) x$} \label{fig11} \end{figure} We take $\alpha=e^{\frac{\iota\pi}{4}}=0.7071(1+\iota)$. The stable region for this $\alpha$ is sketched in Figure \ref{fig17}. Consider the $1$-dimensional system (\ref{bbb}) with $f(x)=(0.2+0.5\iota)x$. In this case, the eigenvalue $\lambda=0.2+0.5\iota$ lies inside the stable region, and the stable solution is shown in Figure \ref{fig9}. On the other hand, for $f(x)=(0.1-2\iota)x$, the eigenvalue $\lambda=0.1-2\iota$ lies outside the stable region. The unstable trajectory of this system is traced in Figures \ref{fig10} and \ref{fig11}. \end{Ex} \begin{Ex} \begin{figure}[h] \centering \includegraphics[scale=0.9]{multiplecurve.pdf} \caption{Multiple curve for $\alpha=0.4+0.9\iota$} \label{fig13} \end{figure} \begin{figure} \subfloat{\includegraphics[scale=1]{unstable1.pdf}} \caption{Example 2 unstable trajectory for $\lambda=1.1-0.1\iota$} \label{fig14} \subfloat{\includegraphics[scale=0.9]{unstable2.pdf}} \caption{Example 2 unstable solution for $\lambda=1.1-0.1\iota$}\label{fig15} \subfloat {\includegraphics[scale=1]{unstable3.pdf}} \caption{Example 2 unstable solution for $\lambda=0.5+0.1\iota$}\label{fig16} \end{figure} Now we take $\alpha=0.4+0.9\iota$. Here $v>\sqrt{2u-u^2}$. Therefore, the boundary curve $\gamma(t)$ has multiple points, as shown in Figure \ref{fig13}. In this case, we observe unstable solutions for all eigenvalues.
We take $f(x)=\lambda x$ with $\lambda=1.1-0.1\iota,\;-0.1-0.1\iota$ and $0.5+0.1\iota$, lying inside and outside the curve $\gamma(t)$. The unstable solutions for these values are sketched in Figures \ref{fig14}, \ref{fig15} and \ref{fig16}, respectively. \end{Ex} \section{Nonlinear System} We consider \begin{equation} x(t)=x_0+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(t-j)}[f(x(j))-x(j)],\; t=1,2,\ldots, \label{fff} \end{equation} where $x(t) \in \mathbb{C}^n$ and $f:\mathbb{C}^n \rightarrow \mathbb{C}^n$ is continuously differentiable. A steady state solution $x_*$ of (\ref{fff}) is a point in $\mathbb{C}^n$ satisfying $f(x_*)=x_*$. The following definitions are generalizations of those given in \cite{elaydi2006introduction, hirsch2012differential}. \begin{Def} We say that $x_*$ is stable if for each $\epsilon>0,\; \exists\; \delta>0$ such that $\Vert x_0-x_*\Vert <\delta$ $\implies\; \Vert x(t)-x_*\Vert<\epsilon$, $t=1,2,\ldots$ \end{Def} \begin{Def} An equilibrium point $x_*$ is asymptotically stable if it is stable and $\exists\; \delta>0$ such that $\Vert x_0-x_*\Vert<\delta \; \implies \lim_{t\rightarrow \infty}x(t)=x_*$. \end{Def} \textbf{Note}: If $x_*$ is not stable then it is unstable. \\ The linearization of the nonlinear system (\ref{fff}) in the neighborhood of $x_*$ is given by \begin{equation} x(t)=x_0+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha)\Gamma(t-j)}(A-I)x(j), \label{ggg} \end{equation} where $A=f'(x_*)$ is the Jacobian matrix. The local stability properties of the equilibrium point $x_*$ of (\ref{fff}) are the same as those of the linearization (\ref{ggg}), i.e., the equilibrium point $x_*$ is asymptotically stable if all the eigenvalues of the Jacobian $A$ lie inside the stable region.
\subsection{Complex order logistic map} We consider $$f(x)=\lambda x(1-x),$$ where $\lambda \in \mathbb{R}$ is a parameter.\\ The logistic map of complex order $\alpha$, with Re$(\alpha)>0$, is given by \begin{equation} x(t)=x_0+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha)\Gamma(t-j)}[f(x(j))-x(j)],\; t=1,2,\ldots. \label{ccc} \end{equation} The equilibrium points of (\ref{ccc}) are $x_{1*}=0$ and $x_{2*}=\left(\frac{\lambda-1}{\lambda}\right)$. The linearization of (\ref{ccc}) in the neighborhood of an equilibrium $x_*$ is given by \begin{equation} x(t)=x_0+\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(\alpha)\Gamma(t-j)}[(f'(x_*)-1)x(j)]. \label{ddd} \end{equation} \textbf{Stability of $x_{1*}=0$:}\\ Here $f'(x_{1*})=f'(0)=\lambda$. Therefore, the equilibrium point $x_{1*}$ is asymptotically stable if $\lambda$ lies inside the stable region bounded by (\ref{eee}).\\ \textbf{Stability of $x_{2*}=\frac{\lambda-1}{\lambda}$:}\\ In this case, \begin{eqnarray*} f'(x_{2*})&=&\lambda-2\lambda\left(\frac{\lambda-1}{\lambda}\right)\\ &=&2-\lambda. \end{eqnarray*} Therefore, $x_{2*}$ is asymptotically stable if ($2-\lambda$) lies inside the stable region bounded by (\ref{eee}).\\ \begin{figure}[h] \centering \includegraphics[scale=0.8]{logisticStab.pdf} \caption{Stability region for logistic map} \label{fig1} \end{figure} \begin{figure}[h] \subfloat {\includegraphics[scale=1]{logistic1.pdf}} \caption{$x_{1*}$ is asymptotically stable for $\lambda=-0.1$}\label{fig2} \subfloat{\includegraphics[scale=1]{logistic2.pdf}} \caption{$x_{2*}$ is unstable for $\lambda=-0.1$}\label{fig3} \subfloat{\includegraphics[scale=1]{logistic4.pdf}} \caption{$x_{1*}$ is unstable for $\lambda=1.5$}\label{fig4} \subfloat{\includegraphics[scale=1]{logistic3.pdf}} \caption{$x_{2*}$ is asymptotically stable for $\lambda=1.5$}\label{fig5} \end{figure} Let us take $\alpha=0.8+0.7\iota $. The stable region is given in Figure \ref{fig1}.
Therefore, $x_{1*}$ is asymptotically stable if $\lambda \in (-0.2774,0.6432)\cup (1,1.0754)$ and $x_{2*}$ is asymptotically stable if $\lambda \in (0.9246,1)\cup (1.3568,2.2774)$. For $\lambda=-0.1$, $x_{1*}$ is asymptotically stable whereas $x_{2*}=11$ is unstable. In Figure \ref{fig2}, we take $x_0=0.3$ and the trajectory converges to $x_{1*}$. We take $x_0=10.2$ in Figure \ref{fig3} and show that the trajectory diverges. The equilibrium point $x_{1*}$ is unstable for $\lambda=1.5$ whereas $x_{2*}=0.3333$ is asymptotically stable. This result is verified in Figures \ref{fig4} and \ref{fig5} by selecting appropriate initial conditions, viz. $x_0=0.3$ and $x_0=-0.1$, respectively. \subsection{Two-dimensional system} Now, we consider the two-dimensional system \begin{eqnarray} x(t)=x_0+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(t-j)}[f_{1}(x(j),y(j))-x(j)], \nonumber \\ y(t)=y_0+\frac{1}{\Gamma(\alpha)}\sum_{j=0}^{t-1}\frac{\Gamma(t-j+\alpha-1)}{\Gamma(t-j)}[f_{2}(x(j),y(j))-y(j)],\label{jjj}\\ t=1,2,\ldots \nonumber \end{eqnarray} where $ f_{1}(x,y)=\lambda x(y+1)+\mu(x^2+1)y$ and $ f_{2}(x,y)=\lambda y(x+1)-\mu(y+1)^2x.$ Clearly, the origin $(0,0)$ is an equilibrium point. The Jacobian matrix at the origin, \begin{eqnarray*} J=\left[ \begin{array}{cc} \lambda&\mu\\ -\mu&\lambda \end{array}\right], \end{eqnarray*} \begin{figure}[h] \centering \includegraphics[scale=0.9]{twodStab.pdf} \caption{Stable region for $2$-dimensional system (\ref{jjj})} \label{fig6} \end{figure} \begin{figure} \subfloat{\includegraphics[scale=0.9]{twod1.pdf}} \caption{Stable orbits for $(\lambda,\mu)=(-0.2,0.1)$} \label{fig7} \subfloat{\includegraphics[scale=0.9]{twod2.pdf}} \caption{Unstable orbits for $(\lambda,\mu)=(-0.2,0.5)$} \label{fig8} \end{figure} has eigenvalues $\lambda \pm \iota \mu$. For $\alpha=0.7+0.4\iota$, the stable region is shown in Figure \ref{fig6}.
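The coupled sums (\ref{jjj}) can be iterated in the same way. The sketch below (initial condition, horizon and tolerances are arbitrary choices, not from the paper) uses $(\lambda,\mu)=(-0.2,0.1)$, for which both eigenvalues $-0.2\pm0.1\iota$ lie in the stable region, so the orbit decays toward the origin:

```python
alpha = 0.7 + 0.4j                   # order used for this example
lam, mu = -0.2, 0.1                  # eigenvalues -0.2 +/- 0.1i: stable case
x0, y0 = 0.05 + 0.0j, 0.05 + 0.0j    # arbitrary small initial condition
T = 300                              # arbitrary horizon

f1 = lambda x, y: lam * x * (y + 1) + mu * (x ** 2 + 1) * y
f2 = lambda x, y: lam * y * (x + 1) - mu * (y + 1) ** 2 * x

# phi~_alpha(n) via the recurrence phi(n) = phi(n - 1) * (n + alpha - 1) / n.
phi = [1.0 + 0.0j]
for n in range(1, T):
    phi.append(phi[-1] * (n + alpha - 1) / n)

# x(t) = x0 + sum_{j<t} phi~_alpha(t-1-j) [f1(x(j),y(j)) - x(j)], same for y.
xs, ys = [x0], [y0]
for t in range(1, T):
    cx = sum(phi[t - 1 - j] * (f1(xs[j], ys[j]) - xs[j]) for j in range(t))
    cy = sum(phi[t - 1 - j] * (f2(xs[j], ys[j]) - ys[j]) for j in range(t))
    xs.append(x0 + cx)
    ys.append(y0 + cy)

print(abs(xs[-1]), abs(ys[-1]))   # both components decay toward the origin
```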
For the value $(\lambda,\mu)=(-0.2,0.1)$, both the eigenvalues $-0.2\pm0.1\iota$ of $J$ lie inside the stable region. The stable orbits $|x(j)|$ and $|y(j)|$ are shown in Figure \ref{fig7}. If we set $(\lambda,\mu)=(-0.2,0.5)$ then the eigenvalue $-0.2-0.5\iota$ lies inside the stable region whereas the eigenvalue $-0.2+0.5\iota $ lies outside the stable region. Therefore, the equilibrium is unstable. The unstable orbits are shown in Figure \ref{fig8}. \section{Results and Conclusion} We have used the Z-transform for carrying out the stability analysis of equilibrium points. This technique is fairly general. The zero solution of the linear system is asymptotically stable if and only if all the roots of the characteristic equation (\ref{iii}) satisfy $|z|<1$; equivalently, the system is asymptotically stable if all its eigenvalues lie within the boundary curve defined by (\ref{eee}), and unstable otherwise. We have demonstrated that this result can be extended to non-linear systems using the logistic map. For a non-linear system, the first step is to linearize it around its equilibrium points. These points are asymptotically stable if the eigenvalues of the Jacobian matrix lie inside the stable region. For higher dimensions, we considered the example of a two-dimensional system and found similar results for stability analysis. All the eigenvalues of the Jacobian matrix must lie inside the curve for the equilibrium point to be stable, as seen in the examples. This criterion is very similar to the one obtained for integer-order difference equations, though the stability region is very different. However, we note that the qualitative dynamics is dissimilar in many contexts. For example, for $x_{n+1}=\lambda x_n$, we observe a monotonic decrease or increase for $\lambda \in \mathbb{R}$. On the other hand, for complex order difference equations, trajectories can spiral in or out even for $\lambda \in \mathbb{R}$ and real initial conditions.
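The last observation can be seen in a short experiment (parameter values arbitrary): with a real $\lambda$ and a real initial condition, the iterate $x(2)$ already acquires a nonzero imaginary part, because $\tilde{\phi}_{\alpha}(1)=\alpha$ is complex:

```python
alpha = 0.8 + 0.7j     # complex order (arbitrary)
lam = 0.5              # real eigenvalue
x0 = 0.3 + 0.0j        # real initial condition
T = 50

# phi~_alpha(n) via the recurrence phi(n) = phi(n - 1) * (n + alpha - 1) / n.
phi = [1.0 + 0.0j]
for n in range(1, T):
    phi.append(phi[-1] * (n + alpha - 1) / n)

x = [x0]
for t in range(T - 1):
    x.append(x0 + (lam - 1) * sum(phi[t - s] * x[s] for s in range(t + 1)))

# x(1) = lam * x0 is still real, but x(2) leaves the real axis.
print(x[1].imag, x[2].imag)
```

Here $x(2)=x_0+(\lambda-1)(\alpha x_0+\lambda x_0)$ has imaginary part $(\lambda-1)x_0\,\mathrm{Im}(\alpha)\neq 0$, so the orbit is genuinely complex despite the real data.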
It is possible that these difference equations lead to stable limit cycles or quasi-periodic cycles in nonlinear systems even in $1$-dimension, and it would be of interest to investigate in the future whether this is possible in one or higher dimensions. It has been shown that periodic orbits do not exist for fractional-order differential equations of real order \cite{saleh2012simplification}. Complex order differential equations have been studied in several contexts, as mentioned in the introduction. In viscoelastic systems, some researchers have demanded that the output should be real for real input, and hence the dynamics is governed by the sum of two complex order differential equations of complex conjugate orders \cite{atanackovic2016complex}. Our work can easily be extended in this direction, and the stable region in this case would be the intersection of the stable regions of the two difference equations. This work is mainly motivated by mathematical curiosity, and we have generalized the notion of difference equations to complex order. The limited studies of nonlinear systems presented here do not show chaos or quasi-periodic cycles. It would be interesting to examine further whether the system shows chaos under certain conditions. \section{Acknowledgment} P. M. Gade thanks DST-SERB for financial assistance (Ref. EMR/2016/006686 and CRG/2020/003993). S. Bhalekar acknowledges the Science and Engineering Research Board (SERB), New Delhi, India for the Research Grant (Ref. MTR/2017/000068) under Mathematical Research Impact Centric Support (MATRICS) Scheme and the University of Hyderabad for Institute of Eminence-Professional Development Fund (IoE-PDF) by MHRD (F11/9/2019-U3(A)).
\section{Introduction} Impulsive differential equations describe the dynamics of real world processes in which abrupt changes occur. Such equations play an increasingly important role in various fields such as mechanics, electronics, biology, neural networks, communication systems, chaos theory and population dynamics \cite{Akh1,Akh4,Akh3,Akh5,Herrera12,Khadra03,Liu94,Yang07,Yang97b,Zhou09}. In this paper, we investigate the existence of homoclinic and heteroclinic motions in systems with impulsive effects. The main object of the present study is the following impulsive system, \begin{eqnarray} \label{impulsive_system} \begin{array}{l} x'= A(t)x + f(t,x) + g(t,\zeta), ~ t\neq \theta_k, \\ \Delta x |_{t= \theta_k} = B_k x + J_k (x) + \zeta_k, \end{array} \end{eqnarray} where $\left\{ \theta_k \right\},$ $k\in\mathbb Z,$ is a strictly increasing sequence of real numbers such that $\left|\theta_k\right| \to \infty$ as $\left| k \right| \to \infty,$ $A(t)$ is an $n \times n$ continuous matrix function, $B_k$ are constant $n \times n$ real valued matrices, $\Delta x |_{t= \theta_k}=x(\theta_k+)-x(\theta_k),$ $x(\theta_k+)=\displaystyle \lim_{t\to\theta_k^+}x(t),$ the functions $f: \mathbb R \times \mathbb R^n \to \mathbb R^n$ and $J_k: \mathbb R^n \to \mathbb R^n$ are continuous in all their arguments, the function $g(t,\zeta)$ is defined by the equation $g(t,\zeta)=\zeta_k,$ $t \in (\theta_{k-1},\theta_k],$ and the sequence $\zeta=\left\{\zeta_k\right\},$ $k\in \mathbb Z,$ is a solution of the map \begin{eqnarray} \label{discrete_map} \zeta_{k+1}=F(\zeta_k), \end{eqnarray} where the function $F:\Lambda \to \Lambda$ is continuous and $\Lambda$ is a bounded subset of $\mathbb R^n.$ Here, $\mathbb R$ and $\mathbb Z$ denote the sets of real numbers and integers, respectively. The system under investigation is a hybrid one, since it combines the dynamics of an impulsive differential equation with a discrete map. 
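For concreteness, a scalar toy instance of this hybrid structure can be simulated by integrating the flow between impulse moments and applying the jump at each $\theta_k$, with $\zeta_k$ generated by the driving map. Everything below (the choices $A=-2$, $B_k=-0.3$, the functions $f$, $J_k$, the logistic-type map $F$, and $\theta_k=k$) is invented for illustration and is not taken from the paper:

```python
import math

# Scalar toy version of the hybrid system: Euler integration of
#   x' = A x + f(t, x) + g(t, zeta)  between the impulse moments theta_k = k,
# followed by the jump  Delta x = B x + J(x) + zeta_k  at each theta_k,
# with zeta_k generated by a logistic-type map F.  All choices are illustrative.
A, B = -2.0, -0.3
f = lambda t, x: 0.1 * math.sin(x)          # bounded, Lipschitz nonlinearity
J = lambda x: 0.05 * math.cos(x)            # bounded jump nonlinearity
F = lambda zeta: 3.9 * zeta * (1.0 - zeta)  # discrete map driving the impulses

steps = 1000                  # Euler steps per unit impulse interval
dt = 1.0 / steps
x, zeta = 1.0, 0.4
traj = []
for k in range(50):           # fifty impulse intervals
    zeta = F(zeta)            # value of g(t, zeta) on (theta_{k-1}, theta_k]
    for _ in range(steps):    # continuous flow between impulses
        x += dt * (A * x + f(0.0, x) + zeta)
        traj.append(x)
    x += B * x + J(x) + zeta  # impulsive jump at theta_k

print(max(abs(v) for v in traj))   # the trajectory remains bounded
```

With the strongly contracting linear part and bounded perturbations, the computed trajectory stays bounded, in line with the boundedness results discussed below.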
Our main objective is to prove rigorously the existence of homoclinic and heteroclinic solutions in the dynamics of (\ref{impulsive_system}) provided that (\ref{discrete_map}) possesses such solutions. The idea of using discontinuous perturbations to generate homoclinic and heteroclinic motions in systems of differential equations was first realized in the papers \cite{Akh2,Akh7} on the basis of functional spaces. It was shown in \cite{Akh2} that the chaotic attractor of the relay system, which was introduced in the paper \cite{Akh6}, consists of homoclinic solutions. Similar results for impulsive differential equations were obtained in the study \cite{Akh7} by taking advantage of the moments of impulses. The existence of homoclinic and heteroclinic motions in systems with impulses was also investigated in the papers \cite{Battelli97,Fang12,Feckan96,Han11,Li14,Zhang14,Zhang11}. The existence and multiplicity of fast homoclinic solutions for a class of damped vibration problems with impulsive effects were investigated in \cite{Zhang14} by using the mountain pass theorem and the symmetric mountain pass theorem in critical point theory. The mountain pass theorem was also utilized in \cite{Fang12,Li14} to show the presence of homoclinic motions in second order impulsive systems. On the other hand, Wei and Chen \cite{Wei14,Wei13} considered the existence of heteroclinic cycles in predator-prey systems with Allee effect and state-dependent impulsive harvesting. Zhang and Li \cite{Zhang11} proved the existence of at least one non-zero homoclinic solution, which is generated by impulses, under appropriate conditions for a class of impulsive second order differential equations. Han and Zhang \cite{Han11} obtained the existence of homoclinic solutions for a class of asymptotically linear or sublinear Hamiltonian systems with impulses by using variational methods.
It was mentioned in \cite{Han11} that no homoclinic solutions exist for the system under investigation without impulses. In the present study, however, the emergence of homoclinic and heteroclinic motions is provided entirely by the influence of a discrete map instead of impulsive effects. Additionally, our results are valid for systems of arbitrarily high dimension. The rest of the paper is organized as follows. In Section \ref{prelim}, we discuss bounded solutions of (\ref{impulsive_system}) and present sufficient conditions for the existence of homoclinic and heteroclinic motions in the system. Section \ref{mainresults} is devoted to the main results of the paper. In this part, we show the connection between the stable and unstable sets of the impulsive system (\ref{impulsive_system}) and the discrete map (\ref{discrete_map}), and prove the existence of homoclinic and heteroclinic solutions in (\ref{impulsive_system}). Examples concerning homoclinic and heteroclinic motions in an impulsive Duffing equation are provided in Section \ref{examples_sec}. Finally, some concluding remarks are given in Section \ref{conc}. \section{Preliminaries} \label{prelim} In the sequel, we will make use of the usual Euclidean norm for vectors and the norm induced by the Euclidean norm for matrices \cite{Horn85}. Let us denote by $U(t,s)$ the transition matrix of the linear homogeneous system \begin{eqnarray} \label{homogeneous_system} \begin{array}{l} u'= A(t)u, ~ t\neq \theta_k, \\ \Delta u |_{t= \theta_k} = B_k u(\theta_k). \end{array} \end{eqnarray} The following conditions are required.
\begin{enumerate} \item[\textbf{(C1)}] $\det \left( I + B_k\right) \neq 0$ for all $k\in \mathbb Z,$ where $I$ is the $n\times n$ identity matrix; \item[\textbf{(C2)}] There exists a positive number $\theta$ such that $\theta_{k+1}-\theta_k \ge \theta$ for all $k\in\mathbb Z;$ \item[\textbf{(C3)}] There exist positive numbers $N$ and $\omega$ such that $\left\| U(t,s) \right\| \le Ne^{-\omega (t-s)}$ for $t\ge s;$ \item[\textbf{(C4)}] There exist positive numbers $M_f,$ $M_F$ and $M_J$ such that $$\displaystyle \sup_{(t,x)\in \mathbb R \times \mathbb R^n } \left\| f(t,x)\right\| \le M_f, \ \ \displaystyle \sup_{\sigma \in \Lambda} \left\| F(\sigma)\right\| \le M_F, \ \ \displaystyle \sup_{k\in\mathbb Z, x\in \mathbb R^n } \left\| J_k(x)\right\| \le M_J;$$ \item[\textbf{(C5)}] There exist positive numbers $L_f$ and $L_J$ such that $$\left\|f(t,x_1) - f(t,x_2)\right\| \le L_f \left\|x_1-x_2\right\|$$ for all $t\in \mathbb R,$ $x_1,x_2 \in \mathbb R^n,$ and $$\left\|J_k(x_1) - J_k(x_2)\right\| \le L_J \left\|x_1-x_2\right\|$$ for all $k\in\mathbb Z,$ $x_1,x_2 \in \mathbb R^n;$ \item[\textbf{(C6)}] $\displaystyle N \left( \frac{L_f}{\omega} + \frac{ L_J}{1-e^{-\omega \theta}} \right)<1;$ \item[\textbf{(C7)}] $-\omega+NL_f+\displaystyle\frac{1}{\theta}\ln(1+NL_J)<0.$ \end{enumerate} Let $\Theta$ be the set of all sequences $\zeta=\left\{\zeta_k\right\},$ $k\in \mathbb Z,$ obtained by equation (\ref{discrete_map}). By using the results of \cite{Akh1,Samolienko95} one can show under the conditions $(C1)-(C6)$ that for a fixed sequence $\zeta \in \Theta$ the system (\ref{impulsive_system}) possesses a unique bounded on $\mathbb R$ solution $\phi_{\zeta}(t),$ which satisfies the following relation, \begin{eqnarray} \label{bounded_soln_relation} \phi_{\zeta} (t) = \displaystyle \int_{-\infty}^t U(t,s) \left[ f(s,\phi_{\zeta} (s)) + g(s,\zeta) \right] ds + \displaystyle \sum_{-\infty < \theta_k < t} U(t,\theta_k+) \left[J_k(\phi_{\zeta} (\theta_k))+\zeta_k\right]. 
\end{eqnarray} One can confirm under the conditions $(C1)-(C7)$ that for a fixed sequence $\zeta \in \Theta,$ the bounded solution $\phi_{\zeta}(t)$ attracts all other solutions of (\ref{impulsive_system}), i.e., $\left\|x(t)-\phi_{\zeta}(t)\right\|\to 0$ as $t\to \infty$ for any solution $x(t)$ of (\ref{impulsive_system}). Moreover, $$\displaystyle \sup_{t \in\mathbb R} \left\|\phi_{\zeta}(t)\right\| \le \displaystyle N \left(\displaystyle\frac{M_f+M_F}{\omega}+\displaystyle\frac{M_J+M_F}{1-e^{-\omega \theta}}\right)$$ for each $\zeta \in \Theta.$ \section{Homoclinic and heteroclinic motions} \label{mainresults} In this section, first of all, we will describe the stable, unstable and hyperbolic sets as well as the homoclinic and heteroclinic motions for both system (\ref{impulsive_system}) and the discrete map (\ref{discrete_map}). These definitions were introduced in the papers \cite{Akh2,Akh7}. After that the existence of homoclinic and heteroclinic motions in the dynamics of (\ref{impulsive_system}) will be proved. Consider the set $\Theta$ described in the previous section once again. The stable set of a sequence $\zeta\in\Theta$ is defined as \begin{eqnarray*} \label{stable_set} W^s(\zeta)= \left\{ \eta \in \Theta \ | \ \left\|\eta_k-\zeta_k\right\|\to 0 ~\textrm{as}~ k\to \infty \right\}, \end{eqnarray*} and the unstable set of $\zeta$ is \begin{eqnarray*} \label{unstable_set} W^u(\zeta)= \left\{ \eta \in \Theta \ | \ \left\|\eta_k-\zeta_k\right\|\to 0 ~\textrm{as}~ k\to -\infty \right\}. 
\end{eqnarray*} The set $\Theta$ is called hyperbolic if for each $\zeta \in \Theta$ the stable and unstable sets of $\zeta$ contain at least one element different from $\zeta.$ A sequence $\eta \in \Theta$ is homoclinic to another sequence $\zeta \in \Theta$ if $\eta \in W^s(\zeta) \cap W^u(\zeta).$ Moreover, $\eta \in \Theta$ is heteroclinic to the sequences $\zeta^1 \in \Theta,$ $\zeta^2 \in \Theta,$ $\eta \neq \zeta^1,$ $\eta \neq \zeta^2,$ if $\eta \in W^s(\zeta^1) \cap W^u(\zeta^2).$ On the other hand, let us denote by $\mathscr{A}$ the set consisting of all bounded on $\mathbb R$ solutions of system (\ref{impulsive_system}). A bounded solution $\phi_{\eta}(t) \in \mathscr{A}$ belongs to the stable set $W^s(\phi_{\zeta}(t))$ of $\phi_{\zeta}(t) \in \mathscr{A}$ if $\left\|\phi_{\eta}(t)-\phi_{\zeta}(t)\right\|\to 0$ as $t\to \infty.$ Besides, $\phi_{\eta}(t)$ is an element of the unstable set $W^u(\phi_{\zeta}(t))$ of $\phi_{\zeta}(t)$ provided that $\left\|\phi_{\eta}(t)-\phi_{\zeta}(t)\right\|\to 0$ as $t\to -\infty.$ We say that $\mathscr{A}$ is hyperbolic if for each $\phi_{\zeta}(t) \in \mathscr{A}$ the sets $W^s(\phi_{\zeta}(t))$ and $W^u(\phi_{\zeta}(t))$ contain at least one element different from $\phi_{\zeta}(t).$ A solution $\phi_{\eta}(t)\in \mathscr{A}$ is homoclinic to another solution $\phi_{\zeta}(t) \in \mathscr{A}$ if $\phi_{\eta}(t) \in W^s(\phi_{\zeta}(t)) \cap W^u(\phi_{\zeta}(t)),$ and $\phi_{\eta}(t)\in \mathscr{A}$ is heteroclinic to the bounded solutions $\phi_{\zeta^1}(t),$ $\phi_{\zeta^2}(t)\in \mathscr{A},$ $\phi_{\eta}(t) \neq \phi_{\zeta^1}(t),$ $\phi_{\eta}(t) \neq \phi_{\zeta^2}(t),$ if $\phi_{\eta}(t) \in W^s(\phi_{\zeta^1}(t)) \cap W^u(\phi_{\zeta^2}(t)).$ In what follows, we will denote by $i((a,b))$ the number of the terms of the sequence $\left\{\theta_k\right\},$ $k\in \mathbb Z,$ which belong to the interval $(a,b),$ where $a$ and $b$ are real numbers such that $a<b.$ It is worth noting that $\displaystyle i((a,b))\leq 
1+\frac{b-a}{\theta}.$ The connection between the stable sets of the solutions of (\ref{impulsive_system}) and (\ref{discrete_map}) is provided in the next assertion. \begin{lemma}\label{lemma1} Suppose that the conditions $(C1)-(C7)$ are fulfilled, and let $\zeta$ and $\eta$ be elements of $\Theta.$ If $\eta\in W^{s}(\zeta),$ then $\phi_{\eta}(t)\in W^{s}(\phi_{\zeta}(t)).$ \end{lemma} \noindent \textbf{Proof.} Fix an arbitrary positive number $\epsilon,$ and denote $\alpha= \omega-NL_f-\displaystyle\frac{1}{\theta}\ln(1+NL_J).$ Assume without loss of generality that $\epsilon \le 2 M_F.$ Let $\gamma$ be a real number such that $$\gamma \ge 1+N\left(\frac{1}{\omega}+\frac{1}{1-e^{-\omega\theta}}\right)\left(1+\frac{N L_f(1+N L_J)}{\alpha}+\frac{N L_J(1+N L_J)}{1-e^{-\alpha \theta}}\right).$$ Because the sequence $\eta=\left\{\eta_k\right\},$ $k\in\mathbb Z,$ belongs to the stable set $W^{s}(\zeta)$ of $\zeta=\left\{\zeta_k\right\},$ there exists an integer $k_0$ such that $\left\|\eta_k-\zeta_k\right\|<\displaystyle\frac{\epsilon}{\gamma}$ for all $k\ge k_0.$ One can confirm that $\left\|g(t,\eta)-g(t,\zeta)\right\|<\displaystyle \frac{\epsilon}{\gamma}$ for $t>\theta_{k_0-1}.$ Making use of the relation \begin{eqnarray*} && \phi_{\eta}(t) - \phi_{\zeta}(t) = \displaystyle \int_{-\infty}^{t} U(t,s) \left[ f(s,\phi_{\eta}(s)) - f(s,\phi_{\zeta}(s)) + g(s,\eta)- g(s,\zeta) \right] ds \\ && + \displaystyle \sum_{-\infty < \theta_k < t} U(t,\theta_k+) \left[ J_k (\phi_{\eta}(\theta_k)) - J_k (\phi_{\zeta}(\theta_k)) +\eta_k - \zeta_k \right], \end{eqnarray*} we obtain for $t>\theta_{k_0-1}$ that \begin{eqnarray} \label{proof_ineq1} \begin{array}{l} \displaystyle \left\|\phi_{\eta}(t) - \phi_{\zeta}(t)\right\| \le \displaystyle \int_{-\infty}^{\theta_{k_0-1}} 2N(M_f+M_F) e^{-\omega (t-s)} ds \\ + \displaystyle \sum_{-\infty < \theta_k \le \theta_{k_0-1}} 2N(M_J+M_F) e^{-\omega (t-\theta_k)} + \displaystyle \int^{t}_{\theta_{k_0-1}} \frac{N\epsilon}{\gamma} e^{-\omega 
(t-s)} ds\\ + \displaystyle \sum_{\theta_{k_0-1} < \theta_k < t } \frac{N\epsilon}{\gamma} e^{-\omega (t-\theta_k)} + \displaystyle \int^{t}_{\theta_{k_0-1}} NL_f e^{-\omega (t-s)} \left\|\phi_{\eta}(s) - \phi_{\zeta}(s)\right\| ds \\ +\displaystyle \sum_{\theta_{k_0-1} < \theta_k < t } NL_J e^{-\omega (t-\theta_k)} \left\|\phi_{\eta}(\theta_k) - \phi_{\zeta}(\theta_k)\right\| \\ \le \displaystyle 2N \left( \frac{ M_f+M_F }{\omega} + \frac{ M_J+M_F }{1-e^{-\omega \theta} } \right)e^{-\omega (t-\theta_{k_0-1})} \\ + \displaystyle \frac{N \epsilon}{\gamma \omega} \left( 1-e^{-\omega(t-\theta_{k_0-1})} \right) + \displaystyle \frac{N \epsilon}{\gamma (1-e^{-\omega \theta})} \left( 1-e^{-\omega(t-\theta_{k_0-1} + \theta)} \right) \\ + \displaystyle \int^{t}_{\theta_{k_0-1}} NL_f e^{-\omega (t-s)} \left\|\phi_{\eta}(s) - \phi_{\zeta}(s)\right\| ds \\ +\displaystyle \sum_{\theta_{k_0-1} < \theta_k < t } NL_J e^{-\omega (t-\theta_k)} \left\|\phi_{\eta}(\theta_k) - \phi_{\zeta}(\theta_k)\right\|. \end{array} \end{eqnarray} Define the functions $u(t)=e^{\omega t} \left\| \phi_{\eta}(t) - \phi_{\zeta}(t) \right\|$ and $h(t)= c_1 + c_2 e^{\omega t},$ where $$c_1=2N \left( \frac{ M_f+M_F }{\omega} + \frac{ M_J+M_F }{1-e^{-\omega \theta} } \right)e^{\omega \theta_{k_0-1}} - \frac{N\epsilon}{\gamma} \left(\frac{e^{\omega \theta_{k_0-1}}}{\omega}+ \frac{e^{\omega (\theta_{k_0-1}-\theta)}}{1-e^{-\omega \theta}}\right)$$ and $$c_2=\frac{N\epsilon}{\gamma} \left(\frac{1}{\omega}+ \frac{1}{1-e^{-\omega \theta}}\right).$$ The inequality (\ref{proof_ineq1}) implies that $$ u(t) \le h(t) + \displaystyle \int_{\theta_{k_0-1}}^t NL_f u(s) ds + \sum_{\theta_{k_0-1} < \theta_k < t} NL_J u(\theta_k). 
$$ The application of an analogue of Gronwall's inequality for piecewise continuous functions yields \begin{eqnarray*} && u(t) \le h(t) + \displaystyle \int_{\theta_{k_0-1}}^t NL_f (1+ NL_J)^{i((s,t))} e^{NL_f(t-s)} h(s) ds \\ &&+ \displaystyle \sum_{\theta_{k_0-1} < \theta_k < t} NL_J (1+NL_J)^{i((\theta_k,t))} e^{NL_f(t-\theta_k)} h(\theta_k). \end{eqnarray*} Since the identity \begin{eqnarray*} && 1+\displaystyle \int^t_{\theta_{k_0-1}} NL_f (1+NL_J)^{i((s,t))} e^{NL_f(t-s)} ds \\ && + \displaystyle \sum_{\theta_{k_0-1} < \theta_k < t} NL_J (1+NL_J)^{i((\theta_k,t))} e^{NL_f(t-\theta_k)} \\ && = (1+ NL_J)^{i((\theta_{k_0-1},t))} e^{NL_f(t-\theta_{k_0-1})} \end{eqnarray*} holds and $(1+NL_J)^{i((a,b))} e^{NL_f(b-a)} \le (1+NL_J) e^{(\omega-\alpha)(b-a)} $ for any real numbers $a$ and $b$ with $a < b,$ one can confirm that \begin{eqnarray*} && u(t) \le c_1 (1+NL_J) e^{(\omega-\alpha)(t-\theta_{k_0-1})} + c_2e^{\omega t} \\ && + \displaystyle \int_{\theta_{k_0-1}}^t c_2NL_f (1+NL_J) e^{(\omega-\alpha) (t-s)} e^{\omega s} ds \\ && + \displaystyle \sum_{\theta_{k_0-1}<\theta_k< t} c_2 NL_J (1+NL_J) e^{(\omega-\alpha)(t-\theta_k)} e^{\omega \theta_k} \\ && \le c_1 (1+NL_J) e^{(\omega-\alpha)(t-\theta_{k_0-1})} + c_2e^{\omega t} \\ && + \displaystyle \frac{c_2 NL_f (1+NL_J)}{\alpha} e^{\omega t} \left( 1-e^{-\alpha (t-\theta_{k_0-1})} \right) \\ && + \displaystyle \frac{c_2 NL_J (1+NL_J)}{1-e^{-\alpha \theta}} e^{\omega t} \left( 1-e^{-\alpha (t-\theta_{k_0-1}+\theta)} \right).
\end{eqnarray*} If we multiply both sides of the last inequality by $e^{-\omega t},$ then we obtain that \begin{eqnarray*} && \left\|\phi_{\eta}(t) - \phi_{\zeta}(t) \right\| \le c_1 (1+NL_J) e^{-\omega \theta_{k_0-1}} e^{-\alpha(t-\theta_{k_0-1})} + c_2\\ && + \displaystyle \frac{c_2 NL_f (1+NL_J)}{\alpha} \left( 1-e^{-\alpha (t-\theta_{k_0-1})} \right) \\ && + \displaystyle \frac{c_2 NL_J (1+NL_J)}{1-e^{-\alpha \theta}} \left( 1-e^{-\alpha (t-\theta_{k_0-1}+\theta)} \right) \\ && < 2N(1+NL_J) \left( \frac{M_f+M_F}{\omega} + \frac{M_J+M_F}{1-e^{-\omega \theta}} \right) e^{-\alpha (t-\theta_{k_0-1})} \\ && + \displaystyle \frac{N\epsilon}{\gamma}\left(\frac{1}{\omega}+\frac{1}{1-e^{-\omega\theta}}\right)\left(1+\frac{N L_f(1+N L_J)}{\alpha}+\frac{N L_J(1+N L_J)}{1-e^{-\alpha \theta}}\right). \end{eqnarray*} Now, let $R> \theta_{k_0-1}$ be a sufficiently large real number such that \begin{eqnarray*} \displaystyle 2N(1+N L_J)\left(\displaystyle\frac{M_f+M_F}{\omega}+\frac{M_J+M_F}{1-e^{-\omega \theta}}\right)e^{-\alpha (R-\theta_{k_0-1})}\le \frac{\epsilon}{\gamma}. \end{eqnarray*} For $t\ge R,$ we have \begin{eqnarray*} \Big\|\phi_{\eta}(t)-\phi_{\zeta}(t)\Big\|<\displaystyle\frac{\epsilon}{\gamma}\Big[1+N\Big(\frac{1}{\omega}+\frac{1}{1-e^{-\omega\theta}}\Big) \Big(1+\frac{N L_f(1+N L_J)}{\alpha}+\frac{N L_J(1+N L_J)}{1-e^{-\alpha \theta}}\Big)\Big] \le \epsilon. \end{eqnarray*} Therefore, $\displaystyle \lim_{t \to \infty} \left\|\phi_{\eta}(t)-\phi_{\zeta}(t)\right\|=0.$ Consequently, $\phi_{\eta}(t)\in W^{s}(\phi_{\zeta}(t)).$ $\square$ In the next lemma, we reveal the connection between the unstable sets of the solutions of (\ref{impulsive_system}) and (\ref{discrete_map}). 
\begin{lemma}\label{lemma2} Suppose that the conditions $(C1)-(C6)$ are fulfilled, and let $\zeta$ and $\eta$ be elements of $\Theta.$ If $\eta\in W^{u}(\zeta),$ then $\phi_{\eta}(t)\in W^{u}(\phi_{\zeta}(t)).$ \end{lemma} \noindent \textbf{Proof.} Fix an arbitrary positive number $\epsilon,$ and let $\lambda$ be a real number such that $$\lambda > \frac{N(\omega + 1 -e^{-\omega\theta})}{\omega(1-e^{-\omega\theta})-N(L_f(1-e^{-\omega\theta})+L_J\omega)}.$$ Since $\eta=\left\{\eta_k\right\},$ $k\in\mathbb Z,$ is an element of the unstable set $W^{u}(\zeta)$ of $\zeta=\left\{\zeta_k\right\},$ there exists an integer $k_0$ such that $\left\|\eta_k-\zeta_k\right\|<\displaystyle\frac{\epsilon}{\lambda}$ for all $k\le k_0.$ In this case, we have that $\left\|g(t,\eta)-g(t,\zeta)\right\|<\displaystyle \frac{\epsilon}{\lambda}$ for $t\le\theta_{k_0}.$ By using the relation \begin{eqnarray*} && \phi_{\eta}(t) - \phi_{\zeta}(t) = \displaystyle \int_{-\infty}^{t} U(t,s) \left[ f(s,\phi_{\eta}(s)) - f(s,\phi_{\zeta}(s)) + g(s,\eta)- g(s,\zeta) \right] ds \\ && + \displaystyle \sum_{-\infty < \theta_k < t} U(t,\theta_k+) \left[ J_k (\phi_{\eta}(\theta_k)) - J_k (\phi_{\zeta}(\theta_k)) +\eta_k - \zeta_k \right], \end{eqnarray*} one can verify for $t \le \theta_{k_0}$ that \begin{eqnarray*} && \left\|\phi_{\eta}(t) - \phi_{\zeta}(t)\right\| < \displaystyle \int^t_{-\infty} N e^{-\omega (t-s)} \left( L_f \left\|\phi_{\eta}(s) - \phi_{\zeta}(s)\right\| +\frac{\epsilon}{\lambda} \right) ds \\ && + \displaystyle \sum_{-\infty < \theta_k < t} N e^{-\omega (t-\theta_k)} \left( L_J \left\|\phi_{\eta}(\theta_k) - \phi_{\zeta}(\theta_k)\right\| +\frac{\epsilon}{\lambda} \right) \\ && \le \frac{N}{\omega} \left( L_f \sup_{t\le\theta_{k_0}} \left\| \phi_{\eta}(t) - \phi_{\zeta}(t) \right\|+ \frac{\epsilon}{\lambda} \right) + \frac{N}{1-e^{-\omega \theta}} \left( L_J \sup_{t\le\theta_{k_0}} \left\| \phi_{\eta}(t) - \phi_{\zeta}(t) \right\| + \frac{\epsilon}{\lambda} \right). 
\end{eqnarray*} Therefore, \begin{eqnarray*} \left( 1-\frac{NL_f}{\omega}-\frac{NL_J}{1-e^{-\omega \theta}} \right) \sup_{t\le\theta_{k_0}} \left\| \phi_{\eta}(t) - \phi_{\zeta}(t) \right\| \le \frac{N\epsilon}{\lambda} \left( \frac{1}{\omega} + \frac{1}{1-e^{-\omega \theta}} \right). \end{eqnarray*} The last inequality implies that $\displaystyle\sup_{t\le\theta_{k_0}} \left\| \phi_{\eta}(t) - \phi_{\zeta}(t) \right\| < \epsilon.$ Consequently, $$\displaystyle \lim_{t \to -\infty} \left\|\phi_{\eta}(t)-\phi_{\zeta}(t)\right\|=0,$$ and $\phi_{\eta}(t)$ belongs to $W^u(\phi_{\zeta}(t)).$ $\square$ The main result of the present paper is stated in the following theorem, which can be proved by using the results of Lemma \ref{lemma1} and Lemma \ref{lemma2}. \begin{theorem}\label{main_theorem} Under the conditions $(C1)-(C7),$ the following assertions are valid. \begin{enumerate} \item[(i)] If $\eta \in \Theta$ is homoclinic to $\zeta \in \Theta,$ then $\phi_{\eta}(t) \in \mathscr{A}$ is homoclinic to $\phi_{\zeta}(t) \in \mathscr{A};$ \item[(ii)] If $\eta \in \Theta$ is heteroclinic to $\zeta^1,$ $\zeta^2 \in \Theta,$ then $\phi_{\eta}(t) \in \mathscr{A}$ is heteroclinic to $\phi_{\zeta^1}(t),$ $\phi_{\zeta^2}(t) \in \mathscr{A};$ \item[(iii)] If $\Theta$ is hyperbolic, then the same is true for $\mathscr{A}.$ \end{enumerate} \end{theorem} The next section is devoted to examples concerning homoclinic and heteroclinic motions in an impulsive Duffing equation.
\section{Examples} \label{examples_sec} Let us consider the impulsive Duffing equation \begin{eqnarray} \label{imp_Duf} \begin{array}{l} x'' + 0.2 x' + 0.81 x + 0.001 x^3 = 0.7 \displaystyle \cos\left(\frac{2\pi}{3} t\right) + g(t,\zeta), \ t\neq \theta_k, \\ \Delta x |_{t= \theta_k} = -0.12 x + 0.09+\zeta_k, \\ \Delta x' |_{t= \theta_k} = -0.12 x'+ 0.015 \sin(x), \end{array} \end{eqnarray} where $\theta_k=3k,$ $k\in \mathbb Z,$ the function $g(t,\zeta)$ is defined through the equation $g(t,\zeta)=\zeta_k,$ $t \in (\theta_{k-1},\theta_k],$ and the sequence $\zeta=\left\{\zeta_k\right\}$ is a solution of the logistic map \begin{eqnarray} \label{logistic_map} \zeta_{k+1}=F_{\mu}(\zeta_k), \end{eqnarray} where $F_{\mu}(s)=\mu s (1-s)$ and $\mu$ is a parameter. For $0<\mu\leq 4,$ the interval $[0,1]$ is invariant under the iterations of (\ref{logistic_map}) \cite{Dev90,Hale91,Rob95}, and the inverses of the function $F_{\mu}$ on the intervals $[0,1/2]$ and $[1/2,1]$ are $ h_1(s)=\displaystyle \frac{1}{2} \left( 1-\sqrt{1-\frac{4s}{\mu}} \right) $ and $ h_2(s)=\displaystyle \frac{1}{2} \left( 1+\sqrt{1-\frac{4s}{\mu}} \right), $ respectively. By using the new variables $x_1=x$ and $x_2=x',$ one can reduce (\ref{imp_Duf}) to the system \begin{eqnarray} \label{imp_Duf_system} \begin{array}{l} x_1'=x_2, \\ x_2' = - 0.81 x_1 - 0.2 x_2 - 0.001 x_1^3 + 0.7 \displaystyle \cos \left(\frac{2\pi}{3} t\right) + g(t,\zeta), \ t\neq \theta_k, \\ \Delta x_1 |_{t= \theta_k} = -0.12 x_1 +0.09+ \zeta_k, \\ \Delta x_2 |_{t= \theta_k} = -0.12 x_2+ 0.015 \sin(x_1) . \end{array} \end{eqnarray} Denote by $U(t,s)$ the transition matrix of the linear homogeneous system \begin{eqnarray} \label{linear_homogenous_system_imp} \begin{array}{l} u'_1=u_2, \\ u'_2=-0.81 u_1 - 0.2 u_2, ~t \neq \theta_k, \\ \Delta u_1|_{t=\theta_k} = -0.12 u_1, \\ \Delta u_2|_{t=\theta_k} = -0.12 u_2.
\end{array} \end{eqnarray} One can verify for $t> s$ that \[U(t,s)= e^{-(t-s)/10} \left(\frac{22}{25}\right)^{i([s,t))} P\left( \begin {array}{ccc} \cos \Big(\frac{2}{\sqrt{5}}(t-s)\Big)&- \sin \Big(\frac{2}{\sqrt{5}}(t-s)\Big)\\ \noalign{\medskip} \sin \Big(\frac{2}{\sqrt{5}}(t-s)\Big)& \cos \Big(\frac{2}{\sqrt{5}}(t-s)\Big) \end {array} \right)P^{-1},\] where $i([s,t))$ is the number of the terms of the sequence $\left\{\theta_k\right\}$ that belong to the interval $[s,t)$ and $P=\left( \begin {array}{ccc} 0&1\\ \noalign{\medskip} 2/\sqrt{5}&-1/10 \end {array} \right).$ It can be calculated that $\left\|U(t,s)\right\|\le N e^{-\omega (t-s)},$ $t\ge s,$ where $\omega=1/10$ and $N=1.17.$ For $0 < \mu \le 4$ the bounded solutions of (\ref{imp_Duf_system}) lie inside the compact region $$D=\left\{ (x_1,x_2) \in \mathbb R^2: \left|x_1\right| \le 2.8, \ \left|x_2\right| \le 1.4 \right\},$$ and the conditions $(C1)-(C7)$ are valid for system (\ref{imp_Duf_system}). It is worth noting that for a periodic solution $\zeta=\left\{\zeta_k\right\}$ of (\ref{logistic_map}) the corresponding bounded solution $\phi_{\zeta}(t)$ of (\ref{imp_Duf_system}) is also periodic. Consider the map (\ref{logistic_map}) with $\mu=3.9.$ It was demonstrated in \cite{Avrutin15} that the orbit $$\eta=\left\{\ldots, h^3_2(\eta_0), h_2^2(\eta_0), h_2(\eta_0), \eta_0, F_{\mu}(\eta_0), F^2_{\mu}(\eta_0), F^3_{\mu}(\eta_0), \ldots \right\},$$ where $\eta_0=1/3.9,$ is homoclinic to the fixed point $\eta^{*}=2.9/3.9$ of (\ref{logistic_map}). Denote by $\phi_{\eta}(t)$ and $\phi_{\eta^*}(t)$ the bounded solutions of (\ref{imp_Duf_system}) corresponding to $\eta$ and $\eta^*,$ respectively. 
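The homoclinic structure of the orbit $\eta$ just described can be checked numerically. The following minimal sketch (plain Python; all names are ours, not from the paper) iterates $F_{\mu}$ forward from $\eta_0=1/3.9$ and the inverse branch $h_2$ backward, and confirms that both tails approach the fixed point $\eta^{*}=2.9/3.9$:

```python
import math

MU = 3.9

def F(s):
    # logistic map F_mu(s) = mu*s*(1-s)
    return MU * s * (1.0 - s)

def h2(s):
    # inverse branch of F_mu on the interval [1/2, 1]
    return 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * s / MU))

eta0 = 1.0 / 3.9        # starting point of the homoclinic orbit
eta_star = 2.9 / 3.9    # fixed point of F_mu (and of h2)

# forward tail: F(eta0) = 1 - 1/3.9 = eta*, so the orbit lands on eta* at once
fwd = eta0
for _ in range(15):
    fwd = F(fwd)

# backward tail: h2 is a contraction near eta*, so h2^k(eta0) -> eta*
bwd = eta0
for _ in range(40):
    bwd = h2(bwd)

print(abs(fwd - eta_star) < 1e-8, abs(bwd - eta_star) < 1e-8)  # True True
```

Keeping the forward loop short is deliberate: $\eta^{*}$ is a repelling fixed point of $F_{\mu}$ ($|F_{\mu}'(\eta^{*})|=1.9>1$), so rounding errors would eventually be amplified away from it, whereas the backward branch $h_2$ contracts toward $\eta^{*}$.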
One can conclude by using Theorem \ref{main_theorem} that $\phi_{\eta}(t)$ is homoclinic to the periodic solution $\phi_{\eta^*}(t).$ Figure \ref{fig1} shows the graphs of the $x_1-$coordinates of $\phi_{\eta}(t)$ and $\phi_{\eta^*}(t).$ In the figure, the solution $\phi_{\eta}(t)$ is represented in blue color, while $\phi_{\eta^*}(t)$ is represented in red color. Figure \ref{fig1} reveals that $\phi_{\eta}(t)$ is homoclinic to $\phi_{\eta^*}(t),$ i.e., $\left\|\phi_{\eta}(t)-\phi_{\eta^*}(t)\right\| \to 0$ as $t \to \pm \infty.$ \begin{figure}[ht] \centering \includegraphics[width=11.0cm]{fig1.eps} \caption{\footnotesize Homoclinic solution of (\ref{imp_Duf_system}). The $x_1-$coordinates of $\phi_{\eta}(t)$ and $\phi_{\eta^*}(t)$ are shown in blue and red colors, respectively. The figure manifests that $\phi_{\eta}(t)$ is homoclinic to $\phi_{\eta^*}(t).$ } \label{fig1} \end{figure} Now, we set $\mu=4$ in equation (\ref{logistic_map}). According to \cite{Avrutin15}, the orbit $$\widetilde{\eta}=\left\{\ldots, h^3_1(\widetilde{\eta}_0), h_1^2(\widetilde{\eta}_0), h_1(\widetilde{\eta}_0), \widetilde{\eta}_0, F_{\mu}(\widetilde{\eta}_0), F^2_{\mu}(\widetilde{\eta}_0), F^3_{\mu}(\widetilde{\eta}_0), \ldots \right\},$$ where $\widetilde{\eta}_0=1/4,$ is heteroclinic to the fixed points $\eta^1=3/4$ and $\eta^2=0$ of (\ref{logistic_map}). Suppose that $\phi_{\widetilde{\eta}}(t),$ $\phi_{\eta^1}(t)$ and $\phi_{\eta^2}(t)$ are the bounded solutions of (\ref{imp_Duf_system}) corresponding to $\widetilde{\eta},$ $\eta^1$ and $\eta^2,$ respectively. Theorem \ref{main_theorem} implies that $\phi_{\widetilde{\eta}}(t)$ is heteroclinic to the periodic solutions $\phi_{\eta^1}$ and $\phi_{\eta^2}.$ Figure \ref{fig2} shows the graphs of the $x_1-$coordinates of $\phi_{\widetilde{\eta}}(t),$ $\phi_{\eta^1}(t)$ and $\phi_{\eta^2}(t)$ in blue, red and green colors, respectively. 
The figure supports Theorem \ref{main_theorem}: $\phi_{\widetilde{\eta}}(t)$ converges to $\phi_{\eta^1}(t)$ as time increases and to $\phi_{\eta^2}(t)$ as time decreases, i.e., $\phi_{\widetilde{\eta}}(t)$ is heteroclinic to $\phi_{\eta^1}(t),$ $\phi_{\eta^2}(t).$ \begin{figure}[ht] \centering \includegraphics[width=11.0cm]{fig2.eps} \caption{\footnotesize Heteroclinic solution of (\ref{imp_Duf_system}). The $x_1-$coordinates of $\phi_{\widetilde{\eta}}(t),$ $\phi_{\eta^1}(t)$ and $\phi_{\eta^2}(t)$ are represented in blue, red and green colors, respectively. The figure confirms that $\phi_{\widetilde{\eta}}(t)$ is heteroclinic to the periodic solutions $\phi_{\eta^1}(t),$ $\phi_{\eta^2}(t).$} \label{fig2} \end{figure} \section{Conclusions} \label{conc} In this study, we rigorously prove the presence of homoclinic and heteroclinic motions in hybrid systems with impacts. The dynamics of the system under consideration consist of an impulsive differential equation and a discrete map, which influences the former. According to our results, homoclinic and heteroclinic orbits of the discrete map give rise to the emergence of homoclinic and heteroclinic motions in the impulsive system. The presented technique is suitable for designing mechanical and electrical impulsive systems with homoclinic and heteroclinic motions, without any restriction on the dimension. One can take advantage of our approach to investigate the presence of such motions in hybrid systems with impacts. An impulsive Duffing equation is utilized to illustrate the results of the paper. The provided examples show the applicability of our results. \section*{Acknowledgments} The authors wish to express their sincere gratitude to the referees for the helpful criticism and valuable suggestions, which helped to improve the paper significantly. This work is supported by the 2219 scholarship programme of T\"{U}B\.{I}TAK, the Scientific and Technological Research Council of Turkey.
\section{Examples of volatility estimators} Consider the dependence on time $t$ of the price $P(t)$ of some financial instrument. As a rule, when discussing volatility, one considers its logarithm \[ X(t) := \ln P(t) . \] Let us point out the conventional definition of the volatility $V(T)$ used in this paper: it is the variance \begin{equation}\label{sqvoldef} V(T):=\bold{Var}\left[Y(t,T)\right] = \bold{E}\left[Y^2(t,T)\right] - \bold{E}^2\left[Y(t,T)\right] . \end{equation} of the log-price increment $Y(t,T) := X(t+T) - X(t)$ within a given time interval of duration $T$. Recall that the Garman-Klass (G\&K) \cite{Garman1980}, Parkinson (PARK) \cite{PARK1980} and Rogers-Satchell (R\&S) \cite{Roger1991} volatility estimators rest on the high and low values: \begin{equation}\label{hilidef} H := \sup_{t'\in(0,T)} Y(t,t') , \qquad L := \inf_{t'\in(0,T)} Y(t,t') . \end{equation} Accordingly, the PARK estimator is equal to \begin{equation}\label{Park} \hat{V}_p := \frac{(H-L)^2}{\ln 16} , \end{equation} while the G\&K estimator is given by the expression \begin{equation}\label{GKest} \begin{array}{c} \displaystyle \hat{V}_g := k_1 (H-L)^2-k_2 (C(H-L)- 2 H L) - k_3 C^2 , \\ \displaystyle k_1 = 0.511 , \qquad k_2 = 0.0109 , \qquad k_3 = 0.383 . \end{array} \end{equation} Here $C:= Y(t,T)$ is the close value of the log-price increment. Recall also the R\&S estimator, equal to \begin{equation}\label{RSest} \hat{V}_r := H (H-C) +L (L-C). \end{equation} Besides the well-known estimators mentioned above, we discuss the \emph{bridge oscillation estimator}; below we call it the \emph{bridge estimator} for short. Before defining it, recall the definition of the bridge stochastic process $Z(t,t')$: \begin{equation}\label{bridgorgdef} Z(t,t') := Y(t,t') - \frac{t'}{T} ~ Y(t,T) , \qquad t' \in (0,T) . \end{equation} Let us introduce the high and low of the bridge: \begin{equation}\label{hilibridgedef} \mathcal{H} := \max_{t'\in(0,T)} Z(t,t') , \qquad \mathcal{L} := \min_{t'\in(0,T)} Z(t,t') ,
\end{equation} Accordingly, the bridge volatility estimator mentioned above is given by \begin{equation}\label{bridsqvoldef} \hat{V}_b := \kappa \left(\mathcal{H}-\mathcal{L}\right)^2 . \end{equation} The value of the factor $\kappa$ will be calculated later. \section{Geometric Brownian motion} One of the conventional models of stochastic price behavior is geometric Brownian motion (see \cite{Jeanblanc2009,Cont2004,Saichev2010}). In particular, it is used in the theoretical justification of the G\&K, PARK and R\&S estimators. Below we discuss the statistics of the mentioned volatility estimators within the framework of the geometric Brownian motion model. Namely, we assume that the increment of the log-price has the form \begin{equation}\label{musigwbrm} Y(t,T) = \mu T + \sigma B(T) . \end{equation} Here $\mu$ is the drift of the price, while $B(t)$ is the standard Brownian motion, $B(t)\sim \mathcal{N}(0,t)$. The factor $\sigma^2$ is the intensity of the Brownian motion. Recall that Brownian motion possesses the self-similarity property \begin{equation}\label{bsimasq} B(t) \sim \sqrt{T}\, B\left(\frac{t}{T}\right) , \qquad \forall~ T > 0 , \end{equation} where here and below the sign $\sim$ denotes equality in law. Using this self-similarity property, one can verify that \begin{equation}\label{xtsimxtau} \begin{array}{c}\displaystyle Y(t,t') \sim \sigma \sqrt{T} ~ x(\tau,\gamma), \\[4mm] \displaystyle x(\tau,\gamma):= \gamma \tau + B(\tau) , \qquad \gamma := \frac{\mu}{\sigma} \sqrt{T} , \qquad \tau := \frac{t'}{T} \in(0,1) . \end{array} \end{equation} Henceforth we call the process $x(\tau,\gamma)$ the \emph{canonical Brownian motion} and the factor $\gamma$ the \emph{canonical drift}. Using relations \eqref{Park}, \eqref{GKest}, \eqref{RSest}, \eqref{bridsqvoldef} and \eqref{xtsimxtau}, one finds that \[ \begin{array}{c} \hat{V}_p \sim V(T) \cdot \hat{v}_p(\gamma) , \qquad \hat{V}_g \sim V(T) \cdot \hat{v}_g(\gamma) , \qquad \hat{V}_b \sim V(T) \cdot \hat{v}_b , \\[2mm] \hat{V}_r \sim V(T) \cdot \hat{v}_r(\gamma) , \qquad V(T) = \sigma^2 T .
\end{array} \] Above we have used the \emph{canonical estimators}: \begin{equation}\label{canpgoests} \begin{array}{c} \displaystyle \hat{v}_p(\gamma) := \frac{d^2}{\ln 16} , \qquad \hat{v}_b := \kappa s^2 , \qquad d := h-l , \qquad s := \xi-\zeta , \\[4mm] \hat{v}_g(\gamma) := k_1 d^2-k_2 (c d- 2 h l) - k_3 c^2 , \qquad \hat{v}_r = h (h-c) + l(l-c) , \end{array} \end{equation} containing the high, low and close values \begin{equation}\label{hlcdef} h := \sup_{\tau\in(0,1)} x(\tau,\gamma) , \qquad l := \inf_{\tau\in(0,1)} x(\tau,\gamma) , \qquad c := x(1,\gamma) , \end{equation} of the canonical Brownian motion, and the high and low values \begin{equation}\label{extrbridge} \xi := \sup_{\tau\in(0,1)} z(\tau) , \qquad \zeta := \inf_{\tau\in(0,1)} z(\tau) , \end{equation} of the canonical bridge \begin{equation}\label{mostzdef} z(\tau) := x(\tau,\gamma)- \tau x(1,\gamma) = B(\tau) - \tau \cdot B(1) , \qquad \tau\in(0,1) . \end{equation} Plots of typical paths of the canonical Brownian motion $x(\tau,\gamma)$ \eqref{xtsimxtau} for $\gamma=1$ and the corresponding canonical bridge $z(\tau)$ \eqref{mostzdef} are given in figure~\ref{winbridgam10}. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{01.eps}\\ \end{center} \caption{Typical paths of the canonical Brownian motion $x(\tau,\gamma)$ \eqref{xtsimxtau} for $\gamma=1$ and the corresponding canonical bridge $z(\tau)$ \eqref{mostzdef}}\label{winbridgam10} \end{figure} It is worthwhile to note that the closer the expected values of the canonical estimators $\hat{v}_p(\gamma)$, $\hat{v}_g(\gamma)$, $\hat{v}_r$ and $\hat{v}_b$ are to unity, the less biased the corresponding original volatility estimators are. Analogously, the smaller the variances of the canonical estimators, the more efficient the original volatility estimators $\hat{V}_p$, $\hat{V}_g$, $\hat{V}_r$ and $\hat{V}_b$. Notice additionally that the canonical drift $\gamma$ of the canonical Brownian motion $x(\tau,\gamma)$ \eqref{xtsimxtau} is, as a rule, unknown.
Nevertheless, to get some idea about the dependence on the drift $\mu$ of the bias and efficiency of the volatility estimators, we discuss below in detail how the statistical properties of the canonical estimators depend on the possible values of the factor $\gamma$. \section{Comparative efficiency of PARK and bridge estimators} Resting on the analytical formulas, given in the Appendix, for the probability density functions (pdfs) of the random variables \eqref{hlcdef} and \eqref{extrbridge}, we explore in this section some statistical properties of the canonical PARK estimator $\hat{v}_p(\gamma)$ and the bridge one $\hat{v}_b$ \eqref{canpgoests}. Let us first check the unbiasedness of the canonical PARK estimator. To do this, we calculate, with the help of the pdf $q_x(\delta)$ \eqref{qdelexpr}, the mean square of the oscillation $d=h-l$ of the canonical Brownian motion $x(\tau,\gamma)$ at zero canonical drift ($\gamma=0$). After simple calculations we obtain \begin{equation}\label{matexphlsq} \bold{E}[d^2] = 2 + \sum_{m=1}^\infty \frac{2}{m (4 m^2-1)} = \ln 16. \end{equation} From this and from the expression \eqref{canpgoests} for the canonical PARK estimator $\hat{v}_p(\gamma)$ one can see that \[ \bold{E}[\hat{v}_p(\gamma=0)] = 1 . \] Let us now find the factor $\kappa$ in expressions \eqref{bridsqvoldef} and \eqref{canpgoests}. To do this, we first calculate the mean square of the bridge oscillation. Due to the expression \eqref{rhodelexp} for the pdf of the bridge oscillation $s$ \eqref{canpgoests}, we have \[ \bold{E}[s^2] = \sum_{m=1}^\infty \frac{1}{m^2} = \frac{\pi^2}{6} . \] Accordingly, the unbiased canonical bridge estimator has the form \begin{equation}\label{bgamma6pi} \bold{E}[\hat{v}_b] = 1 \quad \Rightarrow \quad \kappa =\frac{1}{\bold{E}[s^2]} \quad \Rightarrow \quad \hat{v}_b = \frac{6\, s^2}{\pi^2} . \end{equation} The great advantage of the bridge estimator is its unbiasedness for any drift.
This remarkable property of the bridge estimator is a consequence of the fact that the bridge $Z(t,t')$ \eqref{bridgorgdef} and its canonical counterpart $z(\tau)$ do not depend on the drift $\mu$ (canonical drift $\gamma$) at all. On the contrary, the PARK estimator becomes essentially biased at nonzero drift. Figure~\ref{parkmeangam} depicts the dependence on $\gamma$ of the expected value of the canonical PARK estimator, illustrating the bias of the PARK estimator at nonzero drift. The corresponding curve is obtained with the help of the analytical expression \eqref{qdelgamexpr} for the pdf of the canonical Brownian motion oscillation. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{02.eps} \end{center} \caption{Plot of the mean value of the canonical PARK estimator $\hat{v}_p(\gamma)$ as a function of the canonical drift $\gamma$. It is seen that with the growth of $\gamma$ the PARK estimator becomes more and more biased. The straight line is the mean value of the canonical bridge estimator $\hat{v}_b$}\label{parkmeangam} \end{figure} Let us calculate the variances of the canonical PARK and bridge estimators. After substituting into the rhs of the expression \[ \bold{E}[\hat{v}^2_p(\gamma=0)] := \frac{1}{\ln^2 16}\int_0^\infty \delta^4 q_x(\delta) d\delta \] the sum \eqref{qdelexpr} for the pdf $q_x(\delta)$ of the canonical Brownian motion oscillation, and after summation, we obtain for $\gamma=0$: \[ \bold{E}[\hat{v}^2_p(\gamma=0)] = \frac{9 \, \zeta(3)}{\ln^2 16} \simeq 1.40733 . \] Accordingly, the variance of the canonical PARK estimator $\hat{v}_p$ is \begin{equation}\label{parkvargamzero} \bold{Var}[\hat{v}_p(0)] = \frac{9 \, \zeta(3)}{\ln^2 16} -1 \simeq 0.407 . \end{equation} As the next step, we calculate the variance of the canonical bridge estimator $\hat{v}_b$ \eqref{bgamma6pi}. The sought variance is equal to \[ \bold{Var}[\hat{v}_b] := \frac{36}{\pi^4} ~ \bold{E}[s^4] -1 .
\] Substituting here the relation \[ \bold{E}[s^4] := \int_0^2 \delta^4 q_b(\delta) d\delta = 3 \sum_{m=1}^\infty \frac{1}{m^4} = \frac{\pi^4}{30} , \] which follows from \eqref{rhodelexp}, we obtain \begin{equation}\label{varbridge} \bold{Var}[\hat{v}_b] = \frac{6}{5} -1 = 0.2 . \end{equation} Comparing equalities \eqref{parkvargamzero} and \eqref{varbridge}, one can see that the variance of the bridge estimator is approximately half the variance of the PARK estimator. Recall that the variance of the bridge estimator does not depend on the drift. On the contrary, the variance of the PARK estimator depends essentially on the drift. This can be seen in figure~\ref{parkvargam}, where the dependence of the canonical PARK estimator variance on the canonical drift $\gamma$ is depicted. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{03.eps} \end{center} \caption{Dependence on $\gamma$ of the canonical PARK estimator variance. The straight line is the variance of the canonical bridge estimator}\label{parkvargam} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{04.eps} \end{center} \caption{Plot of the relative bias \eqref{varrhodef} of the canonical PARK estimator as a function of the canonical drift $\gamma$}\label{rhopgam} \end{figure} Notice also that the bias of an estimator is insignificant only if it is much smaller than the rms deviation of the corresponding estimator, i.e. if the relative bias \begin{equation}\label{varrhodef} \varrho := \frac{\bold{E}[\hat{v}(\gamma)]-1}{\sqrt{\bold{Var}[\hat{v}(\gamma)]}} \end{equation} is small. The plot of the relative bias of the canonical PARK estimator, as a function of the canonical drift $\gamma$, is depicted in figure~\ref{rhopgam}.
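The constants obtained in this section are easy to verify numerically. The short check below (plain Python, purely illustrative; variable names are ours) evaluates the partial sums behind \eqref{matexphlsq}, \eqref{bgamma6pi}, \eqref{parkvargamzero} and \eqref{varbridge}:

```python
import math

M = 200000  # number of series terms; the tails decay fast enough

# E[d^2] = 2 + sum 2/(m(4m^2-1)) should equal ln 16, eq. (matexphlsq)
Ed2 = 2.0 + sum(2.0 / (m * (4.0 * m * m - 1.0)) for m in range(1, M + 1))

# E[s^2] = sum 1/m^2 = pi^2/6, hence kappa = 6/pi^2, eq. (bgamma6pi)
Es2 = sum(1.0 / (m * m) for m in range(1, M + 1))

# Var[v_p(0)] = 9*zeta(3)/ln^2(16) - 1 ~ 0.407, eq. (parkvargamzero)
zeta3 = sum(1.0 / m ** 3 for m in range(1, M + 1))  # Apery's constant
var_p = 9.0 * zeta3 / math.log(16.0) ** 2 - 1.0

# Var[v_b] = (36/pi^4) * (pi^4/30) - 1 = 6/5 - 1 = 0.2, eq. (varbridge)
var_b = (36.0 / math.pi ** 4) * (math.pi ** 4 / 30.0) - 1.0

print(Ed2 - math.log(16.0), Es2 - math.pi ** 2 / 6.0, var_p, var_b)
```

The first two printed differences are tiny (the remaining discrepancy in the second one is just the $\sim 1/M$ tail of the slowly converging series $\sum 1/m^2$), while the last two values reproduce $0.407$ and $0.2$.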
\section{Interval estimations on the basis of PARK and bridge estimators} The analytical expressions \eqref{qdelgamexpr}, \eqref{qdelexpr} and \eqref{rhodelexp}, given in the Appendix, for the pdfs of the random oscillations of the canonical Brownian motion and the canonical bridge allow us to explore in detail the probabilistic properties of the PARK and bridge canonical estimators. Let us first find the pdfs of these canonical estimators. It is well known from probability theory that the pdf $W_p(x;\gamma)$ of the canonical PARK estimator is expressed through the pdf $q_x(\delta;\gamma)$ \eqref{qdelgamexpr} of the canonical Brownian motion oscillation by the relation \begin{equation}\label{Wpxgam} W_p(x;\gamma) = \sqrt{\frac{\alpha}{4 x}} ~ q_x\left(\sqrt{\alpha x};\gamma\right) , \qquad \alpha = \ln 16 . \end{equation} Similarly, the pdf of the canonical bridge estimator is equal to \begin{equation}\label{Wpxbridge} W_b(x) = \sqrt{\frac{\alpha}{4 x}} ~ q_b\left(\sqrt{\alpha x}\right) , \qquad \alpha = \frac{\pi^2}{6} . \end{equation} Here $q_b(\delta)$ \eqref{rhodelexp} is the pdf of the canonical bridge oscillation. Plots of the canonical PARK estimator pdf for $\gamma=0$ and of the canonical bridge estimator pdf are depicted in figure~\ref{pdfspbzero}. Figure~\ref{pdfspbone} compares the pdf of the canonical PARK estimator for $\gamma=1$ with the pdf of the canonical bridge estimator. It is seen in both figures that the pdf of the canonical bridge estimator is better concentrated around its expected value $\bold{E}[\hat{v}_b]=1$ than the canonical PARK estimator pdf. Knowing the estimators' pdfs, one can produce interval estimations of possible volatility values. Consider a typical interval estimation: let $\hat{V}$ be some volatility estimator, equal to \begin{equation}\label{hatveqvhatv} \hat{V} = V(T) \cdot \hat{v} . \end{equation} Here $\hat{v}$ is the corresponding canonical estimator, while $V(T)$ is the true volatility.
One needs to find the probability \[ F(N) := \bold{Pr}\left\{V(T) < N \cdot \hat{V} \right\} \] that the unknown (random) volatility $V(T)$ exceeds the known (measured) estimate $\hat{V}$ by less than a factor of $N$. It follows from \eqref{hatveqvhatv} that the following inequalities are equivalent: \[ V(T) < N \cdot \hat{V} \qquad \Leftrightarrow \qquad \hat{v} > 1\big/ N . \] This means in turn that the sought probability $F(N)$ is expressed through the pdf of the canonical estimator $\hat{v}$ in the following way: \begin{equation}\label{fnexpr} F(N) = \bold{Pr}\left\{\hat{v} > 1\big/ N \right\} = \int_{1/N}^\infty W(x) dx . \end{equation} Here $W(x)$ is the pdf of the canonical estimator $\hat{v}$. \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{05.eps} \end{center} \caption{Plots of the canonical PARK and bridge estimator pdfs, clearly demonstrating the ``probabilistic preference'' of the bridge estimator in comparison with the PARK one}\label{pdfspbzero} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{06.eps} \end{center} \caption{Plots of the PARK and bridge canonical estimator pdfs for $\gamma=1$}\label{pdfspbone} \end{figure} Calculations resting on relations \eqref{Wpxgam}, \eqref{Wpxbridge} and \eqref{fnexpr} give the probability $F_b(2)\simeq 0.918$ that the true volatility is less than twice the given bridge volatility estimate $\hat{V}_b$. It is substantially larger than the analogous probability in the case of the PARK estimator: $F_p(2,\gamma=0)\simeq 0.813$. The dependence of the probabilities $F(N)$ \eqref{fnexpr} on the level $N$, for the PARK estimator (in the case of zero drift, $\mu=0$) and for the bridge volatility estimator, is shown in figure~\ref{fleveln}.
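The values $F_p(2)\simeq 0.813$ and $F_b(2)\simeq 0.918$ can be cross-checked by a crude Monte Carlo experiment. The sketch below (plain Python, our own naming, zero drift; note that the discrete grid slightly underestimates the continuous extremes, so the estimates come out a little low):

```python
import math
import random

rng = random.Random(7)
N, M = 1000, 3000            # grid points per path, number of paths
hits_p = hits_b = 0
for _ in range(M):
    x, path = 0.0, [0.0]
    for _ in range(N):       # discretized canonical Brownian motion, gamma = 0
        x += rng.gauss(0.0, 1.0) / math.sqrt(N)
        path.append(x)
    c = path[-1]
    d = max(path) - min(path)                        # oscillation of x
    bridge = [path[k] - (k / N) * c for k in range(N + 1)]
    s = max(bridge) - min(bridge)                    # oscillation of the bridge
    hits_p += d * d / math.log(16.0) > 0.5           # v_p > 1/2 <=> V(T) < 2*V_p
    hits_b += 6.0 * s * s / math.pi ** 2 > 0.5       # v_b > 1/2 <=> V(T) < 2*V_b
print(hits_p / M, hits_b / M)   # to be compared with 0.813 and 0.918
```

With a few thousand paths the two frequencies land near the quoted theoretical values, with the bridge frequency clearly above the PARK one.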
\begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{07.eps} \end{center} \caption{Plots of the probabilities $F_p(N)$ and $F_b(N)$ that the true volatility exceeds the values of the PARK and bridge estimators, respectively, by less than a factor of $N$}\label{fleveln} \end{figure} \section{Comparative statistics of canonical estimators} Above, we explored in detail the statistical properties of two estimators: PARK and bridge. Here we compare their statistics with those of other well-known volatility estimators: G\&K and R\&S. In contrast to the previous sections, where we used known analytical expressions for the pdfs of the canonical PARK and bridge estimators, below we rely predominantly on the results of numerical simulations. Namely, we produce $M\gg1$ numerical simulations of the random sequences \begin{equation}\label{xngamdisc} x_n(\gamma) := \gamma \frac{n}{N} + \frac{1}{\sqrt{N}} \sum_{k=1}^n \epsilon_k , \qquad n= 0, 1,\dots, N, \qquad x_0(\gamma) = 0 , \end{equation} where $\{\epsilon_k\}$ are iid Gaussian variables $\sim\mathcal{N}(0,1)$. Notice that the stochastic process $x_n(\gamma)$ of the discrete argument $n$ rather accurately approximates, for large $N\gg1$, the paths of the canonical Brownian motion $x(\tau,\gamma)$ \eqref{xtsimxtau}. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{08.eps} \end{center} \caption{\textbf{Upper panel:} Histogram of $M$ samples of the canonical bridge estimator $\hat{v}_b$. The solid line is the plot of the canonical bridge estimator's pdf, given by the analytical expressions \eqref{Wpxbridge}, \eqref{rhodelexp}. The dashed line is the pdf of the canonical PARK estimator for $\gamma=0$. \textbf{Lower panel:} Histogram of $M$ samples of the canonical G\&K estimator $\hat{v}_g$ for $\gamma=0$. The solid line is the plot of the canonical bridge estimator pdf. The dashed line is the canonical PARK estimator pdf for $\gamma=0$}\label{bargkpdfs} \end{figure} Knowing $M$ iid sequences $\{x_n(\gamma)\},$ one can find the corresponding iid samples of the canonical estimators introduced above.
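A reduced version of this simulation can be sketched as follows (plain Python; the function names are ours, and $M$, $N$ are taken much smaller than the values used for the figures, purely to keep the sketch fast). It generates the sequences \eqref{xngamdisc} and compares the sample means of the four canonical estimators for several values of $\gamma$:

```python
import math
import random

K1, K2, K3 = 0.511, 0.0109, 0.383    # G&K constants from eq. (GKest)

def estimators(gamma, N, rng):
    # one sample of the four canonical estimators from a path x_n(gamma)
    x, xs = 0.0, [0.0]
    for _ in range(N):
        x += gamma / N + rng.gauss(0.0, 1.0) / math.sqrt(N)
        xs.append(x)
    h, l, c = max(xs), min(xs), xs[-1]
    z = [xs[k] - (k / N) * c for k in range(N + 1)]   # canonical bridge
    d, s = h - l, max(z) - min(z)
    return (d * d / math.log(16.0),                          # PARK
            K1 * d * d - K2 * (c * d - 2 * h * l) - K3 * c * c,  # G&K
            h * (h - c) + l * (l - c),                       # R&S
            6.0 * s * s / math.pi ** 2)                      # bridge

rng = random.Random(11)
N, M = 500, 400
means = {}
for gamma in (0.0, 1.0, 2.0):
    sums = [0.0] * 4
    for _ in range(M):
        for i, v in enumerate(estimators(gamma, N, rng)):
            sums[i] += v
    means[gamma] = [v / M for v in sums]
    print(gamma, [round(v, 2) for v in means[gamma]])
```

At $\gamma=0$ all four sample means are close to unity (slightly below it because of discretization), while at larger $\gamma$ the PARK mean grows markedly and the bridge mean stays near unity, in line with the figures below.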
Everywhere below we take the number of iid samples $M$ and the discretization number $N$ equal to \[ N = 5\cdot 10^3 , \qquad M = 5 \cdot 10^5 . \] The plots in figure~\ref{bargkpdfs} demonstrate rather convincingly the accuracy of the numerical simulations. Figure~\ref{hotvsamples} shows two hundred samples of the canonical estimators, making it clear even to the naked eye that the canonical bridge estimator is more efficient than the G\&K one. Figure~\ref{estmeandats} shows, as obtained by numerical simulations, the mean values of the canonical G\&K, PARK, R\&S and bridge estimators, illustrating the bias of the G\&K and PARK estimators for nonzero canonical drift $\gamma\neq 0$, and the actual absence of bias for the bridge and R\&S estimators. Finally, figure~\ref{pdel} shows the probabilities that the true volatility $V(T)$ is larger than half of the corresponding estimator value and less than twice of it: \begin{equation}\label{pdeldef} P_\Delta := \bold{Pr}\left\{\frac{\hat{V}}{2} < V(T)< 2 \hat{V}\right\} = \int_{1/2}^2 W(x) dx . \end{equation} It is seen that for any $\gamma$ this probability is essentially larger for the bridge estimator than for the G\&K, R\&S and PARK estimators. \section{Acknowledgements} We are grateful for the scientific and financial support of the Higher School of Economics (Nizhny Novgorod, Russia) and Nizhny Novgorod State University (Russia). \clearpage \begin{figure} \includegraphics[width=0.99\linewidth]{09.eps}\\ \caption{Plots of two hundred samples of the canonical estimators. From top to bottom: samples of the G\&K, R\&S, bridge and PARK estimators. It is seen even by the naked eye that the bridge estimator estimates volatility more accurately than the other estimators}\label{hotvsamples} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.75\linewidth]{10.eps} \end{center} \caption{Mean values $\bar{\hat{v}}$ of the canonical PARK ($\blacksquare$), G\&K ($\blacklozenge$), R\&S ($\bigstar$) and bridge ($\blacktriangle$) estimators.
Solid lines show the theoretical expectations, borrowed from figure~\ref{parkmeangam}}\label{estmeandats} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.7\linewidth]{11.eps} \end{center} \caption{Estimates $\bar{D}$ of the variances of the PARK ($\blacksquare$), R\&S ($\bigstar$), G\&K ($\blacklozenge$) and bridge ($\blacktriangle$) canonical estimators. Solid lines are the plots of the theoretical variances, borrowed from figure~\ref{parkvargam}. It is seen that for any $\gamma$ the bridge estimator's variance is significantly smaller than the variances of the other estimators}\label{estvardats} \end{figure} \clearpage \begin{figure} \includegraphics[width=0.98\linewidth]{12.eps}\\ \caption{Estimates of the probability $P_\Delta$ \eqref{pdeldef} at different values of $\gamma$, for the PARK ($\blacksquare$), R\&S ($\bigstar$), G\&K ($\blacklozenge$) and bridge ($\blacktriangle$) estimators. Solid lines are the results of theoretical calculations based on formula \eqref{pdeldef}}\label{pdel} \end{figure} \clearpage
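The probability $P_\Delta$ of \eqref{pdeldef} can also be estimated directly from samples of any estimator. A minimal Python sketch follows; the Parkinson-style range estimator used in the demo is only illustrative, and the canonical true volatility is $V = 1$.

```python
import numpy as np

def p_delta(v_hat, true_v=1.0):
    """Monte Carlo estimate of P_Delta = Pr{ V_hat/2 < V < 2*V_hat },
    i.e. the fraction of estimator samples falling in (V/2, 2V)."""
    v = np.asarray(v_hat)
    return np.mean((v > true_v / 2.0) & (v < 2.0 * true_v))

# Illustrative demo: Parkinson-style range estimator on driftless paths.
rng = np.random.default_rng(0)
N, M = 1000, 5000
steps = rng.standard_normal((M, N)) / np.sqrt(N)
x = np.concatenate([np.zeros((M, 1)), np.cumsum(steps, axis=1)], axis=1)
park = (x.max(axis=1) - x.min(axis=1)) ** 2 / (4.0 * np.log(2.0))
```

Comparing `p_delta` across estimator samples generated this way reproduces the kind of comparison shown in figure~\ref{pdel}.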
\section{Introduction} Natural Language Processing (NLP) plays a significant role in keeping languages alive and in the development of languages in the era of digital devices \citewpar{Karakanta2018}. One of the sub-fields of NLP is Machine Translation (MT). MT has been among the most promising applications of Artificial Intelligence (AI) since the invention of computers, and it has been shown to increase access to information in the native language of speakers in many cases. One such critical case is the spread of vital information during a crisis or emergency \citewpar{lewis-etal-2011-crisis,neubig-hu-2018-rapid}. Recently, translation accuracy has increased, and commercial systems have gained popularity. These systems have been developed for hundreds of languages, and hundreds of millions of people have gained access to them. However, some of the less common languages do not enjoy this availability of resources. These under-resourced languages lack essential linguistic resources, e.g. corpora, POS taggers, and grammars. This is particularly pertinent for MT, since the most common systems require large amounts of high-quality parallel resources or linguistic experts to craft a vast set of rules. This survey studies how to take advantage of orthographic information and closely related languages to improve the translation quality for under-resourced languages. The most common MT systems are based on Rule-Based Machine Translation (RBMT) or Corpus-Based Machine Translation (CBMT). RBMT systems \citewpar{kaji-1988-efficient, charoenpornsawat-etal-2002-improving,abercrombie-2016-rule,susanto-etal-2012-rule,centelles-costa-jussa-2014-chinese, allauzen-etal-2014-pushdown,hurskainen-tiedemann-2017-rule} are based on linguistic knowledge encoded by experts.
On the other hand, CBMT \citewpar{dauphin-lux-1996-corpus,carl-2000-model} depends on a large number of aligned sentences, as in Statistical Machine Translation (SMT) \citewpar{kondrak-etal-2003-cognates,setiawan-etal-2005-phrase,Koehn:2005,koehn2007moses, green-etal-2014-empirical,junczys-dowmunt-grundkiewicz-2016-phrase} and Neural Machine Translation (NMT) \citewpar{Sutskever:2014:SSL:2969033.2969173, cho2014learning, Bahdanau2016, zhang-etal-2017-incorporating}. Unlike RBMT systems, which require the expertise of linguists to write down the rules of the language, CBMT-based systems rely on examples in the form of sentence-aligned parallel corpora. CBMT systems such as SMT and NMT have alleviated the burden of writing down rules, which is not feasible for all languages since human languages are dynamic in nature. However, CBMT systems suffer from the lack of parallel corpora for under-resourced languages needed to train machine translation systems. A number of methods have been proposed to address the non-availability of parallel corpora for under-resourced languages, such as pivot-based approaches \citewpar{wu-wang-2007-pivot,wu-wang-2009-revisiting,kim-etal-2019-pivot}, zero-shot translation \citewpar{johnson-etal-2017-googles, tan-etal-2019-multilingual,gu-etal-2019-improved, pham-etal-2019-improving, currey-heafield-2019-zero} and unsupervised methods \citewpar{artetxe-etal-2019-effective,pourdamghani-etal-2019-translating, artetxe-etal-2019-bilingual}, which are described in detail in the following sections. A large array of techniques has been applied to overcome the data sparsity problem in MT, and in recent years virtually all of them have been based on transfer learning from high-resource languages. Other techniques are based on the lexical and semantic similarities of closely related languages, which are the most relevant to our survey on orthographic information in machine translation.
The main goal of this survey is to shed light on how orthographic information is utilised in MT system development and how orthography helps to overcome the data sparsity problem for under-resourced languages. More particularly, it tries to explain the nature of the interactions of orthography with different types of machine translation. For the sake of simplicity, the analysis presented in this article is restricted to those languages which have some form of Internet resources. The survey is organised as follows: Section \ref{background} explains the background information needed to follow this article. We present orthographic information in Section \ref{orthographic}. Section \ref{RBMT} describes the challenges of automatically using orthographic information in RBMT outputs. Section \ref{SMT} presents an analysis of orthographic information in SMT systems. Section \ref{NMT} presents an analysis of orthographic information in NMT systems. The survey ends with a discussion of future directions for utilising orthographic information. \section{Background} \label{background} In this section, we explain the necessary background information to follow the paper: the different types of MT systems and the orthographic information available for MT. \subsection{Under-resourced Languages} Worldwide, there are around 7,000 languages \citewpar{P10-1010, L14-1618}. However, most of the machine-readable data and natural language applications are available only for a few popular languages, such as Chinese, English, French, or German. For other languages, resources are scarcely available and, for some languages, not available at all. Some of these languages do not even have a writing system \citewpar{W06-0605,krauwer2003basic,alegria2011strategies}, or are not encoded in major schemes such as Unicode. Due to the unavailability of digital resources, many of these languages may go extinct.
With each language that is lost, we lose a connection with the culture of its people and the characteristics of the language. \citewopar{alegria2011strategies} proposed a six-level language typology for developing language technologies that could be useful for several hundred languages. It classifies the world's languages based on the availability of Internet resources for each language. According to the study, the term resource-poor or under-resourced is relative and also depends on the year. The first level comprises the most resourced languages; the second level comprises the top 10 languages used on the web. The third level contains languages which have some form of NLP resources. The fourth level considers languages which have any lexical resources. Languages that have a writing system but are not in digital form are in the fifth level. The last level is significant, including oral languages which do not have a writing system of their own. We follow this approach to define the term under-resourced languages in terms of machine translation by taking the languages in the third and fourth levels. Languages that lack extensive parallel corpora are known as under-resourced or low-resourced languages \citewpar{JIMERSON18.749}. A language that seeks to survive in modern society needs NLP, which requires a vast amount of data and linguistic knowledge to create new language technology tools. In particular, it is a big challenge to develop MT systems for these languages due to the scarcity of data, specifically sentence-aligned data (parallel corpora) in the large amounts needed to train MT systems. For example, Irish, Scottish Gaelic, and Manx, or Tamil, Telugu, and Kannada, belonging to the Goidelic and the Dravidian languages respectively, are considered under-resourced languages due to their scarcely available machine-readable resources \citewopar{alegria2011strategies}.
\subsection{Orthographic Information} \label{orthographic} Humans are endowed with a language faculty that is determined by biological and genetic development. However, this is not true of the written form of language, which is the visual representation of the natural and genetically determined spoken form. With the development of orthography, humans have not only overcome the limitations of human short-term memory and brain storage capacity, but have also enabled communication through space and time \citewpar{fromkin2018introduction}. Orthography is a linguistic factor of mutual intelligibility which may facilitate or impede inter-comprehension \citewpar{fischer2016orthographic}. The orthographic information of a language represents not only information about the language itself but also the psychological representation of the world of its users. Chinese orthography is unique in that it uses a logographic writing system. In such a system, each Chinese character carries visual patterns along with rich linguistic information. These characters are visualised in a square space, which depends on the number of strokes a character has. Each character can be decomposed into two parts: the \textit{radical}, which carries the semantic meaning, while the other part indicates the pronunciation. According to the Shuowen Jiezi\footnote{https://en.wikipedia.org/wiki/Shuowen$\_$Jiezi}, Chinese characters are organised under 540 radicals, reduced to 214 in modern Chinese \cite{min2004direct}. Problems arise when the decomposition strategy does not comply with some of the characters. On the other hand, other Asian languages, such as Korean and Japanese, have two different writing systems. Modern-day Korea uses the Hangul orthography, which is a syllabic writing system, while the other system, known as Hanja, uses classical Chinese characters.
Like Korea, Japan also has two writing systems, Kana and Kanji, where Kanji consists of classical Chinese characters and Kana represents sounds, with each Kana character recognised as a syllable. As both Korean and Japanese are very different from Chinese and are morphologically rich languages, the adoption of Chinese characters for these languages was rather difficult. These problems also posed great difficulty in the fields of translation and transliteration. Irrespective of all the differences and challenges, these three Asian languages share common properties which could be significant advantages for MT. Closely related languages share similar morphological, syntactic, and orthographic properties. Orthographic similarity arises from two major sources. The first is the genetic relationship between languages, as within language families such as the Germanic, Slavic, Gaelic and Indo-Aryan languages. The second is contact through a geographical area, as with the Indo-Aryan and Dravidian languages in the Indian subcontinent \citewpar{kunchukuttan-etal-2018-leveraging}. Two languages possess orthographic similarity only when they have the following properties: overlapping phonemes, mutually compatible orthographic systems, and a similar grapheme-to-phoneme mapping. A widespread and fundamental problem for MT systems is variation in orthographic conventions. Two languages written in two different orthographies lead to errors in MT outputs. Orthographic information can also be used to improve machine translation systems. In the following subsections, we describe the different orthographic properties related to MT. \subsubsection{Spelling and Typographical Errors} Spelling or typographical errors have to be handled very carefully in an MT task, as even a minor spelling error can generate an out-of-vocabulary error in the training corpus.
The source and target languages highly influence the methodology used to correct orthographic errors, as these languages may use the same orthographic conventions very differently. \subsubsection{True-casing and Capitalization} The process of restoring case information to badly cased or non-cased text is called true-casing \citewpar{lita2003truecasing}. To avoid orthographic errors, it is a popular method to lower-case all words, especially in SMT. This allows the system to avoid mismatches between identical words which appear different only due to differences in casing. In most MT systems, both pre-processing and post-processing are carried out. Post-processing of the text involves converting the lower-cased text back to its original case form and generating the proper surface forms. \subsubsection{Normalization} The use of the same words with different orthographic spellings, such as \textit{colour} and \textit{color}, gives rise to errors while building a translation model. In such cases, orthographic normalization is required. There are several other issues which require orthographic normalization and which can be language-specific, such as Arabic diacritization, or contextual orthographic normalization. This approach needs some linguistic knowledge but can be adapted easily to other languages. Normalization is a process carried out before most natural language processing tasks; similarly, in machine translation, language-specific normalization yields good results. Some examples of text normalization carried out for SMT systems are the removal of HTML content, extraction of tag content, splitting each line after proper punctuation marks, as well as the correction of language-specific word forms \citewpar{schlippe2010text}.
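As an illustration of such a normalization pipeline, the sketch below combines HTML stripping, lower-casing and spelling-variant mapping. The variant table is a toy placeholder; real systems use language-specific resources.

```python
import html
import re

# Illustrative variant table; real systems use language-specific resources.
SPELLING_VARIANTS = {"colour": "color", "normalise": "normalize"}

def normalize(text):
    """Minimal normalization pipeline: strip HTML tags and entities,
    lower-case, map known spelling variants, and collapse whitespace."""
    text = html.unescape(re.sub(r"<[^>]+>", " ", text))
    tokens = [SPELLING_VARIANTS.get(t, t) for t in text.lower().split()]
    return " ".join(tokens)
```

Such a pipeline is typically run before training, so that \textit{colour} and \textit{color} no longer count as distinct vocabulary entries.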
Normalization reduces sparsity, as it eliminates out-of-vocabulary words in the text \citewpar{leusch-etal-2005-preprocessing}. \subsubsection{Tokenization and Detokenization} The process of splitting text into smaller elements is known as tokenization. Tokenization can be done at different levels depending on the source and target languages as well as the goal that we want to achieve. It also includes the processing of signs and symbols used in the text, such as hyphens, apostrophes, punctuation marks, and numbers, to make the text more accessible for further steps in MT. Like normalization, tokenization also helps in reducing language sparsity. Detokenization is the process of combining all the tokens back into the correct form before producing the final output. Tokenization and detokenization are not linked directly to orthographic correction; rather, they are more about morphological linking and correction, especially for morphologically rich languages like Irish and Arabic \citewpar{guzman-etal-2016-machine-translation}. Orthography plays a major role in tokenization and detokenization, as each orthography has different rules for how to tokenize and detokenize. \subsubsection{Transliteration} Transliteration is the conversion of text from one orthography to another without any phonological changes. The best examples of transliteration are named entities and generic words \citewpar{kumaran2007generic}. Data collected from social media are highly transliterated and contain errors; thus, using these data for building a machine translation system for resource-poor languages causes errors. One of the primary word classes with a high chance of transliteration is cognates. Cognates are words from different languages derived from the same root. In NLP approaches, cognates are typically identified as words with similar orthography. Therefore, cognates have a high chance of transliteration.
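A minimal transliteration sketch in the spirit described above, using a small illustrative Latin-to-Cyrillic table (a real system must cover a full alphabet; digraphs such as "lj" are matched greedily, longest first):

```python
# Illustrative Latin -> Cyrillic mapping (small subset only; a complete
# system needs the whole alphabet and exhaustive digraph handling).
LAT2CYR = {"lj": "љ", "nj": "њ", "dž": "џ",
           "a": "а", "b": "б", "v": "в", "g": "г", "d": "д", "e": "е",
           "z": "з", "i": "и", "j": "ј", "k": "к", "l": "л", "m": "м",
           "n": "н", "o": "о", "p": "п", "r": "р", "s": "с", "t": "т",
           "u": "у"}

def transliterate(word):
    """Greedy longest-match transliteration; characters outside the
    table pass through unchanged."""
    out, i = [], 0
    while i < len(word):
        # try two-character digraphs first, then single characters
        for span in (2, 1):
            seg = word[i:i + span]
            if seg in LAT2CYR:
                out.append(LAT2CYR[seg])
                i += span
                break
        else:
            out.append(word[i])
            i += 1
    return "".join(out)
```

Because the mapping is (near) one-to-one, the same table inverted gives back-transliteration without loss of information.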
Though machine translation has progressed a lot recently, the method of dealing with the transliteration problem has changed from a language-independent treatment to cognate prediction when translating between closely related languages; transliteration of cognates helps to improve the results for under-resourced languages. \subsubsection{Code-Mixing} Code-mixing is a phenomenon which occurs commonly in most multilingual societies, where the speaker or writer alternates between more than one language in a sentence \citewpar{ayeomoni2006code,Ranjan2016ACS,yoder2017code,PARSHAD2016375}. Most of the corpora for under-resourced languages come from publicly available parallel corpora which were created by voluntary annotators or aligned automatically. Translations of technical documents, such as the KDE, GNOME, and Ubuntu translations, contain code-mixed data, since some of the technical terms may not be known to the voluntary annotators. Code-mixing in the OpenSubtitles corpus is due to bilingual and historical characteristics of the native speakers \citewpar{chanda2016columbia,PARSHAD2016375}. Different combinations of languages may occur in code-mixing, for example, German-Italian and French-Italian in Switzerland, Hindi-Telugu in the state of Telangana, India, and Taiwanese-Mandarin Chinese in Taiwan \citewpar{chan-etal-2009-automatic}. As a result, code-mixing of scripts is also possible in a voluntarily annotated corpus. This poses another challenge for MT. \section{Orthographic Information in RBMT} \label{RBMT} Rule-Based Machine Translation (RBMT) was one of the first approaches to tackle the translation of a source text into a target text without human assistance, by means of collections of dictionaries, collections of linguistic rules, and special programs based on these dictionaries and rules. It depends on rules and linguistic resources, such as bilingual dictionaries, morphological analysers, and part-of-speech taggers.
The rules encode the syntactic knowledge, while the linguistic resources deal with morphological, syntactic, and semantic information. Both are grounded in linguistic knowledge and created by linguistic experts \citewpar{slocum1985evaluation, charoenpornsawat-etal-2002-improving, Lagarda:2009:SPR:1620853.1620913, susanto-etal-2012-rule}. The strength of RBMT is that analysis can be done at both the syntactic and semantic levels. However, it requires linguistic experts to write down all the rules that cover the languages. An open-source shallow-transfer MT engine for the Romance languages of Spain, such as Spanish, Catalan and Galician, was developed by \citewopar{rbmt_spanish}. It was a regeneration of existing non-open-source engines based on linguistic data. The post-generator in the system performs orthographic operations, such as contractions and apostrophes, to reduce orthographic errors. The dictionaries were used for string transformation operations into the target language surface forms. Similarly, a Spanish-Portuguese translation system used a post-generation module to perform orthographic transformations to improve the translation quality \citewpar{garrido2004shallow,forcada2011apertium}. A manually constructed list of orthographic transformation rules can assist in detecting cognates by string matching \citewpar{xu-etal-2015-detecting}. Irish and Scottish Gaelic belong to the Goidelic language family and share similar orthography and cognates. \citewopar{scannell2006machine} developed the ga2gd software, which translates from Irish to Scottish Gaelic. In the context-sensitive syntactic rewriting submodule, the authors implemented transfer rules based on orthography, which are stored in plain text. Each rule is then transformed into a finite-state recogniser for the input stream. This work also uses simple rule-based orthographic changes to find cognates by orthography.
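Such rule tables can be applied as simple string rewrites followed by string matching. A hypothetical sketch, with illustrative Spanish-to-Portuguese rules that are not taken from the cited systems:

```python
import re

# Hypothetical orthographic rewrite rules (source pattern -> replacement);
# real systems store such rules in plain-text tables built by linguists.
RULES = [(r"ción$", "ção"), (r"ll", "lh")]

def apply_rules(word, rules=RULES):
    """Apply the rewrite rules to a source-language word in order."""
    for pattern, replacement in rules:
        word = re.sub(pattern, replacement, word)
    return word

def are_cognates(src, tgt, rules=RULES):
    """String-match after rule application, as in rule-based cognate detection."""
    return apply_rules(src, rules) == tgt
```

In practice the rule list is much longer and may be compiled into a finite-state recogniser, as in the ga2gd system described above.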
A Czech-to-Polish translation system also followed the shallow-transfer method at the lexical stage. A set of collective transformation rules was applied to a source language word list to produce a target language list of cognates \citewpar{ruth2011shallow}. Another shallow-transfer MT system used frequent orthographic changes from Swedish to Danish to identify cognates, with transfer rules based on orthography \citewpar{tyers2009shallow}. A Turkmen-to-Turkish MT system \citewpar{tantuug2007mt,tantuug2018machine} uses a finite-state transducer to identify cognates even though the orthographies of these languages are different. \section{Orthographic Information in SMT} \label{SMT} Statistical Machine Translation (SMT) \citewpar{brown-etal-1993-mathematics,Koehn:2005, koehn2007moses, koehn2009statistical, waite-byrne-2015-geometry} is one of the CBMT-based approaches. SMT systems assume that we have a set of example translations ($S^{(k)}$, $T^{(k)}$) for $k=1,\ldots,n$, where $S^{(k)}$ is the $k^{th}$ source sentence and $T^{(k)}$ is the $k^{th}$ target sentence, the translation of $S^{(k)}$ in the corpus. SMT systems try to maximize the conditional probability $p(t|s)$ of a target sentence $t$ given a source sentence $s$ by separately modelling a language model $p(t)$ and the inverse translation model $p(s|t)$. A language model assigns a probability $p(t)$ to any sentence $t$, and a translation model assigns a conditional probability $p(s|t)$ to a source/target sentence pair \citewpar{wang-waibel-1997-decoding}. By Bayes' rule, \begin{equation} p(t|s) \propto p(t)p(s|t) \end{equation} This decomposition into a translation model and a language model improves the fluency of the generated texts by making full use of the available corpora. The language model is not only meant to ensure fluent output, but also supports difficult decisions about word order and word translation \citewpar{koehn2009statistical}.
The two core methodologies used in the development of machine translation systems - RBMT and SMT - come with their own shares of advantages and disadvantages. In the initial stages, RBMT systems were the first commercial systems to be developed. These systems are based on linguistic rules and have proved to be more feasible for resource-poor languages with little or no data. It is also relatively simple to carry out error analysis and work on improving the results. Moreover, these systems require very little computational resources. On the contrary, SMT systems need a large amount of data but no linguistic theory. Especially with morphologically rich languages such as Irish, Persian, and Tamil, SMT suffers very frequently from out-of-vocabulary problems due to orthographic inconsistencies. To evade this problem, orthographic normalization has been proposed to improve the quality of SMT through sparsity reduction \citewpar{kholynizar2012}. SMT learns from data and requires less human effort in terms of creating linguistic rules. SMT systems, unlike RBMT systems, do not cause disambiguation problems. Even though SMT has many advantages over rule-based approaches, it also has some disadvantages: it is very difficult to conduct error analysis with SMT, and data sparsity is another disadvantage faced by SMT \citewpar{costa2012study}. \subsection{Spelling and Typographical Errors} The impact of spelling and typographical errors in SMT has been studied extensively \citewpar{4637895,bertoldi-etal-2010-statistical,formiga-fonollosa-2012-dealing}. Dealing with random non-word errors or real-word errors can be done in many ways; one method is the use of a character-level translator, which provides various spelling alternatives. Typographical errors such as substitution, insertion, deletion, transposition, run-on, and split errors can be addressed with edit distance under a noisy channel model paradigm \citewpar{brill-moore-2000-improved,toutanova-moore-2002-pronunciation}.
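A minimal sketch of edit-distance-based spelling recovery; a full noisy-channel model would additionally weight candidates by their probabilities, whereas this sketch simply picks the closest in-vocabulary word.

```python
def edit_distance(a, b):
    """Levenshtein distance (substitution, insertion, deletion) via
    dynamic programming over a rolling row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, vocab):
    """Map a possibly misspelled word to the closest vocabulary word."""
    return min(vocab, key=lambda v: edit_distance(word, v))
```

Applied before translation, such recovery keeps a single typo from becoming an out-of-vocabulary token.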
Error recovery was performed to correct spelling alternatives of the input before the translation process. \subsection{True-casing, Capitalization, Tokenization and Detokenization} Most SMT systems accept pre-processed inputs, where the pre-processing consists of tokenising, true-casing, and normalising punctuation. Moses \citewpar{koehn2007moses} is a toolkit for SMT which has pre-processing tools for most languages based on hand-crafted rules. Improvements have been achieved for the recasing and tokenization processes \citewpar{nakov-2008-improving}. For languages which do not use Roman characters, linguistically motivated tokenization has been shown to improve the results of SMT \citewpar{oudah-etal-2019-impact}. Byte Pair Encoding (BPE) avoids out-of-vocabulary issues by representing more frequent sub-words as atomic units \citewopar{sennrich-etal-2016-improving}. A joint BPE model based on the lexical similarity between Czech and Polish identified a cognate vocabulary of sub-words, based on the orthographic correspondences from which words in both languages can be composed \citewpar{chen-avgustinova-2019-machine}. \subsection{Normalization} Under-resourced languages utilise corpora from user-generated text, media text, or voluntary annotators. However, SMT suffers from customisation problems, as tremendous effort is required to adapt to the style of the text. A solution to this is text normalization, that is, normalising the corpora before passing them to the SMT system \citewpar{formiga-fonollosa-2012-dealing}, which has been shown to improve the results. The orthographies of the Irish and Scottish Gaelic languages were quite similar due to a shared literary tradition. Nevertheless, after the spelling reform in Irish, the orthographies became different. \citewopar{scannell-2014-statistical} proposed a statistical method to normalise the orthography between Scottish Gaelic and Irish as part of the translation of social media text.
To be able to use current NLP tools on historical text, spelling normalization is essential, that is, converting the original spelling to present-day spelling; this was studied for historical English text by \citewopar{schneider-etal-2017-comparing} and \citewopar{hamalainen-etal-2018-normalizing}. For dialect translation, spelling normalization is an important step in taking advantage of the resources of high-resource languages \citewpar{honnet-etal-2018-machine,napoles-callison-burch-2017-systematically}. \subsection{Transliteration (Cognates)} As closely related languages share many features, the similarities between the languages are of much help in studying the cognates of two languages. Several methods have been proposed to exploit the features of resource-rich languages in order to improve SMT for resource-poor languages. Manipulation of cognates to obtain transliterations is one of the methods adopted by several authors to improve SMT systems for resource-poor languages. Language similarities and regularities in morphology and spelling variation motivate the use of character-level transliteration models. However, in order to account for character mapping differences in various contexts, \citewopar{nakov2012combining} transformed the input into a sequence of character n-grams. A sequence of character n-grams increases the vocabulary and also makes the standard alignment models and their lexical translation parameters more expressive. For languages which use the same or similar scripts, approximate string matching approaches such as Levenshtein distance \citewpar{Levenshtein_SPD66} and the longest common subsequence ratio (LCSR) \citewpar{melamed-1999-bitext} are used to find cognates. For languages which use different scripts, transliteration is performed first, followed by the above approaches.
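The LCSR measure can be sketched as follows; the 0.58 threshold is the classic value proposed by Melamed (1999) and should be tuned per language pair.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence (dynamic programming)."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            if ca == cb:
                cur.append(prev[j - 1] + 1)
            else:
                cur.append(max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def lcsr(a, b):
    """Longest common subsequence ratio: LCS length over the longer length."""
    return lcs_len(a, b) / max(len(a), len(b))

def is_cognate(a, b, threshold=0.58):
    """Orthographic cognate test with the classic LCSR threshold of 0.58."""
    return lcsr(a, b) >= threshold
```

For example, German "nacht" and English "night" share the subsequence "nht", giving LCSR 0.6, above the threshold.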
A number of studies have used statistical and deep learning methods along with orthographic information \citewpar{ciobanu-dinu-2014-automatic,mulloni-pekar-2006-automatic} to find cognates. Since, as discussed in the previous section, cognates can support mutual translation between two languages sharing similar properties, it is essential to know the cognateness of a given pair of texts. The word ``cognateness'' means how much two pieces of text are related in terms of cognates. Cognates have been used to improve alignment: when the score of the length-based alignment function is very low, a second, cognate-based alignment function is applied to obtain a proper alignment \citewpar{simard1993using}. One of the applications of cognates before applying MT is parallel corpus alignment. A study using cognates to align sentences in parallel corpora was carried out by \citewopar{10.5555/962367.962411}. Character-level methods to align sentences \citewpar{church-1993-char} are based on a cognate approach \citewpar{10.5555/962367.962411}. As early as \citewopar{C88-1010}, researchers have looked into translation between closely related languages, such as Czech-Russian RUSLAN and Czech-Slovak CESILKO \citewpar{A00-1002}, using syntactic rules and lexicons. The closeness of related languages makes it possible to obtain a good translation by means of simpler methods. However, both systems were rule-based approaches, and the bottlenecks included the complexities associated with using a word-for-word dictionary translation approach. Nakov and Ng~\citewpar{D09-1141} proposed a method to use resource-rich closely related languages to improve the statistical machine translation of under-resourced languages by merging parallel corpora and combining phrase tables.
The authors developed a transliteration system for Portuguese into Spanish, trained on automatically extracted likely cognates using systematic spelling variation. \citewopar{W14-4210} created an MT system between closely related languages of the Slavic language family. Language-related issues between Croatian, Serbian and Slovenian are explained by \citewopar{W16-4806}. Serbian is digraphic (it uses both the Cyrillic and Latin scripts), while the other two are written using only the Latin script. For Serbian, transliteration from the Latin to the Cyrillic script is possible without loss of information because there is a one-to-one correspondence between the characters. \citewopar{beinborn2013cognate} used a PBSMT approach as the base method to produce cognates. Instead of translating phrases, they transformed a character sequence from one language to another, using words instead of sentences and characters instead of words in the transformation process. The combination of the phrase table probabilities with transformation probabilities and language model probabilities selects the best sequence; the process thus takes the surrounding context into account and produces cognates. It has been demonstrated that the use of cognates improves the translation quality \citewpar{kondrak-etal-2003-cognates}. \subsection{Code-Switching} An SMT system with a code-switched parallel corpus was studied by \citewopar{menacer2019machine} and \citewopar{fadaee-monz-2018-back} for the Arabic-English language pair. The authors manually translated the foreign words or used a back-translation method. The identification of the language of a word is based on its orthography.
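A minimal sketch of such orthography-based word-level language identification, labelling each token by the dominant Unicode script of its characters; this is only a heuristic first step, not a full language identifier.

```python
import unicodedata

def script_of(word):
    """Label a token by the dominant Unicode script of its letters,
    e.g. 'LATIN', 'DEVANAGARI' or 'CYRILLIC'; 'OTHER' if no letters."""
    scripts = {}
    for ch in word:
        if ch.isalpha():
            # Unicode character names start with the script name.
            name = unicodedata.name(ch, "")
            script = name.split()[0] if name else "OTHER"
            scripts[script] = scripts.get(script, 0) + 1
    return max(scripts, key=scripts.get) if scripts else "OTHER"
```

Tokens sharing the sentence's majority script can be passed through, while tokens in another script are routed to transliteration or translation.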
\citewopar{chakravarthi2018improving} used the same approach for Dravidian languages; they used the improved MT system to create WordNets, showing an improvement in the results. For English-Hindi, \citewopar{dhar-etal-2018-enabling} manually translated the code-switched component and showed improvements. Machine translation of social media text was studied by \citewopar{rijhwani2016translating}, where the authors tackled code-mixing for Hindi-English and Spanish-English. A similar approach translated the main language of the sentence using the Bing Translate API \citewpar{niu-etal-2018-bi}. Back-transliteration from a foreign script to the native script in code-mixed data is one of the challenging tasks to be performed. \citewopar{riyadh2019joint} adopted three different methods to back-transliterate Romanised Hindi-Bangla code-mixed data into the Hindi and Bangla scripts. They used Sequitur, a generative joint n-gram transducer; DTLM, a discriminative string transducer; and the OpenNMT\footnote{https://opennmt.net/} neural machine translation toolkit. Along with these three approaches, they leveraged target word lists, character language models, as well as synthetic training data, whenever possible, in order to support transliteration. Finally, these transliterations are provided to a sequence prediction module for further processing. \subsection{Pivot Translation} Pivot translation is translation from a source language to a target language through an intermediate language, which is called the pivot language. Usually, pivot-based translation assumes large source-pivot and pivot-target parallel corpora \citewpar{cohn-lapata-2007-machine, wu-wang-2009-revisiting}. There are different levels of pivot translation. The first is the triangulation method, where the corresponding translation probabilities and lexical weights of the source-pivot and pivot-target translation models are multiplied.
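The triangulation step can be sketched with toy phrase tables; all entries below are made up, with English serving as the pivot between illustrative French and Irish words.

```python
# Triangulation sketch: estimate p(t|s) by marginalising over pivot phrases,
# p(t|s) = sum_p p(t|p) * p(p|s).  The tables hold toy values only.
SRC2PIV = {"maison": {"house": 0.7, "home": 0.3}}        # p(pivot|src)
PIV2TGT = {"house": {"teach": 0.8, "áras": 0.2},
           "home":  {"baile": 0.6, "teach": 0.4}}        # p(tgt|pivot)

def triangulate(src):
    """Combine source-pivot and pivot-target probabilities by summing the
    products over all shared pivot phrases."""
    out = {}
    for piv, p_ps in SRC2PIV.get(src, {}).items():
        for tgt, p_tp in PIV2TGT.get(piv, {}).items():
            out[tgt] = out.get(tgt, 0.0) + p_ps * p_tp
    return out
```

Real systems apply the same multiplication to full phrase tables and lexical weights, then renormalise and prune the resulting source-target table.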
In the second method, sentences are translated to the pivot language using the source-pivot translation system and then translated to the target language using a pivot-target translation system \citewpar{utiyama-isahara-2007-comparison}. Finally, a source-target MT system can be used to create more data which is added back to the source-target model; this is called back-translation \citewpar{sennrich-etal-2016-improving, edunov-etal-2018-understanding}. Back-translation is simple and easy to apply without modifying the architecture of the machine translation models. It has been studied in both SMT \citewpar{tiedemann-etal-2016-phrase,ahmadnia-etal-2017-persian,poncelas-etal-2019-combining} and NMT \citewpar{sennrich-etal-2016-improving, edunov-etal-2018-understanding, hoang-etal-2018-iterative, prabhumoye-etal-2018-style, graca-etal-2019-generalizing,kim-etal-2019-pivot}. The pivot translation method can also be used to improve MT systems for under-resourced languages. One popular way is training SMT systems on source-pivot or pivot-target language pairs using subwords, where the pivot language is related to the source, the target, or both. The subword units consist of orthographic syllables and byte-pair-encoded units. The orthographic syllable is a linguistically motivated unit consisting of one or more consonants followed by a vowel. Unlike orthographic syllables, BPE (Byte Pair Encoded) units \citewpar{sennrich-etal-2016-improving} are motivated by statistical properties of the text: they represent stable and frequent character sequences. As orthographic syllables and BPE units are variable-length units whose vocabularies are much smaller than those of morpheme- and word-level models, the problem of data sparsity is avoided while still providing an appropriate context for translation between closely related languages \citewpar{kunchukuttan2017utilizing}. 
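The triangulation method described above can be sketched as follows; the toy phrase tables, entries and probabilities below are purely illustrative assumptions, not data from any of the cited systems:

```python
# Toy sketch of phrase-table triangulation for pivot translation:
# p(t|s) = sum over pivot phrases p of p(t|p) * p(p|s).

def triangulate(src_pivot, pivot_tgt):
    """Combine source->pivot and pivot->target phrase probabilities."""
    src_tgt = {}
    for (s, p), prob_sp in src_pivot.items():
        for (p2, t), prob_pt in pivot_tgt.items():
            if p == p2:  # phrases meet in the pivot language
                key = (s, t)
                src_tgt[key] = src_tgt.get(key, 0.0) + prob_sp * prob_pt
    return src_tgt

# Hypothetical Portuguese->English and English->Spanish entries.
src_pivot = {("casa", "house"): 0.8, ("casa", "home"): 0.2}
pivot_tgt = {("house", "casa"): 0.9, ("home", "hogar"): 0.7}

table = triangulate(src_pivot, pivot_tgt)
```

In a full system the same multiplication would also be applied to the lexical weights, and the resulting table would be renormalised before decoding.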
\section{Orthographic Information in NMT} \label{NMT} Neural Machine Translation is a sequence-to-sequence approach \citewpar{Sutskever:2014:SSL:2969033.2969173} based on encoder-decoder architectures with attention \citewpar{Bahdanau2016, saunders-etal-2019-domain} or self-attention encoders \citewpar{Vaswani:2017:AYN:3295222.3295349,wang-etal-2019-learning}. Given a source sentence $\mathbf{x}=(x_1,x_2,x_3,\ldots)$ and target sentence $\mathbf{y}=(y_1,y_2,y_3,\ldots)$, the training objective for NMT is to maximize the log-likelihood $\mathcal{L}$ with respect to $\theta$: \begin{equation} \mathcal{L}_{\theta}=\sum_{(\mathbf{x}, \mathbf{y}) \in \mathrm{C}} \log p(\mathbf{y} | \mathbf{x} ; \theta) \end{equation} The decoder produces one target word at a time by computing the probability \begin{equation} p(\mathbf{y} | \mathbf{x} ; \theta)=\prod_{j=1}^{m} p\left(y_{j} | y_{<j}, \mathbf{x} ; \theta\right) \end{equation} where $m$ is the number of words in $\mathbf{y}$, $y_{j}$ is the current generated word, and $y_{<j}$ are the previously generated words. At inference time, beam search is typically used to find the translation that maximises the above probability. Most NMT models follow the $Embedding\rightarrow$ $Encoder\rightarrow$ $Attention\rightarrow$ $Decoder$ framework. The attention mechanism between encoder and decoder computes the context vector $c_t$ as the weighted sum of the source-side annotation vectors: \begin{equation} c_t=\sum_{i=1}^n \alpha_{t,i} h_i \end{equation} \begin{equation} \alpha_{t,i}= \frac{\exp{(e_{t,i}})}{\sum_{j=1}^{n}\exp{(e_{t,j}})} \end{equation} where $\alpha_{t,i}$ is the normalized alignment weight between each source annotation vector $h_i$ and the word $y_t$ to be emitted at time step $t$. 
Expected alignment $e_{t,i}$ between each source annotation vector $h_i$ and the target word $y_t$ is computed using the following formula: \begin{equation} e_{t,i}=a(\mathbf{s}_{\mathbf{t}-\mathbf{1}},h_i) \end{equation} \begin{equation} \mathbf{s}_{\mathbf{t}}=g\left(\mathbf{s}_{\mathbf{t}-\mathbf{1}}, \mathbf{y}_{\mathbf{t}-\mathbf{1}}, \mathbf{c}_{\mathbf{t}}\right) \end{equation} where $g$ is the decoder activation function, $\mathbf{s}_{t-1}$ is the previous decoder hidden state, and $\mathbf{y}_{t-1}$ is the embedding of the previous word. The current decoder hidden state $\mathbf{s}_{t}$, the previous word embedding and the context vector are fed to a feedforward layer $f$, and a softmax layer computes a score for generating a target word as output: \[ P\left(y_{t} | y_{<t}, \mathbf{x}\right)=\operatorname{softmax}\left(f\left(\mathbf{s}_{\mathbf{t}}, \mathbf{y}_{t-1}, \mathbf{c}_{\mathbf{t}}\right)\right) \] \subsection{Multilingual Neural Machine Translation} In recent years, NMT has improved translation performance, which has led to a boom in NMT research. The most popular neural architectures for NMT are based on the encoder-decoder structure \citewpar{Sutskever:2014:SSL:2969033.2969173,cho-etal-2014-learning,Bahdanau2016} and the use of attention or self-attention mechanisms \citewpar{luong-etal-2015-effective,Vaswani:2017:AYN:3295222.3295349}. Multilingual NMT, created with or without multiway corpora, has been studied for its potential to translate between two languages without any direct parallel corpus. Zero-shot translation uses multilingual data to translate between language pairs for which no direct parallel corpora are available for independent training. Multilingual NMT with only monolingual corpora was studied by \citewpar{sen-etal-2019-multilingual, wang-etal-2019-compact}. In \citewopar{DBLP:journals/corr/HaNW16} and \citewopar{johnson-etal-2017-googles}, the authors demonstrated that multilingual NMT improves translation quality. 
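The attention computation in the equations above can be illustrated numerically. In the sketch below, a simple bilinear form is assumed for the alignment function $a(\cdot)$ as a stand-in for the feedforward network used in practice, and the vectors are random toy data:

```python
import numpy as np

def attention_context(s_prev, H, W_a):
    """Compute e_{t,i} = a(s_{t-1}, h_i) with a bilinear score,
    alpha_{t,i} = softmax(e_t), and c_t = sum_i alpha_{t,i} h_i."""
    e = H @ (W_a @ s_prev)            # one alignment score per source position
    alpha = np.exp(e - e.max())       # numerically stable softmax
    alpha /= alpha.sum()              # normalised alignment weights
    c = alpha @ H                     # weighted sum of annotation vectors h_i
    return c, alpha

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))     # 5 source annotation vectors h_i of dimension 4
s_prev = rng.normal(size=4)     # previous decoder hidden state s_{t-1}
W_a = rng.normal(size=(4, 4))   # assumed bilinear attention parameters

c, alpha = attention_context(s_prev, H, W_a)
```

The weights `alpha` sum to one over the source positions, and `c` has the same dimensionality as a single annotation vector, matching the role of $\mathbf{c}_t$ in the decoder update.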
For this, they created a multilingual NMT system without changing the architecture, by introducing special tokens at the beginning of the source sentence indicating the source and target languages. Phonetic transcription to the Latin script and to the International Phonetic Alphabet (IPA) was studied by \citewopar{chakravarthi2018improving}, who showed that the Latin script outperforms IPA for multilingual NMT of Dravidian languages. \citewopar{chakravarthi-et-al:OASIcs:2019:10370} proposed combining multilingualism, phonetic transcription and multimodal content to improve the translation quality of under-resourced Dravidian languages. The authors studied how to use closely-related languages from the Dravidian language family to exploit their similar syntactic and semantic structures by phonetic transcription of the corpora into the Latin script, along with image features, to improve the translation quality \citewpar{chakravarthi2019wordnet}. They showed that orthographic information improves translation quality in multilingual NMT \citewpar{chakravarthi-etal-2019-multilingual}. \subsection{Spelling and Typographical Errors} Spelling errors are amplified in under-resourced settings: the potentially infinite number of possible misspellings leads to a large number of out-of-vocabulary words. Additionally, under-resourced morphologically rich languages exhibit morphological variation, which causes orthographic errors when using character-level MT. A shared task was organised by \citewopar{li-etal-2019-findings} to deal with orthographic variation, grammatical errors and informal language in noisy social media text. Data cleaning was used along with suitable corpora to handle spelling errors. \citewopar{belinkov2018synthetic} investigated noise in NMT, focusing on different kinds of orthographic errors. Parallel corpora were cleaned before being fed to NMT to reduce spelling and typographical errors. 
NMT with word-embedding lookup ignores the orthographic representation of words, such as the presence of stems, prefixes, suffixes and other kinds of affixes. To overcome these drawbacks, character-based word embeddings were proposed by \citewopar{10.5555/3016100.3016285}. Character-based NMT systems \citewpar{costa-jussa-fonollosa-2016-character,yang-etal-2016-character,lee-etal-2017-fully,cherry-etal-2018-revisiting} were developed to cover languages which do not have explicit word segmentation. This strengthens the relationship between the orthography of a word and its meaning in the translation system. On data with spelling mistakes in under-resourced languages, the quality of word-based translation drops severely, because non-canonical forms of a word cannot all be represented. Character-level models overcome spelling and typographical errors without much effort. \subsection{True-casing and Capitalization, Normalization, Tokenization and Detokenization} Although NMT can be trained end-to-end, many NMT systems are still language-specific and require language-dependent preprocessing, as used in statistical machine translation: Moses \citewpar{koehn2007moses}, a toolkit for SMT, provides preprocessing tools based on hand-crafted rules for many languages. In practice, these are mainly available for European languages. For Asian languages which do not use spaces between words, a segmenter is required for each language independently, before feeding text into NMT, to indicate word boundaries. This becomes a problem when training multilingual NMT \citewpar{johnson-etal-2017-googles}. A solution for the open-vocabulary problem in NMT is to break up rare words into subword units \citewpar{chitnis-denero-2015-variable, ding-etal-2019-call}, which has been shown to deal with ambiguities in languages with multiple scripts \citewpar{6289079,wu2016google}. 
A simple and language-independent tokenizer was introduced for NMT and multilingual NMT by \citewopar{kudo-richardson-2018-sentencepiece}; it is based on two subword segmentation algorithms, byte-pair encoding (BPE) \citewpar{sennrich-etal-2016-improving} and a unigram language model \citewpar{kudo-2018-subword}. This system also normalises semantically equivalent Unicode characters into canonical forms. Subword segmentation and true-casing models must be rebuilt whenever the training data changes. The preprocessing tools introduced by OpenNMT normalise characters and separate punctuation from words, and can be used for any language and any orthography \citewpar{2017opennmt}. Character-level NMT systems work at the character level to grasp orthographic similarity between languages. They were developed to overcome the issue of limited parallel corpora and to resolve the out-of-vocabulary problem for under-resourced languages. Bhojpuri, which is closely related to Hindi, is considered an under-resourced language; it has a large word overlap with the high-resource language Hindi due to their common origin \citewpar{jha2019learning}. To solve the out-of-vocabulary problem for Hindi-Bhojpuri, Hindi words were transduced to Bhojpuri words using NMT models trained on Hindi-Bhojpuri cognate pairs. It was a two-level system: first, the Hindi-Bhojpuri system translated the sentence; then the out-of-vocabulary words were transduced. \subsection{Transliteration (Cognate)} Transliteration emerged to deal with proper nouns and technical terms, which are translated with preserved pronunciation. Transliteration can also be used to improve machine translation between closely related languages which use different scripts, since closely related languages have orthographic and phonological similarities between them. 
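The BPE vocabulary learning referenced above can be sketched as follows; this is a minimal version of the merge-learning idea of Sennrich et al., and the toy vocabulary is illustrative:

```python
# Minimal byte-pair encoding (BPE) sketch: repeatedly merge the most
# frequent adjacent symbol pair in a frequency-annotated vocabulary.
import collections

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    counts = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] += freq
    return counts

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    bigram = " ".join(pair)
    replacement = "".join(pair)
    return {word.replace(bigram, replacement): freq
            for word, freq in vocab.items()}

# Words as space-separated characters with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6}
merges = []
for _ in range(3):
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    merges.append(best)
```

The learned merge list is then applied greedily to segment unseen words, which is what makes BPE effective on frequent character sequences shared by closely related languages.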
Machine translation often occurs between closely related languages or through a pivot language (like English) \citewpar{bhattacharyya-etal-2016-statistical}. Translation between closely related languages or dialects is either a simple transliteration from one language to another or a post-processing step. Transliterating cognates has been shown to improve MT results, since closely related languages share linguistic features. To translate from English to Finnish and Estonian, where the words have similar orthography, \citewopar{gronroos-etal-2018-cognate} used Cognate Morfessor, a multilingual variant of Morfessor which learns to model cognate pairs based on the unweighted Levenshtein distance \citewpar{Levenshtein_SPD66}. The idea is to improve the consistency of morphological segmentation of words that have similar orthography, which improves the translation quality for the resource-poor Estonian language. \citewopar{D09-1111} use transliteration as a method to handle out-of-vocabulary (OOV) problems. To remove the script barrier, \citewopar{DBLP:conf/coling/BhatBJS16} created machine transliteration models for a common orthographic representation of Hindi and Urdu text. The authors transliterated text in both directions between the Devanagari script (used to write Hindi) and the Perso-Arabic script (used to write Urdu). They demonstrated that a dependency parser trained on the augmented resources performs better than one trained on the individual resources. They also showed a significant improvement in BLEU (Bilingual Evaluation Understudy) \citewpar{P02-1040} score and that the problem of data sparsity is reduced. Recent work by \citewopar{Q18-1022} has explored orthographic similarity for transliteration. In their work, they used related languages which share similar writing systems and phonetic properties, such as the Indo-Aryan languages. 
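The unweighted Levenshtein distance used above to pair cognates can be computed with the standard dynamic-programming recurrence; the sketch below is a generic implementation, not the code of Cognate Morfessor:

```python
# Unweighted Levenshtein (edit) distance between two strings:
# the minimum number of insertions, deletions and substitutions
# needed to turn one string into the other.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))        # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion from a
                           cur[j - 1] + 1,              # insertion into a
                           prev[j - 1] + (ca != cb)))   # substitution (free on match)
        prev = cur
    return prev[-1]
```

Cognate candidates are then the word pairs whose distance, usually normalised by word length, falls below a chosen threshold.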
They have shown that multilingual transliteration leveraging similar orthography outperforms bilingual transliteration in different scenarios. Phonetic transcription is a method for writing a language in another script while keeping the phonemic units intact. It is extensively used in speech processing research, text-to-speech, and speech database construction. Phonetic transcription to a common script has been shown to improve the results of machine translation \citewpar{chakravarthi2018improving}. The authors focus on multilingual translation of languages which use different scripts, and study the effect of transcribing different orthographies to a common script for multilingual NMT. A multiway NMT system was created for Czech and Polish by combining the text with its IPA transcriptions into a 3-way parallel corpus, to take advantage of the phonology of these closely related languages \citewpar{chen-avgustinova-2019-machine}. Orthographic correspondence rules were used as a replacement list for translation between the closely related Czech-Polish pair, with an added back-translated corpus \citewpar{chen-avgustinova-2019-machine}. Dialect translation was studied by \citewopar{baniata2018neural}. To translate Arabic dialects into Modern Standard Arabic, they used multitask learning which shares one decoder for standard Arabic, while every source dialect has a separate encoder; this design is due to the non-standard orthography of the Arabic dialects. The experiments showed improved results for the under-resourced Arabic dialects. Machine translation of named entities is a significant issue due to linguistic and algorithmic challenges between languages. The quality of MT of named entities, including technical terms, has been improved by developing lexicons using orthographic information. Lexicon integration into NMT was studied for Japanese and Chinese MT \citewpar{halpern-2018-large}. 
They deal with the orthographic variation of Japanese named entities using large-scale lexicons. For English-to-Japanese, English-to-Bulgarian, and English-to-Romanian, \citewopar{ugawa-etal-2018-neural} proposed a model that encodes each input word based on its NE tag at each time step. This helps to improve the BLEU score of the machine translation results. \subsection{Code-Switching} A significant part of the corpora for under-resourced languages comes from movie subtitles and technical documents, which makes them even more prone to code-mixing. Most of these corpora are movie speech \citewpar{birch-etal-2019-global} transcribed to text, and they differ from other written genres: the vocabulary is informal, non-linguistic sounds such as \textit{ah} appear, and scripts are mixed in the case of English and native languages \citewpar{TIEDEMANN08.484,chakravarthi2016,chakravarthi-code-mix-survey,chakravarthi-etal-2020-senti-tamil,chakravarthi-etal-2020-senti-malayalam,chakravarthi-code-mix-ruba-ne}. Data augmentation \citewpar{fadaee-etal-2017-data,li-specia-2019-improving} and replacing foreign words with native words using dictionaries or other methods have been studied. Removing the code-mixed words from both sides of the corpus was studied by \citewopar{chakravarthi2018improving,chakravarthi2019wordnet} for English-Dravidian languages. \citewopar{song-etal-2019-code} studied a data augmentation method, making code-switched training data by replacing source phrases with their target translations. Character-based NMT \citewpar{costa-jussa-fonollosa-2016-character,yang-etal-2016-character,lee-etal-2017-fully} can naturally handle intra-sentence code-switching as a result of the many-to-one translation task. \section{Orthographic Information in Unsupervised Machine Translation} Building parallel corpora for under-resourced languages is time-consuming and expensive. As a result, parallel corpora for under-resourced languages are limited or unavailable. 
With limited parallel corpora, supervised SMT and NMT cannot achieve the desired translation quality. However, monolingual corpora can be collected from various sources on the Internet and are much easier to obtain than parallel corpora. Recent research has created machine translation systems using only monolingual corpora \citewpar{koehnandkhight2000, ravi-knight-2011-deciphering, dou-etal-2014-beyond}, removing the dependency on sentence-aligned parallel corpora through unsupervised methods. These systems are based on both SMT \citewpar{klementiev-etal-2012-toward,artetxe-etal-2018-unsupervised} and NMT \citewpar{artetxe2018iclr}. One such task is bilingual lexicon induction: the task of creating word translations from monolingual corpora in two languages \citewpar{turcato-1998-automatically,rosner-sultana-2014-automatic}. One way to induce a bilingual lexicon is to use orthographic similarity, based on the assumption that words that are spelled similarly are often good translations and may be cognates, since they share orthography for historical reasons. A generative model for inducing a bilingual lexicon from monolingual corpora by exploiting orthographic and contextual similarities of words in two different languages was proposed by \citewopar{haghighi-etal-2008-learning}. Many methods based on edit distance and orthographic similarity have been proposed to use linguistic features for word alignment in supervised and unsupervised settings \citewpar{dyer-etal-2011-unsupervised,berg-kirkpatrick-etal-2010-painless,hauer-etal-2017-bootstrapping}. \citewopar{riley-gildea-2018-orthographic} proposed a method to utilise orthographic information in word-embedding-based bilingual lexicon induction. The authors used the two languages' alphabets to extend the word embeddings and modified the similarity score functions of previous word-embedding methods to include an orthographic similarity measure. 
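The idea of combining distributional and orthographic cues can be sketched as a weighted similarity score; the toy embeddings, the word pairs, and the mixing weight `lam` below are illustrative assumptions, not the scoring function of any cited paper:

```python
# Sketch: rank candidate translations by a mix of embedding cosine
# similarity and string (orthographic) similarity.
import difflib
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def combined_score(w_src, w_tgt, emb_src, emb_tgt, lam=0.5):
    sem = cosine(emb_src[w_src], emb_tgt[w_tgt])            # distributional cue
    orth = difflib.SequenceMatcher(None, w_src, w_tgt).ratio()  # orthographic cue
    return lam * sem + (1 - lam) * orth

# Hypothetical cross-lingually aligned 2-d embeddings.
emb_src = {"noche": [0.9, 0.1]}
emb_tgt = {"noite": [0.8, 0.2], "dia": [-0.7, 0.6]}

best = max(emb_tgt, key=lambda t: combined_score("noche", t, emb_src, emb_tgt))
```

Because the Spanish-Portuguese pair \textit{noche}/\textit{noite} is both distributionally close and orthographically similar, it wins over the unrelated candidate.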
Bilingual lexicons have been shown to improve machine translation in both RBMT \citewpar{turcato-1998-automatically} and CBMT \citewpar{chu-etal-2014-improving,dou-knight-2013-dependency,dou-etal-2014-beyond}. In the work of \citewopar{W17-2504}, the authors performed translation lexicon induction for heavily code-switched text of historically unwritten colloquial dialects via loanwords, using expert knowledge of the languages involved. Their method takes word pronunciations (IPA) from a donor language and converts them into the borrowing language. This shows improvements in BLEU score for the induction of a Moroccan Darija-English translation lexicon bridged via French loanwords. \section{Conclusion} \label{con} In this work, we presented a review of the current state-of-the-art in machine translation utilising orthographic information, covering rule-based machine translation, statistical machine translation, neural machine translation and unsupervised machine translation. As part of this survey, we introduced the different machine translation methods and showed how orthography plays a role in machine translation results. Methods that utilise orthographic information have already led to significant improvements in machine translation results. From our comprehensive survey, we can see that orthographic information improves translation quality in all types of machine translation, from rule-based to completely unsupervised systems such as bilingual lexicon induction. For rule-based machine translation, translation between closely related languages is simplified to transliteration thanks to cognates. Statistical machine translation deals with the data sparsity problem by using orthographic information. Since statistical machine translation has been studied for a long time, most orthographic properties have been studied for different types of languages. 
Even recent neural machine translation and other methods still use preprocessing tools such as true-casers, tokenizers, and detokenizers that were developed for statistical machine translation. Recent neural machine translation is completely end-to-end; however, it suffers from data sparsity when dealing with morphologically rich or under-resourced languages. These issues are dealt with by utilising orthographic information in neural machine translation; one such method which improves translation is the transliteration of cognates. Code-switching is another issue with under-resourced languages, due to data collected from volunteer annotators, web crawling or other such methods. However, dealing with code-switching based on orthography, or using character-based neural machine translation, has been shown to improve the results significantly. From this, we conclude that orthographic information is heavily utilised when translating between closely related languages or when using multilingual neural machine translation with closely related languages. While exciting advances have been made in machine translation in recent years, there are still promising directions for exploration in leveraging linguistic information, such as orthographic information. One such area is unsupervised machine translation and bilingual lexicon induction: recent works show that word vectors, along with orthographic information, perform better for aligning bilingual lexicons in completely unsupervised or semi-supervised approaches. We believe that our survey will help to catalogue future research papers and lead to a better understanding of how orthographic information can improve machine translation results. 
\section*{Acknowledgments} This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (Insight), SFI/12/RC/2289$\_$P2 (Insight$\_$2), \& SFI/18/CRT/6223 (CRT-Centre for Research Training in Artificial Intelligence) co-funded by the European Regional Development Fund as well as by the EU H2020 programme under grant agreements 731015 (ELEXIS-European Lexical Infrastructure), 825182 (Prêt-à-LLOD), and Irish Research Council grant IRCLA/2017/129 (CARDAMOM-Comparative Deep Models of Language for Minority and Historical Languages). \bibliographystyle{spbasic}
\section*{Introduction} Achieving a quantum advantage for information processing requires scaling quantum systems to sizes that can provide significant quantum resources, including entanglement. Large quantum systems are now realized across many platforms, including atomic simulators beyond $50$ qubits\cite{bernien:2017, zhang:2017, friis:2018}, nascent superconducting and trapped-ion based quantum computers\cite{debnath:2016, brown:2016}, integrated-photonic circuits\cite{kues:2017, carolan:2015, masada:2015, wang:2018, mennea:2018}, and photon-pairs entangled in high-dimensional variables\cite{yokoyama:2013, mirhosseini:2015, zhong:2015, xie:2015, bolduc:2016,islame:2017}. As quantum-information-based technologies mature, it will become useful to separate the physical layer providing quantum resources (e.g. trapped ions, photons) from the logical layer that utilizes those resources. For example, many imperfect qubits may form one logical qubit\cite{gambetta:2017, frowis:2017}, or thousands of atoms may coherently act as a single-photon quantum memory\cite{mcconnell:2015, tiranov:2017}. As with classical communication and computing, protocols and algorithms will be implemented in the logical layer with minimal concern for the underlying platform. Because real-world systems are varied and imperfect, the quantum resources they provide must be characterized before use\cite{gambetta:2017}. Certifying an amount of entanglement in a large quantum system is an essential but daunting task. While entanglement witnesses\cite{terhal:2002, guhne:2009} and Bell tests\cite{brunner:2014} can reveal entanglement's presence, quantification generally requires a full estimation of the quantum state\cite{horodecki:2009}. Beyond moderately sized states, the number of parameters to physically measure (i.e. the number of the measurements) becomes overwhelming, making this approach unviable for current and future large-scale quantum technologies. 
Any practical method for quantitative entanglement certification must require only limited data. Two ideas can dramatically reduce the needed measurement resources. First is the development of quantitative entanglement witnesses, which bound the amount of entanglement without full state estimation\cite{horodecki:1999,audenaert:2006, brandao:2005, eisert:2007}. In a recent landmark experiment, $4.1$ entangled bits (ebits) of high-dimensional biphoton entanglement was certified using partial state estimation\cite{martin:2017}. One ebit describes the amount of entanglement in a maximally entangled, two-qubit state\cite{horodecki:2009}. Second, prior knowledge can be exploited to economize sampling. Certain features, or structure, are expected in specific systems. In highly-entangled quantum systems, for example, some observables should be highly correlated, the density matrix will be low-rank, or the state may be nearly pure. Such assumptions can be paired with numerical optimization to recover signals sampled below the Nyquist limit. One popular technique is Compressed Sensing\cite{donoho:2006}, which has massively disrupted conventional thinking about sampling. Applied to quantum systems, compressed sensing reduced measurement resources significantly for tasks including tomography\cite{gross:2010, flammia:2012, tonolini:2014, kalev:2015, riofrio:2016, steffens:2017, bolduc:2017} and witnessing entanglement\cite{howland:2013, howland:2016}. Computational recovery techniques have substantial downsides. Because they are estimation techniques, conclusions drawn from their results are contingent on the veracity of the initial assumptions. They are therefore unsuitable for closing loopholes or verifying security. Numerical solvers are often proven correct under limited noise models and require hand-tuned parameters, potentially adding artifacts and complicating error analysis. Finally, the computational resources needed become prohibitive in very large systems. 
The largest quantum systems characterized using these approaches remain considerably smaller than state-of-the-art. Here we provide an approach to entanglement quantification that overcomes these downsides. First, we improve an entropic, quantitative entanglement witness to operate on arbitrarily downsampled data. Then we develop an adaptive, multilevel sampling procedure to rapidly obtain compressed distributions suitable for the witness. Crucially, our sampling assumptions are independent of the entanglement certification, so our method can guarantee security. Because we avoid numerical optimization, error analysis is straightforward and few computational resources are needed. \section*{Results} \subsection{Entropic witnesses of high-dimensional entanglement} Entanglement is revealed when subsystems of a quantum state are specially correlated. A common situation divides a system between two parties, Alice and Bob, who make local measurements on their portion. Given two mutually unbiased, continuous observables $\mathbf{\hat{x}}$ and $\mathbf{\hat{k}}$, they can measure discrete joint probability distributions $P(\mathbf{X}_\mathrm{a},\mathbf{X}_\mathrm{b})$ and $P(\mathbf{K}_\mathrm{a}, \mathbf{K}_\mathrm{b})$ by discretizing to pixel sizes $\Delta_{\mathrm{X}}$ and $\Delta_{\mathrm{K}}$. Here, bold notation indicates that $\mathbf{X}$ and $\mathbf{K}$ may (though need not) represent multidimensional coordinates. For example $\mathbf{X}$ and $\mathbf{K}$ might represent cartesian position and momentum that can be decomposed into horizontal and vertical components such that $\mathbf{X}=(X,Y)$ and $\mathbf{K}=(K^{(\mathrm{x})},K^{(\mathrm{y})})$. 
A recent, quantitative entanglement witness\cite{schneeloch:2017} uses these distributions to certify an amount of entanglement: \begin{equation} d\log_2\left(\frac{2\pi}{\Delta_{\mathrm{X}} \Delta_{\mathrm{K}}}\right)-H(\mathbf{X}_{\mathrm{a}}|\mathbf{X}_{\mathrm{b}})-H(\mathbf{K}_{\mathrm{a}}|\mathbf{K}_{\mathrm{b}}) \le E_{\mathrm{f}}, \label{eq:witness} \end{equation} where, for example, $H(\mathbf{A}|\mathbf{B})$ is the conditional Shannon entropy for $P(\mathbf{A},\mathbf{B})$. $E_{\mathrm{f}}$ is the Entanglement of Formation, a measure describing the average number of Bell pairs required to synthesize the state. Eq. \ref{eq:witness} does not require full state estimation, but depends on an informed choice of $\hat{\mathbf{x}}$ and $\hat{\mathbf{k}}$. Still, in large systems, measuring these joint distributions remains oppressive. For example, if $\mathbf{X}_{\mathrm{a}}$ has $100$ possible outcomes, determining $P(\mathbf{X}_{\mathrm{a}},\mathbf{X}_{\mathrm{b}})$ takes $100^2$ joint measurements. Describing quantum uncertainty with information-theoretic quantities is increasingly popular\cite{coles:2017, schneeloch:2013}. Entropies naturally link physical and logical layers and have useful mathematical properties. In particular, many approximations to the joint distributions can only increase conditional entropy. Because Eq. \ref{eq:witness} bounds $E_\mathrm{f}$ from below, any such substitution is valid. \subsection{Improving an entropic entanglement witness for use with limited data} We use two entropic shortcuts to improve the entanglement witness. First, if the system is highly entangled, and $\hat{\mathbf{x}}$ and $\hat{\mathbf{k}}$ are well-chosen, the joint distributions will be highly correlated; a measurement outcome for $\mathbf{X}_{\mathrm{a}}$ should correlate to few outcomes for $\mathbf{X}_{\mathrm{b}}$. The distributions are therefore highly compressible. 
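Evaluating the witness of Eq. (1) only requires the conditional entropies $H(A|B)=H(A,B)-H(B)$ of the measured joint distributions. The sketch below illustrates this numerically; the perfectly (anti-)correlated toy distributions and the pixel sizes are illustrative assumptions, not experimental data:

```python
import numpy as np

def cond_entropy(P):
    """H(A|B) in bits for a discretized joint distribution P[a, b]."""
    P = P / P.sum()
    pb = P.sum(axis=0)                                  # marginal over B
    H_joint = -np.sum(P[P > 0] * np.log2(P[P > 0]))     # H(A, B)
    H_b = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))     # H(B)
    return H_joint - H_b

n = 64
Pxx = np.eye(n)           # perfectly correlated toy position distribution
Pkk = np.eye(n)[::-1]     # anti-correlated toy momentum distribution
dx = 0.1                  # position pixel size (arbitrary units)
dk = 2 * np.pi / (n * dx) # momentum pixel size chosen so n*dx*dk = 2*pi
d = 1                     # one transverse dimension

# Lower bound on the entanglement of formation, in ebits (Eq. 1).
bound = d * np.log2(2 * np.pi / (dx * dk)) - cond_entropy(Pxx) - cond_entropy(Pkk)
```

For these noiseless toy correlations both conditional entropies vanish, so the bound reduces to the resolution term $\log_2(2\pi/\Delta_X\Delta_K) = \log_2 n = 6$ ebits.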
Consider replacing arbitrary groups of elements in $P(\mathbf{X}_{\mathrm{a}},\mathbf{X}_{\mathrm{b}})$ with their average values to form a multilevel, compressed estimate $\tilde{P}(\mathbf{X}_{\mathrm{a}},\mathbf{X}_{\mathrm{b}})$. By multilevel, we mean that the new, estimated distribution will appear as if it was sampled with varying resolution---fine detail in some regions and coarse detail in others. Because coarse-graining cannot decrease conditional entropy, Equation \ref{eq:witness} remains valid for $\tilde{P}(\mathbf{X}_{\mathrm{a}},\mathbf{X}_{\mathrm{b}})$ and $\tilde{P}(\mathbf{K}_{\mathrm{a}},\mathbf{K}_{\mathrm{b}})$ (see Supplemental Material: Proof arbitrary coarse-graining cannot decrease conditional entropy). Good estimates for $\tilde{P}(\mathbf{X}_{\mathrm{a}},\mathbf{X}_\mathrm{b})$ and $\tilde{P}(\mathbf{K}_{\mathrm{a}},\mathbf{K}_\mathrm{b})$ can be efficiently measured by sampling at high resolution in correlated regions and low resolution elsewhere. Note that the original ($P$) and estimate ($\tilde{P}$) are full correlation matrices with $N$ elements, but only $M\ll N$ values are measured to specify $\tilde{P}$. The witness is valid for arbitrary downsampling; it works best when the approximate and actual distributions are most similar, but can never overestimate $E_\mathrm{f}$ or allow false-positives. Second, if the observables are multi-dimensional such that they can be decomposed into $d$ marginal, component observables (e.g. horizontal and vertical components) $\hat{\mathbf{x}}=(\hat{x}^{(1)},\hat{x}^{(2)},...,\hat{x}^{(d)})$ (similar for $\hat{\mathbf{k}}$), the conditional entropies have the property \begin{equation} H(\mathbf{X}_\mathrm{a}|\mathbf{X}_\mathrm{b}) \leq \sum_{i=1}^d H(X^{(i)}_\mathrm{a}|X^{(i)}_\mathrm{b}), \end{equation} with equality when $P(\mathbf{X}_\mathrm{a},\mathbf{X}_\mathrm{b})$ is separable. 
If we expect nearly-separable joint-distributions, the reduced, marginal joint-distributions $P(X^{(i)}_\mathrm{a},X^{(i)}_\mathrm{b})$ can be separately measured but still capture nearly all of the correlations present. For example, in a two-dimensional Cartesian scenario, we might separately measure horizontal correlations $P(X_\mathrm{a},X_\mathrm{b})$, $P(K^{\mathrm{(x)}}_\mathrm{a},K^{\mathrm{(x)}}_\mathrm{b})$ and vertical correlations $P(Y_\mathrm{a},Y_\mathrm{b})$, $P(K^{\mathrm{(y)}}_\mathrm{a},K^{\mathrm{(y)}}_\mathrm{b})$. For $d$-component observables, this is a $d^{\text{th}}$-power reduction in the number of measurements. Like the first shortcut, this approximation also cannot overestimate $E_\mathrm{f}$. Combining both improvements, our new quantitative entanglement witness is \begin{align} \label{eq:impwitness} \sum_{i=1}^d \Biggl[ \log_2 & \left(\frac{2\pi}{\Delta_\mathrm{X}^{(i)}\Delta_\mathrm{K}^{(i)}}\right) \\ \nonumber - & \tilde{H}(X^{(i)}_\mathrm{a}|X^{(i)}_\mathrm{b}) - \tilde{H}(K^{(i)}_\mathrm{a}|K^{(i)}_\mathrm{b}) \Biggr] \le E_\mathrm{f}. \end{align} \subsection{Proof-of-concept experimental setup} As a test experimental system, we use photon pairs entangled in their transverse-spatial degrees of freedom\cite{walborn:2010, schneeloch:2016}, where the transverse plane is perpendicular to the optic axis. Our testbed, given in Figure \ref{fig:setup}(a), creates photon pairs via spontaneous parametric downconversion (see Methods). Generated photons are positively correlated in transverse-position and anti-correlated in transverse-momentum. This state closely approximates the original form of the Einstein-Podolsky-Rosen paradox. Because position $\hat{\mathbf{x}}=(\hat{x},\hat{y})$ and momentum $\hat{\mathbf{k}}=(\hat{k}^{\mathrm{(x)}},\hat{k}^{\mathrm{(y)}})$ (where $\mathbf{\hat{k}}= \mathbf{\hat{p}}/\hbar$) observables are continuous, this state is very high-dimensional.
After creation, the twin photons are separated at a beam splitter and enter identical measurement apparatuses, where a basis selection system allows for interrogating position or momentum. A digital micromirror device (DMD)---an array of individually addressable micromirrors---is placed in the output plane. By placing patterns on the signal and idler DMDs and using coincidence detection, rectangular regions of the position or momentum joint-distributions are sampled at arbitrary resolution. \subsection{Adaptive, multi-level data acquisition} We measure joint-distributions $\tilde{P}(X_\mathrm{a},X_\mathrm{b})$, $\tilde{P}(Y_\mathrm{a},Y_\mathrm{b})$, $\tilde{P}(K^{\mathrm{(x)}}_\mathrm{a},K^{\mathrm{(x)}}_\mathrm{b})$, and $\tilde{P}(K^{\mathrm{(y)}}_a,K^{\mathrm{(y)}}_\mathrm{b})$. Finding compressed distributions requires a multilevel partitioning of the joint space that is not known a priori. Our adaptive approach is inspired by quad-tree image compression\cite{samet:1985}. An example is shown in Figure \ref{fig:setup}(b-g). First, all DMD mirrors are directed towards the detector to obtain a total coincidence rate $R_\mathrm{T}$. Then, the joint space is divided into four quadrants (c), which are independently sampled. If the count rate in the $i^{\text{th}}$ quadrant exceeds a threshold $\alpha R_\mathrm{T}$ ($0\le\alpha\le1$), the region is recursively split and the process is repeated. The algorithm rapidly identifies important regions of the joint-space for high-resolution sampling. We set the maximum resolution of our system to $512\times512$ pixels-per-photon for a $512^4$-dimensional joint space. The recovered joint-distributions in position and momentum are given in Figure \ref{fig:distributions}(a-d). Figure \ref{fig:distributions}(e-f) show $\tilde{P}(X_\mathrm{a},X_\mathrm{b})$ with the partitioning overlaid. These display the expected strong position and momentum correlations. 
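The recursive splitting can be sketched offline against a known joint distribution (a hypothetical stand-in for the live DMD measurement, where "sampling" a block simply returns its probability mass):

```python
import numpy as np

def adaptive_partition(P, alpha, x0=0, y0=0, w=None):
    # Quad-tree partitioning of a joint distribution P; blocks whose mass
    # exceeds the threshold alpha are split recursively, as in Fig. 1(b-g).
    if w is None:
        w = P.shape[0]
    mass = P[x0:x0 + w, y0:y0 + w].sum()
    if mass <= alpha or w == 1:
        return [(x0, y0, w, mass)]              # leaf: one joint measurement
    h = w // 2
    leaves = []
    for dx, dy in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += adaptive_partition(P, alpha, x0 + dx, y0 + dy, h)
    return leaves

n = 64
P = np.eye(n) / n                               # perfectly correlated toy state
leaves = adaptive_partition(P, alpha=0.002)
assert len(leaves) < n * n                      # far fewer than n^2 cells
assert abs(sum(m for *_, m in leaves) - 1.0) < 1e-9
```

For this diagonal toy state the leaf count grows roughly linearly in $n$ rather than as $n^2$, which is the source of the compression.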
A histogram showing the number of partitions at various scales is given in Figure \ref{fig:distributions}(g); most partitions are either $1\times 1$ or $2\times 2$ pixels in size. Only $6,456$ partitions are needed to accurately cover the $512^4$-dimensional space---an astonishing $20$-million-fold improvement versus using the unimproved witness. Over $10^{21}$ measurements are needed to perform full, unbiased tomography. The entanglement witness (Equation \ref{eq:impwitness}) applied to the data in Figure \ref{fig:distributions} is shown in Figure \ref{fig:entropies}. For short acquisition times, there is a systematic bias towards estimating a large $E_\mathrm{f}$. This occurs because many of the poorly correlated regions have not yet accumulated any detection events, resulting in a systematic bias towards low conditional entropies. Statistical error is low in this region because the highly-correlated regions have high-count rates and rapidly reach statistical significance. With additional measurement time, the initial bias diminishes and statistical error decreases. To our knowledge, $7.11\pm.04$ ebits is the largest quantity of entanglement experimentally certified in a quantum system. More than $14$ maximally-pairwise-entangled logical qubits are needed to describe an equal amount of entanglement. We do not require advanced post-processing such as numerical optimization, estimation, or noise reduction; however, we do post-select on coincident detection events and optionally subtract accidental coincidences (see Methods). Our witness does not explicitly require any post-processing, and is suitable for use in adversarial scenarios given a pristine experimental system. The performance of our technique as a function of maximum discretization resolution is shown in Figure \ref{fig:results}. Figure \ref{fig:results}(a) shows the approximate distribution partition number as a function of discretization dimension and the improvement factor over naive sampling. 
Figure \ref{fig:results}(b) shows the certified $E_\mathrm{f}$, with and without accidental subtraction, along with the ideal $E_\mathrm{f}$ for our source under a double-Gaussian approximation\cite{schneeloch:2016}. Because our pump laser is not Gaussian (Figure \ref{fig:setup}(a)), the actual $E_\mathrm{f}$ is slightly lower but difficult to simulate. Error bars enclosing two standard deviations are scarcely visible. For low resolution, fewer than $1,000$ measurements witness entanglement. Progressively refining to higher effective resolution allows more entanglement to be certified until the maximum is reached. \section*{Discussion} We have shown an efficient method for performing information-based entanglement certification in a very large quantum system. An alternative, important metric for quantifying entanglement in high-dimensional systems is the entanglement dimensionality, or Schmidt rank, which describes the number of modes over which the entanglement is distributed \cite{terhal:2000,guhne:2009, sperling:2011, krenn:2014}. In contrast, entanglement measures quantify entanglement as a resource of entangled bits without regard for their distribution. Efficiently certifying the entanglement dimensionality faces many of the same problems as certifying a number of ebits, such as the intractability of full tomography and the desire to avoid side effects from prior assumptions. Recently, Bavaresco et al. used measurements in only two bases to efficiently certify over $9$ entangled dimensions between orbital-angular-momentum entangled photon pairs without special assumptions about the underlying state \cite{bavaresco:2018}. The number of entangled dimensions and the number of entangled bits are complementary but distinct characterizations of entanglement \cite{erker:2017}. If a density matrix cannot be decomposed into pure states with Schmidt rank less than $d$, then the state is at least $d$-dimensionally entangled.
However, a $d$-dimensional entangled state may possess an arbitrarily small amount of entanglement. Consider a system with a large Schmidt rank, but where one coefficient of the Schmidt decomposition is much larger than the others. This system will have a large entanglement dimensionality but require few entangled bits to synthesize. In this way, a given entanglement dimensionality $D$ provides an upper bound on the entanglement of formation $E_\mathrm{f}$ such that $0<E_\mathrm{f}\le \log_2 D$. In contrast, a given $E_\mathrm{f}$ provides a lower bound to the entanglement dimensionality $D \ge 2^{E_\mathrm{f}}$, describing the situation where all $D$ dimensions are maximally entangled. Our quantitative witness therefore also certifies entanglement dimensionality, but may dramatically underestimate when the target system is not near-maximally entangled (e.g. with additive noise or non-uniform marginals). In our case, we certify $2^{7.11}\ge138$ maximally-entangled dimensions with background subtraction and $2^{3.43}\ge10$ maximally-entangled dimensions without background subtraction. To our knowledge, $10$ entangled dimensions is the largest certified entanglement dimensionality without assumptions about the state. Our approach shows a path forward for certifying quantum resources in large quantum systems, where we exploit prior knowledge without conventional downsides. We show the power of an information-theoretic approach to characterizing quantum systems, and how compression can be leveraged without computational signal recovery. Though the method presented here is limited to EPR-style systems where entanglement is shared by two parties, we expect similar techniques for many-body systems utilizing higher-order correlations will soon follow. \begin{methods} \subsection{Experimental apparatus} $810$ nm, spatially entangled photon pairs are produced via spontaneous parametric downconversion (SPDC)\cite{schneeloch:2016}. 
The pump laser is a $405$ nm diode laser (CrystaLaser DL405-025-SO) attenuated to $7.9$ mW with a $356$ $\mu$m (x) $\times$ $334$ $\mu$m (y) beam waist. A spectral clean-up filter (Semrock Versachrome TBP01-400/16) removes unwanted $810$ nm light. The pump laser is not spatially filtered. The nonlinear crystal is a $3$ mm long BiBO crystal oriented for type-I, degenerate, collinear SPDC. The crystal is held at $32.3^{\circ}$C in an oven for long-term stability. A low-pass interference filter (Semrock LP442) removes remaining pump light, followed by a telescope relay system ($f_1=50$ mm, $f_2=100$ mm) that magnifies the SPDC field $\approx 2$X. A half-waveplate and polarizing beamsplitter choose between imaging ($\hat{\mathbf{x}}$) and Fourier-transforming ($\hat{\mathbf{k}}$) beam-paths; a beam block is placed in the unused path. The DMDs (TI Lightcrafter 4500) are computer controlled via a digital video port (HDMI). A $512\times1024$ physical-pixel area was used for data given in this manuscript. Because the DMD has twice the vertical pixel density, this corresponds to a square area. $10$ mm effective focal length, aspheric lenses (Thorlabs AC080-010) couple light into $100$ micron core multi-mode fibers connected to photon-counting detector modules (Excelitas SPCM-AQ4C-10). $810/10$ nm bandpass filters (Thorlabs FBS810-10) are placed before the fiber coupling. A time-correlated single-photon counting module (PicoQuant HydraHarp400) produces histograms of photon-pair relative arrival times. We post-select on coincident detections within a $1$ ns coincidence window centered on the histogram peak. With all DMD mirrors pointed towards the detectors, there are approximately $26,400$ total coincidences/second. 
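The windowed coincidence extraction can be sketched on a synthetic arrival-time histogram (all counts here are hypothetical, not the measured values):

```python
import numpy as np

rng = np.random.default_rng(3)
bins = np.arange(-5000, 5001)                   # relative arrival time, ps
hist = rng.poisson(2.0, bins.size)              # flat accidental background
hist[np.abs(bins) <= 200] += 50                 # coincidence peak near zero delay

window = np.abs(bins) <= 500                    # 1 ns window centered on peak
coincidences = hist[window].sum()
# 1 ns accidental window displaced 2 ns from the peak (centered at +2000 ps)
accidentals = hist[(bins >= 1500) & (bins < 2500)].sum()
assert coincidences > accidentals
```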
\subsection{Data collection} The apparatus must be adjusted to separately measure the four reduced, joint-probability distributions $P(X_\mathrm{a},X_\mathrm{b})$, $P(Y_\mathrm{a},Y_\mathrm{b})$, $P(K^{\mathrm{(x)}}_\mathrm{a}, K^{\mathrm{(x)}}_\mathrm{b})$, and $P(K^{\mathrm{(y)}}_\mathrm{a},K^{\mathrm{(y)}}_\mathrm{b}).$ For example, to access the horizontal, joint-position distribution $P(X_\mathrm{a}, X_\mathrm{b})$, we adjust the half-waveplates to direct light down the imaging beam-paths so the DMDs lie in an image plane of the nonlinear crystal. To access a particular, rectangular element of the distribution, local, one-dimensional ``top-hat'' patterns are placed on the signal ($\mathrm{a}$) and idler ($\mathrm{b}$) DMDs that only vary horizontally. In the regions where light should be directed to the detectors, all vertical pixels are used. The outer-product of the local patterns defines the rectangular region of the joint-space $P(X_\mathrm{a},X_\mathrm{b})$ being sampled. To instead access the vertical, joint-position distribution $P(Y_\mathrm{a},Y_\mathrm{b})$, local DMD patterns are used that only vary vertically. The joint-momentum distributions are sampled similarly, with the half-waveplates instead adjusted to send light down the Fourier-transforming optical path so that the DMDs sit in the far-field of the nonlinear crystal. \subsection{Adaptive Sampling Algorithm} For each configuration, experimental data is stored in the nodes of a quad-tree decomposition of $P$ whose levels describe increasingly fine detail. The $i^{\mathrm{th}}$ node corresponds to a square area of $\tilde{P}$ at location $(x^{i}_\mathrm{a},x^{i}_\mathrm{b})$ with span $w^{i}_\mathrm{a}=w^{i}_\mathrm{b}=w$. Nodes are sampled by placing the corresponding, one-dimensional local patterns on the DMDs and generating a coincidence histogram during acquisition time $T_\mathrm{a}=0.5$ s.
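The rectangular-region sampling via outer products of one-dimensional DMD patterns can be written out numerically (toy distribution and mask sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.random((32, 32)); P /= P.sum()          # toy joint distribution P(Xa, Xb)

# 1-D "top-hat" masks on the signal and idler DMDs; the coincidence signal
# is the joint probability mass where both masks are on: mask_a @ P @ mask_b.
mask_a = np.zeros(32); mask_a[8:16] = 1
mask_b = np.zeros(32); mask_b[8:16] = 1
rate = mask_a @ P @ mask_b
assert abs(rate - P[8:16, 8:16].sum()) < 1e-12
```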
Coincidences $C_i$ are counted within a $1$ ns coincidence window centered on the coincidence peak; accidental coincidences $A_i$ are counted in a $1$ ns window displaced $2$ ns from the coincidence window. Coincidence and accidental values are appended to a list each time the node is sampled. The estimated count-rate is $R_i=\braket{C_i}/\epsilon_i T_\mathrm{a}$, where $\epsilon_i$ is a calibrated, relative fiber-coupling efficiency. Optionally, $A_i$ can be subtracted from $C_i$ to remove accidentals. Uncertainty is computed by assuming Poissonian counting statistics for $C_i$ and $A_i$ and applying standard, algebraic propagation of error through the calculation of the entanglement quantity (Eq. \ref{eq:impwitness}). The data collection algorithm consists of a partitioning phase followed by an iterative phase. During partitioning, the algorithm repeatedly iterates through a scan-list of leaves of the tree. Node $i$ is considered stable when $\mathrm{sgn}(\alpha R_\mathrm{T}-R_i)$ is known to at least $\beta$ standard deviations of certainty, where the splitting threshold $\alpha$ ($0 \le \alpha \le 1$) and stability criterion $\beta$ are user-chosen heuristics. Stable nodes are no longer measured. If a node is stable and $R_i \ge \alpha R_\mathrm{T}$, the node is split into four equal-sized sub-quadrants, which are initially unstable and added to the scan-list. Optionally, a maximum resolution (maximum tree depth) may be set. The transition to the iterative phase occurs when the percentage of unstable leaves is less than $\Gamma$, a user-chosen parameter. At this point, stability is ignored and all leaf nodes are scanned repeatedly, guaranteeing each the same total acquisition time. Various final stopping criteria can be used; we chose a fixed total run time. Note that the heuristic parameters $\alpha$, $\beta$, and $\Gamma$ may be changed during operation if desired.
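A sketch of the per-node count-rate estimate with optional accidental subtraction and first-order Poisson error propagation (simplified to a single node; variable names are ours):

```python
import numpy as np

def rate_estimate(C, A, T_a=0.5, eff=1.0, subtract=True):
    # Count-rate estimate for one node, with optional accidental subtraction
    # and first-order Poisson error propagation (sigma_C = sqrt(C)).
    signal = C - A if subtract else C
    rate = signal / (eff * T_a)
    var = (C + A) if subtract else C            # independent Poisson counts
    sigma = np.sqrt(var) / (eff * T_a)
    return rate, sigma

r, s = rate_estimate(C=400, A=16)
assert r == 768.0 and abs(s - np.sqrt(416) / 0.5) < 1e-12
```

Note that subtracting accidentals lowers the rate but adds their Poisson variance, which is why background-subtracted results carry slightly larger error bars per count.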
For the data shown in this manuscript, $\alpha=.002$, $\beta=2$, and $\Gamma=.15$ with a $30$ hour runtime. The probability distribution $\tilde{P}$ is computed by uniformly distributing the estimated count rate (with or without-accidental subtraction) from each leaf node across its constituent elements in $\tilde{P}$, followed by normalization. \section*{Data Availability} The data supporting the results presented in this manuscript is available from the corresponding author G.A.H upon request. \end{methods} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figure_1.pdf} \caption{{\bf Experimental setup for adaptive measurements} (a) An entangled photon source produces spatially entangled photon pairs, which are separated and routed through basis selection optics that switch between measuring transverse-position or transverse-momentum. Computer-controlled digital micromirror devices and photon-counting detectors perform joint spatial projections at up to $512\times 512$ pixel resolution. (b) shows a simulated, true position joint-distribution of $P(X_\mathrm{a},X_\mathrm{b})$ at $128\times 128$ pixel resolution, while (c-g) show its simulated, adaptively decomposed estimate $\tilde{P}(X_\mathrm{a}, X_\mathrm{b})$ as it is refined to higher detail via quad-tree decomposition. When the joint-intensity in a block exceeds a user-defined threshold, it is split into four sub-quadrants and the process is recursively repeated, rapidly partitioning the space to obtain a compressed distribution from very few measurements.} \label{fig:setup} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{Figure_2_Corrected.pdf} \caption{{\bf Measured joint probability distributions at $512\times 512$ pixel resolution}. (a-d) show the four estimated joint probability distributions with their single-party marginal distributions overlaid, showing tight correlations. 
(e) shows an enlarged version of $\tilde{P}(X_\mathrm{a},X_\mathrm{b})$ overlaid with the adaptive partitioning, with (f) showing a small central region to see fine detail. The histogram (g) shows the number of partitions as a function of their area. Only $6,456$ measurements are needed instead of $2\times 512^4$.} \label{fig:distributions} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{Figure_3.pdf} \caption{{\bf Entanglement quantification versus acquisition time} The entanglement of formation $E_\mathrm{f}$ is given as a function of acquisition time-per-partition for unaltered coincidence data and accidental-subtracted data. Error bars enclosing two standard deviations are determined by propagation of error from photon-counting statistics. We confirm the validity of this error analysis strategy via Monte Carlo simulation in Supplemental Material: Monte Carlo error analysis (see Supplemental Figure 1).} \label{fig:entropies} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{Figure_4.pdf} \caption{{\bf Entanglement quantification versus maximum resolution} (a) shows the number of partitions required as a function of maximum allowed resolution and the improvement over the uncompressed approach. (b) shows the amount of entanglement captured as the maximum resolution increases. We see the progressive nature of the technique, which witnesses entanglement with few measurements at low resolution but more accurately quantifies it with further refinement. Our results approach the ideal maximum measurable value $E_\mathrm{f}=7.68$ ebits for our source.} \label{fig:results} \end{figure*} \clearpage \begin{addendum} \item[Acknowledgements] We gratefully acknowledge support from the OSD ARAP QSEP program and Air Force Office of Scientific Research LRIR 14RI02COR. J.S. acknowledges support from the National Research Council Research Associate Program. 
Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of AFRL. \item[Author Contributions] G.A.H. and J.S. conceived of the idea and contributed equally. J.S. derived the entanglement witness and led the theoretical analysis. G.A.H. and C.C.T. developed the data collection algorithm. G.A.H. performed the experiment with help from M.L.F. and analyzed the data with help from C.C.T. and J.S. P.M.A. participated in useful scientific discussions. G.A.H. wrote the manuscript with contributions from all authors. \item[Competing Interests] The authors declare that they have no competing interests. \item[Correspondence] Correspondence and requests for materials should be addressed to G.A.H.~(email: [email protected]). \end{addendum} \subsection{Proof that arbitrary coarse-graining cannot decrease conditional entropy} We are given two discrete probability distributions $P_{1}(X_{\mathrm{A}},X_{\mathrm{B}})$ and $P_{2}(X_{\mathrm{A}},X_{\mathrm{B}})$. We also assume a permutation operation $\chi$ that shuffles the outcomes of $X_\mathrm{A}$ and $X_\mathrm{B}$. With this, we define the permuted distributions $P_{1}^{'}(X_\mathrm{A},X_\mathrm{B})$ and $P_{2}^{'}(X_\mathrm{A},X_\mathrm{B})$ as the result of applying $\chi$ to $P_{1}(X_\mathrm{A},X_\mathrm{B})$ and $P_{2}(X_\mathrm{A},X_\mathrm{B})$, respectively. The joint convexity of relative entropy states that given distributions $P_{1}, P_{2}, P_{1}^{'}$, and $P_{2}^{'}$, the following inequality holds: \begin{align} \label{ConvRelEnt} \lambda \mathscr{D}&(P_{1}||P_{2}) + (1-\lambda)\mathscr{D}(P_{1}^{'}||P_{2}^{'}) \nonumber\\ &\geq \mathscr{D}(\lambda P_{1} + (1-\lambda) P_{1}^{'}||\lambda P_{2} + (1-\lambda) P_{2}^{'}) \end{align} where $\lambda\in[0,1]$. Next, we define the mixed probability distribution $\bar{P}_{1}\equiv(\lambda P_{1} + (1-\lambda) P_{1}^{'})$, and define $\bar{P}_{2}$ similarly.
Since $P_{1}^{'}$ and $P_{2}^{'}$ are respectively related to $P_{1}$ and $P_{2}$ by the same permutation $\chi$, we have that $\mathscr{D}(P_{1}||P_{2})=\mathscr{D}(P_{1}^{'}||P_{2}^{'})$. Therefore, we obtain the inequality: \begin{equation}\label{RelEntMixIneq} \mathscr{D}(P_{1}||P_{2})\geq \mathscr{D}(\bar{P}_{1}||\bar{P}_{2}). \end{equation} This result, that mixing (a form of majorization) cannot increase relative entropy, has far-reaching applications. In particular, coarse-graining is a form of majorization between adjacent elements in a probability distribution. Because all (Shannon) entropic functions can be expressed in terms of relative entropies, it immediately follows that: \begin{align} H_{\mathrm{\bar{P}}}(X_\mathrm{A})&\geq H_{\mathrm{P}}(X_\mathrm{A})\\ H_{\mathrm{\bar{P}}}(X_\mathrm{A},X_\mathrm{B})&\geq H_{\mathrm{P}}(X_\mathrm{A},X_\mathrm{B})\\ H_{\mathrm{\bar{P}}}(X_\mathrm{A}|X_\mathrm{B})&\geq H_{\mathrm{P}}(X_\mathrm{A}|X_\mathrm{B}) \end{align} where the subscripts $\mathrm{P}$ and $\mathrm{\bar{P}}$ denote the probability distribution before and after coarse-graining, respectively. In addition, the mutual information and the conditional mutual information obey the inequalities \begin{align} H_{\mathrm{\bar{P}}}(X_\mathrm{A}:X_\mathrm{B})&\leq H_{\mathrm{P}}(X_\mathrm{A}:X_\mathrm{B})\\ H_{\mathrm{\bar{P}}}(X_\mathrm{A}:X_\mathrm{B}|X_\mathrm{C})&\leq H_{\mathrm{P}}(X_\mathrm{A}:X_\mathrm{B}|X_\mathrm{C}). \end{align} where again the subscripts $\mathrm{P}$ and $\mathrm{\bar{P}}$ denote the true and coarse-grained probability distributions, respectively. Furthermore, both the continuous mutual information $h(x_\mathrm{A}:x_\mathrm{B})$ and the continuous conditional mutual information $h(x_\mathrm{A}:x_\mathrm{B}|x_\mathrm{C})$ are expressible as high-resolution limits of corresponding discrete mutual informations.
Because successive coarse-grainings cannot increase these quantities, the following inequalities hold between discrete and continuous mutual information: \begin{align} h(x_\mathrm{A}:x_\mathrm{B})&\geq H(X_\mathrm{A}:X_\mathrm{B})\label{mineq}\\ h(x_\mathrm{A}:x_\mathrm{B}|x_\mathrm{C})&\geq H(X_\mathrm{A}:X_\mathrm{B}|X_\mathrm{C})\label{newineq} \end{align} While the former inequality \eqref{mineq} can be found with alternative methods, the latter inequality \eqref{newineq} is new to the literature. \subsection{Proof of inequality 2} Inequality (2) derives from two fundamental properties of Shannon entropy. To expand notation, we have: \begin{equation} H(\mathbf{X}_\mathrm{a}|\mathbf{X}_\mathrm{b})\equiv H(X_\mathrm{a}^{(1)},...,X_\mathrm{a}^{(d)}|X_\mathrm{b}^{(1)},...,X_\mathrm{b}^{(d)}) \end{equation} The first is subadditivity: the joint conditional entropy is at most the sum of the component conditional entropies, \begin{equation} H(\mathbf{X}_\mathrm{a}|\mathbf{X}_\mathrm{b})\leq\sum_{i=1}^{d}H(X_\mathrm{a}^{(i)}|X_\mathrm{b}^{(1)},...,X_\mathrm{b}^{(d)}) \end{equation} The second is that removing conditioning variables cannot decrease entropy, \begin{equation} H(X_\mathrm{a}^{(i)}|X_\mathrm{b}^{(1)},...,X_\mathrm{b}^{(d)})\leq H(X_\mathrm{a}^{(i)}|X_\mathrm{b}^{(i)}) \end{equation} Together, these prove inequality (2): \begin{equation} H(\mathbf{X}_\mathrm{a}|\mathbf{X}_\mathrm{b})\leq \sum_{i=1}^{d}H(X_\mathrm{a}^{(i)}|X_\mathrm{b}^{(i)}). \end{equation} \subsection{Monte Carlo error analysis} For the results shown in the manuscript, we used standard, first-order propagation of uncertainty for error analysis. Each coincidence-count measurement is assumed to have Poissonian uncertainty, and this uncertainty is analytically propagated through the analysis (e.g. $f(x_0\pm \delta) = f(x_0) \pm \left (\frac{df}{dx}\right)_{x_0} \delta$).
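A small illustration of this propagation rule, using an arbitrary function of a single Poisson-distributed count (not the full witness pipeline), checked against direct resampling:

```python
import numpy as np

# First-order propagation for f(C) = log2(C) with Poisson sigma_C = sqrt(C)
# (an illustrative function of our own choosing).
rng = np.random.default_rng(0)
C = 400.0
sigma_f = np.sqrt(C) / (C * np.log(2))          # |df/dC| * sigma_C

# Direct check: resample the count and look at the spread of f.
samples = np.log2(rng.poisson(C, 20000))
assert abs(samples.std() - sigma_f) / sigma_f < 0.1
```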
To confirm the validity of our propagation-style error analysis, we also estimated our uncertainty with Monte Carlo simulations. This approach avoids any issues that could arise if our expressions were not sufficiently well-behaved for first-order propagation of error, but it replaces a simple analytical result with the need for computational simulation. To perform the Monte Carlo simulations, each measured coincidence count is used as the mean of a Poissonian distribution from which simulated counts are drawn. We then follow our previously described process for generating joint-probability distributions (with or without accidental subtraction) and calculating the amount of entanglement. This process is repeated many times to see how the Poissonian counting statistics propagate to our final result. In Supplemental Figure \ref{fig:supp:mc}, we recreate Figure 3 from the main text using this approach with $100$ trials. The error bars shown enclose two standard deviations. The uncertainties from this approach behave similarly to those from the analytic propagation of error used in the main manuscript, but are even smaller. The values obtained for the entanglement of formation are $7.154 \pm .015$ $(7.112 \pm .0412)$ ebits with background subtraction and $3.459 \pm .012$ $(3.425 \pm .038)$ ebits without background subtraction, where the analytic results are given in parentheses. The two outcomes are in good agreement, with the Monte Carlo uncertainties two to four times smaller than the analytic ones. \subsection{Maximum possible entanglement that can be certified with this technique} For photon statistics contained within a finite window, the maximum possible entanglement our relation can certify occurs when a pixel in the signal arm is correlated to only a single pixel in the idler arm, that is, when all conditional entropies are zero.
In this case, the inequality reads: \begin{equation} E_{\mathrm{f}}\geq \log_{2}\Bigg(\frac{(2\pi)^{2}}{\Delta x_{\mathrm{A}} \Delta y_{\mathrm{A}} \Delta k_{\mathrm{xA}}\Delta k_{\mathrm{yA}}}\Bigg). \end{equation} For perfect diagonal correlations, the number of measurements our technique needs scales favorably with resolution, improving further as the correlations tighten. For example, for $N\times N$ resolution in both position and momentum (assuming $N$ is a power of two for simplicity), one needs only about $12(N-\log_{2}(N) -2)$ measurements, which for $N=512$ is about $6000$ measurements. This does not include the number of measurements needed to acquire this partitioning, which scales similarly. When the correlations are less tight, more pixels are required at maximum resolution, increasing this total. \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{monte_carlo.pdf} \caption{{\bf Entanglement quantification versus acquisition time with Monte Carlo uncertainty analysis} Measured coincidence counts are used to draw values from a Poisson distribution for 100 trials. Error bars enclose two standard deviations and are in good agreement with the analytical approach to error analysis used in the main text (see Figure 3).} \label{fig:supp:mc} \end{figure*}
\section{High Dimensional Case} \label{subsection:lipschitz-highd} \input{highd-lipschitz} \section{Nadaraya-Watson Estimator} \label{appendix:kde} In this section, we state the proof of Theorem~\ref{thm:kde-min-error} and the lemmas it needs. But first, we state a theorem from \cite{backurs2018efficient} which will be crucial in the proof. The theorem essentially states that for certain nice kernels $K(x,y)$, it is possible to efficiently estimate $\frac{1}{N}\sum_{i=1}^N K(x,x_i)$ with a multiplicative error of $\epsilon$ for any query $x$. In this section, we will use $A\preceq B \text{ }(\text{or } A\succeq B)$ to denote that $ B-A\text{ }(\text{or } A-B)$ is a positive semidefinite matrix for two positive semidefinite matrices $A$ and $B$. \begin{theorem} [Theorem 11 of \cite{backurs2018efficient}] \label{thm:backurs2018efficient} There exists a data structure that, given a data set $P \subset \mathbb{R}^d$ of size $N$, using $O(dL2^{O(t)}\log(\frac{\Phi N}{\delta})\frac{1}{\epsilon^2})\cdot N$ space and preprocessing time, for any $(L,t)$-nice kernel and a query $q \in \mathbb{R}^d$, estimates $KDF_P(q) = \frac{1}{|P|}\sum_{y\in P}k(q,y)$ with accuracy $(1\pm\epsilon)$ in time $O(dL2^{O(t)}\log(\frac{\Phi N}{\delta})\frac{1}{\epsilon^2})$ with probability at least $1-\frac{1}{poly(N)}-\delta$. \end{theorem} The kernel used in this setting, $K_A(x,y) = \frac{1}{1+||A(x-y)||_2^2}$ for $A \in \mathcal{A}$, is $(4,2)$-smooth according to Definition 1 of \cite{backurs2018efficient} and hence can be handled efficiently. Moreover, as mentioned in \cite{backurs2018efficient}, it is also possible to remove the dependence on the aspect ratio $\Phi$ and in turn achieve a query time of $O(\frac{d}{\epsilon^2}\log(\frac{ N}{\delta}))$ with a preprocessing time of $O(\frac{dN}{\epsilon^2}\log(\frac{ N}{\delta}))$.
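For concreteness, the kernel and the quantity being estimated can be written naively as follows (a brute-force sketch; the data structure above returns a $(1\pm\epsilon)$ estimate without summing over all of $P$):

```python
import numpy as np

def K(A, x, y):
    # The kernel from the text: K_A(x, y) = 1 / (1 + ||A (x - y)||_2^2).
    d = A @ (x - y)
    return 1.0 / (1.0 + d @ d)

def kdf(A, P, q):
    # Naive kernel density KDF_P(q) = (1/|P|) * sum_y K(q, y): O(Nd) per
    # query, versus the data structure's roughly O(d/eps^2 * log(N/delta)).
    return np.mean([K(A, q, y) for y in P])

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
v = kdf(np.eye(3), P, np.zeros(3))
assert 0 < v <= 1                               # K takes values in (0, 1]
```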
Note that the data structure in \cite{backurs2018efficient} only depends on the smoothness properties of the kernel and hence the same data structure can be used for simultaneously computing the kernel density for all kernels $K_A$, $A \in \mathcal{A}$. As a direct corollary of Theorem~\ref{thm:backurs2018efficient}, we obtain that it is possible to efficiently estimate $p_{S,A}(q,x_i)$ for all $x_i\in S, q \in \mathbb{R}^d, A \in \mathcal{A}$, since multiplication by a constant preserves the multiplicative approximation (Corollary~\ref{thm:backurs2018efficient-cor}). Now, we restate Theorem~\ref{thm:kde-min-error} and its proof. \theoremkdeminerror* \begin{proof} Let us assume that $A^*$ is the optimal matrix which minimizes the prediction error, i.e. $$A^* = \argmin_{A \in \mathcal{A}} \E_{x\sim \mathcal{D}}\sum_ip_{S,A}(x_i,x)|f(x_i)-f(x)|$$ Let us consider a set $\mathcal{A}_{\epsilon}$, an $\epsilon$-covering of the set of matrices $\mathcal{A}$ with size $T = |\mathcal{A}_{\epsilon}| = O(\frac{1}{\epsilon^d})$, that is, $$\mathcal{A}_{\epsilon}=\{A \in \mathbb{R}^{d\times d}\ |\ A_{i,j} = 0 \ \forall \ i\neq j \text{ and } A_{i,i}\in\{1,1+\epsilon,(1+\epsilon)^2,\cdots,2\} \ \forall i \in [d]\}. $$ From Lemma \ref{lemma:kde-approximate}, we know that it is sufficient to estimate $\min_{A \in \mathcal{A}_{\epsilon}} \E_{x\sim \mathcal{D}}\sum_ip_{S, A}(x_i,x)|f(x_i)-f(x)|$ up to an error of $\epsilon$, because the optimal errors over $\mathcal{A}$ and $\mathcal{A}_{\epsilon}$ are within $15\epsilon$ of each other. To estimate this, we use the estimator from Algorithm~\ref{alg-kde}. Now, we will prove that the estimator is within $\E_{x \sim \mathcal{D}}\sum_{i=1}^Np_{S,A}(x,x_i)|f(x_i)-f(x)| \pm \epsilon$ with high probability for all $A \in \mathcal{A}_{\epsilon}$.
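As an aside, the $\epsilon$-net $\mathcal{A}_{\epsilon}$ can be enumerated directly; a sketch (our geometric grid stops at the last level not exceeding $2$):

```python
import numpy as np
from itertools import product

def eps_cover(d, eps):
    # Diagonal matrices with entries on the geometric grid {1, 1+eps, ...}
    # capped at 2: the net A_eps, of size O((1/eps)^d) as in the proof.
    levels = []
    v = 1.0
    while v <= 2.0 + 1e-12:
        levels.append(v)
        v *= 1.0 + eps
    return [np.diag(diag) for diag in product(levels, repeat=d)]

cover = eps_cover(d=2, eps=0.5)                 # levels {1, 1.5}: 2^2 matrices
assert len(cover) == 4
```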
Let $E$ be the event that the estimators $\hat{p}_{S, A}$ from Algorithm \ref{alg-kde} all approximate $p_{S, A}$ up to a multiplicative error of $\epsilon$, that is, \begin{equation*} E = \{\hat{p}_{S, A}(z_i,\tilde{z}_i) \in p_{S, A}(z_i,\tilde{z}_i)[1-\epsilon, 1+\epsilon] \quad \forall i \in [M], \forall A \in \mathcal{A}_{\epsilon}\}. \end{equation*} Now, for each $A \in \mathcal{A}_{\epsilon}$, we will split the probability of the estimator $\hat{L}_{S, K_A}$ not being within $\epsilon$ of the true value $L_{S,K_A}$ into two terms: \begin{small} \begin{equation*} \Pr(|\hat{L}_{S, K_A} - L_{S, K_A} | > \epsilon) = \underbrace{\Pr(|\hat{L}_{S, K_A} - L_{S, K_A} | > \epsilon| E)}_{T_1}\Pr(E) + \Pr(|\hat{L}_{S, K_A} - L_{S, K_A} | > \epsilon | \neg E)\underbrace{\Pr(\neg E)}_{T_2} \end{equation*} \end{small} \underline{Bounding $T_1$. } Computing the expectation of the estimator conditioned on the event $E$, we get that \begin{align*} E[\hat{L}_{S,K_A}|E] &= \frac{1}{M}\sum_iE_{z_i\sim \mathcal{D}, \tilde{z}_i\sim p_{S,I}(z_i,\tilde{z}_i)}[\frac{\hat{p}_{S,A}(z_i, \tilde{z}_i)}{p_{S,I}(z_i, \tilde{z}_i)}|f(z_i)-f(\tilde{z}_i)|]\\ &= \frac{1}{M}\sum_iE_{z_i\sim \mathcal{D}}[\sum_{j=1}^N\hat{p}_{S,A}(z_i,x_j)|f(z_i)-f(x_j)|]\\ &= E_{z\sim \mathcal{D}}\sum_{j=1}^N\hat{p}_{S,A}(z,x_j)|f(z)-f(x_j)|\\ &\in E_{z\sim \mathcal{D}}\sum_{j=1}^N{p}_{S,A}(z,x_j)|f(z)-f(x_j)|[1-\epsilon, 1+\epsilon]\\ &\in L_{S,K_A}[1-\epsilon, 1+\epsilon] \end{align*} Now, we will show that the estimator is close to its expectation with high probability conditioned on the event $E$. Since we know that $|f(z_i)-f(\tilde{z}_i)| \leq 1$ and \begin{equation*} p_{S,A}(z_i,\tilde{z}_i) \leq 16p_{S,I}(z_i,\tilde{z}_i) \quad \forall z_i,\tilde{z}_i \in \mathbb{R}^d, \forall A \in \mathcal{A}, \end{equation*} from Lemma \ref{lemma:kde-ineq}, we get that each entry of our estimator is bounded in $[0,16(1+\epsilon)]$.
Hence, using Hoeffding's inequality, we get that \begin{equation*} \Pr(|\hat{L}_{S,K_A}-E[\hat{L}_{S,K_A}|E]| \geq \epsilon|E) \leq 2e^{\frac{-2M\epsilon^2}{256(1+\epsilon)^2}}. \end{equation*} Hence, for $M = O(\frac{1}{\epsilon^2}\log(\frac{T}{\delta}))$ and a union bound over the $\epsilon$-cover of size $T$, we get $T_1 \leq \delta$. \underline{Bounding $T_2$. } Using Corollary~\ref{thm:backurs2018efficient-cor} with $\delta$ set to $\frac{\delta\epsilon^d}{M}$ and a union bound over the $M\cdot\frac{1}{\epsilon^d}$ computations of $\hat{p}_{S, A}$, we get that \begin{equation*} \Pr(\neg E) \leq \delta + \frac{1}{poly(N)}\frac{M}{\epsilon^d}. \end{equation*} Combining the upper bounds for $T_1$ and $T_2$, \begin{equation*} \Pr(|\hat{L}_{S, K_A} - L_{S, K_A} | > \epsilon) \leq 2\delta + \frac{1}{poly(N)}\frac{M}{\epsilon^d}, \end{equation*} for $M = O(\frac{1}{\epsilon^2}\log(\frac{T}{\delta}))$. Substituting $T = O(\frac{1}{\epsilon^d})$, we get a sample complexity of $M = O(\frac{1}{\epsilon^2}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})))$ and failure probability of $2\delta+ \frac{1}{poly(N)\epsilon^{d+2}}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta}))$. \paragraph{Running Time.} Since we have a total of $O(\frac{1}{\epsilon^2}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})))$ samples and we have to sum over these samples for each $A \in \mathcal{A}_{\epsilon}$, we get a total running time of $O(\frac{1}{\epsilon^d}\cdot\frac{1}{\epsilon^2}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})))$ for this part. $p_{S,I}$ needs to be computed only once for each sample, which takes $O(N\cdot\frac{1}{\epsilon^2}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})))$ time.
We also compute an estimator $\hat{p}_{S,A}$ for each sample pair and each $A \in \mathcal{A}_{\epsilon}$, and thus from Corollary~\ref{thm:backurs2018efficient-cor}, we get a preprocessing time of $O(\frac{Nd}{\epsilon^2}\log(\frac{NM}{\delta\epsilon^d}))$ and $O(\frac{Md}{\epsilon^{d+2}}\log(\frac{NM}{\delta\epsilon^d}))$ time for computing the estimators for each $A \in \mathcal{A}_{\epsilon}$. This completes the proof of the theorem. \end{proof} The following lemma, used in the proof of Theorem~\ref{thm:kde-min-error}, states that it is sufficient to consider an $\epsilon$-net of the set of matrices $\mathcal{A}$ to approximate the minimum error up to an additive error of $O(\epsilon)$. \begin{lemma} \label{lemma:kde-approximate} Let us consider a set $\mathcal{A}_{\epsilon}$, an $\epsilon$-covering of the set of matrices $\mathcal{A}$, that is, $$\mathcal{A}_{\epsilon}=\{A \in \mathbb{R}^{d\times d}\ |\ A_{i,j} = 0 \ \forall \ i\neq j \text{ and } A_{i,i}\in\{1,1+\epsilon,(1+\epsilon)^2,\cdots,2\} \ \forall i \in [d]\}.$$ Then the minimum prediction error over $A \in \mathcal{A}_{\epsilon}$ additively approximates the minimum prediction error over $A \in \mathcal{A}$, that is, $$\big|\min_{A \in \mathcal{A}_{\epsilon}} \E_{x\sim \mathcal{D}}\sum_i p_{S,A}(x_i,x)|f(x_i)-f(x)|-\min_{A \in \mathcal{A}} \E_{x\sim \mathcal{D}}\sum_ip_{S,A}(x_i,x)|f(x_i)-f(x)|\big| \leq 15\epsilon.$$ \end{lemma} \begin{proof} We first show that if $(1+\epsilon)^{-1}A_2 \preceq A_1 \preceq A_2(1+\epsilon)$, then $$\big|\E_{x\sim \mathcal{D}}\sum_i p_{S,A_1}(x_i,x)|f(x_i)-f(x)|-\E_{x\sim \mathcal{D}}\sum_i p_{S,A_2}(x_i,x)|f(x_i)-f(x)|\big| \leq 15\epsilon.$$ By the triangle inequality, this loss difference is upper bounded by $ \E_{x\sim \mathcal{D}}\sum_i |p_{S,A_1}(x_i,x)-p_{S,A_2}(x_i,x)||f(x_i)-f(x)|$.
Now from Lemma~\ref{lemma:kde-ineq}, we know that if $(1+\epsilon)^{-1}A_2 \preceq A_1 \preceq A_2(1+\epsilon)$, then $(1+\epsilon)^{-4}p_{S,A_2}(x_i,x) \leq p_{S,A_1}(x_i,x) \leq p_{S,A_2}(x_i,x)(1+\epsilon)^4$. Using this together with $(1+\epsilon)^4 - 1 \leq 15\epsilon$ for $\epsilon \leq 1$, we get that \begin{multline*} E_{x\sim \mathcal{D}}\sum_i|p_{S,A_1}(x_i,x)-p_{S,A_2}(x_i,x)||f(x_i)-f(x)|\\ \begin{aligned} &\leq E_{x\sim \mathcal{D}}\sum_i|p_{S,A_2}(x_i,x)(1+\epsilon)^4-p_{S,A_2}(x_i,x)||f(x_i)-f(x)|\\ &\leq 15\epsilon E_{x\sim \mathcal{D}}\sum_i p_{S,A_2}(x_i,x)|f(x_i)-f(x)|\\ &\leq 15\epsilon E_{x\sim \mathcal{D}}\sum_i p_{S,A_2}(x_i,x) \leq 15\epsilon \end{aligned} \end{multline*} Hence, the loss difference is bounded by $15\epsilon$. Now, we will prove that there exists $A \in \mathcal{A}_\epsilon$ such that $(1+\epsilon)^{-1}A \preceq A^* \preceq A(1+\epsilon)$, where $A^* = \argmin_{A \in \mathcal{A}}\E_{x\sim \mathcal{D}}\sum_i p_{S,A}(x_i,x)|f(x_i)-f(x)|$. This is sufficient to show that the minimum errors over the sets $\mathcal{A}_\epsilon$ and $\mathcal{A}$ differ by an additive error of at most $15\epsilon$. Choose each diagonal entry of $A$ to be a value in $\{1,1+\epsilon,(1+\epsilon)^2,\cdots,2\}$ within a multiplicative factor $(1+\epsilon)$ of the corresponding entry of $A^*$; such a value exists since the grid points are spaced by factors of $(1+\epsilon)$ and the entries of $A^*$ lie in $[1,2]$. It is then immediate that $(1+\epsilon)^{-1}A \preceq A^* \preceq A(1+\epsilon)$. \end{proof} The following lemma states that if two matrices $A_1$ and $A_2$ are multiplicatively close to each other in terms of all their eigenvalues, then the corresponding probabilities for any query point $x$ and any data point $x_i$ are also multiplicatively close to each other.
\begin{lemma} \label{lemma:kde-ineq} For any two diagonal matrices $A_1,A_2 \in \mathbb{R}^{d\times d}$ with nonnegative diagonal entries, if $\frac{1}{1+\epsilon}A_2 \preceq A_1 \preceq A_2(1+\epsilon)$, then $$\frac{1}{(1+\epsilon)^4}p_{S,A_2}(x_i,x) \leq p_{S,A_1}(x_i,x) \leq p_{S,A_2}(x_i,x)(1+\epsilon)^4 \quad \forall x \in \mathbb{R}^d, x_i\in S.$$ \end{lemma} \begin{proof} Using $(1+\epsilon)^{-1}A_2 \preceq A_1 \preceq A_2(1+\epsilon)$, which holds entrywise since the matrices are diagonal, we get that \begin{alignat}{3} \centermathcell{||A_2(x_i-x)||\frac{1}{1+\epsilon}} &\leq& \centermathcell{||A_1(x_i-x)||} &\leq& \centermathcell{||A_2(x_i-x)||(1+\epsilon)}\nonumber\\ \centermathcell{1+||A_2(x_i-x)||^2\frac{1}{(1+\epsilon)^2}} &\leq& \centermathcell{1+||A_1(x_i-x)||^2} &\leq& \centermathcell{1+||A_2(x_i-x)||^2(1+\epsilon)^2}\nonumber\\ \centermathcell{\frac{1}{(1+\epsilon)^2}(1+||A_2(x_i-x)||^2)} &\leq& \centermathcell{\text{ }1+||A_1(x_i-x)||^2\text{ }} &\leq& \centermathcell{\text{ }(1+||A_2(x_i-x)||^2)(1+\epsilon)^2\text{ }}\nonumber\\ \centermathcell{\frac{1}{(1+\epsilon)^2}K_{A_2}(x_i,x)} &\leq& \centermathcell{K_{A_1}(x_i,x)} &\leq& \centermathcell{K_{A_2}(x_i,x)(1+\epsilon)^2}\label{eqn:kde-ineq} \end{alignat} Hence, using the inequality in equation~\ref{eqn:kde-ineq}, we get that \begin{alignat*}{3} \centermathcell{\text{ }\frac{1}{(1+\epsilon)^4}\frac{K_{A_2}(x_i,x)}{\sum_{x_i \in S} K_{A_2}(x_i,x)}\text{ }} &\leq& \centermathcell{\text{ }\frac{K_{A_1}(x_i,x)}{\sum_{x_i\in S} K_{A_1}(x_i,x)}\text{ }} &\leq& \centermathcell{\text{ }\frac{K_{A_2}(x_i,x)}{\sum_{x_i \in S} K_{A_2}(x_i,x)}(1+\epsilon)^4\text{ }}\\ \centermathcell{\frac{1}{(1+\epsilon)^4}p_{S,A_2}(x_i,x)} &\leq& \centermathcell{p_{S,A_1}(x_i,x)} &\leq& \centermathcell{p_{S,A_2}(x_i,x)(1+\epsilon)^4} \end{alignat*} This completes the proof of the lemma.
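The $(1+\epsilon)^4$ bound can also be checked numerically (a sketch, not part of the formal argument; it again assumes the kernel form $K_A(u,v)=1/(1+\|A(u-v)\|^2)$, and perturbs each diagonal entry of a random $A_2$ by an independent factor in $[\frac{1}{1+\epsilon}, 1+\epsilon]$):

```python
import random

def check_ratio_bound(eps, d=3, n=20, trials=100, seed=0):
    # Empirically verifies that if every diagonal entry of A1 is within a
    # (1+eps) factor of the corresponding entry of A2, then every
    # normalized weight p_{S,A1}(x_i, q) is within a (1+eps)^4 factor of
    # p_{S,A2}(x_i, q), for random data sets S and query points q.
    rng = random.Random(seed)
    bound = (1.0 + eps) ** 4

    def weights(A, S, q):
        w = [1.0 / (1.0 + sum((a * (xi - qi)) ** 2
                              for a, xi, qi in zip(A, x, q))) for x in S]
        z = sum(w)
        return [wi / z for wi in w]

    for _ in range(trials):
        A2 = [rng.uniform(1.0, 2.0) for _ in range(d)]
        A1 = [a * rng.uniform(1.0 / (1.0 + eps), 1.0 + eps) for a in A2]
        S = [[rng.random() for _ in range(d)] for _ in range(n)]
        q = [rng.random() for _ in range(d)]
        for p1, p2 in zip(weights(A1, S, q), weights(A2, S, q)):
            if not (p2 / bound - 1e-12 <= p1 <= p2 * bound + 1e-12):
                return False
    return True
```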
\end{proof} \section{One Dimensional Case} \label{appendix:lipschitz-lowd} We first restate Theorem~\ref{thm:lipschitzerror}, which gives sample complexity guarantees for error estimation for the class of one-dimensional Lipschitz functions, and then give its proof. \theoremlipschitzerror* \begin{proof} By Theorem~\ref{thm:lipschitzlocalquery}, we know that $\text{Query}(x, S, \mathcal{P}) = \tilde{f}(x)\ \forall x \in [0,1]$ and the error of $\tilde{f}$ additively approximates the error of the function $f_{\mathcal{D}}^*$, that is, \begin{equation*} \Delta_{\D}(\tilde{f}, \mathcal{F}_L) = \text{err}_\D(\tilde{f}) - \text{err}_\D(\mathcal{F}_L) \leq \epsilon \end{equation*} with probability greater than $1/2$. Thus, with probability greater than $1/2$, we get \begin{align*} |\widehat{\text{err}_\D}(\mathcal{F}_L) - \text{err}_\D(\tilde{f})| &= |\frac{1}{N}\sum_{i=1}^N|\text{Query}(x_i, S, \mathcal{P})-y_i| - \text{err}_\D(\tilde{f})|\\ &= |\frac{1}{N}\sum_{i=1}^N|\tilde{f}(x_i)-y_i| - \text{err}_\D(\tilde{f})| \leq \epsilon \end{align*} The last inequality follows by standard concentration arguments since $N = \Omega({1}/{\epsilon^2})$. The theorem statement follows by the triangle inequality. By Theorem~\ref{thm:lipschitzlocalquery}, the number of unlabeled samples is $O(({L}/{\epsilon^4})\log({1}/{\epsilon}))$ and the number of label queries is $ O(({1}/{\epsilon^2})\cdot(({1}/{\epsilon^4})\log({1}/{\epsilon}))) = O(({1}/{\epsilon^6})\log({1}/{\epsilon}))$. \end{proof} Now, we state the lemmas involved in the proof of Theorem~\ref{thm:lipschitzlocalquery}, along with their proofs. The following lemma proves that, given enough unlabeled samples, all but a small fraction (in probability mass) of the long intervals contain enough unlabeled samples; this is later used to argue that those samples suffice to learn a function which is approximately as good as the optimal Lipschitz function on each such interval.
\begin{lemma} \label{lemma:number_unlabeled_samples} For any distribution $\mathcal{D}_x$, consider a set $S = \{x_1, x_2, \cdots, x_M\}$ of unlabeled samples where each sample $x_i\stackrel{\text{i.i.d.}}{\sim}\mathcal{D}_x$. Let $\mathcal{G}$ be the set of long intervals $\{I_i\}$ each of which satisfies $p_i = \Pr_{x \sim \mathcal{D}_x}(x \in I_i) \geq \frac{1}{L}$.
Let $E_i$ denote the event that $\sum_{x_j \in S}\mathbb{I}[x_j \in I_i] < \frac{1}{2\epsilon^4}\log(\frac{1}{\epsilon})$. Then, we have \begin{equation*} \sum_{I_i \in \mathcal{G}}p_i\mathbb{I}[E_i] \leq \frac{\epsilon}{\delta} \end{equation*} with failure probability at most $\delta$ for $M = \Omega(\frac{L}{\epsilon^4}\log(\frac{1}{\epsilon}))$. \end{lemma} \begin{proof} For any interval $I \in \mathcal{G}$, we have that $\E[\sum_{x_j \in S}\mathbb{I}[x_j \in I]] \geq \frac{1}{\epsilon^4}\log(\frac{1}{\epsilon})$. Using a multiplicative Chernoff bound, we get that $\Pr(E_i) \leq \epsilon$ for all intervals $I_i \in \mathcal{G}$. Computing the expectation of the desired quantity, we get \begin{equation*} \E[\sum_{I_i \in \mathcal{G}}p_i\mathbb{I}[E_i]] = \sum_{I_i \in \mathcal{G}}p_i\Pr[E_i] \leq \epsilon\sum_{I_i \in \mathcal{G}}p_i \leq \epsilon \end{equation*} We get the desired result using Markov's inequality. \end{proof} The following lemma states that the probability mass ${p}_{\sf sh}$ of the short intervals is small with high probability. Consider the case of the uniform distribution: since the short intervals cover only an $\epsilon$ fraction of the $[0,1]$ interval, their probability ${p}_{\sf sh}$ is upper bounded by $\epsilon$. The case of arbitrary distributions holds because the intervals are chosen with a random offset. \begin{lemma} \label{lemma:prob_intervals} When we divide the $[0,1]$ domain into alternating intervals of length $\frac{1}{L\epsilon}$ and $\frac{1}{L}$ with a random offset at $\{0, 1, 2, \cdots, \frac{1}{\epsilon}\}\frac{1}{L}$ as in the preprocessing step, then \begin{equation*} {p}_{\sf sh} = \sum_{I_i \in \mathcal{P}_{\sf sh}}p_i \leq \frac{\epsilon}{\delta} \end{equation*} with failure probability at most $\delta$.
\end{lemma} \begin{proof} If we consider the divisions of $[0,1]$ into alternating intervals of length ${1}/(L\epsilon)$ and ${1}/{L}$ with the offset chosen uniformly at random from $\{0, 1, 2, \cdots, {1}/{\epsilon}\}({1}/{L})$, then the intervals of length ${1}/{L}$ are disjoint across these divisions and together cover the entire $[0,1]$ interval. Hence, the total short-interval mass summed over the ${1}/{\epsilon}$ possible offsets is at most $1$, so the expected mass over a random offset is at most $\epsilon$, and there are at most a $\delta$ fraction of the ${1}/{\epsilon}$ possible offsets for which the short intervals have probability greater than ${\epsilon}/{\delta}$. Hence, with probability $1-\delta$, the probability mass of the short intervals is upper bounded by ${\epsilon}/{\delta}$. \end{proof} \begin{lemma} \label{lem: good type 1 intervals} Let $I_i \in \mathcal{P}_{\sf{lg}, 1}$ be any long interval of subtype 1. For the event \mbox{$F_i = \{\Delta_{\mathcal{D}_i}(\hat{f}_{S \cap I_i}, f_{\mathcal{D}_i}^*) > \epsilon \}$}, we have \begin{equation*} \Pr(F_i) \leq \epsilon \end{equation*} \end{lemma} \begin{proof} We know that the log covering number for the class of one dimensional $L$-Lipschitz functions supported on the interval $[0,l]$ is $O({Ll}/{\epsilon})$ (Lemma~\ref{lemma:lipschitz_covering}). For Lipschitz functions supported on a long interval of length $l={1}/{(L\epsilon)}$, this is $O({1}/{\epsilon^2})$. By standard results in uniform convergence, the number of samples required for uniform convergence up to an error of $\epsilon$ with failure probability $\delta$ for all functions in the class $\mathcal{F}_L$ is $O(((\text{log covering number of } \mathcal{F}_L)/\epsilon^2)\log({1}/{\delta}))$ (Lemma~\ref{lemma:lipschitz_sample_complexity}); hence, for long intervals this estimate is $O(({1}/{\epsilon^4})\log(1/\delta))$.
Given that there are $\Omega(({1}/{\epsilon^4})\log(1/\epsilon))$ samples in these intervals by design, the function learned from these samples is at most $\epsilon$ worse (additively) than the function with minimum error, by standard learning theory arguments, with failure probability at most $\epsilon$. Thus, we get that $$\Delta_{\mathcal{D}_i}(\hat{f}_{S \cap I_i}, f_{\mathcal{D}_i}^*) \leq \epsilon \quad \forall I_i \in \mathcal{P}_{\sf{lg}, 1}$$ with failure probability at most $\epsilon$. \end{proof} Now, we state the definitions of the covering number of a metric space and the uniform covering number of a hypothesis class. These definitions are used in Lemma~\ref{lemma:lipschitz_sample_complexity} to argue how fast the empirical error converges to the expected error for a given hypothesis class. \begin{definition} \label{defn:covering} $N(\epsilon,A,\rho)$ is the covering number of the metric space $A$ with respect to distance measure $\rho$ at scale $\epsilon$ and is defined as \begin{equation*} N(\epsilon,A,\rho) = \min\{|C|\ |\ C \text{ is an } \epsilon \text{-cover of } A \text{ wrt } \rho\ (\forall x \in A, \exists c \in C \text{ st } \rho(c,x) \leq \epsilon)\}. \end{equation*} \end{definition} \begin{definition} \label{defn:uniform_covering} $N_p(\epsilon, F, m)$ for $p \in \{1,2,\infty\}$ is the uniform covering number of the hypothesis class $F$ at scale $\epsilon$ with respect to the distance measure $d_p$ where $d_p(x,y) = ||x-y||_p$ and is defined as \begin{equation*} N_p(\epsilon, F, m) :\,= \max_{x \in X^m}N(\epsilon,\{[f(x_1), f(x_2), \cdots, f(x_m)]\}_{f \in F},d_p) \end{equation*} where $N(\epsilon,A,\rho)$ is as defined in Definition~\ref{defn:covering}. \end{definition} The following lemma uses well-known generalization theory to argue how fast the empirical error uniformly converges to the expected error for the class of $d$-dimensional Lipschitz functions.
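Before proceeding, Definition~\ref{defn:covering} can be illustrated with a short greedy construction of an $\epsilon$-cover of a finite point set under the $\ell_\infty$ distance (a sketch for intuition only; the lemmas only use covering numbers, not explicit covers):

```python
def greedy_epsilon_cover(points, eps):
    # Greedily builds a set of centers C such that every point is within
    # l_infinity distance eps of some center, i.e., an eps-cover in the
    # sense of the definition above. Greedy always yields a valid cover,
    # though not necessarily one of minimum size N(eps, A, rho).
    def dist(u, v):
        return max(abs(a - b) for a, b in zip(u, v))

    centers = []
    for p in points:
        if all(dist(p, c) > eps for c in centers):
            centers.append(p)
    return centers
```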
\begin{lemma} \label{lemma:lipschitz_sample_complexity} \sloppy Let $S = \{(x_i,y_i)\}^M_{i=1}$ be a set of $M > \frac{1}{\epsilon^2}(\Omega(\frac{Ll}{\epsilon}))^d\log(\frac{1}{\delta})$ data points sampled i.i.d. from distribution $\mathcal{D}$. Then, \begin{equation*} \text{err}_\D(\hat{f}_S) - \text{err}_\D(\mathcal{F}_L) \leq \epsilon \end{equation*} with probability at least $1-\delta$. \end{lemma} \begin{proof} A standard result on uniform convergence for general loss functions (for example, Theorem 21.1 in \cite{anthony2009neural}) states that \begin{equation*} \Pr_{S\sim \mathcal{D}^n}[\sup_{f\in F}|er_\mathcal{D}^l[f]-er^l_S[f]| > \epsilon] \leq 4N_1\left(\frac{\epsilon}{8}, l_F, 2m\right)e^{\frac{-m\epsilon^2}{32}} \end{equation*} where $l$ is a loss function bounded in $[0,1]$ and $F$ is a class of functions mapping into $[0,1]$. We will prove that $\log(N_1\left(\frac{\epsilon}{8}, l_F, 2m\right)) \leq (O(\frac{Ll}{\epsilon}))^d$, which completes the proof of the lemma by the triangle inequality and uniform convergence applied to $f_{\mathcal{D}}^*$ and $\hat{f}_S$. \begin{equation*} \log\left(N_1\left(\frac{\epsilon}{8}, l_F, 2m\right)\right) \stackrel{\ensuremath{{\sf (i)}}}{\leq} \log\left(N_1\left(\frac{\epsilon}{8}, F, 2m\right)\right) \stackrel{\ensuremath{{\sf (ii)}}}{\leq} \log\left(N_\infty\left(\frac{\epsilon}{8}, F, 2m\right)\right)\\ \stackrel{\ensuremath{{\sf (iii)}}}{\leq} \left(O\left(\frac{Ll}{\epsilon}\right)\right)^d \end{equation*} $\ensuremath{{\sf (i)}}$ follows because $|l(f(x_i), y_i)-l(f'(x_i),y_i)| \leq |f(x_i)-f'(x_i)|$ for all $f,f' \in F$. $\ensuremath{{\sf (ii)}}$ follows from Lemma 10.5 in \cite{anthony2009neural} and $\ensuremath{{\sf (iii)}}$ follows from Lemma~\ref{lemma:lipschitz_covering}.
\end{proof} Now, we state the definition of the Lipschitz extension in Theorem~\ref{thm:lipschitz-extension}, which says that if we have an $L$-Lipschitz function on a subset of a metric space, then it is possible to extend the function to the entire space so that the values of the function at the points originally in the domain are preserved and the extension is $L$-Lipschitz on the entire domain with respect to the same metric. This will be used in computing covering number bounds for the class of Lipschitz functions (Lemma~\ref{lemma:lipschitz_covering}). \begin{theorem} \label{thm:lipschitz-extension} [Theorem 1 from \cite{mcshane1934extension}] For an $L$-Lipschitz function $f:E\rightarrow \mathbb{R}$ defined on a subset $E$ of the metric space $S$, $f$ can be extended to $S$ such that its values on the subset $E$ are preserved and it satisfies the $L$-Lipschitz property over the entire domain $S$ with respect to the same metric. Such an extension is called a Lipschitz extension of the function $f$. \end{theorem} Now, we state the well-known covering number bounds for the class of high dimensional Lipschitz functions. Note that we state this here just for completeness; the proof essentially follows that of \cite{gottlieb2017efficient}. \begin{lemma} \label{lemma:lipschitz_covering} For the class of high dimensional Lipschitz functions $\mathcal{F}_L:[0,l]^d \rightarrow [0,1]$ where $f \in \mathcal{F}_L$ satisfies $|f(x)-f(y)| \leq L||x-y||_\infty \ \forall x,y \in [0,l]^d$, we have $\log(N_\infty(\epsilon, \mathcal{F}_L, m)) = (O(\frac{Ll}{\epsilon}))^d$. \end{lemma} \begin{proof} Let us consider a discretization of the domain $P=[0,l]^d$ where we divide each coordinate of the domain into intervals of length $\frac{\epsilon}{3L}$.
Let us consider the set $\mathcal{F}_{L}^\epsilon$ of all Lipschitz functions that are Lipschitz extensions of the Lipschitz functions taking output values in $R = \{0,\frac{\epsilon}{3},\frac{2\epsilon}{3},\cdots,1\}$ at the discretized points of the domain $P$ (it is always possible to form a Lipschitz extension of a Lipschitz function over a metric space by \cite{mcshane1934extension}, also stated as Theorem~\ref{thm:lipschitz-extension} above). So, we have that $|\mathcal{F}_{L}^{\epsilon}| \leq (\frac{3}{\epsilon})^{(\frac{3Ll}{\epsilon})^d}$. We will show that $\mathcal{F}_L^{\epsilon}$ forms a valid covering of the function class $\mathcal{F}_L$, and hence we get $\log(N_\infty(\epsilon, \mathcal{F}_L, m)) = (O(\frac{Ll}{\epsilon}))^d$. Let us show that for any $f \in \mathcal{F}_L$, there exists a function $\tilde{f} \in \mathcal{F}_{L}^{\epsilon}$ such that $\sup_x|f(x)-\tilde{f}(x)| \leq \epsilon$. Consider a function $\hat{f}$ such that $\hat{f}(x) = \argmin_{y \in R}|y-f(x)|$ on the discretization of the domain $P$, and let $\tilde{f}$ be its Lipschitz extension. First, we argue that $\hat{f}$ is $L$-Lipschitz; since $\tilde{f}$ is a Lipschitz extension of $\hat{f}$, $\tilde{f}$ is then also $L$-Lipschitz and by construction belongs to $\mathcal{F}_L^{\epsilon}$. To see that $\hat{f}$ is $L$-Lipschitz restricted to the discretization of the domain, consider any $x,y$ in the discretized domain with $||x-y||_\infty \leq \frac{\epsilon}{3L}$. Rounding each of $f(x)$ and $f(y)$ to the nearest value in $R$ (breaking ties consistently) changes it by at most $\frac{\epsilon}{6}$, so $|\hat{f}(x)-\hat{f}(y)| < \frac{2\epsilon}{3}$; since this difference is a multiple of $\frac{\epsilon}{3}$, it is at most $\frac{\epsilon}{3} = L||x-y||_\infty$. Hence, $\hat{f}$ is $L$-Lipschitz. Now, we will show that $\sup_x|f(x)-\tilde{f}(x)| \leq \epsilon$. Consider any point $x \in [0,l]^d$. We know that there exists $\tilde{x} \in P$, i.e.
in the discretization of the domain, such that $||x-\tilde{x}||_\infty \leq \frac{\epsilon}{3L}$. Hence, we get that $|\tilde{f}(x)-f(x)| \leq |\tilde{f}(x)-\tilde{f}(\tilde{x})| + |\tilde{f}(\tilde{x})-f(\tilde{x})| + |f(\tilde{x})-f(x)| \leq \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} \leq \epsilon$ since the functions $\tilde{f}$ and $f$ are both $L$-Lipschitz and $|\tilde{f}(\tilde{x})-f(\tilde{x})| = |\hat{f}(\tilde{x})-f(\tilde{x})| \leq \frac{\epsilon}{3}$. \end{proof} \section{Comparison with other works} \label{appendix:relatedwork} \textbf{Comparison with \cite{mansour2014robust}}: The authors used local computation algorithms in the context of robust inference to give polynomial time algorithms. They formulated their inference problem as an exponentially sized linear program and showed that the linear program (LP) has a special structure which allowed them to compute the values of certain variables in the optimal solution of the linear program in time sublinear in the total number of variables. They did this by sampling a polynomial number of constraints in the LP. Note that in our setting of learning Lipschitz functions, given all the unlabeled samples, learning the value of the best Lipschitz function on a particular input query can be cast as a linear program. However, our LP does not have the block-angular structure. Moreover, we have a continuous domain and the number of possible queries is infinite. We cannot hope to get a globally Lipschitz solution by locally solving a smaller LP with constraints sampled independently for each query. We have to carefully design the local intervals and use different learning strategies for different types of intervals to ensure that the learned function is globally Lipschitz and also has good error bounds. \textbf{Comparison with \cite{feige2015learning}}: The authors considered the use of local computation algorithms for inference settings.
They reduced their problem of inference for a particular query to the problem of computing a minimum vertex cover in a bipartite graph. However, their focus was on time complexity rather than sample complexity. \section{Conclusion} \label{section:conclusion} We gave an algorithm to approximate the optimal prediction error for the class of bounded $L$-Lipschitz functions with a number of label queries independent of $L$. We also established that for any given query point, we can estimate the value of a nearly optimal function at the query point locally, again with a number of label queries independent of $L$. It would be interesting to extend these notions of error prediction and local prediction to other function classes. Finally, we also gave an algorithm to approximate the minimum error of the Nadaraya-Watson prediction rule under a linear diagonal transformation with eigenvalues in a small range, which is both sample and time efficient. \subsection{Problem Setup} We recall the setting from the one dimensional case. For any fixed $L>0$, let $\mathcal{F}_{L}$ be the class of $d$ dimensional functions supported on the domain $[0,1]^d$ with Lipschitz constant at most $L$, that is, \begin{equation} \mathcal{F}_L = \{f:[0,1]^d\mapsto [0,1], \ |f(x)-f(y)|\leq L \|x-y\|_{\infty} \; \forall x,y \in [0,1]^d\}. \end{equation} Let $\epsilon$ be the error parameter. We will think of the dimension $d$ as constant with respect to $L$ and $\epsilon$. As in the one-dimensional case, our algorithm for local predictions first involves a preprocessing step (Algorithm~\ref{alg4}) which takes as input the Lipschitz constant $L$, sampling access to the distribution $\mathcal{D}_x$, and the error parameter $\epsilon$, and returns a partition $\mathcal{P}$, where the partition along dimension $j$ is $\mathcal{P}^j = \{ [b^j_0, b^j_1],[b^j_1, b^j_2], \ldots \}$, and a set $S$ of unlabeled samples.
The partition $\mathcal{P}$ consists of alternating intervals\footnote{Note that long intervals at the boundary could be shorter, but those can be handled similarly. } of length ${2}/{L}$ and ${d}/(L\epsilon)$ along each dimension. Let us divide these intervals along each dimension $j$ further into the two sets \begin{equation*} \begin{gathered} \mathcal{P}_{\sf lg}^j:\,=\{[b_0,b_1],[b_2,b_3],\ldots,\}\quad \text{(long intervals for dimension } j \text{ of length } {d}/(L\epsilon)), \\ \mathcal{P}_{\sf sh}^j:\,=\{[b_1,b_2],[b_3,b_4],\ldots,\}\quad \text{(short intervals for dimension } j \text{ of length }{2}/L).\\ \end{gathered} \end{equation*} A data point $x$ which belongs to a short interval in $\mathcal{P}^j$ along at least one dimension $j$ is said to belong to the set of short intervals $\mathcal{P}_{\sf sh}$; otherwise, it is said to belong to the set of long intervals $\mathcal{P}_{\sf lg}$. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{Preprocess($L, \mathcal{D}_x, \epsilon$)\label{alg4}} \STATE Sample a uniformly random offset $b^j_1$ from $\{0,1,2,\cdots,\frac{d}{2\epsilon}\}\frac{2}{L}$ for each dimension $j \in [d]$. \STATE Divide the $[0,1]$ interval along each dimension $j$ into alternating intervals of length $\frac{d}{L\epsilon}$ and $\frac{2}{L}$ with a boundary at $b^j_1$, and let $\mathcal{P}$ be the resulting partition, that is, $\mathcal{P}^j = \{[b^j_0=0,b^j_1],[b^j_1, b^j_2], \ldots\}$ where $b^j_2 = b^j_1 + \frac{2}{L}, b_3^j = b_2^j + \frac{d}{L\epsilon}, \ldots$. \STATE Sample a set $S = \{x_i\}_{i=1}^M$ of $M = O((\frac{L}{\epsilon})^d \frac{1}{\epsilon^3}\log(\frac{1}{\epsilon}))$ unlabeled examples from distribution $\mathcal{D}_x$.\\ \STATE \textbf{Output} $S, \mathcal{P}$. \end{algorithmic} \end{algorithm} We give the definition of the extension interval for a long interval in $\mathcal{P}_{\sf lg}$.
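The per-dimension alternating partition built by Preprocess, and the resulting short/long classification of a data point, can be sketched as follows (a sketch; the interval lengths and offset grid follow Algorithm~\ref{alg4}, and for simplicity $d/(2\epsilon)$ is assumed to be an integer):

```python
import random

def preprocess_offsets(L, eps, d, seed=0):
    # One random offset b_1^j per dimension, drawn from the grid
    # {0, 1, ..., d/(2*eps)} * (2/L), as in step 1 of the algorithm above.
    rng = random.Random(seed)
    short_len = 2.0 / L
    long_len = d / (L * eps)
    offsets = [rng.randrange(int(d / (2 * eps)) + 1) * short_len
               for _ in range(d)]
    return offsets, short_len, long_len

def in_short_intervals(x, offsets, short_len, long_len):
    # A point belongs to P_sh if, in at least one dimension, it falls in
    # a short interval: after the offset b_1^j the pattern repeats with
    # period short_len + long_len, starting with a short interval.
    for xj, b1 in zip(x, offsets):
        if xj < b1:
            continue  # before the offset lies the first (long) interval
        if (xj - b1) % (short_len + long_len) < short_len:
            return True
    return False
```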
It consists of a long interval and a cuboidal shell of thickness $\frac{1}{L}$ around it in each dimension, cut off at the boundary of the domain. \begin{definition} \label{defn:extension-interval} For any long interval $I_J$ where $I_J^i=[b^i_{J^i-1}, b^i_{J^i}]$, the extension interval is defined to be $\hat{I}_J$ where $\hat{I}_J^i = [\max(0,b^i_{J^i-1}-\frac{1}{L}),\min(b^i_{J^i}+\frac{1}{L},1)]$ $\forall i \in [d]$ such that $I_J \subset \hat{I}_J$. In other words, the interval $\hat{I}_J$ consists of the interval $I_J$ and a cuboidal shell of thickness $\frac{1}{L}$ around it on both sides in each dimension, truncated so that it does not extend beyond $[0,1]$ in any dimension. \end{definition} The Query algorithm (Algorithm~\ref{alg5}) for a test point $x^*$ takes as input the set $S$ of unlabeled examples and the partition $\mathcal{P}$ returned by the Preprocess algorithm. Note that all subsequent queries use the same partition $\mathcal{P}$ and the same set of unlabeled examples $S$. The algorithm uses different learning strategies depending on whether $x^*$ belongs to one of the long intervals in $\mathcal{P}_{\sf lg}$ or the short intervals in $\mathcal{P}_{\sf sh}$. For the long intervals, it outputs the prediction corresponding to the empirical risk minimizer (ERM) function restricted to that interval. The function values at the midpoints of the short intervals are constrained to be $1$, which makes the overall function Lipschitz. If a query point $x$ lies in the extension interval of a given long interval, the learned function is the Lipschitz extension of the function over that long interval. This ensures that the overall function is Lipschitz. We bound the expected prediction error of this scheme with respect to the class $\mathcal{F}_L$ by separately bounding this error for long and short intervals. For the long intervals, we prove that the ERM has low error by ensuring that each interval contains enough unlabeled samples.
On the other hand, we show that the short intervals do not contribute much to the error because of their low probability under the distribution $\mathcal{D}$. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{Query($x, S, \mathcal{P}=[\{[b^j_0,b^j_1],[b^j_1, b^j_2],\cdots\}]^d_{j=1}$)\label{alg5}} \IF{query $x \in I_J \text{ where } I^j_J = [b^j_{J^j-1}, b^j_{J^j}]$ and $I_J \in \mathcal{P}_{\sf lg}$} \STATE Query labels for the points in $S \cap I_J$ \STATE \textbf{Output:} $\hat{f}_{S \cap I_J}(x)$ \ELSIF{query $x \in \hat{I}_J$ where $\hat{I}_J$ is the extension of a long interval $I_J$ with $I^j_J = [b^j_{J^j-1}, b^j_{J^j}]$} \STATE Query labels for the points in $S \cap I_J$ \STATE \textbf{Output:} $f(x)$ where $f(x)$ is the Lipschitz extension of $f_J(x)$ to the extension set $\hat{I}_{J}$ with constraints $f(x) = 1$ $\forall x \in \hat{I}_{J}$ satisfying $x_i \in \Big\{\frac{b^i_{J^i}+b^i_{J^i+1}}{2},\frac{b^i_{J^i-2}+b^i_{J^i-1}}{2}\Big\}$ for at least one $i \in [d]$. \ELSE \STATE \textbf{Output:} 1 \ENDIF \end{algorithmic} \end{algorithm} Next, we state the number of label queries needed to make local predictions with the Query algorithm (Algorithm \ref{alg5}). \begin{theorem} \label{thm:lipschitzlocalquery-highd} For any distribution $\mathcal{D}$ over $[0,1]^d\times[0,1]$, Lipschitz constant $L>0$ and error parameter $\epsilon \in [0,1]$, let $(S, P)$ be the output of the (randomized) Algorithm~\ref{alg4} where $S$ is the set of unlabeled samples of size $(O(\frac{L}{\epsilon}))^d\frac{1}{\epsilon^3}\log(\frac{1}{\epsilon})$ and $P$ is a partition of the domain $[0,1]^d$.
Then, there exists a function $\tilde{f} \in \mathcal{F}_L$, such that for all $x \in [0,1]^d$, Algorithm~\ref{alg5} queries $\frac{1}{\epsilon^2}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ labels from the set $S$ and outputs $\text{Query}(x, S, P) $ satisfying \begin{equation} \text{Query}(x, S, P) = \tilde{f}(x)\;, \end{equation} and the function $\tilde{f}$ is $\epsilon$-optimal, that is, $ \Delta_{\D}(\tilde{f}, \mathcal{F}_L) \leq \epsilon$ with probability greater than $\frac{1}{2}$. \end{theorem} \begin{proof} We begin by defining some notation. Let $S= \{x_i\}^M_{i=1}$ be the set of the unlabeled samples and $\mathcal{P}=\{[b_0,b_1],[b_1,b_2], \ldots\}$ be the partition returned by the preprocessing step given by Algorithm~\ref{alg4}. Let $\mathcal{D}_J$ be the distribution of a random variable $(X, Y) \sim \mathcal{D}$ conditioned on the event $\{X \in I_J\}$. Similarly, let $\mathcal{D}_{\sf{lg}}$ and $\mathcal{D}_{\sf{sh}}$ be the conditional distributions of $\mathcal{D}$ on intervals belonging to $\mathcal{P}_{\sf lg}$ and $\mathcal{P}_{\sf sh}$ respectively. Let $p_J$ denote the probability of a point sampled from distribution $\mathcal{D}$ lying in interval $I_J$. Going forward, we use the shorthand `probability of interval $I_J$' to denote $p_J$.
Let ${p}_{\sf lg}$ and ${p}_{\sf sh}$ be the probability of set of long and short intervals respectively. Recall that $f_{\mathcal{D}}^*$ is the function which minimizes $\text{err}_\D(f)$ for $f \in \mathcal{F}_L$ and $f_{\mathcal{D}_J}^*$ is the function which minimizes this error with respect to the conditional distribution $\mathcal{D}_J$. Let $M_J$ denote the number of unlabeled samples of $S$ lying in interval $I_J$. For any interval $I_J$, let $\hat{f}_{S\cap I_J}$ be the ERM with respect to that interval.% \paragraph{Lipschitzness of $\tilde{f}$.} For $x$ belonging to intervals say $I_J$ where $I^j_J= [b^j_{J^j-1}, b^j_{J^j}], j\in [d]$ where $I_J \in \mathcal{P}_{\sf sh}$, let $f_J(x)$ be defined by the output of the Algorithm~\ref{alg5}. Now, let $\tilde{f}:[0,1]^d\rightarrow[0,1] = f_J(x) \text{ if } x \in I_J$ be disjoint union of the functions $f_J$ defined on disjoint intervals $I_J$. Now, we will argue that $\tilde{f}(x)$ is $L$ Lipschitz for all $x\in[0,1]^d$. The function $\tilde{f}(x)$ for each long interval is $L$ Lipschitz by construction. For each long interval $I_J$ where $I^j_J = [b^j_{J^j-1},b^j_{J^j}]$ $\forall j \in [d]$, consider the extension interval $\hat{I}_J$ as defined in Definition~\ref{defn:extension-interval}. The function $\tilde{f}(x)$ is constrained to be $1$ for the middle point of the short interval along each dimension i.e. $\tilde{f}(x) = 1$ $\forall x\in[0,1]^d\text{ if } \exists i\in[d] \text{ such that }x_i=\frac{b^i_{j-1}+b^i_{j}}{2}$ where $[b^i_{j-1},b^i_{j}]$ is a short interval for dimension $i$. Note that these middle points of short intervals are precisely the only points of intersection between extensions of two different long intervals because long intervals in each dimension are separated by a distance of $\frac{2}{L}$ by construction and the extension intervals extend up to a shell of thickness around $\frac{1}{L}$ in each dimension. 
Now, for a query point $x$ belonging to extension interval $\hat{I}_J$ for a long interval $I_J$ takes the value $f(x)$ where $f(x)$ is the Lipschitz extension of the Lipschitz function $f_J$ to the superset $\hat{I}_J$ and constrained to be $1$ at the boundary of the shell. Hence, if we can prove that such a Lipschitz extension exists, we can see that the combined function is $L$ Lipschitz. By \cite{mcshane1934extension} (also stated as Theorem~\ref{thm:lipschitz-extension} in this paper), we can see that such a Lipschitz extension always exists because for point $x \in I_J$ and $y$ belonging to the boundary of the shell, we get that $||x-y||_{\infty} \geq \frac{1}{L}$ and $|f(x)-f(y)|\leq 1$ and hence, the function $f$ is $L$-Lipschitz. For query points $x$ which do not belong to any of these extension intervals, we output $1$. Basically, these points belong to the short intervals at the boundary of the domain and equivalent to learning Lipschitz extension with empty long interval and constrained to be $1$ at the boundary. Hence, the function takes $1$ everywhere in this interval. Hence, we can see that the function $\tilde{f}$ is $L$-Lipschitz. \paragraph{Error Guarantees for $\tilde{f}$.} Now looking at the error rate of the function $\tilde{f}(x)$ and following a repeated application of tower property of expectation, we get that \begin{align} \Delta_{\D}(\tilde{f}, \mathcal{F}_L) &= {p}_{\sf lg}\Delta_{\mathcal{D}_{\sf{lg}}}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\nonumber\\ &= \sum_{J:I_J \in \mathcal{P}_{\sf lg}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\label{eqn:total-error-highd} \end{align} Now, we need to argue about the error bounds for both long and short intervals to argue about the total error of the function $\tilde{f}$. \emph{Error for short intervals. 
} The probability of short intervals ${p}_{\sf sh}$ is small with high probability since the total length of short intervals is $2\epsilon$ and the intervals are chosen uniformly randomly. More formally, from Lemma~\ref{lemma:prob_intervals-highd}, we know that with probability at least $1-\delta$, the probability of short intervals ${p}_{\sf sh}$ is upper bounded by $2\epsilon/\delta$. Also, the error for any function $f$ is bounded between $[0,1]$ since the function's range is $[0,1]$. Hence, we get that \begin{align} {p}_{\sf sh}\Delta_{\mathcal{D}_{\mathcal{P}_{\sf sh}}}(\tilde{f}, f_{\mathcal{D}}^*) &\leq \frac{2\epsilon}{\delta} \label{eqn:type2-error-highd} \end{align} \emph{Error for long intervals:} We further divide the long intervals into 3 subtypes: {\small \begin{align*} \mathcal{P}_{\sf{lg}, 1} &:\,= \left\lbrace I_J \; |\; I_J \in \mathcal{P}_{\sf lg},\; p_J \geq\frac{\epsilon}{(\frac{L\epsilon}{d})^d},\; M_J \geq \frac{1}{\epsilon^2}(\Omega(\frac{d}{\epsilon^2}))^d)\log(\frac{1}{\epsilon}) \right\rbrace,\\ {\mathcal{P}}_{\sf{lg}, 2} &:\,= \left\lbrace I_J \; |\; I_J \in \mathcal{P}_{\sf lg},\; p_J \geq\frac{\epsilon}{(\frac{L\epsilon}{d})^d},\; M_J < \frac{1}{\epsilon^2}(\Omega(\frac{d}{\epsilon^2}))^d)\log(\frac{1}{\epsilon}) \right\rbrace,\\ {\mathcal{P}}_{\sf{lg}, 3} &:\,= \left\lbrace I_J \; |\; I_J \in \mathcal{P}_{\sf lg},\; p_J < \frac{\epsilon}{(\frac{L\epsilon}{d})^d} \right\rbrace. \end{align*}} The intervals in both first and second types have large probability $p_J$ with respect to distribution $\mathcal{D}$ but differ in the number of unlabeled samples in $S$ lying in them. Finally, the intervals in third subtype ${\mathcal{P}}_{\sf{lg}, 3}$ have small probability $p_J$ with respect to distribution $\mathcal{D}$. 
Now, we can divide the total error of long intervals into error in these subtypes \begin{small} \begin{align} \sum_{J:I_J \in \mathcal{P}_{\sf lg}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*) &= \underbrace{\sum_{J:I_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E1} + \underbrace{\sum_{J:I_J \in {\mathcal{P}}_{\sf{lg}, 2}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E2}+ \underbrace{\sum_{J:I_J \in {\mathcal{P}}_{\sf{lg}, 3}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E3} \label{eqn:combined-highd} \end{align} \end{small} Now, we will argue about the contribution of each of the three terms above. \underline{Bounding $E3$.} Since there are at most $(\frac{L\epsilon}{d})^d$ long intervals and each of these intervals $I_J$ has probability $p_J$ upper bounded by $\frac{\epsilon}{(\frac{L\epsilon}{d})^d}$, the total probability combined in these intervals is at most $\epsilon$. Also, in the worst case, the loss can be 1. Hence, we get an upper bound of $\epsilon$ on $E_3$. \underline{Bounding $E2$.} From Lemma~\ref{lemma:number_unlabeled_samples-highd}, we know that with success probability $\delta$, these intervals have total probability upper bounded by ${\epsilon}/{\delta}$. Again, the loss can be 1 in the worst case. Hence, we can get an upper bound of ${\epsilon}/{\delta}$ on $E_2$. \underline{Bounding $E1$.} Let $F_J$ denote the event that $\Delta_{\mathcal{D}_J}(\hat{f} _{S\cap I_J}, f_{\mathcal{D}_J}^*) > \epsilon$. 
The expected error of intervals $I_J$ in $\mathcal{P}_{\sf{lg}, 1}$ is then {\small \begin{align*} \E[\sum_{J:I_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}}(\tilde{f}, f_{\mathcal{D}_J}^*)] &\stackrel{\ensuremath{{\sf (i)}}}{\leq} \E[\sum_{J:I_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap I_J}, f_{\mathcal{D}_J}^*)]\\ &= \sum_{J:I_J \in \mathcal{P}_{\sf{lg}, 1}}p_J(\E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap I_J}, f_{\mathcal{D}_J}^*)|F_J]\Pr(F_J) + \E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap I_J}, f_{\mathcal{D}_J}^*)|\neg F_J]\Pr(\neg F_J))\\ &\stackrel{\ensuremath{{\sf (ii)}}}{\leq} \sum_{J:I_J \in \mathcal{P}_{\sf{lg}, 1}}p_J(1\cdot \epsilon + \epsilon \cdot 1) \leq 2\epsilon, \end{align*}} where step $\ensuremath{{\sf (i)}}$ follows by noting that $\tilde{f} = \hat{f}_{S\cap I_J}$ for all long intervals $I_J \in \mathcal{P}_{\sf lg}$ and that $f^*_{\mathcal{D}_J}(x)$ is the minimizer of the error $\text{err}_{\mathcal{D}_J}(f)$ over all $L$-Lipschitz functions, and step $\ensuremath{{\sf (ii)}}$ follows since $\E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap I_J}, f_{\mathcal{D}_J}^*)|\neg F_J] \leq \epsilon$ by the definition of event $F_J$ and $\Pr(F_J) \leq \epsilon$ follows from a standard uniform convergence argument (detailed in Lemma~\ref{lem: good type 1 intervals-highd}). Now, using Markov's inequality, we get that $E_1 \leq {2\epsilon}/{\delta}$ with failure probability at most $\delta$. Plugging the error bounds obtained in equations~\eqref{eqn:type2-error-highd} and ~\eqref{eqn:combined-highd} into equation~\eqref{eqn:total-error-highd} and setting $\delta = \frac{1}{20}$ establishes the required claim. 
\paragraph{Label Query Complexity.} Observe that for any given query point $x^*$, $\tilde{f}(x^*)$ can be computed by only querying the labels of the interval in which $x$ lies (in case if $x^*$ lies in a long interval) or extension of the interval in which $x^*$ lies (in case if $x^*$ lies in a short interval) and hence, would only require $\frac{1}{\epsilon^2}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ active label queries over the set $S$ of $\frac{1}{\epsilon^3}(O(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$ unlabeled samples. \end{proof} Now, we will state the sample complexity bounds for estimating the error of the optimal function amongst the class of Lipschitz functions up to an additive error of $\epsilon$. \begin{restatable}{theorem}{theoremlipschitzerror-highd} \label{thm:lipschitzerror-highd} For any distribution $\mathcal{D}$ over $[0,1]^d\times[0,1]$, Lipschitz constant $L>0$ and parameter $\epsilon \in [0,1]$, Algorithm~\ref{alg3} uses $\frac{1}{\epsilon^4}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ active label queries on $\frac{1}{\epsilon^3}(O(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$ unlabeled samples from distribution $\mathcal{D}_x$ and produces an output $\widehat{\text{err}_\D}(\mathcal{F}_L)$ satisfying \begin{equation*} |\widehat{\text{err}_\D}(\mathcal{F}_L) - \text{err}_\D( \mathcal{F}_L)| \leq \epsilon \end{equation*} with probability at least $\frac{1}{2}$. \end{restatable} The proof goes exactly like the proof for the corresponding theorem in the one-dimensional case (Theorem~\ref{thm:lipschitzerror}). The following lemma proves that with enough unlabeled samples, a large fraction of long intervals have enough unlabeled samples in them which is eventually used to argue that they will be sufficient to learn a function which is approximately close to the optimal Lipschitz function over that interval. 
\begin{lemma} \label{lemma:number_unlabeled_samples-highd} For any distribution $\mathcal{D}_x$, consider a set $S = \{x_1, x_2, \cdots, x_M\}$ of unlabeled samples where each sample $x_i\stackrel{\text{i.i.d.}}{\sim}\mathcal{D}_x$. Let $\mathcal{G}$ be the set of long intervals $\{I_i\}$ each of which satisfies $p_J = \Pr_{x \sim \mathcal{D}_x}(x \in I_J) \geq \frac{\epsilon}{(\frac{L\epsilon}{d})^d}$. Let $E_J$ denote the event that $\sum_{x_j \in S}\mathbb{I}[x_j \in I_J] < \frac{1}{\epsilon^2}(\frac{d}{\epsilon^2})^d\log(\frac{1}{\epsilon})$. Then, we have \begin{equation*} \sum_{I_J \in \mathcal{G}}p_J\mathbb{I}[E_J] \leq \frac{\epsilon}{\delta} \end{equation*} with failure probability atmost $\delta$ for $M = \frac{1}{\epsilon^3}(\Omega(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$. \end{lemma} \begin{proof} For any interval $I \in \mathcal{G}$, we have that $\E[\sum_{x_j \in S}\mathbb{I}[x_j \in I]] \geq \frac{c_d}{\epsilon^2}(\frac{d}{\epsilon^2})^d\log(\frac{1}{\epsilon})$ for some constant $c_d$ depending on the dimension $d$. Using Hoeffding inequality, we can get that $\Pr(E_i) \leq \epsilon$ for all intervals $I_i \in \mathcal{G}$. Calculating expectation of the desired quantity, we get \begin{equation*} \E[\sum_{I_J \in \mathcal{G}}p_J\mathbb{I}[E_J]] = \sum_{I_J \in \mathcal{G}}p_J\Pr[E_J] \leq \epsilon\sum_{I_J \in \mathcal{G}}p_J \leq \epsilon \end{equation*} We get the desired result using Markov's inequality. \end{proof} \iffalse \begin{proof} The expected number of samples that falls in an interval $I$ will probability mass greater than $\frac{\epsilon}{(\frac{L\epsilon}{d})^d}$ is at least $\frac{c_d}{\epsilon^2}(\frac{d}{\epsilon^2})^d$ for some constant $c_d$ depending on the dimension $d$. Using Chernoff bounds, we can get that $Pr(\text{number of samples in } I \leq \frac{c_d}{2\epsilon^2}(\frac{d}{\epsilon^2})^d) \leq \exp(-\frac{c_dd^d}{8\epsilon^{2(d+1)}}) \leq \epsilon$. 
Now, let $F_i$ be the event that $I_i$ interval which has probability mass $\geq \frac{\epsilon}{(\frac{L\epsilon}{d})^d}$ has $\leq \frac{c_d}{2\epsilon^2}(\frac{d}{\epsilon^2})^d$ samples in it. Now, we know that $F_i \leq \epsilon$ $\forall i$ where $I_i$ is an interval of type 1 having probability mass greater than $\frac{\epsilon}{(\frac{L\epsilon}{d})^d}$. Now, the expected combined mass of type 1 intervals which have $\leq \frac{c_d}{2\epsilon^2}(\frac{d}{\epsilon^2})^d$ samples is $\sum_iPr(F_i)Pr(I_i) \leq \epsilon\sum_iPr(I_i) \leq \epsilon$. Now, using Markov's inequality, we get that with probability at least $1-\delta$, the total mass of type 1 intervals having $\leq \frac{c_d}{2\epsilon^2}(\frac{d}{\epsilon^2})^d$ samples each is $\leq \frac{\epsilon}{\delta}$. \end{proof} \fi The following lemma states that the probability of short intervals ${p}_{\sf sh}$ is small with high probability. Consider the case of uniform distribution. In this case, since the short intervals cover only $2\epsilon$ fraction of the domain $[0,1]^d$, their probability ${p}_{\sf sh}$ is upper bounded by $2\epsilon$. The case for arbitrary distributions holds because the intervals are chosen randomly. \begin{lemma} \label{lemma:prob_intervals-highd} When we divide the $[0,1]^d$ domain into long and short intervals as in the preprocessing step (Algorithm~\ref{alg4}), then \begin{equation*} {p}_{\sf sh} = \sum_{I_J \in \mathcal{P}_{\sf sh}}p_J \leq \frac{2\epsilon}{\delta} \end{equation*} with failure probability atmost $\delta$. \end{lemma} \iffalse \begin{lemma} \label{lemma:prob_intervals-highd} When we divide the $[0,1]^d$ domain into type 1 and type 2 intervals as in the preprocessing step \ref{alg4}, then with probability more than $1-\delta$, a randomly sampled $x\sim \mathcal{D}$ lies in intervals of type 1 or type 2 intervals have probability mass bounded by $\frac{2\epsilon}{\delta}$ with probability $1-\delta$. 
\end{lemma} \fi \begin{proof} Now, we consider the division of every dimension i.e. $[0,1]$ independently into alternating intervals of length $\frac{d}{L\epsilon}$ and $\frac{2}{L}$ with the offset chosen uniformly randomly from $\{0, 1, 2, \cdots, \frac{d}{2\epsilon}\}\frac{2}{L}$. For any set of fixed offsets chosen for the $d-1$ dimensions, the intervals of length $\frac{2}{L}$ chosen for the $dth$ dimension combined are disjoint in each of these divisions and together cover the entire $[0,1]^d$ and hence amount to probability mass $1$. Therefore, the total probability mass covered in the total $(\frac{d}{2\epsilon}+1)^d$ possible divisions is $d(\frac{d}{2\epsilon}+1)^{d-1}$. Hence, there are at most $\delta$ fraction out of the total $(\frac{d}{2\epsilon}+1)^d$ cases where the short intervals have probability greater than $\frac{2\epsilon}{\delta}$. Hence, with probability $1-\delta$, the short intervals have probability upper bounded by $\frac{2\epsilon}{\delta}$. % \end{proof} \begin{lemma} \label{lem: good type 1 intervals-highd} Let $I_i \in \mathcal{P}_{\sf{lg}, 1}$ be any long interval of subtype 1. For the event \mbox{$F_i = \{\Delta_{\mathcal{D}_i}(\hat{f}_{S \cap I_i}, f_{\mathcal{D}_i}^*) > \epsilon \}$}, we have \begin{equation*} \Pr(F_i) \leq \epsilon \end{equation*} \end{lemma} \begin{proof} We know that the covering number for the class of $d$-dimensional $L$-Lipschitz functions supported on the interval $[0,l]$ is $(O({Ll}/{\epsilon}))^d$ (Lemma~\ref{lemma:lipschitz_covering}). For, Lipschitz functions supported on long intervals of length $l={d}/{(L\epsilon)}$ along each dimension, we get this complexity as $(O({d}/{\epsilon^2}))^d$. 
We know by standard results in uniform convergence, that the number of samples required for uniform convergence up to an error of $\epsilon$ and failure probability $\delta$ for all functions in a class $\mathcal{F}_L$ is $\Omega(((\text{Covering number of } \mathcal{F}_L)/\epsilon^2)\log({1}/{\delta}))$ (Lemma~\ref{lemma:lipschitz_sample_complexity}) and hence, for long intervals we get this estimate as $\frac{1}{\epsilon^2}(\Omega(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\delta})$. Given that there are $\frac{1}{\epsilon^2}(\Omega(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ samples in these intervals by design, the function learned using these samples is only additively worse than the function with minimum error using standard learning theory arguments with failure probability at most $\epsilon$. Thus, we get that $$\Delta_{\mathcal{D}_i}(f_{S \cap I_i}, f_{\mathcal{D}_i}^*) \leq \epsilon \quad \forall I_i \in \mathcal{P}_{\sf{lg}, 1}$$ with failure probability at most $\epsilon$. \end{proof} \subsection{Problem Setup} We recall the setting from the one dimensional case. For any fixed $L>0$, let $\mathcal{F}_{L}$ be the class of $d$ dimensional functions supported on the domain $[0,1]^d$ with Lipschitz constant at most $L$, that is, \begin{equation} \mathcal{F}_L = \{f:[0,1]^d\mapsto [0,1], \ |f(x)-f(y)|\leq L \|x-y\|_{\infty} \; \forall x,y \in [0,1]^d\}. \end{equation} Let $\epsilon$ be the error parameter. We will think of the dimension $d$ as constant with respect to $L$ and $\epsilon$. Like the one-dimensional case, our algorithm for local predictions first involves a preprocessing step (Algorithm~\ref{alg4}) which takes as input the Lipschitz constant $L$, sampling access to distribution $\mathcal{D}_x$ and the error parameter $\epsilon$ and returns a partition $\mathcal{P}$ where the partition along dimension $j$ is $\mathcal{P}^j = \{ [b^j_0, b^j_1],[b^j_1, b^j_2], \ldots, \}$ and a set $S$ of unlabeled samples. 
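As an illustration of the preprocessing step just described, the following is a minimal sketch (not the paper's implementation; the names \texttt{dimension\_partition}, \texttt{preprocess}, and \texttt{classify} are ours) of building, along each dimension, the alternating partition into long intervals of length $d/(L\epsilon)$ and short intervals of length $2/L$ with a random offset that is a multiple of $2/L$, and of classifying a point of $[0,1]^d$ as lying in a long or short box:

```python
import random

# Sketch of the high-dimensional preprocessing step: each dimension of
# [0,1]^d is split into alternating "long" intervals of length d/(L*eps)
# and "short" intervals of length 2/L, with a random offset that is a
# multiple of 2/L.  Assumes L is large enough that d/(L*eps) <= 1.
# Function names are illustrative, not from the paper.

def dimension_partition(L, eps, d, offset):
    """Partition of [0,1] along one dimension into (lo, hi, kind) pieces."""
    long_len, short_len = d / (L * eps), 2.0 / L
    boundaries, nxt, i = [min(offset, 1.0)], min(offset, 1.0), 0
    while nxt < 1.0:
        nxt = min(nxt + (short_len if i % 2 == 0 else long_len), 1.0)
        boundaries.append(nxt)
        i += 1
    # [0, b_1] is a (possibly truncated) long piece, then short/long alternate
    pieces, lo, kind = [], 0.0, "long"
    for hi in boundaries:
        if hi > lo:
            pieces.append((lo, hi, kind))
        kind = "short" if kind == "long" else "long"
        lo = hi
    return pieces

def preprocess(L, eps, d, rng):
    """Draw the random offset b^j_1 (a multiple of 2/L) for every dimension."""
    return [dimension_partition(L, eps, d,
                                rng.randint(0, round(d / (2 * eps))) * 2.0 / L)
            for _ in range(d)]

def classify(x, partitions):
    """A point lies in a short box if ANY coordinate falls in a short interval."""
    for xi, pieces in zip(x, partitions):
        for lo, hi, kind in pieces:
            if lo <= xi <= hi:
                if kind == "short":
                    return "short"
                break
    return "long"

# Example with L=100, eps=0.1, d=2: long pieces have length 0.2, short 0.02.
parts = preprocess(100, 0.1, 2, random.Random(0))
```

With these parameters one can check that each one-dimensional partition covers $[0,1]$ exactly and that a point is declared short as soon as a single coordinate falls in a short interval, matching the definition of $\mathcal{P}_{\sf sh}$.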
The partition $\mathcal{P}$ consists of alternating intervals\footnote{Note that the long intervals at the boundary could be shorter; these can be handled similarly.} of length ${2}/{L}$ and ${d}/(L\epsilon)$ along each dimension. Let us further divide these intervals along each dimension $j$ into the two sets \begin{equation*} \begin{gathered} \mathcal{P}_{\sf lg}^j:\,=\{[b^j_0,b^j_1],[b^j_2,b^j_3],\ldots\}\quad \text{(long intervals for dimension } j \text{ of length } {d}/(L\epsilon)), \\ \mathcal{P}_{\sf sh}^j:\,=\{[b^j_1,b^j_2],[b^j_3,b^j_4],\ldots\}\quad \text{(short intervals for dimension } j \text{ of length }{2}/L).\\ \end{gathered} \end{equation*} A data point $x$ is said to belong to a box $B_J$ (where $J$ is a $d$-dimensional index vector) if $x$ belongs to the interval $B_J^j = I^j_{J_j} = [b^j_{J_j-1}, b^j_{J_j}]$ along the $j$th dimension, for each $j \in [d]$. A data point $x$ which belongs to a short interval in $\mathcal{P}_{\sf sh}^j$ along at least one of the dimensions $j$ is said to belong to the set of short boxes $\mathcal{P}_{\sf sh}$; otherwise, it is said to belong to the set of long boxes $\mathcal{P}_{\sf lg}$. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{Preprocess($L, \mathcal{D}_x, \epsilon$)\label{alg4}} \STATE Sample a uniformly random offset $b^j_1$ from $\{0,1,2,\ldots,\frac{d}{2\epsilon}\}\cdot\frac{2}{L}$ for each dimension $j \in [d]$. \STATE Divide the $[0,1]$ interval along each dimension $j$ into alternating intervals of length $\frac{d}{L\epsilon}$ and $\frac{2}{L}$ with a boundary at $b^j_1$, and let $\mathcal{P}$ be the resulting partition, that is, $\mathcal{P}^j = \{[b^j_0=0,b^j_1],[b^j_1, b^j_2], \ldots\}$ where $b^j_2 = b^j_1 + \frac{2}{L}$, $b_3^j = b_2^j + \frac{d}{L\epsilon}$, and so on. \STATE Sample a set $S = \{x_i\}_{i=1}^M$ of $M = O((\frac{L}{\epsilon})^d \frac{1}{\epsilon^3}\log(\frac{1}{\epsilon}))$ unlabeled examples from the distribution $\mathcal{D}_x$.\\ \STATE \textbf{Output} $S, \mathcal{P}$.
\end{algorithmic} \end{algorithm} We now define the extension box for a long box in $\mathcal{P}_{\sf lg}$. It consists of the box itself together with a cuboidal shell of thickness $\frac{1}{L}$ around it in each dimension, cut off at the boundary of the domain. \begin{definition} \label{defn:extension-interval} For any long box $B_J$ where $B_J^i = I^i_{J_i}=[b^i_{J_i-1}, b^i_{J_i}]$, the extension box is defined to be $\hat{B}_J$ where $\hat{B}_J^i = \hat{I}_{J_i}^i = [\max(0,b^i_{J_i-1}-\frac{1}{L}),\min(b^i_{J_i}+\frac{1}{L},1)]$ $\forall i \in [d]$, so that $B_J \subset \hat{B}_J$.\end{definition} In other words, the box $\hat{B}_J$ consists of the box $B_J$ and a cuboidal shell of thickness $\frac{1}{L}$ around it on both sides in each dimension, except where this shell would extend beyond $[0,1]$. The Query algorithm (Algorithm~\ref{alg5}) for a test point $x^*$ takes as input the set $S$ of unlabeled examples and the partition $\mathcal{P}$ returned by the Preprocess algorithm. Note that all subsequent queries use the same partition $\mathcal{P}$ and the same set of unlabeled examples $S$. The algorithm uses different learning strategies depending on whether $x^*$ belongs to one of the long boxes in $\mathcal{P}_{\sf lg}$ or to a short box in $\mathcal{P}_{\sf sh}$. For the long boxes, it outputs the prediction of the empirical risk minimizer (ERM) restricted to that box. If the query point lies in the extension box $\hat{B}_J$ of a long box $B_J$, the prediction is given by the Lipschitz extension of the ERM over $B_J$ to $\hat{B}_J$. The function value at the midpoint of each short interval along each dimension is constrained to be $1$, which makes the overall function Lipschitz. We bound the expected prediction error of this scheme with respect to the class $\mathcal{F}_L$ by separately bounding this error for long and short boxes. For the long boxes, we prove that the ERM has low error by ensuring that each box contains enough unlabeled samples.
On the other hand, we show that the short boxes do not contribute much to the error because of their low probability under the distribution $\mathcal{D}$. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{Query($x, S, \mathcal{P}=[\{[b^j_0,b^j_1],[b^j_1, b^j_2],\cdots\}]^d_{j=1}$)\label{alg5}} \IF{query point $x \in B_J$, where $B_J^j = I^j_{J_j} = [b^j_{J_j-1}, b^j_{J_j}]$ and $B_J \in \mathcal{P}_{\sf lg}$} \STATE Query labels for all $x' \in S \cap B_J$ \STATE \textbf{Output:} $\hat{f}_{S \cap B_J}(x)$ \ELSIF{query point $x \in \hat{B}_J$, where $\hat{B}_J$ is the extension of a long box $B_J$ with $B_J^j = I^j_{J_j} = [b^j_{J_j-1}, b^j_{J_j}]$} \STATE Query labels for all $x' \in S \cap B_J$ \STATE \textbf{Output:} $f(x)$, where $f$ is the Lipschitz extension of $\hat{f}_{S\cap B_J}$ to the extension set $\hat{B}_{J}$ subject to the constraints $f(x') = 1$ for all $x' \in \hat{B}_{J}$ satisfying $x'_i \in \Big\{\frac{b^i_{J_i}+b^i_{J_i+1}}{2},\frac{b^i_{J_i-2}+b^i_{J_i-1}}{2}\Big\}$ for at least one dimension $i \in [d]$ \ELSE \STATE \textbf{Output:} $1$ \ENDIF \end{algorithmic} \end{algorithm} Next, we state the number of label queries needed to make local predictions, corresponding to the Query algorithm (Algorithm~\ref{alg5}). \begin{theorem} \label{thm:lipschitzlocalquery-highd} For any distribution $\mathcal{D}$ over $[0,1]^d\times[0,1]$, Lipschitz constant $L>0$ and error parameter $\epsilon \in [0,1]$, let $(S, P)$ be the output of the (randomized) Algorithm~\ref{alg4}, where $S$ is a set of unlabeled samples of size $(O(\frac{L}{\epsilon}))^d\frac{1}{\epsilon^3}\log(\frac{1}{\epsilon})$ and $P$ is a partition of the domain $[0,1]^d$.
Then, there exists a function $\tilde{f} \in \mathcal{F}_L$, such that for all $x \in [0,1]^d$, Algorithm~\ref{alg5} queries $\frac{1}{\epsilon^2}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ labels from the set $S$ and outputs $\text{Query}(x, S, P)$ satisfying \begin{equation} \text{Query}(x, S, P) = \tilde{f}(x)\;, \end{equation} and the function $\tilde{f}$ is $\epsilon$-optimal, that is, $ \Delta_{\D}(\tilde{f}, \mathcal{F}_L) \leq \epsilon$ with probability greater than $\frac{1}{2}$. \end{theorem} \begin{proof} We begin by defining some notation. Let $S= \{x_i\}^M_{i=1}$ be the set of unlabeled samples and $\mathcal{P}=[\{[b^j_0,b^j_1],[b^j_1,b^j_2], \ldots\}]_{j=1}^d$ be the partition returned by the preprocessing step given by Algorithm~\ref{alg4}. Let $\mathcal{D}_J$ be the distribution of a random variable $(X, Y) \sim \mathcal{D}$ conditioned on the event $\{X \in B_J\}$. Similarly, let $\mathcal{D}_{\sf{lg}}$ and $\mathcal{D}_{\sf{sh}}$ be the conditional distributions of $\mathcal{D}$ on the boxes belonging to $\mathcal{P}_{\sf lg}$ and $\mathcal{P}_{\sf sh}$ respectively. Let $p_J$ denote the probability that a point sampled from the distribution $\mathcal{D}$ lies in box $B_J$. Going forward, we use the shorthand `probability of box $B_J$' to denote $p_J$.
Let ${p}_{\sf lg}$ and ${p}_{\sf sh}$ be the probabilities of the sets of long and short boxes respectively. Recall that $f_{\mathcal{D}}^*$ is the function which minimizes $\text{err}_\D(f)$ over $f \in \mathcal{F}_L$ and $f_{\mathcal{D}_J}^*$ is the function which minimizes this error with respect to the conditional distribution $\mathcal{D}_J$. Let $M_J$ denote the number of unlabeled samples of $S$ lying in box $B_J$. For any box $B_J$, let $\hat{f}_{S\cap B_J}$ be the ERM with respect to that box. \paragraph{Lipschitzness of $\tilde{f}$.} For any long box $B_J$, we define the extension function $f^{\sf{ext}}_J: \hat{B}_J \mapsto \mathbb{R}$ as the $L$-Lipschitz extension of the function $\hat{f}_{S\cap B_J}:B_J\mapsto \mathbb{R}$ to the extension box $\hat{B}_J$, constrained to equal $1$ at the boundary of $\hat{B}_J$. The Query procedure (Algorithm~\ref{alg5}) is designed to output $\tilde{f}(x)$ for each query $x$, where \begin{small} \begin{equation*} \tilde{f}(x) = \begin{cases} \hat{f}_{S\cap B_J}(x) \quad &\text{ if } x \in B_J \text{ for some } B_J \in \mathcal{P}_{\sf lg}\\ f^{\sf{ext}}_J(x) \quad &\text{ else if } x \in \hat{B}_J \text{ for some } B_J \in \mathcal{P}_{\sf lg}\\ 1 \quad &\text{ otherwise } \end{cases}. \end{equation*} \end{small} Now, we argue that $\tilde{f}$ is $L$-Lipschitz on all of $[0,1]^d$. The function $\tilde{f}$ restricted to each long box is $L$-Lipschitz by construction, and each of the extension functions $f^{\sf{ext}}_J$ is also $L$-Lipschitz by construction, provided it exists. We therefore need to prove that such a Lipschitz extension exists. By \cite{mcshane1934extension} (also stated as Theorem~\ref{thm:lipschitz-extension} in this paper), such a Lipschitz extension always exists: for a point $x \in B_J$ and a point $y$ on the boundary of the extension box $\hat{B}_J$, we have $\|x-y\|_{\infty} \geq \frac{1}{L}$ and $|f(x)-f(y)|\leq 1$, so the boundary constraints are consistent with $L$-Lipschitzness.
For query points $x$ which do not belong to any long box or extension box, we output $1$. These points belong to short boxes at the boundary of the domain, and this is equivalent to taking the Lipschitz extension of an empty long box constrained to be $1$ at the boundary; hence, the function equals $1$ everywhere on such boxes. This shows that each of the functions is individually Lipschitz. Now, we argue that the combined function is also continuous. For each long box $B_J$ with $B_J^j = [b^j_{J_j-1},b^j_{J_j}]$ $\forall j \in [d]$, consider the extension box $\hat{B}_J$ as defined in Definition~\ref{defn:extension-interval}. The function $\tilde{f}$ is constrained to be $1$ at the midpoint of the short interval along each dimension, i.e., $\tilde{f}(x) = 1$ for all $x\in[0,1]^d$ such that there exists $i\in[d]$ with $x_i=\frac{b^i_{j-1}+b^i_{j}}{2}$, where $[b^i_{j-1},b^i_{j}]$ is a short interval for dimension $i$. Note that these midpoints of short intervals are precisely the only points of intersection between the extensions of two different long boxes, because long intervals in each dimension are separated by a distance of $\frac{2}{L}$ by construction and the extension boxes extend by a shell of thickness $\frac{1}{L}$ in each dimension. At a query point $x$ belonging to the extension box $\hat{B}_J$ of a long box $B_J$, the function takes the value $f^{\sf{ext}}_J(x)$, the Lipschitz extension of $\hat{f}_{S\cap B_J}$ to the superset $\hat{B}_J$ constrained to be $1$ at the boundary of the shell. Since neighboring pieces agree at their points of intersection, we conclude that the function $\tilde{f}$ is $L$-Lipschitz.
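The existence argument rests on the classical McShane-Whitney extension formula, $F(x) = \min_{y}\big(g(y) + L\,\|x-y\|_{\infty}\big)$ over a finite set of anchor values. The following is a minimal sketch of this formula (the names \texttt{linf} and \texttt{mcshane\_extend} are ours, not the paper's): it extends values given on finitely many anchor points, including shell points pinned to the value $1$, and the resulting function is $L$-Lipschitz in the $\ell_\infty$ norm and agrees with the anchors whenever the anchor values are mutually consistent:

```python
# McShane-Whitney extension: F(x) = min_y ( g(y) + L * ||x - y||_inf ).
# F is L-Lipschitz w.r.t. the sup norm (a minimum of L-Lipschitz functions)
# and agrees with g on the anchors whenever
# |g(y) - g(y')| <= L * ||y - y'||_inf for all anchor pairs.
# Illustrative sketch; names are not from the paper.

def linf(x, y):
    """Sup-norm distance between two points given as tuples."""
    return max(abs(a - b) for a, b in zip(x, y))

def mcshane_extend(anchors, L):
    """anchors: list of (point, value) pairs; returns the extension F."""
    def F(x):
        return min(v + L * linf(x, y) for y, v in anchors)
    return F

# Interior ERM value 0.2 at (0.5, 0.5) and a shell point pinned to 1 at
# l_inf-distance 0.15 >= 1/L from it: consistent since |1 - 0.2| <= 10 * 0.15.
F = mcshane_extend([((0.5, 0.5), 0.2), ((0.5, 0.65), 1.0)], L=10.0)
```

This mirrors the consistency condition used in the proof: since a point of $B_J$ and a pinned boundary point of $\hat{B}_J$ are at $\ell_\infty$-distance at least $1/L$ while their values differ by at most $1$, the extension exists and interpolates both.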
\paragraph{Error Guarantees for $\tilde{f}$.} Turning to the error of the function $\tilde{f}$, a repeated application of the tower property of expectation gives \begin{align} \Delta_{\D}(\tilde{f}, \mathcal{F}_L) &= {p}_{\sf lg}\Delta_{\mathcal{D}_{\sf{lg}}}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\nonumber\\ &= \sum_{J:B_J \in \mathcal{P}_{\sf lg}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\label{eqn:total-error-highd} \end{align} To bound the total error of $\tilde{f}$, we bound the error over the long and short boxes separately. \emph{Error for short boxes.} From Lemma~\ref{lemma:prob_intervals-highd}, we know that with probability at least $1-\delta$, the probability of the short boxes ${p}_{\sf sh}$ is upper bounded by $2\epsilon/\delta$. Also, the error of any function $f$ is bounded in $[0,1]$ since the function's range is $[0,1]$.
Hence, we get that \begin{align} {p}_{\sf sh}\Delta_{\mathcal{D}_{\mathcal{P}_{\sf sh}}}(\tilde{f}, f_{\mathcal{D}}^*) &\leq \frac{2\epsilon}{\delta}. \label{eqn:type2-error-highd} \end{align} \emph{Error for long boxes:} We further divide the long boxes into three subtypes: {\small \begin{align*} \mathcal{P}_{\sf{lg}, 1} &:\,= \left\lbrace B_J \; |\; B_J \in \mathcal{P}_{\sf lg},\; p_J \geq\frac{\epsilon}{(\frac{L\epsilon}{d})^d},\; M_J \geq \frac{c_d}{\epsilon^2}\left(\frac{d}{\epsilon^2}\right)^d\log\left(\frac{1}{\epsilon}\right) \right\rbrace,\\ {\mathcal{P}}_{\sf{lg}, 2} &:\,= \left\lbrace B_J \; |\; B_J \in \mathcal{P}_{\sf lg},\; p_J \geq\frac{\epsilon}{(\frac{L\epsilon}{d})^d},\; M_J < \frac{c_d}{\epsilon^2}\left(\frac{d}{\epsilon^2}\right)^d\log\left(\frac{1}{\epsilon}\right) \right\rbrace,\\ {\mathcal{P}}_{\sf{lg}, 3} &:\,= \left\lbrace B_J \; |\; B_J \in \mathcal{P}_{\sf lg},\; p_J < \frac{\epsilon}{(\frac{L\epsilon}{d})^d} \right\rbrace. \end{align*}} Here, $c_d$ is a constant depending only on the dimension $d$. The boxes in the first and second subtypes both have large probability $p_J$ with respect to the distribution $\mathcal{D}$ but differ in the number of unlabeled samples of $S$ lying in them. Finally, the boxes in the third subtype ${\mathcal{P}}_{\sf{lg}, 3}$ have small probability $p_J$ with respect to the distribution $\mathcal{D}$.
Now, we can divide the total error of the long boxes into the errors over these subtypes: \begin{small} \begin{align} \sum_{J:B_J \in \mathcal{P}_{\sf lg}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*) &= \underbrace{\sum_{J:B_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E1} + \underbrace{\sum_{J:B_J \in {\mathcal{P}}_{\sf{lg}, 2}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E2}+ \underbrace{\sum_{J:B_J \in {\mathcal{P}}_{\sf{lg}, 3}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)}_{E3} \label{eqn:combined-highd} \end{align} \end{small} We now bound the contribution of each of the three terms above. \underline{Bounding $E_3$.} Since there are at most $(\frac{L\epsilon}{d})^d$ long boxes and each of these boxes $B_J$ has probability $p_J$ upper bounded by $\frac{\epsilon}{(\frac{L\epsilon}{d})^d}$, the total probability of these boxes combined is at most $\epsilon$. Also, in the worst case, the loss can be $1$. Hence, we get an upper bound of $\epsilon$ on $E_3$. \underline{Bounding $E_2$.} From Lemma~\ref{lemma:number_unlabeled_samples-highd}, we know that with failure probability at most $\delta$, these boxes have total probability upper bounded by ${\epsilon}/{\delta}$. Again, the loss can be $1$ in the worst case. Hence, we get an upper bound of ${\epsilon}/{\delta}$ on $E_2$. \underline{Bounding $E_1$.} Let $F_J$ denote the event that $\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap B_J}, f_{\mathcal{D}_J}^*) > \epsilon$.
The expected error over boxes $B_J$ in $\mathcal{P}_{\sf{lg}, 1}$ is then {\small \begin{align*} \E[\sum_{J:B_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}_J}(\tilde{f}, f_{\mathcal{D}}^*)] &\stackrel{\ensuremath{{\sf (i)}}}{\leq} \E[\sum_{J:B_J \in \mathcal{P}_{\sf{lg}, 1}}p_J\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap B_J}, f_{\mathcal{D}_J}^*)]\\ &= \sum_{J:B_J \in \mathcal{P}_{\sf{lg}, 1}}p_J(\E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap B_J}, f_{\mathcal{D}_J}^*)|F_J]\Pr(F_J) + \E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap B_J}, f_{\mathcal{D}_J}^*)|\neg F_J]\Pr(\neg F_J))\\ &\stackrel{\ensuremath{{\sf (ii)}}}{\leq} \sum_{J:B_J \in \mathcal{P}_{\sf{lg}, 1}}p_J(1\cdot \epsilon + \epsilon \cdot 1) \leq 2\epsilon, \end{align*}} where step $\ensuremath{{\sf (i)}}$ follows by noting that $\tilde{f} = \hat{f}_{S\cap B_J}$ on every long box $B_J \in \mathcal{P}_{\sf lg}$ and that $f^*_{\mathcal{D}_J}$ is the minimizer of the error $\text{err}_{\mathcal{D}_J}(f)$ over all $L$-Lipschitz functions, and step $\ensuremath{{\sf (ii)}}$ follows since $\E[\Delta_{\mathcal{D}_J}(\hat{f}_{S\cap B_J}, f_{\mathcal{D}_J}^*)|\neg F_J] \leq \epsilon$ by the definition of the event $F_J$ and $\Pr(F_J) \leq \epsilon$ by a standard uniform convergence argument (detailed in Lemma~\ref{lem: good type 1 intervals-highd}). Now, using Markov's inequality, we get that $E_1 \leq {2\epsilon}/{\delta}$ with failure probability at most $\delta$. Plugging the error bounds obtained in equations~\eqref{eqn:type2-error-highd} and~\eqref{eqn:combined-highd} into equation~\eqref{eqn:total-error-highd} and setting $\delta = \frac{1}{20}$ establishes the required claim.
\paragraph{Label Query Complexity.} Observe that for any given query point $x^*$, $\tilde{f}(x^*)$ can be computed by querying only the labels of the samples in the box in which $x^*$ lies (if $x^*$ lies in a long box) or in the extension of that box (if $x^*$ lies in a short box), and hence requires only $\frac{1}{\epsilon^2}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ active label queries over the set $S$ of $\frac{1}{\epsilon^3}(O(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$ unlabeled samples. \end{proof} Now, we state the sample complexity bounds for estimating the error of the optimal function in the class of Lipschitz functions up to an additive error of $\epsilon$. \begin{restatable}{theorem}{theoremlipschitzerror-highd} \label{thm:lipschitzerror-highd} For any distribution $\mathcal{D}$ over $[0,1]^d\times[0,1]$, Lipschitz constant $L>0$ and parameter $\epsilon \in [0,1]$, Algorithm~\ref{alg3} uses $\frac{1}{\epsilon^4}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ active label queries on $\frac{1}{\epsilon^3}(O(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$ unlabeled samples from distribution $\mathcal{D}_x$ and produces an output $\widehat{\text{err}_\D}(\mathcal{F}_L)$ satisfying \begin{equation*} |\widehat{\text{err}_\D}(\mathcal{F}_L) - \text{err}_\D( \mathcal{F}_L)| \leq \epsilon \end{equation*} with probability at least $\frac{1}{2}$. \end{restatable} The proof follows exactly the proof of the corresponding theorem in the one-dimensional case (Theorem~\ref{thm:lipschitzerror}). The following lemma proves that, given enough unlabeled samples, a large fraction of the long boxes contain enough unlabeled samples, which is then used to argue that these samples suffice to learn a function close to the optimal Lipschitz function over each such box.
\begin{lemma} \label{lemma:number_unlabeled_samples-highd} For any distribution $\mathcal{D}_x$, consider a set $S = \{x_1, x_2, \cdots, x_M\}$ of unlabeled samples where each sample $x_i\stackrel{\text{i.i.d.}}{\sim}\mathcal{D}_x$. Let $\mathcal{G}$ be the set of long boxes $\{B_J\}$ each of which satisfies $p_J = \Pr_{x \sim \mathcal{D}_x}(x \in B_J) \geq \frac{\epsilon}{(\frac{L\epsilon}{d})^d}$. Let $E_J$ denote the event that $\sum_{x_j \in S}\mathbb{I}[x_j \in B_J] < \frac{1}{\epsilon^2}(\frac{d}{\epsilon^2})^d\log(\frac{1}{\epsilon})$. Then, we have \begin{equation*} \sum_{B_J \in \mathcal{G}}p_J\mathbb{I}[E_J] \leq \frac{\epsilon}{\delta} \end{equation*} with failure probability at most $\delta$ for $M = \frac{1}{\epsilon^3}(\Omega(\frac{L}{\epsilon}))^d\log(\frac{1}{\epsilon})$. \end{lemma} \begin{proof} For any box $B_J \in \mathcal{G}$, we have that $\E[\sum_{x_j \in S}\mathbb{I}[x_j \in B_J]] \geq \frac{c_d}{\epsilon^2}(\frac{d}{\epsilon^2})^d\log(\frac{1}{\epsilon})$ for some constant $c_d$ depending on the dimension $d$. Using Hoeffding's inequality, we get that $\Pr(E_J) \leq \epsilon$ for all boxes $B_J \in \mathcal{G}$. Computing the expectation of the desired quantity, we get \begin{equation*} \E[\sum_{B_J \in \mathcal{G}}p_J\mathbb{I}[E_J]] = \sum_{B_J \in \mathcal{G}}p_J\Pr[E_J] \leq \epsilon\sum_{B_J \in \mathcal{G}}p_J \leq \epsilon. \end{equation*} We get the desired result using Markov's inequality. \end{proof} The following lemma states that the probability ${p}_{\sf sh}$ of the short boxes is small with high probability. Consider the case of the uniform distribution: since the short boxes cover only a $2\epsilon$ fraction of the domain $[0,1]^d$, their probability ${p}_{\sf sh}$ is upper bounded by $2\epsilon$. The bound for arbitrary distributions holds because the offsets of the boxes are chosen randomly. \begin{lemma} \label{lemma:prob_intervals-highd} When we divide the $[0,1]^d$ domain into long and short boxes as in the preprocessing step (Algorithm~\ref{alg4}), then \begin{equation*} {p}_{\sf sh} = \sum_{B_J \in \mathcal{P}_{\sf sh}}p_J \leq \frac{2\epsilon}{\delta} \end{equation*} with failure probability at most $\delta$. \end{lemma} \begin{proof} Consider the division of each dimension, i.e., of $[0,1]$, independently into alternating intervals of length $\frac{d}{L\epsilon}$ and $\frac{2}{L}$, with the offset chosen uniformly at random from $\{0, 1, 2, \cdots, \frac{d}{2\epsilon}\}\frac{2}{L}$. For any fixed choice of offsets in the other $d-1$ dimensions, the slabs of width $\frac{2}{L}$ in the $d$-th dimension are disjoint across the $\frac{d}{2\epsilon}+1$ possible offsets of that dimension and together cover the entire $[0,1]^d$; hence, summed over those offsets, they account for a total probability mass of at most $1$. Therefore, the total probability mass of the short boxes summed over all $(\frac{d}{2\epsilon}+1)^d$ possible divisions is at most $d(\frac{d}{2\epsilon}+1)^{d-1}$, so the average over divisions is at most $\frac{d}{\frac{d}{2\epsilon}+1} \leq 2\epsilon$. By Markov's inequality, at most a $\delta$ fraction of the $(\frac{d}{2\epsilon}+1)^d$ possible divisions can have short boxes of total probability greater than $\frac{2\epsilon}{\delta}$. Hence, with probability $1-\delta$, the short boxes have probability upper bounded by $\frac{2\epsilon}{\delta}$. \end{proof} \begin{lemma} \label{lem: good type 1 intervals-highd} Let $B_J \in \mathcal{P}_{\sf{lg}, 1}$ be any long box of subtype 1. For the event \mbox{$F_J = \{\Delta_{\mathcal{D}_J}(\hat{f}_{S \cap B_J}, f_{\mathcal{D}_J}^*) > \epsilon \}$}, we have \begin{equation*} \Pr(F_J) \leq \epsilon \end{equation*} \end{lemma} \begin{proof} We know that the covering number for the class of $d$-dimensional $L$-Lipschitz functions supported on the box $[0,l]^d$ is $(O(\frac{Ll}{\epsilon}))^d$ (Lemma~\ref{lemma:lipschitz_covering}). For Lipschitz functions supported on long boxes of side length $l=\frac{d}{L\epsilon}$, we get this complexity as $(O(\frac{d}{\epsilon^2}))^d$.
We know by standard results in uniform convergence that the number of samples required for uniform convergence up to an error of $\epsilon$ with failure probability $\delta$ for all functions in a class $\mathcal{F}_L$ is $O\left(\frac{\text{covering number of } \mathcal{F}_L}{\epsilon^2}\log\left(\frac{1}{\delta}\right)\right)$ (Lemma~\ref{lemma:lipschitz_sample_complexity}), and hence, for long boxes, we get this estimate as $\frac{1}{\epsilon^2}(O(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\delta})$. Given that there are $\frac{1}{\epsilon^2}(\Omega(\frac{d}{\epsilon^2}))^d\log(\frac{1}{\epsilon})$ samples in these boxes by design, the function learned from these samples is at most $\epsilon$ additively worse than the function with minimum error, by standard learning theory arguments, with failure probability at most $\epsilon$. Thus, we get that $$\Delta_{\mathcal{D}_J}(\hat{f}_{S \cap B_J}, f_{\mathcal{D}_J}^*) \leq \epsilon \quad \forall B_J \in \mathcal{P}_{\sf{lg}, 1}$$ with failure probability at most $\epsilon$. \end{proof} \section{Introduction} Consider a setting where we have a large amount of unlabeled data but the corresponding labels are expensive to obtain. Our aim is to understand what information we can reliably obtain about the predictor that we would have learned had we been given unlimited labeled data, by actively querying only a few labels (``few'' $=$ independent of the complexity of the hypothesis class). In particular, we look at the following two related questions. First, given an unlabeled set of training data sampled from a distribution, is it possible to estimate how well the best prediction function in a hypothesis class would do when the number of labels that we can actually obtain is insufficient to learn the best prediction function accurately?
Second, given a few query points whose labels we are interested in, is it possible to output the predictions of a nearly optimal function in the class at those points without running the full training algorithm? Here, a nearly optimal function is one that has low total error on data with respect to the underlying distribution. There are many natural scenarios in which this could be useful. For example, consider a setting where we are interested in predicting the outcome of a treatment for a particular patient with a certain disease. Medical records of patients who underwent similar treatments in the past are available in a hospital database, but since the treatment outcome is sensitive and private, we want to minimize the number of patients for whom we acquire this information. Note that in this case, we cannot directly acquire the label of the query patient because the treatment procedure is an intervention which cannot be undone. Formally, we are interested in computing the labels of a few particular queries ``locally'' and ``in parallel'' with few label queries (independent of the complexity measure of the hypothesis class). The two problems above are related: if we have an algorithm for predicting the labels of a few test points with few label queries, we can sample a few data points and use their labels to estimate the error of the best function in the class. While the question of error estimation has been previously studied in certain settings \citep{kong2018estimating, blum2018active}, to the best of our knowledge there has been no prior work on local and parallel predictions for a function class with global constraints in learning-based settings. In this work, we answer these questions in the affirmative.
For the class of Lipschitz functions with Lipschitz constant at most $L$, we show that it is possible to estimate the error of the best function in the class with respect to an underlying distribution using a number of label queries independent of $L$. We also show that it is possible to locally estimate the values of a nearly optimal function at a few test points of interest, again with a number of label queries independent of $L$. A key point to notice here is that a function can predict arbitrary values at a fixed constant number of query points and still be approximately optimal in terms of total error, since the query points have measure zero with respect to the underlying distribution. Therefore, the additional guarantee we provide is that, after a common preprocessing step, the combined function, obtained when the algorithm is run in parallel for all possible query points independently, is $L$-Lipschitz and approximately optimal in terms of total error with respect to the underlying distribution. We also give an algorithm for estimating the minimum error of the Nadaraya-Watson prediction algorithm over a set of linear transformations which is efficient in terms of both sample and time complexity. Note that computing the prediction for even a single query point requires computing a weighted sum of the labels of the training points and hence requires $Nd$ time and $N$ labels, where $N$ is the number of points in the training set and $d$ is the dimension of the data points. Moreover, naively computing the error over a set of linear transformations of size exponential in $d$ would require a number of labels and a running time exponential in $d$, with multiplicative factors of $N$ and of ${1}/{\epsilon^d}$, where $\epsilon$ is the accuracy parameter. In contrast, we obtain a sample complexity which depends only polynomially on $d$ and logarithmically on $N$, and a time complexity in which $N$ and $1/\epsilon^d$ do not appear multiplicatively together.
We would like to clarify how the setting considered in this paper differs from the classical notion of local learning algorithms. The notion of local learning algorithms \citep{bottou1992local} refers to learning schemes where the prediction at a point depends locally on the training points around it. The key distinction is that rather than proposing a local learning strategy and seeing how well it performs, we are looking at the question of whether we can find local algorithms for simulating empirical risk minimization for a hypothesis class with global constraints. The remainder of the paper is organized as follows. We begin in Section~\ref{section:results} with a high level overview of our results and discuss the related work in Section~\ref{section:related-work}. Section~\ref{section:lipschitz} is devoted to our main results for the class of one dimensional Lipschitz functions. In Section~\ref{section:kde}, we present our results for error estimation for the Nadaraya-Watson estimator. Finally, we end with a conclusion and some possible future directions in Section~\ref{section:conclusion}. \section{Nadaraya-Watson Estimator} \label{section:kde} In this section, we consider the related problem of approximating the minimum error that can be achieved by the Nadaraya-Watson estimator under a diagonal linear transformation whose eigenvalues lie in a small range. In this setting, there exists a distribution $\mathcal{D}$ over the domain $\mathbb{R}^d$. Each data point $x\in\mathbb{R}^d$ has a true label $f(x) \in \{0,1\}$.
The Nadaraya-Watson prediction algorithm, when given a dataset $S=\{x_1,x_2,\cdots,x_N\}$ of unlabeled samples drawn from distribution $\mathcal{D}$ and a query point $x$, outputs the prediction \begin{small} $$\tilde{f}_{S,K_A}(x) = \frac{\sum_{i=1}^NK_A(x_i,x)f(x_i)}{\sum_{i=1}^NK_A(x_i,x)} = \sum_{i=1}^Np_{S,A}(x_i,x)f(x_i),$$ \end{small} where $K_A(x,y) = \nicefrac{1}{(1+||A(x-y)||_2^2)}$ is the kernel function for a matrix $A \in \mathbb{R}^{d\times d}$. The loss of the data point $(x,f(x))$ with respect to the unlabeled samples $S$ and the kernel $K_A$ is $$l_{S,K_A}(x) = |f(x)-\tilde{f}_{S,K_A}(x)|.$$ The total loss of the prediction function $\tilde{f}_{S,K_A}$ with respect to distribution $\mathcal{D}$ is {\small $$L_{S,K_A} = \E_{x\sim \mathcal{D}}|f(x)-\tilde{f}_{S,K_A}(x)|.$$ } Suppose now that we are interested in computing the prediction loss under the best diagonal linear transformation $A$ with eigenvalues bounded between the constants $1$ and $2$, that is, a matrix $A$ from the set $\mathcal{A} = \{A \in \mathbb{R}^{d\times d} \;|\; A_{i,j} = 0 \ \forall i\neq j \text{ and } 1 \leq A_{i,i}\leq 2\}$. The quantity of interest is then {\small $$L_{S} = \min_{A \in \mathcal{A}}L_{S,K_A} = \min_{A \in \mathcal{A}} \E_{x\sim \mathcal{D}}\sum_ip_{S,A}(x_i,x)|f(x_i)-f(x)|,$$ } where the second equality uses the fact that the labels are binary. In this section, we show (Theorem~\ref{thm:kde-min-error}) that it is possible to estimate the loss $L_S$ up to an additive error of $\epsilon$ with labeled sample complexity $\tilde{O}({d}/{\epsilon^2})$ and running time $\tilde{O}({d^2}/{\epsilon^{d+4}}+{dN}/{\epsilon^2})$. We use $[n]$ to denote $\{1,2,\cdots,n\}$ for a positive integer $n$ and the $\tilde{O}$ notation to hide polylogarithmic factors in the input parameters and error rates. First, we discuss a theorem from \cite{backurs2018efficient} which will be crucial in the proof of Theorem~\ref{thm:kde-min-error}.
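Before turning to that, the definitions above can be made concrete with a small sketch of the direct $O(Nd)$-per-query implementation of the predictor $\tilde{f}_{S,K_A}$. The dataset, the label rule, and the particular matrix below are hypothetical stand-ins chosen purely for illustration.

```python
import numpy as np

def k_A(x, y, A):
    # Kernel K_A(x, y) = 1 / (1 + ||A(x - y)||_2^2).
    d = A @ (x - y)
    return 1.0 / (1.0 + d @ d)

def nw_predict(query, X, labels, A):
    # Nadaraya-Watson: average of labels weighted by p_{S,A}(x_i, query),
    # i.e. the normalized kernel weights of the pool points.
    w = np.array([k_A(x, query, A) for x in X])
    return w @ labels / w.sum()

# Illustrative data: binary labels determined by the first coordinate.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
labels = (X[:, 0] > 0.5).astype(float)
A = np.diag([2.0, 1.0])        # diagonal, eigenvalues within [1, 2]
pred = nw_predict(np.array([0.9, 0.5]), X, labels, A)
```

Since the prediction is a convex combination of the binary labels, it always lies in $[0,1]$; for a query deep inside the label-$1$ region it is pulled above $1/2$.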
Theorem~\ref{thm:backurs2018efficient}, formally stated in the appendix, states that for certain nice kernels $K(x,y)$, it is possible to efficiently estimate $N^{-1}\sum_{i=1}^N K(x,x_i)$ with a multiplicative error of $\epsilon$ for any query $x$. As a direct corollary of Theorem~\ref{thm:backurs2018efficient}, we obtain that it is possible to efficiently estimate the probabilities $p_{S,A}(q,x_i)$ for all data points $x_i$ in $S$, queries $q \in \mathbb{R}^d$ and matrices $A$ in $\mathcal{A}$. Let us define $\hat{S}_{S,A}(q)$ to be the estimator for $\sum_{x_i\in S}K_A(q,x_i)$ as per Theorem~\ref{thm:backurs2018efficient}. \begin{corollary} \label{thm:backurs2018efficient-cor} There exists a data structure that, given a data set $S \subset \mathbb{R}^d$ with $|S| = N$, using $O(\frac{Nd}{\epsilon^2}\log(\frac{ N}{\delta}))$ space and preprocessing time, for any $A \in \mathcal{A}$, query $q \in \mathbb{R}^d$ and data point $x \in \mathbb{R}^d$, estimates $p_{S,A}(q, x) = \nicefrac{K_A(q,x)}{\sum_{y\in S}K_A(q,y)}$ by using the estimator $\hat{p}_{S,A}(q, x) = \nicefrac{K_A(q,x)}{\hat{S}_{S,A}(q)}$ with accuracy $(1\pm\epsilon)$ in time $O(\log(\frac{ N}{\delta})\frac{d}{\epsilon^2})$ with probability at least $1-\nicefrac{1}{poly(N)}-\delta$. \end{corollary} We state the algorithm NW Error (Algorithm~\ref{alg-kde}) for computing the minimum prediction error of the Nadaraya-Watson estimator with respect to the underlying distribution $\mathcal{D}$, given the set of unlabeled training data $S$, the set of matrices $\mathcal{A}$, the error parameter $\epsilon$ and the failure probability $\delta$, and discuss the idea behind the algorithm in the next few paragraphs.
\paragraph{Naive Algorithm.} The naive algorithm would take $O({1}/{\epsilon^2})$ labeled data points sampled from distribution $\mathcal{D}$ for each of the matrices $A \in \mathcal{A}_{\epsilon}$, an $\epsilon$-cover of the set $\mathcal{A}$ (Lemma~\ref{lemma:kde-approximate}) of size $O({1}/{\epsilon^d})$, and compute the exact loss using the $N$ data points in the training set. Hence, the number of labels required by this algorithm is $N+O({1}/{\epsilon^{d+2}})$. \paragraph{Dependence on $N$ of the label query complexity.} Using our algorithm, we achieve a labeled sample complexity of $\tilde{O}({d}/{\epsilon^2})$, independent of $N$ and depending only polynomially on $d$. To remove the dependence on $N$, the idea is to first draw $O({1}/{\epsilon^2})$ samples from distribution $\mathcal{D}$ and then, for each sample, to sample a training data point from $S$ with probability proportional to $p_{S,A}$ for the matrix $A$. However, we still have to repeat this procedure separately for every matrix $A \in \mathcal{A}_{\epsilon}$, which leads to a requirement of $O({1}/{\epsilon^{d+2}})$ labels. \paragraph{Dependence on $d$ of the label query complexity.} To eliminate the exponential dependence on $d$, we show that for matrices in $\mathcal{A}$ we can use importance sampling: the samples generated for the identity matrix $I$ suffice to estimate the loss for all matrices $A\in\mathcal{A}$ with the appropriate scaling factors $p_{S,A}/p_{S,I}$. This is because the eigenvalues of all the matrices are bounded between constants and hence the sampling probabilities $p_{S,A}$ are similar up to a multiplicative factor (Lemma~\ref{lemma:kde-ineq}). This leads to our desired labeled sample complexity of $\tilde{O}({d}/{\epsilon^2})$. The factor of $d$ arises from a union bound over the exponentially many matrices in $\mathcal{A}_{\epsilon}$.
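The importance-sampling reweighting just described can be sketched as follows. The pool, the label rule, and the specific diagonal matrix are illustrative; the sketch draws the pool points once under the identity-matrix probabilities and reuses them for a different matrix via the ratio $p_{S,A}/p_{S,I}$, comparing the result against a direct evaluation of the same loss on the same query sample.

```python
import numpy as np

def kernels(Q, X, diag):
    # K_A(q, x) = 1 / (1 + ||A(q - x)||^2) for diagonal A, all pairs at once.
    D = (Q[:, None, :] - X[None, :, :]) * diag
    return 1.0 / (1.0 + (D ** 2).sum(-1))

rng = np.random.default_rng(1)
S = rng.random((100, 2))                    # unlabeled pool (illustrative)
f_S = (S[:, 0] > 0.5).astype(float)         # pool labels, revealed on query
Z = rng.random((2000, 2))                   # query samples z_i ~ D
f_Z = (Z[:, 0] > 0.5).astype(float)         # their labels

# Sample one pool point per z_i under the identity-matrix probabilities.
K_I = kernels(Z, S, np.ones(2))
P_I = K_I / K_I.sum(axis=1, keepdims=True)  # rows are p_{S,I}(z_i, .)
idx = np.array([rng.choice(len(S), p=row) for row in P_I])

def loss_importance(diag):
    # Reuse the identity-matrix samples for any A via weights p_{S,A}/p_{S,I}.
    K_A = kernels(Z, S, diag)
    P_A = K_A / K_A.sum(axis=1, keepdims=True)
    rows = np.arange(len(Z))
    ratio = P_A[rows, idx] / P_I[rows, idx]
    return np.mean(np.abs(f_S[idx] - f_Z) * ratio)

def loss_direct(diag):
    # Exact loss E_z sum_i p_{S,A}(x_i, z) |f(x_i) - f(z)| on the same z's.
    K_A = kernels(Z, S, diag)
    P_A = K_A / K_A.sum(axis=1, keepdims=True)
    return np.mean((P_A * np.abs(f_S[None, :] - f_Z[:, None])).sum(axis=1))

A_diag = np.array([2.0, 1.0])               # eigenvalues within [1, 2]
est, direct = loss_importance(A_diag), loss_direct(A_diag)
```

Because the eigenvalues lie in $[1,2]$, the importance weights stay within a constant factor, so the reweighted estimate concentrates around the direct evaluation without drawing fresh labels for the new matrix.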
\paragraph{Running time.} However, using this approach directly, we obtain a running time of $\tilde{O}({Nd}/{\epsilon^{d+2}})$ because for each matrix $A$ and each sample, we have to compute ${p_{S,A}}/{p_{S,I}}$, which requires going over all the data points in the set $S$. To achieve better running times, we use the faster kernel density estimation algorithm \citep{backurs2018efficient} to compute approximate probabilities efficiently (Corollary~\ref{thm:backurs2018efficient-cor}) and obtain a running time of $\tilde{O}({d^2}/{\epsilon^{d+4}}+{Nd}/{\epsilon^2})$, decoupling the multiplicative dependence on $N$ and ${1}/{\epsilon^d}$. We state the formal guarantees in Theorem~\ref{thm:kde-min-error} and its proof in Appendix~\ref{appendix:kde}. \begin{algorithm}[t] \begin{algorithmic}[1] \caption{NW Error$(S, \mathcal{A}, \mathcal{D}, \epsilon, \delta)$\label{alg-kde}} \STATE Let $\mathcal{A}_{\epsilon}=\{A \in \mathbb{R}^{d\times d}\ |\ A_{i,j} = 0 \ \forall\ i\neq j \text{ and } A_{i,i}\in\{1,1+\epsilon,(1+\epsilon)^2,\cdots,2\}$ $\forall i \in [d]\}$. \STATE Sample $M = O\left(\frac{1}{\epsilon^2}\log\left(\frac{|\mathcal{A}_{\epsilon}|}{\delta}\right)\right)$ labeled examples $\{(z_i, f(z_i))\}_{i=1}^M$ with each $(z_i, f(z_i)) \sim \mathcal{D}$. \\ \FOR{$i=1$ to $M$} \STATE Sample a point $\tilde{z}_i \in S$ with probability proportional to $p_{S,I}(z_i, \tilde{z}_i)$. \ENDFOR \FOR{$A \in \mathcal{A}_{\epsilon}$} \FOR{$i=1$ to $M$} \STATE Compute $\hat{p}_{S,A}(z_i,\tilde{z}_i) = \frac{K_A(z_i,\tilde{z}_i)}{\hat{S}_{S,A}(z_i)}$. \ENDFOR \STATE Compute $\hat{L}_{S,K_A} = \frac{1}{M}\sum_{i=1}^M|f(z_i)-f(\tilde{z}_i)|\frac{\hat{p}_{S,A}(z_i,\tilde{z}_i)}{p_{S,I}(z_i,\tilde{z}_i)}$. \ENDFOR \STATE \textbf{Output} $\hat{L}_S = \min_{A \in \mathcal{A}_{\epsilon}}\hat{L}_{S,K_A}$.
\end{algorithmic} \end{algorithm} \begin{restatable}{theorem}{theoremkdeminerror} \label{thm:kde-min-error} \sloppy For a $d$-dimensional unlabeled pointset $S$ with $|S| = N$, Algorithm~\ref{alg-kde} queries $O(\frac{1}{\epsilon^2}(d\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})))$ labels from $S$ and outputs $\hat{L}_S$ such that \begin{small} \begin{equation*} |\hat{L}_S - \min_{A \in \mathcal{A}} \E_{x\sim \mathcal{D}}\sum_ip_{S,A}(x_i,x)|f(x_i)-f(x)| | \leq \epsilon \end{equation*} \end{small} with a failure probability of at most $\delta + \frac{d}{\epsilon^{d+2} poly(N)}\log(\frac{1}{\epsilon\delta})$ and runs in time $\tilde{O}(\frac{d^2}{\epsilon^{d+4}} + \frac{dN}{\epsilon^2})$. \end{restatable} \section{Lipschitz Functions} \label{section:lipschitz} In this section, we describe the formal problem setup for both local learning and error estimation for the class of Lipschitz functions. We then proceed to describe our algorithms and derive label query complexity bounds for them. We restrict our attention to the one-dimensional problem here and defer the details of the higher-dimensional setup to Appendix~\ref{subsection:lipschitz-highd}.
\input{oned-lipshitz} \subsection{Problem Setup} \label{subsection:lipschitz-problemsetup} For any fixed $L>0$, let $\mathcal{F}_{L}$ be the class of $d$-dimensional functions supported on the domain $[0,1]^d$ with Lipschitz constant at most $L$, that is, \begin{equation} \mathcal{F}_L = \{f:[0,1]^d\mapsto [0,1], \ f \text{ is } L\text{-Lipschitz }\}. \end{equation} Let $\mathcal{D}$ be any distribution over $[0,1]^d\times[0,1]$ and $\mathcal{D}_x$ be the corresponding marginal distribution over $[0,1]^d$. The prediction error of a function $f$ with respect to $\mathcal{D}$ and the optimal prediction error of the class $\mathcal{F}_L$ are \begin{equation*} \text{err}_\D(f) :\,= \E_{x,y \sim \mathcal{D}}|y-f(x)| \quad \text{and} \quad \text{err}_\D(\mathcal{F}_L) :\,= \min_{f \in \mathcal{F}_L}\E_{x,y \sim \mathcal{D}}|y-f(x)|. \end{equation*} Also, let us denote the error of a function $f$ relative to another function $f'$ and to the function class $\mathcal{F}_L$ as \begin{equation} \Delta_{\D}(f, f'):\,= \text{err}_\D(f) -\text{err}_\D(f') \quad \text{and} \quad \Delta_{\D}(f, \mathcal{F}_L) :\,= \text{err}_\D(f) - \text{err}_\D(\mathcal{F}_L). \end{equation} We say that a function $f\in \mathcal{F}_L$ is $\epsilon$-optimal with respect to distribution $\mathcal{D}$ and the function class $\mathcal{F}_L$ if it satisfies $\Delta_{\D}(f, \mathcal{F}_L) \leq \epsilon$.
Let $f^*_{\mathcal{D}} \in \mathcal{F}_L$ and $\hat{f}_S \in \mathcal{F}_L$ be the minimizers of the error with respect to the distribution $\mathcal{D}$ and of the empirical error on the set $S$, respectively, defined as \begin{align} f^*_{\mathcal{D}} = \argmin_{f \in \mathcal{F}_L } \E_{(x, y) \sim \mathcal{D}}|y-f(x)| \quad \text{ and } \quad \hat{f}_S = \argmin_{f \in \mathcal{F}_L } \sum_{(x_j, y_j) \in S}|y_j-f(x_j)|.\label{lipschitz:minemperror} \end{align} \paragraph{Local Learning.} Given access to unlabeled samples from $\mathcal{D}_x$ and a test point $x^*$, the objective of \emph{Local Learning} is to output the prediction $\tilde{f}(x^*)$ using a small number of label queries such that $\tilde{f}\in \mathcal{F}_L$ is \mbox{$\epsilon$-optimal}. In addition, we would like such a predictor, $\text{Alg}$, to be able to answer multiple such queries while being consistent with the same function $\tilde{f}$, that is, \begin{equation*} \text{Alg}(x^*) = \tilde{f}(x^*) \quad \text{for all } x^* \in [0,1]^d. \end{equation*} \paragraph{Error Estimation.} Given access to unlabeled samples from distribution $\mathcal{D}_x$, the goal of \emph{Error Estimation} is to output an estimate of the optimal prediction error $\text{err}_\D( \mathcal{F}_L)$ up to an additive error of $\epsilon$ using few label queries. \subsection{Guarantees for Local Learning} \label{subsection:lipschitz-1d-localearning} We begin by describing our proposed algorithm for Local Learning and then provide a bound on its query complexity in Theorem~\ref{thm:lipschitzlocalquery}. Our algorithm for local predictions first involves a preprocessing step (Algorithm~\ref{alg1}) which takes as input the Lipschitz constant $L$, sampling access to distribution $\mathcal{D}_x$ and the error parameter $\epsilon$, and returns a partition $\mathcal{P}= \{I_1 = [b_0, b_1], I_2=[b_1, b_2], \ldots \}$ and a set $S$ of unlabeled samples.
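As a point of reference for the per-interval ERMs used below, the following sketch approximates the one-dimensional Lipschitz ERM of equation~\eqref{lipschitz:minemperror} by dynamic programming over a discretized grid of function values. The grid resolution and the data are illustrative; an exact ERM would instead be computed, e.g., by a linear program.

```python
def lipschitz_erm_loss(xs, ys, L, grid_size=41):
    # Approximate min over L-Lipschitz f of sum_j |y_j - f(x_j)| by
    # restricting each f(x_j) to a grid of values in [0, 1] and enforcing
    # |f(x_{j+1}) - f(x_j)| <= L (x_{j+1} - x_j) between consecutive points.
    pts = sorted(zip(xs, ys))
    grid = [k / (grid_size - 1) for k in range(grid_size)]
    step = 1.0 / (grid_size - 1)
    # cost[v] = best loss over the prefix, with f taking value grid[v] here
    cost = [abs(pts[0][1] - v) for v in grid]
    for (x0, _), (x1, y1) in zip(pts, pts[1:]):
        slack = L * (x1 - x0) + step + 1e-12   # discretization slack
        cost = [min(c for c, u in zip(cost, grid) if abs(u - v) <= slack)
                + abs(y1 - v) for v in grid]
    return min(cost)

# A 0.5-Lipschitz dataset is fit exactly; a steeper one pays for the constraint.
exact = lipschitz_erm_loss([0.0, 0.5, 1.0], [0.0, 0.25, 0.5], L=0.5)
constrained = lipschitz_erm_loss([0.0, 1.0], [0.0, 1.0], L=0.5)
```

In the second call the data have slope $1$ but $L = 0.5$, so any feasible $f$ must give up roughly $1 - L = 0.5$ in absolute loss (up to the grid resolution), which the DP recovers.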
The partition $\mathcal{P}$ consists of alternating intervals of length ${1}/{L}$ and ${1}/(L\epsilon)$ over the domain $[0,1]$. Let us divide these intervals\footnote{Note that long intervals at the boundary could be shorter, but those can be handled similarly. Moreover, if any long interval gets more than $\frac{1}{2\epsilon^4}\log(\frac{1}{\epsilon})$ samples, we discard the future samples which fall into that interval.} further into the two sets \begin{small} \begin{equation*} \begin{gathered} \mathcal{P}_{\sf lg}:\,=\{[b_0,b_1],[b_2,b_3],\ldots,\}\ \text{(long intervals)}\ \text{ and } \ \mathcal{P}_{\sf sh}:\,=\{[b_1,b_2],[b_3,b_4],\ldots,\}\ \text{(short intervals)}.\\ \end{gathered} \end{equation*} \end{small} \begin{algorithm}[t] \begin{algorithmic}[1] \caption{Preprocess($L, \mathcal{D}_x, \epsilon$)\label{alg1}} \STATE Sample a uniformly random offset $b_1$ from $\{1,2,\cdots,\frac{1}{\epsilon}\}\frac{1}{L}$. \STATE Divide the $[0,1]$ interval into alternating intervals of length $\frac{1}{L\epsilon}$ and $\frac{1}{L}$ with boundary at $b_1$ and let $\mathcal{P}$ be the resulting partition, that is, $\mathcal{P} = \{[b_0=0, b_1], [b_1, b_2], \ldots, \}$ where $b_2=b_1+\frac{1}{L}, b_3=b_2+\frac{1}{L\epsilon}, \ldots$. \STATE Sample a set $S = \{x_i\}_{i=1}^M$ of $M = O(\frac{L}{\epsilon^4}\log(\frac{1}{\epsilon}))$ unlabeled examples from distribution $\mathcal{D}_x$.\\ \STATE \textbf{Output} $S, \mathcal{P}$. \end{algorithmic} \end{algorithm} The Query algorithm (Algorithm~\ref{alg2}) for test point $x^*$ takes as input the set $S$ of unlabeled samples and the partition $\mathcal{P}$ returned by the Preprocess algorithm. Note that all subsequent queries use the same partition $\mathcal{P}$ and the same set of unlabeled examples $S$. The algorithm uses different learning strategies depending on whether $x^*$ belongs to one of the long intervals in $\mathcal{P}_{\sf lg}$ or short intervals in $\mathcal{P}_{\sf sh}$. 
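Before detailing the two strategies, here is a minimal sketch of the alternating partition built by the preprocessing step, mirroring Algorithm~\ref{alg1}; the constants are illustrative. It also checks, for the uniform distribution, the averaging-over-offsets argument used later to bound the mass of the short intervals.

```python
import random

def make_partition(L, eps, k):
    # Alternating partition of [0, 1]: first boundary b_1 = k / L, after
    # which intervals of length 1/L (short) and 1/(L*eps) (long) alternate.
    bounds = [0.0, k / L]
    long_len, short_len = 1.0 / (L * eps), 1.0 / L
    nxt = short_len                      # the interval after b_1 is short
    while bounds[-1] < 1.0:
        bounds.append(min(1.0, bounds[-1] + nxt))
        nxt = long_len if nxt == short_len else short_len
    return bounds

L, eps = 100.0, 0.1
k = random.randint(1, int(1 / eps))      # random offset, as in Algorithm 1
bounds = make_partition(L, eps, k)

# Under the uniform distribution, the short-interval mass is their total
# length; averaged over all 1/eps offsets it stays below 2 * eps.
short_masses = []
for j in range(1, int(1 / eps) + 1):
    b = make_partition(L, eps, j)
    short_masses.append(sum(b[i + 1] - b[i] for i in range(1, len(b) - 1, 2)))
avg_short = sum(short_masses) / len(short_masses)
```

The odd-indexed intervals are the short ones, so the sum above is exactly the short-interval mass of each candidate partition.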
For the long intervals, it outputs the prediction of the empirical risk minimizer (ERM) function restricted to that interval, whereas for the short intervals, the prediction is made by linearly interpolating between the function values at the boundaries shared with the neighbouring long intervals. This linear interpolation ensures that the overall function is Lipschitz. We bound the expected prediction error of this scheme with respect to the class $\mathcal{F}_L$ by separately bounding this error for the long and short intervals. For the long intervals, we prove that the ERM has low error by ensuring that each interval contains enough unlabeled samples. On the other hand, we show that the short intervals do not contribute much to the error because of their low probability under the distribution $\mathcal{D}$. \begin{algorithm}[H] \begin{algorithmic}[1] \caption{Query($x, S, \mathcal{P}=\{[b_0,b_1],[b_1,b_2], [b_2,b_3], \ldots\}$)\label{alg2}} \IF{query $x \in I_i = [b_{i-1}, b_i]\text{ where } I_i \in \mathcal{P}_{\sf lg}$} \STATE Query labels\footnotemark for $x \in S \cap I_i.$ \STATE \textbf{Output} $\hat{f}_{S \cap I_i}(x)$.\\ \ELSIF{query $x \in I_i = [b_{i-1}, b_i] \text{ where } I_i \in \mathcal{P}_{\sf sh}$} \STATE Query labels for $x \in S \cap (I_{i-1} \cup I_{i+1})$.
\STATE \textbf{if} $b_{i-1} > 0$ \textbf{then } $v^l_i = \hat{f}_{S \cap I_{i-1}}(b_{i-1})$ \textbf{else} $v^l_i = \hat{f}_{S \cap I_{i+1}}(b_{i})$ \textbf{end if} \STATE \textbf{if} $b_{i} < 1$ \textbf{then } $v^u_i = \hat{f}_{S \cap I_{i+1}}(b_{i})$ \textbf{else} $v^u_i = \hat{f}_{S \cap I_{i-1}}(b_{i-1})$ \textbf{end if} \STATE \textbf{Output} $v^l_i + (x-b_{i-1})\frac{v^u_i-v^l_i}{b_i-b_{i-1}}$.\\ \ENDIF \end{algorithmic} \end{algorithm} \footnotetext{One can either think of the label as being fixed for every data point or, if it is randomized, we need to use the same label for every data point once queried.} Next, we state the number of label queries needed to make local predictions with the Query algorithm (Algorithm~\ref{alg2}). \begin{theorem} \label{thm:lipschitzlocalquery} For any distribution $\mathcal{D}$ over $[0,1]\times[0,1]$, Lipschitz constant $L>0$ and error parameter $\epsilon \in [0,1]$, let $(S, \mathcal{P})$ be the output of the (randomized) Algorithm~\ref{alg1}, where $S$ is the set of unlabeled samples of size $O(\frac{L}{\epsilon^4}\log(\frac{1}{\epsilon}))$ and $\mathcal{P}$ is a partition of the domain $[0,1]$.
Then, there exists a function $\tilde{f} \in \mathcal{F}_L$, such that for all $x \in [0,1]$, Algorithm~\ref{alg2} queries $O(\frac{1}{\epsilon^4}\log(\frac{1}{\epsilon}))$ labels from the set $S$ and outputs $\text{Query}(x, S, P) $ satisfying \begin{equation} \text{Query}(x, S, \mathcal{P}) = \tilde{f}(x)\;, \end{equation} and the function $\tilde{f}$ is $\epsilon$-optimal, that is, $ \Delta_{\D}(\tilde{f}, \mathcal{F}_L) \leq \epsilon$ with probability greater than $\frac{1}{2}$. \end{theorem} \begin{proof} We begin by defining some notation. Let $S= \{x_i\}^M_{i=1}$ be the set of the unlabeled samples and $\mathcal{P}=\{[b_0,b_1],[b_1,b_2], \ldots\}$ be the partition returned by the pre-processing step given by Algorithm~\ref{alg1}. We will use $y_i$ to denote the queried label for the datapoint $x_i$. Let $\mathcal{D}_i$ be the distribution of a random variable $(X, Y) \sim \mathcal{D}$ conditioned on the event $\{X \in I_i\}$. Similarly, let $\mathcal{D}_{\sf{lg}}$ and $\mathcal{D}_{\sf{sh}}$ be the conditional distribution of $\mathcal{D}$ on intervals belonging to $\mathcal{P}_{\sf lg}$ and $\mathcal{P}_{\sf sh}$ respectively. Let $p_i$ denote the probability of a point sampled from distribution $\mathcal{D}$ lying in interval $I_i$. Going forward, we use the shorthand `probability of interval $I_i$' to denote $p_i$. Let ${p}_{\sf lg}$ and ${p}_{\sf sh}$ be the probability of set of long and short intervals respectively. Recall that $f_{\mathcal{D}}^*$ is the function which minimizes $\text{err}_\D(f)$ for $f \in \mathcal{F}_L$ and $f_{\mathcal{D}_i}^*$ is the function which minimizes this error with respect to the conditional distribution $\mathcal{D}_i$. Let $M_i$ denote the number of unlabeled samples of $S$ lying in interval $I_i$. For any interval $I_i$, let $\hat{f}_{S\cap I_i}$ be the ERM with respect to that interval. 
\paragraph{Lipschitzness of $\tilde{f}$.} For any interval $I_i = [b_{i-1}, b_i]$, let $v_i^l = \hat{f}_{S\cap I_{i-1}}(b_{i-1})$ be the value of the function with minimum empirical error on the neighbouring interval $I_{i-1}$ at the boundary point $b_{i-1}$ and, similarly, let $v_i^u = \hat{f}_{S\cap I_{i+1}}(b_{i})$ be the value of the function $\hat{f}_{S\cap I_{i+1}}$ at the boundary point $b_i$. Note that if $I_i$ is a boundary interval and therefore $I_{i-1}$ (or $I_{i+1}$) does not exist, we define $v^l_i=v^u_i$ (or $v^u_i=v^l_i$). Further, let $f^{\sf{int}}_i: I_i \mapsto \mathbb{R}$ be the linear function interpolating from $v_i^l$ to $v_i^u$, \begin{small} \begin{equation*} f^{\sf{int}}_i(x) = v^l_i+(x-b_{i-1})\frac{v_i^u-v^l_i}{b_i-b_{i-1}}. \end{equation*} \end{small} Note that the Query procedure (Algorithm~\ref{alg2}) is designed to output $\tilde{f}(x)$ for each query $x$, where \begin{small} \begin{equation*} \tilde{f}(x) = \begin{cases} \hat{f}_{S\cap I_i}(x) \quad &\text{ if } x \in I_i \text{ for any } I_i \in \mathcal{P}_{\sf lg}\\ f^{\sf{int}}_i(x) \quad &\text{ if } x \in I_i \text{ for any } I_i \in \mathcal{P}_{\sf sh} \end{cases}. \end{equation*} \end{small} It is now easy to see that the function $\tilde{f}$ is $L$-Lipschitz. The function $\hat{f}_{S\cap I_i}$ on each of the long intervals is $L$-Lipschitz by construction. The function ${f_i^{\sf{int}}}$ on each of the short intervals is also $L$-Lipschitz since the short intervals in $\mathcal{P}_{\sf sh}$ have length ${1}/{L}$ and the label values $y_j \in [0,1]$. Finally, the function $\tilde{f}$ is continuous at the boundary of each interval by construction.
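The construction of $\tilde{f}$ can be sketched in code. As a stand-in for the per-interval ERM $\hat{f}_{S\cap I_i}$, the sketch below fits each long interval with a clipped McShane extension of its samples, which is always $L$-Lipschitz with values in $[0,1]$; this fitting routine, the interval layout, and all function names are illustrative assumptions rather than the actual algorithm.

```python
def lipschitz_fit(xs, ys, L):
    """Stand-in for the per-interval ERM: a clipped McShane extension,
    min_i (y_i + L|x - x_i|), which is L-Lipschitz with values in [0, 1]."""
    def f(x):
        return max(0.0, min(1.0, min(y + L * abs(x - xi) for xi, y in zip(xs, ys))))
    return f

def make_tilde_f(partition, kinds, fits, L):
    """partition: list of (lo, hi) intervals covering [0, 1];
    kinds[i]: "long" or "short"; fits[i]: fitted function on long interval i.
    Mirrors the Query procedure: per-interval fit on long intervals,
    linear interpolation between neighbouring boundary values on short ones."""
    def tilde_f(x):
        i = next(j for j, (lo, hi) in enumerate(partition) if lo <= x <= hi)
        lo, hi = partition[i]
        if kinds[i] == "long":
            return fits[i](x)
        # short interval: interpolate between the neighbouring long-interval fits
        vl = fits[i - 1](lo) if i > 0 else fits[i + 1](hi)
        vu = fits[i + 1](hi) if i + 1 < len(partition) else fits[i - 1](lo)
        return vl + (x - lo) * (vu - vl) / (hi - lo)
    return tilde_f
```

Since each piece is $L$-Lipschitz and the pieces agree at the boundaries, the assembled function is globally $L$-Lipschitz; in particular, a short interval has length $1/L$ and values in $[0,1]$, so the interpolation slope is at most $L$.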
\paragraph{Error Guarantees for $\tilde{f}$.} Turning to the error rate of the function $\tilde{f}$, a repeated application of the tower property of expectation gives \begin{small} \begin{align} \Delta_{\D}(\tilde{f}, \mathcal{F}_L) &= {p}_{\sf lg}\Delta_{\mathcal{D}_{\sf{lg}}}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\nonumber\\ &= \sum_{i:I_i \in \mathcal{P}_{\sf lg}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*) + {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*)\label{eqn:total-error} \end{align} \end{small} We now bound both terms above to obtain a bound on the total error of the function $\tilde{f}$. \emph{Error for short intervals:} The probability ${p}_{\sf sh}$ of the short intervals is small with high probability, since their total length is $\epsilon$ and the intervals are chosen uniformly at random. More formally, from Lemma~\ref{lemma:prob_intervals}, we know that with probability at least $1-\delta$, the probability of the short intervals ${p}_{\sf sh}$ is upper bounded by $\epsilon/\delta$. Also, the error of any function $f$ is bounded in $[0,1]$ since the function's range is $[0,1]$.
Hence, we get that \begin{align} {p}_{\sf sh}\Delta_{\mathcal{D}_{\sf{sh}}}(\tilde{f}, f_{\mathcal{D}}^*) &\leq \frac{\epsilon}{\delta} \label{eqn:type2-error} \end{align} \emph{Error for long intervals:} We further divide the long intervals into three subtypes: {\small \begin{align*} \begin{gathered} \mathcal{P}_{\sf{lg}, 1} :\,= \left\lbrace I_i \; |\; I_i \in \mathcal{P}_{\sf lg},\; p_i \geq \frac{1}{L},\; M_i \geq \frac{1}{2\epsilon^4}\log\left(\frac{1}{\epsilon}\right) \right\rbrace, {\mathcal{P}}_{\sf{lg}, 2} :\,= \left\lbrace I_i \; |\; I_i \in \mathcal{P}_{\sf lg},\; p_i \geq \frac{1}{L},\; M_i < \frac{1}{2\epsilon^4}\log\left(\frac{1}{\epsilon}\right) \right\rbrace,\\ {\mathcal{P}}_{\sf{lg}, 3} :\,= \left\lbrace I_i \; |\; I_i \in \mathcal{P}_{\sf lg},\; p_i < \frac{1}{L} \right\rbrace. \end{gathered} \end{align*}} The intervals in both the first and second subtypes have large probability $p_i$ with respect to distribution $\mathcal{D}$ but differ in the number of unlabeled samples of $S$ lying in them. The intervals in the third subtype ${\mathcal{P}}_{\sf{lg}, 3}$ have small probability $p_i$ with respect to distribution $\mathcal{D}$. Now, we can divide the total error of the long intervals over these subtypes: \begin{small} \begin{align} \sum_{i:I_i \in \mathcal{P}_{\sf lg}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*) &= \underbrace{\sum_{i:I_i \in \mathcal{P}_{\sf{lg}, 1}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*)}_{E_1} + \underbrace{\sum_{i:I_i \in {\mathcal{P}}_{\sf{lg}, 2}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*)}_{E_2}+ \underbrace{\sum_{i:I_i \in {\mathcal{P}}_{\sf{lg}, 3}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*)}_{E_3}. \label{eqn:combined} \end{align} \end{small} Now, we will argue about the contribution of each of the three terms above.
\underline{Bounding $E_3$.} Since there are at most $L\epsilon$ long intervals and each of these intervals $I_i$ has probability $p_i$ upper bounded by $1/L$, the combined probability of these intervals is at most $\epsilon$. Also, in the worst case, the loss can be 1. Hence, we get an upper bound of $\epsilon$ on $E_3$. \underline{Bounding $E_2$.} From Lemma~\ref{lemma:number_unlabeled_samples}, we know that with failure probability at most $\delta$, these intervals have total probability upper bounded by ${\epsilon}/{\delta}$. Again, the loss can be 1 in the worst case. Hence, we get an upper bound of ${\epsilon}/{\delta}$ on $E_2$. \underline{Bounding $E_1$.} Let $F_i$ denote the event that $\Delta_{\mathcal{D}_i}(\hat{f} _{S\cap I_i}, f_{\mathcal{D}_i}^*) > \epsilon$. The expected error over the intervals $I_i$ in $\mathcal{P}_{\sf{lg}, 1}$ is then {\small \begin{align*} \E[\sum_{i:I_i \in \mathcal{P}_{\sf{lg}, 1}}p_i\Delta_{\mathcal{D}_i}(\tilde{f}, f_{\mathcal{D}}^*)] &\stackrel{\ensuremath{{\sf (i)}}}{\leq} \E[\sum_{i:I_i \in \mathcal{P}_{\sf{lg}, 1}}p_i\Delta_{\mathcal{D}_i}(\hat{f}_{S\cap I_i}, f_{\mathcal{D}_i}^*)]\\ &= \sum_{i:I_i \in \mathcal{P}_{\sf{lg}, 1}}p_i(\E[\Delta_{\mathcal{D}_i}(\hat{f}_{S\cap I_i}, f_{\mathcal{D}_i}^*)|F_i]\Pr(F_i) + \E[\Delta_{\mathcal{D}_i}(\hat{f}_{S\cap I_i}, f_{\mathcal{D}_i}^*)|\neg F_i]\Pr(\neg F_i))\\ &\stackrel{\ensuremath{{\sf (ii)}}}{\leq} \sum_{i:I_i \in \mathcal{P}_{\sf{lg}, 1}}p_i(1\cdot \epsilon + \epsilon \cdot 1) \leq 2\epsilon, \end{align*}} where step $\ensuremath{{\sf (i)}}$ follows by noting that $\tilde{f} = \hat{f}_{S\cap I_i}$ for all long intervals $I_i \in \mathcal{P}_{\sf lg}$ and that $f^*_{\mathcal{D}_i}(x)$ is the minimizer of the error $\text{err}_{\mathcal{D}_i}(f)$ over all $L$-Lipschitz functions, and step $\ensuremath{{\sf (ii)}}$ follows since $\E[\Delta_{\mathcal{D}_i}(\hat{f}_{S\cap I_i}, f_{\mathcal{D}_i}^*)|\neg F_i] \leq \epsilon$ by the definition of event $F_i$ and $\Pr(F_i) \leq
\epsilon$ follows from a standard uniform convergence argument (detailed in Lemma~\ref{lem: good type 1 intervals}). Now, using Markov's inequality, we get that $E_1 \leq {2\epsilon}/{\delta}$ with failure probability at most $\delta$. Plugging the error bounds obtained in equations~\eqref{eqn:type2-error} and~\eqref{eqn:combined} into equation~\eqref{eqn:total-error} and setting $\delta = \frac{1}{10}$ establishes the required claim: a union bound over the three failure events bounds the total failure probability by $3\delta < \frac{1}{2}$, and the resulting $O(\epsilon)$ error bound can be rescaled to $\epsilon$ by adjusting constants. \paragraph{Label Query Complexity.} For any query point $x^*$, $\tilde{f}(x^*)$ can be computed by querying the labels of the interval in which $x^*$ lies (if $x^*$ lies in a long interval) or of its two neighboring intervals (if $x^*$ lies in a short interval). Hence, the computation requires $O(({1}/{\epsilon^4})\log({1}/{\epsilon}))$ label queries over the set $S$ of $O(({L}/{\epsilon^4})\log({1}/{\epsilon}))$ unlabeled samples. \end{proof} \subsection{Guarantees for Error Estimation} \label{subsection:lipschitz-1d-errorestimation} We now study the Error Estimation problem for the class of Lipschitz functions $\mathcal{F}_L$. Our proposed estimator, detailed in Algorithm~\ref{alg3}, uses the algorithm from the previous section for locally computing the predictions of a nearly optimal function. In particular, it samples a few random query points and uses them to compute the average empirical error. The final query complexity of our procedure is then obtained via standard concentration arguments relating the empirical error to the true expected error. We formalize this guarantee in Theorem~\ref{thm:lipschitzerror} and defer its proof to Appendix~\ref{appendix:lipschitz-lowd}.
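In essence, the estimator averages the pointwise losses of the locally computed predictor over fresh labeled samples. A minimal sketch of this averaging step follows; the predictor and data distribution in the usage below are illustrative toy stand-ins for Query and $\mathcal{D}$, not the paper's construction.

```python
import random

def estimate_error(predict, sample, n):
    """Monte Carlo estimate of E_{(x,y)~D} |predict(x) - y| from n labeled draws;
    sample() returns one (x, y) pair from the data distribution."""
    return sum(abs(predict(x) - y) for x, y in (sample() for _ in range(n))) / n
```

Since each pointwise loss lies in $[0,1]$, Hoeffding's inequality implies that $n = O(1/\epsilon^2)$ draws suffice for additive error $\epsilon$ with constant probability.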
\begin{algorithm}[H] \begin{algorithmic}[1] \caption{Error$(L, \mathcal{D}, \epsilon)$\label{alg3}} \STATE Let $S,\mathcal{P} = $Preprocess$(L, \mathcal{D}_x, \epsilon)$ \STATE Sample a set $\{(x_1,y_1),(x_2,y_2),\cdots, (x_N,y_N)\}$ of labeled examples from distribution $\mathcal{D}$ where $N = O(\frac{1}{\epsilon^2})$ \\ \STATE \textbf{Output} $\widehat{\text{err}_\D}(\mathcal{F}_L) = \frac{1}{N}\sum_{i=1}^N|\text{Query}(x_i, S, \mathcal{P})-y_i|$ \end{algorithmic} \end{algorithm} \begin{restatable}{theorem}{theoremlipschitzerror} \label{thm:lipschitzerror} For any distribution $\mathcal{D}$ over $[0,1]\times[0,1]$, Lipschitz constant $L>0$ and parameter $\epsilon \in [0,1]$, Algorithm~\ref{alg3} uses $O(\frac{1}{\epsilon^6}\log(\frac{1}{\epsilon}))$ active label queries on $O(\frac{L}{\epsilon^4}\log(\frac{1}{\epsilon}))$ unlabeled samples from distribution $\mathcal{D}_x$ and produces an output $\widehat{\text{err}_\D}(\mathcal{F}_L)$ satisfying {\small\begin{equation*} |\widehat{\text{err}_\D}(\mathcal{F}_L) - \text{err}_\D( \mathcal{F}_L)| \leq \epsilon \end{equation*} } with probability at least $\frac{1}{2}$. \end{restatable} \section{Notation} \label{section:preliminaries} For a positive integer $n$, let $[n]$ denote $\{1,2,\cdots,n\}$. For two positive semidefinite matrices $A$ and $B$, we write $A\preceq B$ (or $A\succeq B$) to denote that $B-A$ (or $A-B$) is positive semidefinite. We use the $\tilde{O}$ notation to hide polylogarithmic factors in the input parameters and error rates. \section{Related Work} \label{section:related-work} \textbf{Local Computation Algorithms. } Our work on locally learning Lipschitz functions closely resembles the concept of local computation algorithms introduced in the pioneering work of \cite{rubinfeld2011fast}, which asked whether it is possible to compute specific parts of the output in time sublinear in the input size. As the authors mention, that work formalized different forms of this concept already existing in the literature: local distributed computation, local algorithms, locally decodable codes and local reconstruction models. The reader can refer to \cite{rubinfeld2011fast} for a detailed discussion of each of these subfields. That paper studied several graph problems in the local computation model, such as maximal independent set, $k$-SAT and hypergraph coloring.
Since then, there has been a lot of further work on local computation algorithms for various graph problems including maximum matching, load balancing, set cover and many others (\cite{mansour2013local, alon2012space, mansour2012converting, parnas2007approximating,parter2019local,levi2014local, grunau2020improved}). There has also been work on solving linear systems locally in sublinear time \citep{andoni2018solving} for the special case of sparse symmetric diagonally dominant matrices with small condition number. However, computing the best Lipschitz function for a given set of data cannot be written as a linear system. Moreover, the primary focus of all of these works has been on sublinear computational complexity, whereas we focus primarily on sample complexity. In another related work, \cite{mansour2014robust} used local computation algorithms in the context of robust inference to give polynomial time algorithms. They formulated their inference problem as an exponentially sized linear program and showed that the linear program (LP) has a special structure which allowed them to compute the values of certain variables in the optimal solution in time sublinear in the total number of variables, by sampling a polynomial number of constraints of the LP. Note that in our setting of learning Lipschitz functions, given all the unlabeled samples, learning the value of the best Lipschitz function on a particular input query can be cast as a linear program. However, our LP does not belong to the special class of programs that they consider. Moreover, we have a continuous domain and the number of possible queries is infinite. We cannot hope to get a globally Lipschitz solution by locally solving a smaller LP with constraints sampled independently for each query.
We have to carefully design the local intervals and use different learning strategies for different types of intervals to ensure that the learned function is globally Lipschitz and also has good error bounds. In another work, \cite{feige2015learning} considered the use of local computation algorithms for inference settings, reducing the inference problem for a particular query to the problem of computing a minimum vertex cover in a bipartite graph. However, the focus was on time complexity rather than sample complexity. To the best of our knowledge, the problem of computing a part of the output of the learned function with sample complexity sublinear in the usual notion of complexity of the function class (VC-dimension, covering numbers) has not been previously studied in the literature. \textbf{Transductive Inference. }Another related line of work in the learning theory community is based on transductive inference \citep{vapnik1998statistical}, which, as opposed to inductive inference, aims to estimate the values of the function only on a few given data points of interest. The philosophy behind this line of work is to avoid solving a more general problem as an intermediate step. This is very similar to the idea considered here, wherein we are interested in computing the prediction at specific query points of interest. However, our prediction algorithm still requires a guarantee on the total error over the complete domain with respect to the underlying distribution, though the function is never explicitly constructed for the entire domain unless required. The reader can refer to chapters 24 and 25 in \cite{chapelle2009semi} for additional discussion on the topic. \textbf{Local Learning Algorithms.
} The term local learning algorithms has been used to refer to learning schemes where the prediction at a test point depends only on the training points in the vicinity of that point, such as $k$-nearest neighbor schemes \citep{bottou1992local,vapnik1992principles}. However, these works have primarily focused on proposing different local learning strategies and evaluating how well they perform. In contrast, we are interested in the question of whether local algorithms can be used for simulating empirical risk minimization for a hypothesis class with global constraints, such as the class of Lipschitz functions. \textbf{Property Testing.} There has also been a large body of work in the theoretical computer science community on property testing, where the goal is to determine, with sublinear sample complexity, whether a function belongs to a class of functions or is far away from it with respect to some notion of distance. The commonly studied testing problems include testing monotonicity of a function over a boolean hypercube \citep{goldreich1998testing, chakrabarty2016n, chen2014new, khot2018monotonicity}, testing linearity over finite fields \citep{bellare1996linearity, ben2003randomness}, testing for concise DNF formulas \citep{diakonikolas2007testing, parnas2002testing} and testing for small decision trees \citep{diakonikolas2007testing}. However, all the aforementioned algorithms work in a query model where a query can be made on any arbitrary domain point of choice. The setting closer to learning, where labels can only be obtained from a fixed distribution, was first studied by \cite{goldreich1998property, kearns2000testing}. This setting is also called passive property testing.
The notion of active property testing was first introduced by \cite{balcan2012active}, where an algorithm can make active label queries on an unlabeled sample of points drawn from an unknown distribution. However, one limitation of these algorithms is that they do not give meaningful bounds for distinguishing a function that is approximately close to the function class (rather than belonging to it) from one that is far away from it. \cite{parnas2006tolerant} first introduced the notion of tolerant testing, where the aim is to detect whether the function is $\epsilon$ close to the class or $2\epsilon$ far from it in the query model with queries on arbitrary domain points of choice. This also relates to estimating the distance of the function from the class within an additive error of $\epsilon$. \cite{blum2018active} first studied the problem of active tolerant testing, with algorithms which are tolerant to the function not being exactly in the function class and also have active query access to the labels of unlabeled samples from an unknown distribution. Specifically, \cite{blum2018active} gave algorithms for estimating the distance of the function from the class of unions of $d$ intervals with a labeled sample complexity of $\text{poly}({1}/{\epsilon})$. The key point is that the labeled sample complexity is independent of $d$, which is the VC dimension of that class and dictates the number of samples required for learning. We note that our algorithm for error estimation for the class of Lipschitz functions, with labeled sample complexity independent of $L$, is another work along these lines. Property testing has also been studied for the specific class of Lipschitz functions \citep{jha2013testing, chakrabarty2013optimal, berman2014lp}. However, all of these results either query arbitrary domain points, are in the non-tolerant setting, or work only for discrete domains.
A closely related approach, in which the predicted label depends on the labels of the training data weighted according to some appropriate kernel function, was also considered in \cite{blum2018active}. In particular, they looked at the setting where the predicted label is based on the $k$-nearest neighbors and showed that the $\ell_{1}$ loss can be estimated up to an additive error of $\epsilon$ using $O({1}/{\epsilon^2})$ label queries on $N+O({1}/{\epsilon^2})$ unlabeled samples. Their results also extend to the case where the prediction is the weighted average of all the unlabeled points in the sample, by sampling points with probability proportional to their weight. However, this sampling, when repeated for $O({1}/{\epsilon^d})$ different linear transformations, would lead to a labeled sample complexity depending on ${1}/{\epsilon^d}$. Moreover, sampling a point with probability proportional to its weight would require a running time of $N$ per query and thus a total running time of $O({N}/{\epsilon^{d+2}})$. \section{Our Results} \label{section:results} Consider the class of one-dimensional Lipschitz functions supported on the interval $[0,1]$ with Lipschitz constant at most $L$, and data drawn from an arbitrary unknown distribution $\mathcal{D}$ with labels in the range $[0,1]$. For this setting, we show (Theorem~\ref{thm:lipschitzlocalquery}) that there exists a function $\tilde{f}$ in the class that is optimal up to an additive error of $\epsilon$ such that for any query $x$, $\tilde{f}(x)$ can be computed with $O(({1}/{\epsilon^4})\log(1/\epsilon))$ label queries to a pool of $O(({L}/{\epsilon^4})\log(1/\epsilon))$ unlabeled samples drawn from the distribution $\mathcal{D}$. Moreover, the function values $\tilde{f}(x)$ at these query points can be computed in parallel once the unlabeled random samples have been drawn and fixed beforehand.
Note that standard empirical risk minimization approaches would require a sample complexity of $O({L}/{\epsilon^3})$ to output the value of an approximately optimal function even at a single query point. In contrast, the number of samples required by our approach for a constant number of queries is independent of $L$ (which determines the complexity of the function class required for learning). In this setting, we think of $L$ as large compared to the error parameter $\epsilon$. At a high level, we show that it is possible to effectively reduce the hypothesis class of bounded Lipschitz functions to a strictly smaller class of piece-wise independent Lipschitz functions, where the function value can be computed locally, without losing too much in terms of the total accuracy. We also show (Theorem~\ref{thm:lipschitzerror}) that for the class of $L$-Lipschitz functions considered above, it is possible to estimate the error of the optimal function in the class up to an additive error $\epsilon$ using $O(({1}/{\epsilon^6})\log(1/\epsilon))$ active label queries over an unlabeled pool of size $O(({L}/{\epsilon^4})\log(1/\epsilon))$. The idea is to compute the empirical error of the locally computable function $\tilde{f}$ constructed above using $O({1}/{\epsilon^2})$ random samples from distribution $\mathcal{D}$. Since $\tilde{f}(x)$ can be computed locally for a given query $x$, the total number of labels needed is independent of $L$. Using standard concentration results and the fact that $\tilde{f}$ is $\epsilon$-optimal, we get the desired estimate.
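For completeness, the concentration step alluded to here is a standard instance of Hoeffding's inequality: for $N$ i.i.d. losses $Z_1, \ldots, Z_N \in [0,1]$,
\begin{equation*}
\Pr\left(\Big|\frac{1}{N}\sum_{i=1}^N Z_i - \E[Z_1]\Big| \geq \frac{\epsilon}{2}\right) \leq 2\exp\left(-\frac{N\epsilon^2}{2}\right),
\end{equation*}
so $N = O(1/\epsilon^2)$ samples suffice to drive the failure probability of the averaging step below any fixed constant.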
We also extend these results to the case of more than one dimension, where the dimension is constant with respect to the Lipschitz constant $L$; the results are stated in Theorems~\ref{thm:lipschitzlocalquery-highd} and~\ref{thm:lipschitzerror-highd}. For the related setting of the Nadaraya-Watson estimator, we show (Theorem~\ref{thm:kde-min-error}) that it is possible to estimate, with additive error at most $\epsilon$, the minimum error achievable under a linear diagonal transformation with eigenvalues bounded in a small range, by making $\tilde{O}({d}/{\epsilon^2})$ label queries over a $d$-dimensional unlabeled training set of size $N$ in running time $\tilde{O}({d^2}/{\epsilon^{d+4}}+{dN}/{\epsilon^2})$. Note that exactly computing the prediction for even a single data point requires going over the entire dataset, thus needing $N$ label queries and $Nd$ time. Moreover, naively computing the error for each of the linear diagonal transformations with bounded eigenvalues would require a number of labels depending on ${1}/{\epsilon^d}$ and a running time depending multiplicatively on $N$ and ${1}/{\epsilon^d}$. In comparison, we achieve a labeled query complexity independent of $N$ and polynomial in the dimension $d$. Moreover, we reduce the multiplicative dependence of the running time on $N$ and ${1}/{\epsilon^d}$ to an additive dependence. We will further elaborate on our algorithm and the comparison with the standard algorithm in Section~\ref{section:kde}.
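As a point of reference for the kernel-weighted predictions discussed here, a minimal sketch of the Nadaraya-Watson estimator in one dimension follows; the Gaussian kernel and the bandwidth choice are illustrative assumptions.

```python
import math

def nadaraya_watson(x, xs, ys, h):
    """Kernel-weighted average of training labels: the prediction at x is
    sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h) with a Gaussian kernel K."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

Note that a single prediction touches every training point, which is why exact evaluation needs $N$ label queries and $O(Nd)$ time in $d$ dimensions; avoiding this cost per query is precisely what motivates the label-query-efficient estimators discussed above.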